Margaret Boden public lecture: June 19th

The Centre for the Study of Existential Risk is delighted to host Professor Margaret Boden (Research Professor of Cognitive Science at the Department of Informatics, University of Sussex) for a public lecture on Friday 19th June 2015.

The event is free and open to everyone, but due to expected demand, booking will be necessary. Book here.

Venue: GR06/07 Faculty of English, 9 West Road, Cambridge, CB3 9DP

The event will be followed by a wine reception.

Abstract:
Human-level (“general”) AI is more difficult to achieve than most people think. One key obstacle is relevance, a conceptual version of the frame problem. Another is the lack of a semantic web. Yet another is the difficulty of computer vision. So artificial general intelligence (AGI) isn’t on the horizon; it may never be achieved. No AGI means no Singularity. Even so, there’s already plenty to worry about—and future AI advances will add more. Areas of concern include unemployment, computer companions, and autonomous robots (some, military). Worries about the (illusory) Singularity have had the good effect of waking up the AI community (and others) to these dangers. At last, they are being taken seriously.

Professor Margaret Boden is a world-leading academic in the study of intelligence, both artificial and otherwise. She is Research Professor of Cognitive Science at the Department of Informatics at the University of Sussex, where her work embraces the fields of artificial intelligence, psychology, philosophy, and cognitive and computer science. She was the founding Dean of Sussex University’s School of Cognitive and Computing Sciences, a pioneering centre for research into intelligence and the mechanisms underlying it in humans, other animals, and machines. The School’s teaching and research involve an unusual combination of the humanities, science, and technology.

Professor Boden has also been an important participant in recent international discussions of the long-term impacts of AI. She was a member of the AAAI’s 2008–09 Presidential Panel on long-term AI futures (http://www.aaai.org/Organization/presidential-panel.php) and took part in the recent Puerto Rico conference on the Future of AI, co-organised by CSER (http://futureoflife.org/misc/ai_conference); she is therefore uniquely well-placed to discuss near- and long-term prospects in AI.

Stuart Russell Lecture Success

Thank you to all who attended the lecture given by Professor Stuart Russell at the Winstanley Lecture Theatre on Friday afternoon.

The event was a great success, with a room full to capacity and a clear and thought-provoking presentation by Professor Russell. The weather was also kind to CSER, and in the beautiful setting ‘Under the Wren’ at Trinity College, connections were made and many a lively discussion took place during the post-lecture reception.

We will shortly post a video of the complete talk on this website, but in the meantime you can read Calum Chace’s excellent review of the event at his blog, http://pandoras-brain.com/.


CSER Public Lecture: Michael Osborne on Technology at Work

Followers of CSER’s work may also be interested in a forthcoming public lecture by Michael Osborne (Engineering Science, Oxford).

Michael will be giving a public lecture, ‘Technology at Work: The Future of Innovation and Employment’, on Tuesday 12th May 2015, 14.00–16.00, at CRASSH in Cambridge. This is part of a series of lectures by the Technology and Democracy project.

Abstract:

For decades, economists, technologists, policy-makers and politicians have argued about whether automation destroys or creates jobs. Up to now, the general consensus has been that while some jobs are eliminated by automation, more new jobs are created. But recently, advances in computing power, machine learning and AI, software, sensor technology and data analytics have brought the “automation” question to the fore again. People are asking if a radical disruption is under way. Are we heading into a “second machine age”, in which advanced robotics and intelligent computing make occupational categories hitherto reserved for humans vulnerable to automation? One of the most penetrating attempts to answer this question was the research conducted by Oxford scholars Michael Osborne and Carl Frey, which resulted in a path-breaking report arguing that 47 per cent of US job categories might be vulnerable to computerisation in the next two decades.

In this seminar, the first in the new Technology & Democracy project’s series, Michael Osborne discusses his research and its implications.

Michael A Osborne is an expert in the development of machine intelligence in sympathy with societal needs. His work on robust and scalable inference algorithms in machine learning has been successfully applied in diverse and challenging contexts, from aiding the detection of planets in distant solar systems to enabling self-driving cars to determine when their maps may have changed due to roadworks. Dr Osborne also has deep interests in the broader societal consequences of machine learning and robotics, and has analysed how intelligent algorithms might soon substitute for human workers.

Dr Osborne is an Associate Professor in Machine Learning, a co-director of the Oxford Martin programme on Technology and Employment, an Official Fellow of Exeter College, and a Faculty Member of the Oxford-Man Institute for Quantitative Finance, all at the University of Oxford.

For further details, visit the CRASSH website.

Stuart Russell Public Lecture

Tickets for Stuart Russell’s public lecture on Friday 15th May 2015 are currently sold out.

There may be a small release of further tickets during the week beginning 11th May 2015. To be considered for these, please add your name to the waiting list via Eventbrite, and you will be contacted should tickets become available.

May we also ask that, if you are no longer able to attend the event, you cancel your Eventbrite booking in order to free up tickets for others who would like to attend.

Thank you for your interest!

Partha Dasgupta authors climate change paper for the Vatican

Partha Dasgupta, one of CSER’s founding advisors, was the first author of the Vatican workshop statement, Climate Change and the Common Good: A Statement of the Problem and the Demand for Transformative Solutions. The statement summarises the scientific agreement reached at the recent Protect the Earth, Dignify Humanity summit. Its authors argue that the Catholic Church can help by mobilising public opinion and public funds to meet the energy needs of the world’s poorest.

CSER’s co-founder Martin Rees was a co-author of the paper, alongside Jeffrey Sachs, Veerabhadran Ramanathan, and several others. Two dozen other climate scientists, pontifical academics and religious leaders are signatories.

Partha Dasgupta and Martin Rees at recent Vatican meeting

Partha Dasgupta and Martin Rees of CSER both spoke at the Pontifical Summit yesterday. At this summit, entitled Protect the Earth, Dignify Humanity, they spoke to the need to include our environmental impact in our measurement of progress.

Sir Partha, an advisor to CSER, argued that we should move on from treating GDP as the ultimate measure of economic success:

“GDP is a disgraceful index because it does not count depreciation of our assets – including damage to Mother Nature, the most fundamental asset we have.”

These discussions may influence the papal encyclical on climate change, forthcoming in June this year.

CSER Public Lecture: Stuart Russell on Long-Term Future of (Artificial) Intelligence

We’re delighted to announce that Professor Stuart Russell (Berkeley) will be giving a CSER public lecture on May 15th.

The lecture is free and open to everyone, but demand is expected to be high, so pre-registration is necessary. Registration and details are available here.

The Long-Term Future of (Artificial) Intelligence

Abstract: The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.

Professor Stuart Russell (Berkeley) is one of the biggest names in modern artificial intelligence. His Artificial Intelligence: A Modern Approach (co-written with Google’s head of research Peter Norvig) is a leading textbook in the field.

He is also one of the most prominent people thinking about the long-term impacts and future of AI. He has raised concerns about the potential future use of fully autonomous weapons in war. Thinking longer-term, he has posed the question “What if we succeed?” in developing strong AI, and suggested that such success might represent the biggest event in human history. He has organised a number of prominent workshops and meetings around this topic, and this January wrote an open letter calling for a realignment of the field of AI towards research on the safe and beneficial development of AI, now signed by a who’s who of field leaders worldwide.

Other relevant articles on or by Professor Russell:
The long-term future of AI – from his own website
Of myths and moonshine – his response to the Edge.org question on the myth of AI
Concerns of an artificial intelligence pioneer – interview in Quanta

Nick Bostrom TED Talk

Today, a TED talk by FHI Director and CSER Advisor Professor Nick Bostrom went online. In his presentation, What Happens When Our Computers Get Smarter Than We Are?, Bostrom reviewed the possible consequences of reaching human-level artificial intelligence, and some considerations for safety strategies.


Here is an excerpt, in which he describes how hard he would expect it to be to reach different levels of intelligence:

“Most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. On one end, we have the village idiot and then far over at the other side, we have Ed Witten or Albert Einstein, or whoever your favourite guru is. But I think that from the point of view of artificial intelligence, the true picture is probably more like this: AI starts off at zero intelligence. And after many years of really hard work, maybe eventually we reach mouse-level intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many more years of really hard work, lots of investment, maybe we get to chimpanzee-level artificial intelligence. And then, after even more years of really hard work, we get to village idiot artificial intelligence. And, a few moments later, we are beyond Ed Witten. The train doesn’t stop at humanville station. It’s likely, rather, to swoosh right by.”

Despite his concern about a speedy transition, Bostrom conveys a relatively positive outlook:

I’m actually fairly optimistic that this problem can be solved. We wouldn’t have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and whose motivation system is constructed in such a way that it is motivated to pursue our values, or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.

This can happen, and the outcome could be very good for humanity. But it doesn’t happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar context, like where we can easily check how the A.I. behaves, but also in all novel contexts that the A.I. might encounter in the indefinite future.

And there are also some esoteric issues that would need to be solved and sorted out: the exact details of its decision theory, how to deal with logical uncertainty, and so forth. So the technical problems that need to be solved to make this work look quite difficult — not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.