Stuart Russell argues for a new approach to AI risk

Stuart Russell, Professor of Computer Science at the University of California, Berkeley, author (with Peter Norvig) of Artificial Intelligence: A Modern Approach, and CSER External Advisor, has warned of risks from Artificial Intelligence, and encouraged researchers to rethink the goals of their research.

Writing on Edge.org, described by The Guardian as ‘an internet forum for the world’s most brilliant minds’, Russell noted that while it has not been proved that AI will be the end of the world, “there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility.” He notes that many unconvincing arguments have been refuted, but claims that the more substantial arguments proposed by Omohundro, Bostrom and others remain largely unchallenged.

Up to now, improving decision quality has been the mainstream goal of AI research, an end towards which significant progress has been made in recent years. In Russell’s view, AI research is accelerating rapidly, and senior AI researchers express considerably more optimism about the field’s prospects than they did even a few years ago; as a result, we should be correspondingly more concerned about the field’s risks.

He dismisses fears that those raising the prospect of AI risk will call for regulation of basic research, an approach he regards as misguided and misdirected given the potential benefits of AI for humanity. The right response, he writes, is to change the goals of the field itself: from building a pure intelligence to building an intelligence that is provably aligned with human values. In the same way that containment is seen as an intrinsic part of nuclear fusion research, such an approach would reliably and practically limit the risk of future catastrophe.

Read the entire discussion including contributions from, among others, George Church, Professor of Genetics at Harvard and CSER advisor, and Peter Diamandis, Chairman of the X Prize Foundation here.

Rees launches Asteroid Day

Yesterday, Martin Rees and 100 other experts announced Asteroid Day, an awareness movement dedicated to learning about asteroids and how to protect our planet. It will first take place on 30 June 2015, the anniversary of the 1908 Tunguska event in Siberia.

At the launch event, Martin Rees read the 100X Asteroid Declaration:

As scientists and citizens, we strive to solve humanity’s greatest challenges to safeguard our families and quality of life on Earth in the future.

Asteroids impact Earth: such events, without intervention, will cause great harm to our societies, communities and families around the globe. Unlike other natural disasters, we know how to prevent asteroid impacts.

There are a million asteroids in our solar system that have the potential to strike Earth and destroy a city, yet we have discovered less than 10,000 — just one percent — of them. We have the technology to change that situation.

Therefore, we, the undersigned, call for the following action:

Employ available technology to detect and track Near-Earth Asteroids that threaten human populations, via governments and private and philanthropic organisations.

A rapid hundred-fold (100x) acceleration of the discovery and tracking of Near-Earth Asteroids to 100,000 per year within the next ten years.

Global adoption of Asteroid Day, heightening awareness of the asteroid hazard and our efforts to prevent impacts, on June 30, 2015.

I declare that I share the concerns of this esteemed community of astronauts, scientists, business leaders, artists and concerned citizens to raise awareness about protecting and preserving life on our planet by preventing future asteroid impacts.

Over 100 leaders from these areas have signed the declaration, including: former astronaut Ed Lu, astrophysicist Kip Thorne, astronomer Carolyn Shoemaker, Nobel laureate Harold Kroto, Google’s Peter Norvig, science guy Bill Nye, science presenter Brian Cox, Queen guitarist and astrophysicist Brian May, and over 38 astronauts and cosmonauts.

This is a great group to have advocating for reducing the risk of catastrophe!

Read more at the Asteroid Day website.

Extreme technological risks in the Chief Scientific Advisor’s annual report

CSER’s Huw Price and Sean O hEigeartaigh had a case study on geoengineering featured in the UK Chief Scientific Advisor’s annual report to the UK government. The case study used the example of sulphate aerosol geoengineering to illustrate the policy challenges associated with a potentially beneficial technology that may come with a small (though difficult to assess) risk of catastrophic side effects. The case study complemented a chapter focused on this class of risks that was contributed by CSER collaborators Toby Ord and Nick Beckstead, which reviewed the work of CSER’s Martin Rees among others.

The publication can be seen here.

Martin Rees lecture in the New Statesman

A new lecture that Martin Rees recently gave at Harvard’s Program on Science, Technology and Society has now been published online in The New Statesman. In his talk, Martin Rees encouraged scientists and policymakers to consider hazards that might curtail the future development of human civilisation. Here is a short excerpt:

In contrast, the hazards that are the focus of this talk are those that humans themselves engender – and they now loom far larger. And in discussing them I’m straying far from my ‘comfort zone’ of expertise. So I comment as a ‘citizen scientist’, and as a worried member of the human race. I’ll skate over a range of topics, in the hope of being controversial enough to provoke discussion.

Ten years ago I wrote a book that I entitled Our Final Century? My publisher deleted the question-mark. The American publishers changed the title to Our Final Hour (Americans seek instant gratification).

My theme was this. Earth is 45 million centuries old. But this century is the first when one species – ours – can determine the biosphere’s fate. I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating setbacks. That’s because of unsustainable anthropogenic stresses to ecosystems, because there are more of us (world population is higher) and we’re all more demanding of resources. And – most important of all – because we’re empowered by new technology, which exposes us to novel vulnerabilities.

And we’ve had one lucky escape already.

At any time in the Cold War era – when armament levels escalated beyond all reason – the superpowers could have stumbled towards Armageddon through muddle and miscalculation. During the Cuba crisis I and my fellow-students participated anxiously in vigils and demonstrations. But we would have been even more scared had we then realised just how close we were to catastrophe. Kennedy was later quoted as having said at one stage that the odds were ‘between one in three and evens’. And only when he was long retired did Robert McNamara state frankly that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.” Be that as it may, we were surely at far greater hazard from nuclear catastrophe than from anything nature could do. Indeed the annual risk of thermonuclear destruction during the Cold War was about 10,000 times higher than from asteroid impact.

It is now conventionally asserted that nuclear deterrence worked. In a sense, it did. But that doesn’t mean it was a wise policy. If you play Russian roulette with one or two bullets in the barrel, you are more likely to survive than not, but the stakes would need to be astonishingly high – or the value you place on your life inordinately low – for this to seem a wise gamble. But we were dragooned into just such a gamble throughout the Cold War era. It would be interesting to know what level of risk other leaders thought they were exposing us to, and what odds most European citizens would have accepted, if they’d been asked to give informed consent. For my part, I would not have chosen to risk a one in three – or even one in six – chance of a disaster that would have killed hundreds of millions and shattered the historic fabric of all our cities, even if the alternative were certain Soviet dominance of Western Europe. And of course the devastating consequences of thermonuclear war would have spread far beyond the countries that faced a direct threat, especially if a nuclear winter were triggered…
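Rees’s roulette analogy is quantitative: a per-round risk that leaves survival ‘more likely than not’ compounds sharply when the gamble is repeated, as it was year after year during the Cold War. A minimal sketch of the arithmetic (an editor’s illustration, not part of the lecture):

```python
def survival_probability(bullets: int, chambers: int, rounds: int) -> float:
    """Chance of surviving `rounds` independent spins of a revolver cylinder
    holding `bullets` bullets in `chambers` chambers."""
    return ((chambers - bullets) / chambers) ** rounds

# One spin with one bullet in six chambers: 5/6, comfortably survivable.
print(survival_probability(1, 6, 1))   # ~0.83

# Repeat the gamble annually for a decade and survival drops below 1 in 5.
print(survival_probability(1, 6, 10))  # ~0.16
```

This is the sense in which a gamble that looks survivable in any single year becomes a poor bet sustained over decades.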

Martin Rees speaks on Existential Risks at the Kennedy School of Government

This past week, CSER co-founder and emeritus Professor of Cosmology and Astrophysics at Cambridge, Lord Rees, spoke on a panel on catastrophic risks organised by the Harvard Kennedy School’s Program on Science, Technology and Society in Cambridge, MA.

Rees and the other panelists discussed risks ranging from artificial intelligence to climate change.

“We just don’t know the boundary of what may happen and what will remain science fiction,” Rees commented, arguing that “existential crises deserve more attention even though they are unlikely.” “The unfamiliar is not the same as the impossible,” he reminded the audience of students and academics.

The panel was chaired by Sheila Jasanoff, Pforzheimer Professor of Science and Technology Studies at Harvard, and also featured Sven Beckert, the Laird Bell Professor of History; George Daley of the Children’s Hospital Boston and the Harvard Medical School; Jennifer Hochschild, the Henry LaBarre Jayne Professor of Government; and Daniel Schrag, the Director of the Harvard University Center for the Environment.

Jennifer Hochschild spoke on the political aspects of mitigating existential risks, describing how scientific and technological advances have now become partisan matters. More than simply seeing the issue as one of right or left, she argued that for politicians the immediate and the parochial will always outweigh the distant and the global. “The right policy is the one that gets 50 percent plus one of the vote,” she commented, regardless of political orientation.

CSER features on The Science Network

The Science Network has uploaded a discussion on existential risk. Alongside CSER founder Martin Rees, it features professor of supercomputing Larry Smarr and neuroscientists Roger Bingham and Terrence Sejnowski, who discuss some of the philosophy behind CSER. It’s a good example of how eminent scientists from diverse fields are now assembling to discuss the importance of risks associated with accelerating technological change. Here are some of their transcribed comments.

Terrence Sejnowski: What we’re striving at is big data and the fact that we can now use computers that are so powerful to sift through all of the data that the world has produced and the internet and to extract from that regularities and knowledge, and that’s called learning in the case of biological systems. We’re taking in a huge amount of data. Every second is like a fire-hose and we’re sifting through that very efficiently and pick out the little bits of data and incorporate that into the very architecture and the very hardware of our brain and how that is done is something that neuroscientists today are just starting to get to the mechanisms underlying memory, where in the brain and how that’s done biochemically but once we’ve understood that, once we know how the brain does it, it’s going to be a matter of just a few years before that’s incorporated into the latest computer algorithms and already something called Deep Learning has taken over the searches at Google for example for speech recognition and object recognition and images. And I’m told that Google Maps is now using that to help you navigate in terms of where you are – what street you’re on. It’s already happening, and Larry’s right about that. It reminds me of a cartoon I once saw, A Disney movie about the sorcerer’s apprentice, and this was about an apprentice who was tasked with cleaning the sorcerer’s apartment. And he managed to get a magic wand which would create a broom that would help him. Unfortunately, he didn’t know how to turn it off, and so it kept doubling, until the room was flooded. And the problem is that we’re like the sorcerer’s apprentice in the sense that we’re creating the technology but at some point we will lose track of how to control it, or [it could] fall into dangerous hands. But I think the real immediate threat is just not knowing what the consequences are. 
Unintended consequences that are already happening that we’re beginning to glimmer for example the privacy issue of who has access to your data. That’s something that still hasn’t been settled yet. The problem is that it takes time to explore and settle these issues and make policies and it’s happening too quickly for us to be able to do that.

Larry Smarr: I think that’s the real issue. The timescale. Everybody’s concentrating right now on the Ebola virus and its spread. And you think, we’ll come up with some way to stop it. Maybe it won’t be easily transmitted… we became aware of [HIV] in 1980, by 1990, we were spending a million dollars a year and many of the best scientists in the world started working on the problem. That’s 30 years ago. And the number of people with HIV is just now starting to peak out, 30 years later. Well, do we have 30 years with a billion dollars a year and all of the best brains on the planet to work on these projects? Do we have 30 years to get on top of some of these things, and that is what I’m concerned about, is that by the time we see some of these beginning to happen, which are, let’s just say, antisocial, either because of the inability of our government to come to any kind of decision about anything, or regulatory timescales… but what if the best minds on the planet, with the best will from society to get the solution to the problem, don’t have time enough to do it.

Martin Rees drew attention to how taking a longer perspective can affect one’s perception of these risks.

Martin Rees: As astronomers, we do bring one special perspective, which is that we’re aware of the far future. We’re not aware merely of the fact that it’s taken four billion years for us to evolve but that there are four billion years ahead of us on Earth, and so human life is not the culmination; more evolution will happen on Earth and far beyond, and so we should be concerned not just with the effects, and the downsides, that using these technologies will have on our children and grandchildren, as it were, but with the longer-term effects they have. And of course there’s a big ethical issue as to what extent you should weigh in the welfare of people not yet born. Some people say that you should take that into account, and if that’s the case, then that’s an extra motive for worrying about all these long-term things like climate change, which will have their worst effects more than a century from now. If you’re a standard economist and you discount the future in the standard way, then anything more than 50 years into the future is discounted to zero, and I think that we ought to ask ‘Is that the right way to think about future generations?’ You could perhaps instead have a different principle: that we should not discriminate on the grounds of date of birth. We should value the welfare of a newborn baby as much as that of someone who is middle-aged. And if we take that line instead of straight economic criteria, we would surely be motivated to do more now to minimise these potential long-term threats.
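Rees’s point about standard discounting can be made concrete with a small calculation. A minimal sketch (an editor’s illustration; the 5% annual rate is an assumed example, not a figure from the discussion):

```python
def discount_factor(rate: float, years: float) -> float:
    """Present value today of one unit of welfare received `years` from now,
    under standard exponential discounting at the given annual rate."""
    return 1.0 / (1.0 + rate) ** years

# At an assumed 5% annual rate, far-future welfare shrinks rapidly:
for years in (10, 50, 100, 200):
    print(f"{years:4d} years: {discount_factor(0.05, years):.4f}")
```

At 5%, welfare 50 years out is valued at less than a tenth of present welfare, and welfare two centuries out at effectively zero — the behaviour Rees is questioning when he proposes not discriminating on grounds of date of birth.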

The full discussion is available here.

5th IPCC Synthesis Report released

The synthesis report from the IPCC’s fifth assessment cycle was released on Saturday.

This document is a collective assessment of climate change, subject to agreement by representatives of the IPCC’s 195 member governments. It reports that anthropogenic climate change poses substantial global risks, stressing the importance of keeping warming below 2 degrees Celsius. It states that the cost of intervention will rise the longer intervention is delayed, and advocates an integrated individual, governmental and corporate response, which may include changes to water, energy and land use, as well as renewable energy and carbon sequestration technologies. The synthesis report is available here.

Elon Musk warns of existential risk from AI

This past week, Elon Musk, CEO of Tesla and SpaceX, warned an audience of MIT students of the risks from artificial intelligence.

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence.

I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”

Musk has previously made headlines for explaining that his investments in Vicarious were intended to keep an eye on artificial intelligence risks, and for tweeting about Nick Bostrom’s book Superintelligence. He joined the advisory board of CSER several months ago.

Musk’s recent remarks on artificial intelligence and the remainder of his talk can be viewed here.

Margaret Boden to be interviewed on AI this week

This Tuesday, Professor Margaret Boden, an advisor to CSER, will be interviewed on The Life Scientific on the topic of artificial intelligence. At 9am on BBC Radio 4, she will discuss the potential of artificial intelligence, as well as the potential insights of a computational approach to understanding the mind. If you can’t catch the segment, it will be made available online shortly after broadcast.

CSER and German Government organise workshop on extreme technological risks

The Centre for the Study of Existential Risk is delighted to partner with the German government in organising a high-level workshop on existential and extreme technological risks, to take place on Friday September 19th.  The meeting will bring together leading German and UK research networks to focus on emerging technological threats, and will be hosted by the German Federal Foreign Office, together with the Ministry of Science and Education and the Ministry of Defence.

Ten of CSER’s leading academics and advisors will take part and present: Lord Martin Rees, Professor Huw Price, Professor William Sutherland, Professor Susan Owens, Mr Jaan Tallinn, Professor Nick Bostrom, Professor Stuart Russell, Professor Tim Lewens, Dr Anders Sandberg and Dr Sean O hEigeartaigh. They will be joined by leading experts from a range of Germany’s research networks, including the Max Planck Society, the Robert Koch Institute, the Center for Artificial Intelligence, the Fraunhofer Institute and the Helmholtz Association, as well as German universities. Also attending will be members of a range of German governmental departments, the UK’s Foreign and Commonwealth Office, and senior representatives of the Volkswagen Foundation.

Topics to be discussed will include approaches for analysing high impact low probability risks from technology, horizon-scanning and foresight methods, policy challenges, and areas of potential synergy or collaboration between research networks. Specific sciences/technologies to be discussed include artificial intelligence, emerging capabilities in biotechnology, and pathogen research.

CSER is very grateful for the support of the German government, and the Federal Foreign Office in particular, in organising and funding the event and the travel of German participants, and for helping to bring this level of expertise to bear on questions of global importance. CSER is also extremely grateful for the financial support of cryptographer and software engineer Paul Crowley, who funded flights and accommodation for CSER academics, and without whose support the workshop could not have taken place.