Martin Rees speaks on Existential Risks at the Kennedy School of Government

This past week, CSER co-founder and emeritus Professor of Cosmology and Astrophysics at Cambridge, Lord Rees, spoke at a panel on Catastrophic Risks, organised by the Harvard Kennedy School’s Program on Science, Technology and Society in Cambridge, MA.

Rees and the other panellists discussed risks ranging from artificial intelligence to climate change.

“We just don’t know the boundary of what may happen and what will remain science fiction,” Rees commented, arguing that “existential crises deserve more attention even though they are unlikely.” “The unfamiliar is not the same as the impossible,” he reminded the audience of students and academics.

The panel was chaired by Sheila Jasanoff, Pforzheimer Professor of Science and Technology Studies at Harvard, and also featured Sven Beckert, the Laird Bell Professor of History; George Daley of Children’s Hospital Boston and Harvard Medical School; Jennifer Hochschild, the Henry LaBarre Jayne Professor of Government; and Daniel Schrag, Director of the Harvard University Center for the Environment.

Jennifer Hochschild spoke on the political dimensions of mitigating existential risks, describing how scientific and technological advances have become partisan matters. But rather than framing the issue as one of right versus left, she argued that for politicians the immediate and the parochial will always outweigh the distant and the global, regardless of political orientation. “The right policy is the one that gets 50 percent plus one of the vote,” she commented.

CSER features on The Science Network

The Science Network has uploaded a discussion on existential risk featuring CSER co-founder Martin Rees alongside professor of supercomputing Larry Smarr and neuroscientists Roger Bingham and Terrence Sejnowski. They discuss some of the philosophy behind CSER, and the conversation is a good example of how eminent scientists from diverse fields are now assembling to discuss the risks associated with accelerating technological change. Here are some of their transcribed comments.

Terrence Sejnowski: What we’re driving at is big data: the fact that we can now use computers powerful enough to sift through all of the data that the world and the internet have produced, and to extract regularities and knowledge from it. In biological systems, that’s called learning. We take in a huge amount of data; every second is like a fire-hose, and we sift through it very efficiently, pick out the important bits, and incorporate them into the very architecture and hardware of our brains. How that is done is something neuroscientists today are only starting to get at: the mechanisms underlying memory, where in the brain it happens and how it works biochemically. But once we’ve understood that, once we know how the brain does it, it will be a matter of just a few years before it is incorporated into the latest computer algorithms. Already, something called deep learning has taken over search at Google, for example, and speech recognition and object recognition in images. I’m told that Google Maps now uses it to help you navigate, to tell where you are and what street you’re on. It’s already happening, and Larry’s right about that.

It reminds me of a cartoon I once saw, a Disney film about the sorcerer’s apprentice, who was tasked with cleaning the sorcerer’s workshop. He managed to get hold of a magic wand and conjured a broom to help him. Unfortunately, he didn’t know how to turn it off, and so it kept doubling until the room was flooded. The problem is that we’re like the sorcerer’s apprentice, in the sense that we’re creating the technology, but at some point we may lose track of how to control it, or it could fall into dangerous hands. But I think the real immediate threat is simply not knowing what the consequences are. There are unintended consequences already happening that we’re only beginning to glimpse, for example the privacy issue of who has access to your data. That still hasn’t been settled. And the problem is that it takes time to explore and settle these issues and make policies, and it’s all happening too quickly for us to do that.

Larry Smarr: I think that’s the real issue: the timescale. Everybody’s concentrating right now on the Ebola virus and its spread, and you think, we’ll come up with some way to stop it; maybe it won’t be easily transmitted… We became aware of [HIV] in 1980; by 1990 we were spending a million dollars a year, and many of the best scientists in the world had started working on the problem. That’s 30 years ago, and the number of people with HIV is only now starting to peak, 30 years later. Well, do we have 30 years, with a billion dollars a year and all of the best brains on the planet, to work on these projects? Do we have 30 years to get on top of some of these things? That is what I’m concerned about: that by the time we see some of these things beginning to happen, things which are, let’s just say, antisocial, whether because of the inability of our government to come to any kind of decision about anything, or because of regulatory timescales… what if the best minds on the planet, with the best will from society to solve the problem, don’t have enough time to do it?

Martin Rees drew attention to how taking a longer perspective can affect one’s perception of these risks.

Martin Rees: As astronomers, we do bring one special perspective, which is that we’re aware of the far future. We’re aware not merely that it has taken four billion years for us to evolve, but that there are four billion years ahead of us on Earth. Human life is not the culmination; more evolution will happen, on Earth and far beyond. So we should be concerned not just with the effects and downsides that these technologies will have on our children and grandchildren, as it were, but with their longer-term effects. And of course there’s a big ethical issue about the extent to which you should weigh the welfare of people not yet born. Some people say you should take that into account, and if so, that’s an extra motive for worrying about long-term things like climate change, which will have their worst effects more than a century from now.

If you’re a standard economist and you discount the future in the standard way, then anything more than 50 years into the future is discounted to zero, and I think we ought to ask, ‘Is that the right way to think about future generations?’ You could instead adopt a different principle: that we should not discriminate on the grounds of date of birth. We should value the welfare of a newborn baby as much as that of someone who is middle-aged. And if we take that line instead of straight economic criteria, we would surely be motivated to do more now to minimise these potential long-term threats.
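As a rough illustration of the “standard discounting” Rees refers to, here is a minimal sketch of ours (not from the discussion) of conventional exponential discounting; the 5% annual rate is an assumption chosen purely for illustration:

    # Conventional exponential discounting: the present value of one unit
    # of welfare received `years` from now, at a fixed annual rate.
    # The 5% rate used below is an illustrative assumption, not a figure
    # from the discussion.

    def discount_factor(rate: float, years: float) -> float:
        return 1.0 / (1.0 + rate) ** years

    for years in (10, 50, 100, 200):
        print(f"{years:>3} years out: {discount_factor(0.05, years):.4f}")

    # Expected output:
    #  10 years out: 0.6139
    #  50 years out: 0.0872
    # 100 years out: 0.0076
    # 200 years out: 0.0001

On this arithmetic, a harm a century away counts for less than one percent of the same harm today, which is the practice Rees’s principle of not discriminating by date of birth would replace.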

The full discussion is available here.

IPCC Fifth Assessment Synthesis Report released

The Synthesis Report of the IPCC’s Fifth Assessment Report (AR5) was released on Saturday.

This document is a collective assessment of climate change, agreed by representatives of the IPCC’s 195 member governments. It reports that anthropogenic climate change poses substantial global risks, stressing the importance of keeping warming below 2 degrees Celsius. It states that the cost of intervention will rise the longer intervention is delayed, and advocates an integrated individual, governmental and corporate response, which may include changes to water, energy and land use, as well as renewable energy and carbon sequestration technologies. The Synthesis Report is available here.

Elon Musk warns of existential risk from AI

This past week, Elon Musk, CEO of Tesla and SpaceX, warned an audience of MIT students of the risks from artificial intelligence.

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.”

Musk has previously made headlines for explaining that his investments in Vicarious were intended to keep an eye on artificial intelligence risks, and for tweeting about Nick Bostrom’s book Superintelligence. He joined the advisory board of CSER several months ago.

Musk’s recent remarks on artificial intelligence and the remainder of his talk can be viewed here.

Margaret Boden to be interviewed on AI this week

This Tuesday, Professor Margaret Boden, an advisor to CSER, will be interviewed on The Life Scientific on the topic of artificial intelligence. At 9am on BBC Radio 4, she will discuss the potential of artificial intelligence, as well as the insight a computational approach can offer into understanding the mind. If you can’t catch the segment, it will be made available online shortly after broadcast.

CSER and German Government organise workshop on extreme technological risks

The Centre for the Study of Existential Risk is delighted to partner with the German government in organising a high-level workshop on existential and extreme technological risks, to take place on Friday, September 19th. The meeting will bring together leading German and UK research networks to focus on emerging technological threats, and will be hosted by the German Federal Foreign Office, together with the Federal Ministry of Education and Research and the Ministry of Defence.

Ten of CSER’s leading academics and advisors will take part and present: Lord Martin Rees, Professor Huw Price, Professor William Sutherland, Professor Susan Owens, Mr Jaan Tallinn, Professor Nick Bostrom, Professor Stuart Russell, Professor Tim Lewens, Dr Anders Sandberg and Dr Seán Ó hÉigeartaigh. They will be joined by leading experts from a range of Germany’s research networks, including the Max Planck Society, the Robert Koch Institute, the Center for Artificial Intelligence, the Fraunhofer Institute and the Helmholtz Association, as well as German universities. Also attending will be members of a range of German government departments, the UK’s Foreign and Commonwealth Office, and senior representatives of the Volkswagen Foundation.

Topics to be discussed will include approaches for analysing high-impact, low-probability technological risks, horizon-scanning and foresight methods, policy challenges, and areas of potential synergy or collaboration between research networks. Specific sciences and technologies to be discussed include artificial intelligence, emerging capabilities in biotechnology, and pathogen research.

CSER is very grateful for the support of the German government, and the Federal Foreign Office in particular, in organising and funding the event and the travel of German participants, and for helping to bring this level of expertise to bear on questions of global importance. CSER is also extremely grateful for the financial support of cryptographer and software engineer Paul Crowley, who funded flights and accommodation for CSER academics, and without whose support the workshop could not have taken place.

CSER co-founders Price and Tallinn at the Festival of Dangerous Ideas

Today and tomorrow, CSER co-founders Huw Price and Jaan Tallinn will present a series of dangerous ideas: that our continued survival and flourishing as a species is in our hands, that we may be living through the most dangerous century in Earth’s history, and that the responsibility we owe to future generations is far greater than we may realise. They will speak to an audience of thousands at the Sydney Opera House in Australia.

For those in the wrong hemisphere, talks and discussions will be available online; CSER will post links:

http://fodi.sydneyoperahouse.com/events/end-of-the-world

http://fodi.sydneyoperahouse.com/events/we-are-risking-our-existence


CSER welcomes Professor Chris Lowe to advisory board

CSER is happy to welcome Professor Chris Lowe to our advisory board. As Professor of Biotechnology and Director of the Institute of Biotechnology at the University of Cambridge, he is perfectly suited to provide guidance on emerging risks from advanced biotechnologies.