Livestream of the open lecture with Prof Paul Ehrlich.

There will be a livestream of the open lecture at the Babbage Lecture Theatre with Prof Paul Ehrlich, starting at 16:00 today, 9 May 2016.
To watch the livestream, please click on the image below.


The video will be made available later on our YouTube channel.

CSER Lent Newsletter 2016

CSER had another busy term. We now have a growing team of brilliant postdoctoral researchers from across a range of disciplinary backgrounds, including governance, law, biotechnology, mathematics, philosophy, physics and ecology, and are establishing collaborations within and outside Cambridge on our key risk areas. The team is already hard at work on high-impact research projects, workshops, public lectures and seminars, and it promises to be a tremendous year. To read our latest newsletter click here.
To get regular updates about our work, please sign up to our newsletter.

Partha Dasgupta Awarded Tyler Prize for Environmental Achievement

We are delighted to report that Partha Dasgupta, Frank Ramsey Professor Emeritus of Economics at the University of Cambridge, Chair of CSER’s Management Committee, and an intellectual lead within its projects, is this year’s recipient of the Tyler Prize for Environmental Achievement.

The award is in recognition of his contributions to the field of environmental economics, and particularly his “pioneering work… in establishing new paradigms at the nexus of society and sustainable development, and his continuing commitment to problems of population and poverty, loss of biodiversity and conservation”.

CSER is co-organising: FUKUSHIMA – Five Years On

The 2011 Fukushima Nuclear Accident constitutes a technological accident, a humanitarian disaster, and the largest civil liability case in legal history. In light of a recent nuclear renaissance and ambitious energy transitions, and aiming to identify concrete policy recommendations regarding the prevention, mitigation, and compensation of future accidents, this international workshop critically addresses the legal challenges and necessary policy lessons from Fukushima. It will be the first to cover all three legal dimensions of the disaster – in public, private, and criminal law – and will comprise contributions by pioneering international experts in fields including the regulation of risk, crisis management, nuclear safety, disaster resilience, environmental and energy law, and dispute resolution for victims. Further contributions will treat recent developments in Japan, such as the first judgments awarding compensation for the deaths of evacuees by suicide and the criminal trial of TEPCO executives for negligent manslaughter.

This two-day international workshop must be booked in advance; you may register for either or both days:

  • Friday, 4 March evening KEYNOTE: Nuclear Power and the Mob: Extortion and Social Capital in Japan
  • Saturday, 5 March WORKSHOP: Fukushima Five Years on – Legal Fallout in Japan, Lessons for the EU

Registration via Eventbrite

Event details:



Friday, 4 March 2016 at 17:30 – Saturday, 5 March 2016 at 18:15 (GMT)


Darwin College – Silver Street Cambridge, England CB3 9EU GB

A distraction or an essential discussion? Confronting extreme environmental risks.

An expert panel will explore different perspectives on risk in the face of uncertainties, unknowns, and the possibilities of extreme outcomes. This event is being co-hosted by the Cambridge Forum for Sustainability and the Environment (CFSE) and the Centre for the Study of Existential Risk (CSER). Click here for more information.

Monday 7 March: 7:30pm – 8:30pm

Mill Lane Lecture Rooms, 8 Mill Lane, CB2 1RW

Blavatnik Public Lecture Series – Prof. Charles Kennel and Prof. Stephen Briggs

Date: 26 February

Time: 16:00-18:00

Location: Seminar room, 1st floor, David Attenborough Building, Cambridge.

Lecture Title: Planetary Vital Signs, Planetary Decisions, Planetary Intelligence.

Book your ticket


Doesn’t the world need to look beyond global temperature to a set of planetary vital signs? When all indicators of change are fragile, you should not rely on one; you risk over-focusing policy on it. You look at a number of different indicators and ask whether they all point in the same general direction. You look at the balance of evidence.

A coalition of scientists and policy makers should start work at once: some vital signs should be ready when the Paris Agreement enters into force in 2020, or it will be hard to introduce any into policy processes later.

But vital signs are only the beginning. They are not indicators of risk to the things people care about. And the world needs to learn how to use the vast knowledge we will be acquiring about climate change and its impacts.

Is it possible to use the tools at hand (observations from space and ground networks; demographic, economic and societal measures; big-data statistical techniques; and numerical models) to inform politicians, managers, and the public of the evolving risks of climate change at global, regional, and local scales?

Should we not think in advance of an always-on social and information network that provides decision-ready knowledge to those who hold the responsibility to act, wherever they are, at times of their choosing? Shouldn’t we prepare the social infrastructure (policies, governance, institutions, financing) needed to knit climate knowledge and action together?

Professor Kennel will be joined by Professor Stephen Briggs who will talk about planetary vital signs.

About the speakers:

Charles F. Kennel is Distinguished Professor, Vice-Chancellor, and Director emeritus at the Scripps Institution of Oceanography at the University of California. He was educated in astronomy and astrophysics at Harvard and Princeton. He served as UCLA’s Executive Vice Chancellor, its chief academic officer, from 1996 to 1998. From 1994 to 1996, Kennel was Associate Administrator at NASA and Director of Mission to Planet Earth, a global Earth science satellite program. Kennel’s experiences at NASA influenced him to go into Earth and climate science, and he became the ninth Director and Dean of the Scripps Institution of Oceanography and Vice Chancellor of Marine Sciences at the University of California, San Diego, serving from 1998 to 2006.

Stephen Briggs is currently a senior advisor to the European Space Agency (ESA) and chair of the UN Global Climate Observing System. He headed ESA’s Department of Earth Observation (EO) Science, Applications & Future Technologies at ESRIN (the European Space Research Institute). Before joining ESA in 2000, Stephen was Director of Earth Observation at the British National Space Centre and Head of Earth Observation at NERC, UK (1994-1999), Head of the Remote Sensing Applications Development Unit, NERC/BNSC (1986-1994), Senior Scientist at NERC Thematic Information Systems (1983-1986), and Lecturer in the Dept of Physics, Queen Mary College London (1982-1983). Stephen Briggs is also a Visiting Professor in the Dept of Meteorology, Reading University.

CSER events this week – all welcome!

There are two great CSER-related events this week in Cambridge.
On Friday: Kay Firth-Butterfield, who leads Lucid AI’s ethical advisory panel, will be speaking about safe and beneficial development of AI, and its relevance to global challenges, for CSER’s public lecture at 4pm at the Winstanley Lecture Theatre, Trinity College. A great opportunity to get an industry perspective on “AI for the good of the many, not the few”. Attendance is free, but please register.

Bio: Kay Firth-Butterfield has worked as a barrister, mediator, arbitrator, professor and judge in the United Kingdom. She is a humanitarian with a strong sense of social justice. She holds advanced degrees in Law and International Relations, with a focus on the ramifications of pervasive artificial intelligence. After moving to the US, she taught at university level before becoming Chief Officer of the Lucid Ethics Advisory Panel, which she envisioned with the company’s CEO and is in the process of creating. She also teaches a course at the University of Texas Law School on law and policy regarding AI and other emerging technologies.

Book tickets here.

On Sunday, Kay, Dr Fumiya Iida and Seán Ó hÉigeartaigh will be speaking on challenges and policy related to long-term AI as part of the Wilberforce Society’s excellent conference on AI and automation.

Please spread the word!


Leverhulme Centre for the Future of Intelligence

CSER is delighted to announce that a new centre on the future of artificial intelligence will be established thanks to the generosity of the Leverhulme Trust. The Centre proposal was developed at CSER and CRASSH, but it will be a stand-alone centre, albeit one collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration led by Nick Bostrom and Seán Ó hÉigeartaigh, recently funded by the Future of Life Institute’s AI safety grants program).


Human-level intelligence is familiar in biological “hardware” – it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term. The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions. Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

Cambridge University’s press release