Videos from our 2016 Conference now online

Videos of the keynote lectures from the 2016 Cambridge Conference on Catastrophic Risk are now available.


Claire Craig – Extreme risk management in the policy environment

Rowan Douglas – Opening Session Part 2


Jo Husbands – Lessons from Efforts to Mitigate the Risks of “Dual Use” Research

Sam Weiss Evans – Words Of Caution On Making Objects Of Security Concern

Zabta K. Shinwari – Young Researchers & Responsible Conduct of Science: Successes and failures


Hawking on existential risk, inequality, and humility

For me, the really concerning aspect of this is that now, more than at any time in our history, our species needs to work together. We face awesome environmental challenges: climate change, food production, overpopulation, the decimation of other species, epidemic disease, acidification of the oceans.

Together, they are a reminder that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it.

In a Guardian article, CSER adviser Stephen Hawking calls for elites to learn “a measure of humility” and writes eloquently about the emerging risks that threaten our continued existence as a species.

CSER autumn update

A quick update on our recent activities – it’s been a busy but remarkably successful year for us.

1) From our first postdoc starting in September of last year, we’ve built up to a team of eight postdocs from across fields. Our team now consists of Shahar Avin (currently working on a classification framework for global catastrophic risk scenarios), Yang Liu (decision theory for advanced AI), Bonnie Wintle (horizon-scanning for risk), Catherine Rhodes (biorisk, biotech governance, and academic project management), Julius Weitzdorfer (law, governance, and catastrophic risk), Simon Beard (population ethics, future generations, and alternatives to cost-benefit analysis), Adrian Currie (extreme risk and the culture of science), Tatsuya Amano (ecological risks and tipping points) as well as Huw Price, Seán Ó hÉigeartaigh and Jens Steffensen (CSER’s new administrator).

We are delighted to announce that we will be joined in the spring by synthetic biologist Lalitha Sundaram, who will work with us on bio-threats. More on our team can be found on our website.

2) Our postdocs have already begun submitting their first papers, and have been hard at work organising workshops on key topics relating to emerging risks. Recent workshops have included:

– Population, ethics and risk (in collaboration with Cumberland Lodge).

– Gene drives: regulatory, legal and ethical issues (with the Synthetic Biology Strategic Research Initiative, the Centre for Science and Policy, and the Centre for Law, Medicine and the Life Sciences).

– Data Analytics for sustainability and environmental risk (in collaboration with the British Antarctic Survey, the Cambridge Forum for Sustainability and the Environment, and Google DeepMind).

– Horizon-scanning for advances and risks related to biological engineering.

We will next be co-organising a workshop with the Centre for Risk Studies and the UK Cabinet Office to share knowledge on resilience planning and risk assessment, and will be supporting the organisation of several workshops and symposia on legal, regulatory and risk challenges relating to AI at this year’s Neural Information Processing Systems (NIPS) conference.

We are currently preparing for our first Cambridge Conference on Catastrophic Risk (December 12-14), to which you would all be welcome.

3) Our public lecture series, generously supported this year by the Blavatnik Foundation, continues to be a great success. Highlights have included talks by Paul Ehrlich (long-term environmental risks) and Hilary Greaves (population ethics and existential risk). The talks are available online.

4) Our other major accomplishment has been our role in the development of a sister centre, the Centre for the Future of Intelligence (CFI), which launched last month and is currently recruiting postdocs to work on a range of topics related to the future of artificial intelligence. Funded by the Leverhulme Trust and formed as a partnership between Cambridge, Oxford, Berkeley and Imperial, CFI is collaborating with CSER on AI-related research topics.

For more information, see our CSER Summer report.

Lecture on Sculpting Evolution by Dr Kevin Esvelt

18 October 2016

Biologists can now design genetic systems that engineer evolution in powerful ways, with social, legal, ethical and environmental implications for our future. Mosquito populations can already be engineered using cutting-edge techniques to drastically reduce their numbers or make them resistant to transmitting diseases like malaria, dengue or the emerging Zika virus.

Synthetic biologist Dr Kevin Esvelt (MIT Media Lab) introduced his work on gene drive systems, which rapidly spread malaria resistance within populations, while Professor Luke Alphey (Pirbright Institute) discussed his work founding Oxitec, a UK company that was the first to release genetically modified male mosquitoes whose offspring fail to reproduce, leading to dramatic reductions in numbers.

What safeguards and regulations are required to ensure responsible use of such technologies? What does it mean for humans to use nature’s tools in this way? How do we balance the direct benefits for global health with any risks to our shared environment?

This event was co-organised by the Centre for the Study of Existential Risk and the Cambridge SynBio Forum.

Partnership on AI

The Centre for the Study of Existential Risk strongly welcomes the recently announced Partnership on AI to Benefit People and Society (current partners: DeepMind/Google, Facebook, Microsoft, Amazon, IBM). Increasingly powerful AI systems are being used in an ever-wider range of real-world settings. This offers wonderful opportunities for helping us with many global challenges – for example, DeepMind have recently developed tools to aid doctors in the NHS, and have massively improved the energy efficiency of Google’s servers with a version of DQN (the Atari-beating algorithm), which has very beneficial implications for climate change. Similarly, Microsoft Research are making great progress on applying AI to cancer diagnosis and prevention.

However, the widespread use and further development of these systems will also throw up challenges – including fairness and potential biases in algorithms or the data from which they generate their results; our ability to understand how these algorithms function, and the settings in which they may not perform as well; and the impact of AI on job markets. In the longer term, AI is set to be such a transformative technology that it is prudent to think carefully about its safe development, the potential impacts and risks of long-term advances, and the global challenges to which it can be applied beneficially. These challenges will require deep interdisciplinary and cross-sector collaboration between technology research leaders, scholars across disciplines, and policymakers who seek to stay up to date with a rapidly progressing technology. Cambridge is taking a leading role in these discussions; in addition to CSER’s work, research leaders in Cambridge’s machine learning group have been organising workshops at the major machine learning conferences on the societal impacts of AI, the legal and policy challenges that AI will present, and the technical design of AI systems so as to be reliable ‘in the wild’. Cambridge has also recently partnered with Oxford, Berkeley and Imperial on a new centre to study the long-term opportunities and challenges of AI, supported by the Leverhulme Trust – the Centre for the Future of Intelligence.

The research leaders in companies such as DeepMind, Facebook, Microsoft, Amazon and IBM are among the best placed to think in a long-term manner about these issues, given their deep understanding of the current state of the art, their unique insights into where the field will be in ten years’ time, and the ways in which their advances will change the world. They also have a unique opportunity to play a guiding role, in collaboration with others. This partnership is a tremendously positive step, and demonstrates laudable responsibility and leadership from the companies involved. We strongly welcome it, and look forward to opportunities to collaborate on many of the research issues the Partnership highlights.

Seán Ó hÉigeartaigh,

Executive Director, Centre for the Study of Existential Risk

Thinking the Unthinkable

Thanks to the large and thoughtful audience who joined our fascinating discussion with Nik Gowing, Chris Langdon and Sir Peter Gershon on Friday 5 August. Please note that the Thinking the Unthinkable report, and Nik and Chris’s excellent recent World Today article about their project, are both available for download.

CSER Blavatnik Public Lecture on Feeding Everyone No Matter What by Dr David Denkenberger

CSER is pleased to welcome Dr David Denkenberger for this Blavatnik Public Lecture.
2 September 2016, 14:00 – 15:30
Room 1.25, David Attenborough Building, Pembroke Street, Cambridge CB2 3QZ


A large asteroid or comet impact, super-volcanic eruption, or full-scale nuclear war could cause a ~100% global agricultural shortfall; together these have a probability of ~10% this century. We have proposed solutions that could feed everyone without the sun, such as growing mushrooms on dead trees. Abrupt climate change, coincident extreme weather, a volcanic eruption like the one that caused the “year without a summer” in 1816, regional nuclear war, complete loss of bees, or a medium-sized comet/asteroid impact could cause a ~10% global agricultural shortfall; together these have a probability of ~80% this century.

We have proposed solutions that would mitigate the resulting rise in food prices, such as relocating animals to farm fields so they can consume agricultural residues. A number of risks could cause widespread electrical failure, including a series of high-altitude electromagnetic pulses (HEMPs) caused by nuclear weapons, an extreme solar storm, and a coordinated cyber attack. Since modern industry depends on electricity, industry and machines would likely cease to function in these scenarios. We have proposed solutions for the food (e.g. burning wood from landfills for fertilizer) and non-food (such as retrofitting ships to be wind powered) requirements of everyone. These alternative food solutions require only low-cost preparatory research and planning (unlike storing food), and are therefore cost-effective ways of saving expected lives and reducing the chance of a loss of civilization from which humanity may not recover.

Speaker Biography:

Dr David Denkenberger received his B.S. from Penn State in Engineering Science, his M.S.E. from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on his patent-pending expanded microchannel heat exchanger. He is an assistant professor in architectural engineering at Tennessee State University, and an associate at the Global Catastrophic Risk Institute. He received the National Merit Scholarship, the Barry Goldwater Scholarship and the National Science Foundation Graduate Research Fellowship, and is a Penn State distinguished alumnus. He has authored or co-authored over 50 publications, including the book Feeding Everyone No Matter What: Managing Food Security after Global Catastrophe, and has given over 80 technical presentations.

CSER Blavatnik Public Lecture on The Rise of Data Religion by Professor Yuval Harari

A Blavatnik public lecture with Professor Yuval Harari on September 6, 2016. Response by Professor Andrew Briggs and panel discussion chaired by Lord Martin Rees.


The most interesting place in the world from a religious perspective is not the Middle East, but rather Silicon Valley. That is where the new religions of the twenty-first century are being created. Particularly important is the Data Religion, which promises humans all the traditional religious prizes – happiness, peace, prosperity, and even eternal life – but here on earth with the help of data-processing technology, rather than after death with the help of supernatural beings.

Data Religion believes that the entire universe is a flow of data, that organisms are algorithms, and that humanity’s cosmic vocation is to create an all-encompassing data-processing system – and then merge into it. On the practical level Dataists believe that given enough biometric data and enough computing power, you could create an external algorithm that will understand us humans much better than we understand ourselves. Once this happens, authority will shift from humans to algorithms and humanist practices such as democratic elections and free markets will become as obsolete as rain dances and flint knives.

More detail here.

Climate Justice & Disaster Law book launch

Climate disasters demand an integration of multilateral negotiations on climate change, disaster risk reduction, sustainable development, human rights and human security. Via detailed examination of recent law and policy initiatives from around the world, and making use of a Capability Approach, Rosemary Lyster develops a unique approach to human and non-human climate justice and its application to all stages of a disaster: prevention; response, recovery and rebuilding; and compensation and risk transfer. She comprehensively analyses the complexities of climate science and their interfaces with the law- and policy-making processes, and also provides an in-depth analysis of multilateral climate change negotiations, dating from the establishment of the 1992 United Nations Framework Convention on Climate Change (UNFCCC) to the Twenty-First Conference of the Parties in Paris in December 2015.

Professor Lyster will give an introduction to her book, followed by discussion by Dr Julius Weizdoerfer, Cambridge Centre for the Study of Existential Risk (CSER) and Dr Leslie-Anne Duvic Paoli, Cambridge Centre for Environment, Energy and Natural Resources Governance (C-EENRG).

When: 9 June, 12:30PM.
Where: Lauterpacht Centre for International Law, 5 Cranmer Road, CB3 9BL Cambridge.

The event will be preceded by a lunch reception, kindly sponsored by Cambridge University Press.
All are welcome to attend, but please RSVP via Eventbrite so we have numbers for catering.

Rescheduled – Blavatnik Public Lecture, Hilary Greaves

We are delighted that Prof Hilary Greaves has agreed to reschedule her talk, which was initially planned to take place on 29 April.

The new date of the lecture is: 10 June 2016. The location and the time remain the same.

Location: Hopkinson Lecture Theatre, 1st Floor, Phoenix Building, New Museums Site.

Time: 4-5pm followed by Q&A.

Click here for further info.