Learn more about CSER

We are grateful to the Future of Life Institute and writer Sophie Hebdon for an excellent piece about our work at CSER.

Ever since our ancestors discovered how to make sharp stones more than two and a half million years ago, our mastery of tools has driven our success as a species. But as our tools become more powerful, we could be putting ourselves at risk should they fall into the wrong hands, or if humanity loses control of them altogether. Concerned about bioengineered viruses, unchecked climate change, and runaway artificial intelligence? These are the challenges the Centre for the Study of Existential Risk (CSER) was founded to grapple with.

At its heart, CSER is about ethics and the value you put on the lives of future, unborn people. If we feel any responsibility to the billions of people in future generations, then a key concern is ensuring that there are future generations at all… Read more

The in-depth article contains updates on some of our recent events and interviews with directors Seán Ó hEigeartaigh, Huw Price and Martin Rees, and can be read in full at the FLI website.

Jaan Tallinn on thinking from first principles

Jaan Tallinn, one of CSER’s three cofounders, has recently given an interview for John Brockman’s Edge.org. In this interview, Jaan discusses the difficulty of thinking clearly about existential risks:

Elon Musk said at his interview at the TED conference a couple of years ago, that there are two kinds of thinking. All of humanity, most of the time, engages in what you call metaphorical thinking, or analog-based thinking. They bring in metaphors from different domains and then apply them to a domain that they want to analyze, which is like things that they do intuitively. It’s quick, cheap, but it’s imprecise. The other kind of thinking is that you reason from first principles. It’s slow, painful, and most people don’t do it, but reasoning from first principles is really the only way we can deal with unforeseen things in a sufficiently rigorous manner. For example, sending a man to the moon, or creating a rocket. If it hasn’t been done before, we can’t just use our knowledge. We can’t just think about “how would I behave if I were a rocket” and then go from there. You have to do the calculations. The thing with existential risks is it’s the same. It’s hard to reason about them, these things that have never happened. But they’re incredibly important, and you have to engage in this slow and laborious process of listening to the arguments and not pattern-matching them to things that you think might be relevant.

In relation to risks from artificial intelligence, which have long been an object of his attention, Jaan draws a conciliatory line with AI researchers, while arguing that more safety research is needed:

More generally, everyone who is on a causal path of new technologies being developed is in some way responsible for making sure that the new technologies that are brought into existence as a result of their efforts are beneficial in the long term for humanity.

I would say that I don’t have any favorites, or any particular techniques within the domain of AI that I’m particularly worried about. First of all, I’m much more calm about these things. Perhaps by virtue of just having longer exposure to AI companies and people who develop AI. I know that they are well-meaning and people with good integrity.

Personally, I think the biggest research that we need to advance is how to analyze the consequences of bringing about very competent decision-making systems to always ensure that we have some degree of control over them, and we won’t just end up in a situation where this thing is loose and there’s nothing we can do now.

You can read the full interview, or watch the video here.

Bill Gates and Elon Musk on the “Huge Challenge” of Artificial Intelligence

In a recent interview with Baidu CEO Robin Li, both Bill Gates and Elon Musk spoke of the importance of research into ensuring that artificial intelligence (AI) would be safe if AI advanced to a smarter-than-human level.

Explaining the potential risk that superintelligent AI would pose, Elon Musk suggested:

[An] analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place. Because it may be good, or it may be bad. And it could be catastrophically bad if there could be the equivalent to a nuclear meltdown. So you really want to emphasize safety.

Bill Gates added that his view of the seriousness of the risk is no different and would “highly recommend” people read Nick Bostrom’s book Superintelligence.

A video of their discussion of AI is available here, and the transcript here.

CSER Seminar April 24th: Will We Cause Our Own Extinction?

CSER’s April seminar will be on Friday 24th April, 4.00-5.30pm. Dr Toby Ord (Oxford) will present on the topic “Will we cause our own extinction? Natural versus anthropogenic extinction risks”.

Toby Ord is a Research Fellow at the Future of Humanity Institute, Oxford University & Oxford Martin School. He works on theoretical and practical questions concerning population ethics, global priorities, existential risk and new technologies, and recently contributed a report on Managing Existential Risk from Emerging Technologies to the Chief Scientific Advisor’s annual report for the UK government.

This seminar in particular should prove an excellent introduction to the risks that CSER focuses on and the importance of global prioritization to reduce existential risk.

We are grateful for the high level of interest in our seminar series so far and for Dr Ord’s talk we have moved to the larger venue Little Hall, Sidgwick Site, Cambridge University, CB3 9DA. The event is free, open to all and will be followed by a drinks reception.

Videos of previous seminars are available on the CSER YouTube channel.

April 24th is also the application deadline for our current vacancies, details here.


New research vacancies at CSER

The Centre for the Study of Existential Risk is recruiting for up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk.

We are looking for outstanding and highly committed researchers, interested in working as part of a growing research community, with research projects relevant to any aspect of the project. We invite applicants to explain their project to us, and to demonstrate their commitment to the study of extreme technological risks.

We have several shovel-ready projects for which we are looking for suitable postdoctoral researchers. These include:

  • Ethics and evaluation of extreme technological risk (ETR) (with Sir Partha Dasgupta);
  • Horizon-scanning and foresight for extreme technological risks (with Professor William Sutherland);
  • Responsible innovation and extreme technological risk (with Dr Robert Doubleday and the Centre for Science and Policy).
However, recruitment will not necessarily be limited to these subprojects; our main selection criterion is the suitability of candidates and their proposed research projects to CSER’s broad aims.

Details are available here. Closing date: April 24th.

Partha Dasgupta and Martin Rees to discuss recent Pontifical Academy seminar

This Thursday, CSER’s Partha Dasgupta and Martin Rees will give a seminar in relation to a workshop that recently took place at the Pontifical Academy of Social Sciences. This workshop, entitled Sustainable Humanity, Sustainable Nature: Our Responsibility, was co-convened by Sir Partha Dasgupta and attended by Lord Rees. It explored how we can fulfil our desire for sustained economic and technological growth in light of threats to the natural environment.

More information on the upcoming seminar is available from the Centre for Science and Policy, and the past workshop is described by the Pontifical Academy of Sciences.

New Reports on the Philosophy of Existential Risk, by FHI Oxford

The Future of Humanity Institute, Oxford University, has recently released two technical reports on the philosophy of existential risk.

The first examines the strengths and weaknesses of two existing definitions of existential risk, and suggests a new definition based on expected value. The full technical report is available to read on the FHI website.

The second, on priority-setting for work aiming to reduce existential risk, argues that, all else being equal, we should prefer to do such work earlier and should prefer to work on risks that might come early. You can read this report in full here.

Philosophy/CSER/MIRI conference: “Self-prediction in Decision Theory and Artificial Intelligence”

Cambridge’s Faculty of Philosophy will host a conference on Self-prediction in Decision Theory and Artificial Intelligence from the 13th to the 19th of May 2015. The conference is organized in conjunction with the Machine Intelligence Research Institute (MIRI) in Berkeley, CA, and the Centre for the Study of Existential Risk (Cambridge). The local organizers are Dr Arif Ahmed and Prof. Huw Price. We are most grateful for financial assistance from the Analysis Trust and the Mind Association, as well as from MIRI.

It aims to bring together speakers from philosophy and computer science to discuss the special philosophical and practical problems that arise for decision-making agents whose confidence in the upshot of their current deliberation makes a difference to the deliberative process itself. Speakers include Alan Hajek (ANU), James Joyce (Michigan), Stuart Russell (Berkeley), Katie Steele (LSE), Vladimir Slepnev (Google) and Jenann Ismael (Arizona).

All details relating to the speaker schedule, registration, call for papers, and bursaries are available at the conference page. Enquiries to Dr Arif Ahmed at ama24@cam.ac.uk.

“Minds like ours” online shortly

Thank you to everyone who came to CSER’s second seminar on Friday. We were humbled and taken by surprise by the huge level of interest, and would like to apologise to anyone who came but could not get a seat. However, for anyone who missed it, Professor Murray Shanahan’s talk will be online shortly, most likely by the end of next week, on this website and on our brand new YouTube channel.