Biological Extinction Workshop at the Vatican

The Chair of our Management Committee, Sir Partha Dasgupta, is one of the organisers of the Vatican workshop on Biological Extinction. He is presenting on the workshop’s ‘Goals and Objectives’ and the ‘Summary and Conclusions’. He is also speaking about ‘Why We Are in the Sixth Extinction and What It Means to Humanity’, while our co-founder Lord Martin Rees is speaking about ‘Extinction: What it Means to Us’.

This workshop follows a previous Vatican workshop and report, 2014’s Sustainable Humanity, Sustainable Nature: Our Responsibility, with which Sir Partha and Lord Rees were heavily involved.

The Guardian article about the 2017 workshop (Biologists say half of all species could be extinct by end of century) quotes Sir Partha Dasgupta as saying:

“We need to unravel the processes that led to the ills we are now facing. That is why the Vatican symposia involve natural and social scientists, as well as scholars from the humanities. That the symposia are being held at the Papal Academy is also symbolic. It shows that the ancient hostility between science and the church, at least on the issue of preserving Earth’s services, has been quelled.”

The crucial point is to put the problem of biological extinctions in a social context, he said. “That gives us a far better opportunity of working out what we need to do in the near future. We have to act quickly, however.”

Bad Actors and AI – workshop

Photo credit: Future of Humanity Institute

On the 19th and 20th of February, the Future of Humanity Institute (FHI) hosted a workshop on the potential risks posed by the malicious misuse of emerging technologies in machine learning and artificial intelligence. The workshop, co-chaired by Miles Brundage at FHI and Shahar Avin of the Centre for the Study of Existential Risk, invited experts in cybersecurity, AI governance, AI safety, counter-terrorism and law enforcement. The workshop was jointly organised by the Future of Humanity Institute, the Centre for the Study of Existential Risk, and the Centre for the Future of Intelligence.

The attendees were invited to consider a range of risks from emerging technologies, including automated hacking, the use of AI for targeted propaganda, the role of autonomous and semi-autonomous weapons systems, and the political challenges posed by the ownership and regulation of advanced AI systems.

The outputs of the workshop will be consolidated into a research agenda for the field over the coming months and made available to the research and policy communities to inform their future work prioritisation.

If you are a researcher interested in contacting the researchers regarding this project, you can email miles dot brundage at philosophy dot ox dot ac dot uk. (Media inquiries should be directed here.)

Dealing with Extremism – Professor David Runciman

14 February 2017

On Friday 3 February, Professor David Runciman gave a talk on “Dealing with Extremism”, the third of the popular Darwin Lecture Series, which this year is co-convened by CSER Research Associate Julius Weitzdoerfer. Professor Runciman argued that not all conspiracy theorists are extremists, but that almost all extremists are conspiracy theorists.

The talk can be viewed on the Darwin College Lectures website.

The ‘Extremes’ lecture series takes place every Friday during Lent term (January to March). The lectures are given at 5.30 p.m. in Lady Mitchell Hall, Sidgwick Avenue, with an adjacent overflow theatre showing live TV coverage. Each lecture is typically attended by 600 people, so you must arrive early to ensure a place.

The next lectures are:

Extreme Rowing – Roz Savage MBE, Ocean Rower, Yale University.

Friday 10 February 2017

Extremes of the Universe – Professor Andy Fabian, University of Cambridge.

Friday 17 February 2017

Extreme Politics – Professor Matthew Goodwin, University of Kent.

Friday 24 February 2017

Extreme Ageing – Professor Sarah Harper, University of Oxford.

Friday 03 March 2017

Reporting from Extreme Environments – Lyse Doucet, BBC.

Friday 10 March 2017

Call for Papers and Responders: Risk, Uncertainty and Catastrophe Scenarios

14 January 2017

Workshop on Climate Ethics and Climate Economics

May 9th & 10th, Centre for the Study of Existential Risk

Scholars have warned that there is an uncertain chance of runaway climate change that could devastate the planet. At least since Hans Jonas’s The Imperative of Responsibility, some have argued that even low-probability existential risks should be treated in a fundamentally different way. How should we act when we believe that there is a chance of a catastrophe, but cannot make reliable probability estimates? How much should we worry about worst-case scenarios? What should we do when experts disagree about whether catastrophe is possible?

These are some of the questions we will be posing at the fifth of six ESRC-funded workshops exploring issues where the ethics and economics of climate change intersect. It will be held at the University of Cambridge’s Centre for the Study of Existential Risk.

We are seeking both paper givers and discussants from philosophy, economics and other disciplines. Funds are available to cover accommodation and internal travel expenses for up to three research students and early-career researchers. Papers, where available, will be circulated before the workshop.

Those wishing to present a paper should submit a 500-word abstract by 24th March to Simon Beard (sjb316@cam.ac.uk). Anyone interested in serving as a discussant should send an expression of interest by the same date. If applying for funding, please indicate whether you are a student or, if not, the year in which you received your PhD.

Wired UK article

13 February 2017

The Centre for the Study of Existential Risk is featured in a piece on existential risk in Wired UK, alongside the Future of Humanity Institute, the Global Catastrophic Risk Institute, and the Centre for the Future of Intelligence.

Wired UK Existential Risk article

Extreme Events and How to Live with Them – Professor Nassim Nicholas Taleb

7 February 2017

On Friday 20 January, Professor Nassim Nicholas Taleb gave a talk on “Extreme Events and How to Live with Them”, the second of the popular Darwin Lecture Series, which this year is co-convened by CSER Research Associate Julius Weitzdoerfer. Professor Taleb is the author of a multivolume essay, the Incerto (The Black Swan, Fooled by Randomness, and Antifragile), covering broad facets of uncertainty.

The talk can be viewed on the Darwin College Lectures website. In his talk, Professor Taleb said: “I am honoured to be in Cambridge… Cambridge has Wittgenstein, but it also has Martin Rees, who has a Centre on Existential Risk. And we are going to discuss the properties of events that can lead to these existential risks”.

Talk summary by Nikolas Bernaola

Professor Nassim Nicholas Taleb, the famous author of The Black Swan, gave a talk for this year’s Darwin College Lecture Series on ‘Extremes’. The theme and the speaker gathered a huge audience, with more than fifty people waiting outside the lecture theatre before the doors opened. An hour later, when Taleb came in, Lady Mitchell Hall and two overflow theatres were packed and ready to listen to “The Logic and Statistics of Extremes.”

The talk started with Taleb describing the countries of Mediocristan and Extremistan. In Mediocristan, we find events that can be properly described by their average and standard deviation; extreme events are incredibly unlikely. An example is human height: most people are close to the mean, and it is very unlikely or impossible to find extreme outliers, i.e. people who are 3 metres or 50 centimetres tall.

In Extremistan, however, events are not accurately described by the mean, and single events can have a disproportionate impact. Here we are dealing with fat-tailed distributions, meaning that extreme events are far more likely than a normal model would suggest. For example, the deadliest diseases can kill thousands or millions of times more people than a typical disease, the richest people can be millions of times wealthier than the average, and a best-selling book will sell millions of copies while the average book with a major publisher sells ten thousand.

As we can see, the events in these two classes are extremely different, and problems arise, says Taleb, when we take the methods and intuitions that work in Mediocristan and apply them to Extremistan. Usual methods like sampling or extrapolating from previous evidence go out of the window, since the effect of a single event is not captured by these models yet can dominate every other effect. This is the Black Swan problem.
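To make the contrast concrete, here is a minimal numerical sketch (my illustration, not from the talk; the distributions and parameters – normal heights, Pareto wealth with shape ≈ 1.2 – are assumptions chosen purely for contrast):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mediocristan: human height, roughly normal (mean 170 cm, sd 10 cm).
heights = rng.normal(170, 10, n)

# Extremistan: wealth, roughly Pareto with a fat tail (shape ~ 1.2).
# Parameters are illustrative assumptions, not calibrated to data.
wealth = (rng.pareto(1.2, n) + 1) * 10_000

for name, sample in [("height", heights), ("wealth", wealth)]:
    share = sample.max() / sample.sum()
    print(f"{name}: largest single observation is {share:.4%} of the total")
```

The tallest person contributes a vanishing fraction of total height, while the richest person can hold several percent of all the wealth in the sample: sampling and averaging work in the first case and mislead in the second.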

Taleb continued the talk by attacking financial experts. He explained that many of the problems in the most recent financial crisis can be traced to traders using models that belong in Mediocristan, and suffering Black Swans as a consequence. As an example, he brought up how some experts had claimed that the events leading to the last crisis were 10-sigma events, and some even claimed several 25-sigma events. To give a sense of what this means: if you won the lottery 15 times in a row and got hit by an asteroid on the way back home, that would still not be as unlikely as a 25-sigma event. And 10-sigma events should happen only once every ten thousand years, yet we are supposed to have seen several of them in the last thirty years. These extreme implausibilities pile up and suggest that our models are very likely wrong.
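For a sense of scale, the implied probabilities can be computed directly from the Gaussian survival function (a quick check of my own, not a calculation from the talk):

```python
from scipy.stats import norm

# Probability that a normally distributed quantity exceeds its mean
# by k standard deviations (one-sided tail).
for k in (10, 25):
    p = norm.sf(k)
    print(f"{k}-sigma: p = {p:.2e}  (~1 in {1 / p:.2e} observations)")
```

Under a strictly normal model, a 10-sigma event has probability of roughly 10^-23 and a 25-sigma event roughly 10^-138, so if anything the ‘once every ten thousand years’ framing is generous; observing several such events is overwhelming evidence against the model, which is exactly Taleb’s point.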

In the last part of the lecture Taleb mentioned another problem with standard economic analysis: non-ergodicity, the fact that statistics are computed by averaging over groups but seldom from the perspective of a single person making choices over time. The two can give very different results, which supports Taleb’s final point: a defence of the precautionary principle. Since we only have one Earth and we are dealing with events that can have extreme outcomes, we should be extraordinarily careful. A single miss can send us towards irreparable ruin, so there are some risks we simply should not take. He mentioned Lord Martin Rees and his centre in Cambridge, the Centre for the Study of Existential Risk (CSER), praising its work as one of the most important things we can do to make sure that we can sustain humanity into the future.

CSER at the Asilomar Beneficial AI 2017 Conference

31 January 2017

The Centre for the Study of Existential Risk (CSER) was delighted to participate in the Beneficial Artificial Intelligence 2017 conference in Asilomar early in January.

AI leaders came together to discuss opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial. This was a sequel to the landmark 2015 Puerto Rico AI conference, also organised by the Future of Life Institute (FLI).

All three of our co-founders – Martin Rees, Jaan Tallinn and Huw Price – gave talks or appeared on panels. We were joined by representatives from our sister organisations, the Leverhulme Centre for the Future of Intelligence (LCFI) in Cambridge and the Future of Humanity Institute (FHI) in Oxford.

A major outcome of the conference is the Asilomar AI Principles: 23 principles for ensuring beneficial AI. Developed and agreed by AI leaders and scholars, they have now been signed by over 1,500 people.

Our Executive Director, Seán Ó hÉigeartaigh, attended the conference and said:
The Asilomar Conference was an important moment in the development of AI. It is encouraging to see such strong leadership from the AI community in drafting and supporting the Asilomar AI Principles. I look forward to more partners from around the world joining this crucial global conversation. I congratulate the Future of Life Institute on their leadership in developing the Principles and I’m delighted that Centre for the Study of Existential Risk researchers and advisers helped develop and agree them.
You can read the schedule and watch videos from the conference here. You can also read the Asilomar AI Principles and consider signing them here.

What can x-risk learn from established fields?

30 January 2017

Read more about our 2016 conference here and watch videos of keynotes here.

By Beth Barnes  

The Cambridge Conference on Catastrophic Risk brought together a wide range of academics from different disciplines relating to existential risk (x-risk) – from reinsurance to sustainable development to population ethics. I’m going to list some of the interesting lessons I think the x-risk community can learn from these fields, and try to draw out concrete action points. There were too many useful insights to cover in one post, but videos of many of the talks will be available.

Disclaimer – these are from notes taken during the talks. I may have misheard, misunderstood or typoed what the speakers actually meant. I have tried to find citations where possible.

Methodologies from the insurance industry

Rowan Douglas gave a talk outlining how the reinsurance industry demonstrates that quantitative models can succeed in areas where people previously thought this impossible. Bringing an ‘engineering’ mode of thinking into the industry via catastrophe models in the 1990s rejuvenated a struggling business, and we now see a tight link between the details of the models and prices. For example, an update to a model of how quickly hurricanes lose energy over the US coast quickly caused a shift in prices. Prices are no longer reactive, following the latest hype, but based on statistics. Although 2011 was a year of very high disaster costs, prices did not change, because the losses were still within the bounds of the models’ predictions and the models remained a better means of prediction than the previous year’s losses. Using insurance mechanisms more widely seems a promising way to correctly incentivise those developing risky technology or vulnerable infrastructure – the Global Priorities Project’s paper discusses insurance as a way to price risky research, and there was discussion of how insurance may be beginning to change grid infrastructure to make it more resilient to solar storms.
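As a toy illustration of how such models link directly to prices (my sketch; real catastrophe models are far richer, and all parameters here are made up), a premium can be derived from the expected loss over a large set of simulated event-years plus a loading for tail risk:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 100_000  # simulated event-years

# Toy catastrophe model: Poisson number of hurricanes per year,
# lognormal loss per hurricane. Parameters are invented.
n_events = rng.poisson(0.8, years)
losses = np.array([rng.lognormal(16, 1.5, n).sum() for n in n_events])

expected_loss = losses.mean()
tail_99 = np.quantile(losses, 0.99)  # 1-in-100-year annual loss

# Indicated premium = expected loss plus a loading for tail risk.
premium = expected_loss + 0.05 * tail_99
print(f"expected annual loss: {expected_loss:,.0f}")
print(f"1-in-100-year loss:   {tail_99:,.0f}")
print(f"indicated premium:    {premium:,.0f}")
```

When a model update changes the simulated event set – for instance, hurricanes decaying more slowly over the coast – the expected loss and tail quantiles shift, and the indicated premium moves with them, which is exactly the tight model-to-price coupling described above.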

Action point 1: where else can insurance mechanisms be used to reduce x-risks or increase resilience?

Communication of science and the science of communication

Something that came up frequently in several talks and discussions was the problem of communication and narrative framing – communication with the public, with AI researchers, with policy makers, or with synthetic biology researchers. The key points were that different ways of communicating can vary enormously in effectiveness, and that we can work out which are the most effective.

During discussion of a climate change research funding proposal, an audience member with a marketing background suggested one tactic a ‘marketing’ approach might include:

  • Find out who makes the funding decision
  • Localise the impacts to them and the people they care about:
    • Ordinary citizens – their friends, family
    • Politicians – their constituents
  • Include a photo of the impacts climate change could have on their hometown, or mention the impacts on this specific group of people

Action point 2: use marketing expertise more widely, and remember that you are communicating with people who have their own personal interests, not with perfectly rational agents

In the discussion afterwards, I spoke to someone who had worked at Buzzfeed. Apparently their method was to generate ~70 titles, narrow these down to the best few, and A/B test those. The difference between a good and a great title is very large. Of course, Buzzfeed is optimising for very different criteria than we are, but the same methodology could be applied to optimising an explanation for a correct understanding of the problem, or optimising a tagline like ‘responsible science’ to make a positive impression on synthetic biologists.
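As a concrete sketch of the statistics behind such a test (my illustration; I don’t know what tooling Buzzfeed actually uses), an A/B test of two candidate titles reduces to a two-proportion z-test on click-through rates:

```python
from math import erf, sqrt

def ab_test_p_value(clicks_a, views_a, clicks_b, views_b):
    """Two-sided two-proportion z-test: do titles A and B have
    different click-through rates?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: title B lifts click-through from 3.0% to 3.6%.
print(ab_test_p_value(300, 10_000, 360, 10_000))  # ~0.018
```

The same machinery works whether the metric is clicks on a headline or, say, before-and-after survey responses from a workshop, which is what makes the methodology transferable.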

Many people will feel uncomfortable with using ‘manipulative’ marketing techniques, and it is an important question whether adopting them could degrade the quality of the scientific system in the longer term. I would emphasise that some of these techniques are not inherently manipulative, and can be used to make communication clearer and generate deeper understanding rather than to mislead. On the other hand, these techniques are already being used for harm in some cases – e.g. the fossil fuel industry marketing climate change scepticism – so there is a case to be made that we need to think equally carefully about our messaging to avoid utterly losing the battle for public support.

Action point 3: collect as much data as possible on the efficacy of different methods of communicating with different groups – whether it’s A/B testing a mass communication, doing before-and-after surveys for a workshop intended to change people’s opinions, or collecting feedback from events.

In a presentation on biodiversity, a study (http://www.nature.com/nclimate/journal/v4/n1/abs/nclimate2059.html) was cited that found that uncertainty about the threshold at which a tipping point would be reached was much more harmful to coordination than uncertainty about the impact.

Action point 4: presenting clear thresholds to stay above/below (even if they’re completely artificial and the underlying system is probably linear) can help motivate action

Action point 5: the literature in this area – game theory, collective action, cognitive biases – probably contains more very useful results

The lecture on conservation gave a different perspective on the promises and perils of marketing. After $250 million was spent on the science to determine the cause of the decline of Snake River salmon, the publicity around it and the eventual investment focused exclusively on dams. The investigation had found that removing the dams would not be sufficient to prevent the extinction of the salmon, but the dams caught public attention in a way that other issues did not. Investment in reaching a scientific consensus can be completely wasted if the message is not adequately communicated.

A lesson learnt from sustainable development policy is that more political power for your cause is not always better. The Wilberforce Society / Future of Sentience paper on rights and representation of future generations identified factors that contributed to the success or failure of mechanisms for representing future people. Having too much political power seemed to make a mechanism more likely to be dismantled at the next election cycle, whereas less ambitious initiatives endured for longer.

Action point 6: think about the sustainability of political infrastructure, not just efficacy – too much power can make it more likely to be dismantled

Lessons from recent successes in AI

Talking about artificial intelligence safety has gone from something that would get you labelled a crackpot ~5 years ago to something that leading AI researchers openly express support for.

Victoria Krakovna gave an overview of the factors she thought had created these successes. She highlighted getting the key players together in a relaxed environment (the Puerto Rico conference), Chatham House rules so that people could talk about things they felt they couldn’t say openly, and developing a concrete research agenda that tied the problems to technical research in mainstream AI. Going forward, she recommended continuing to take a ‘portfolio’ approach, with different organisations trying different tactics in the hope that at least one will succeed. She also recommended continuing informal outreach to individual researchers, and integration with AI capability research, such as through papers and technical events at mainstream ML conferences.

The relationship between short-term and long-term issues could be both a blessing and a curse, helping to put AI safety on people’s radar but also causing confusion about what the most important issues are.


CSER at the Geneva Centre for Security Policy

27 January 2017

Picture: Petri Hakkarainen

The Geneva Centre for Security Policy held a two-day course on ‘Strategic Foresight and Existential Risks – Helping International Governance Respond to Global Catastrophic Risks’ on 19-20 January 2017. Participants included professionals from national departments, international organisations and think-tanks.

CSER was represented by our Academic Project Manager Haydn Belfield, who presented on ‘Science and Technology Review and Advice’ and ‘Cooperation between International Organisations’.

The other expert presenters were Dr Anders Sandberg from our sister organisation the Future of Humanity Institute, University of Oxford; Dr Petteri Taalas, the Secretary-General of the World Meteorological Organisation; Professor Ilona Kickbusch, Director of the Global Health Centre; Daniel Feakes, Chief of the Biological Weapons Convention Implementation Support Unit; and Martin Mayer, co-founder of YouMeO.

Thanks to the course participants, the course convenor Dr Petri Hakkarainen, and the Geneva Centre for Security Policy for a very fruitful course.

Picture: Petri Hakkarainen


Is English as the lingua franca of research posing a risk?

26 January 2017

Picture: Keith Heppell

Dr Tatsuya Amano, a postdoctoral researcher at CSER, was interviewed by the Cambridge Independent on the issue of English being the lingua franca of research.

“This one small aspect of the tension between the conservation of biodiversity and the challenge of food security helped to get him interested in the wider problem of the gaps in global information about biodiversity and conservation. He came to Cambridge, and to an office in the David Attenborough building, in 2011 to pursue the question. He is also based at the Centre for the Study of Existential Risk, where he has now started to think about the kind of catastrophic ecosystem collapse that could threaten the stability of human society. He won’t be drawn on the detail yet, but imagine a world without insect pollinators, or the collapse of fish populations.”

Read the full interview here.