Extreme Events and How to Live with Them – Professor Nassim Nicholas Taleb

7 February 2017

On Friday 27 January, Professor Nassim Nicholas Taleb gave a talk on “Extreme Events and How to Live with Them”, the second of the popular Darwin Lecture Series, which this year is co-convened by CSER Research Associate Julius Weitzdoerfer. Professor Taleb is the author of a multivolume essay, the Incerto (The Black Swan, Fooled by Randomness, and Antifragile), covering broad facets of uncertainty.

The talk can be viewed on the Darwin College Lectures website. In his talk, Professor Taleb said: “I am honoured to be in Cambridge… Cambridge has Wittgenstein, but it also has Martin Rees, who has a Centre on Existential Risk. And we are going to discuss the properties of events that can lead to these existential risks”.

Talk summary by Nikolas Bernaola

Professor Nassim Nicholas Taleb, the famous author of The Black Swan, gave a talk for this year’s Darwin College Lecture Series on the Extreme. The theme and the speaker drew a huge audience, with more than fifty people waiting outside the lecture theatre before the doors opened. An hour later, when Taleb came in, Lady Mitchell Hall and two overflow theatres were packed and ready to listen to “The Logic and Statistics of Extremes.”

The talk started with Taleb describing the countries of Mediocristan and Extremistan. In Mediocristan we find events that are properly described by their average and standard deviation, and extreme events are incredibly unlikely. An example is human height: most people are close to the mean, and it is very unlikely or impossible to find extreme outliers, i.e. people who are 3 metres or 50 centimetres tall.

In Extremistan, however, events are not accurately described by the mean, and single events can have disproportionate impact. Here we are dealing with fat-tailed distributions, meaning that extreme events are far more likely than a thin-tailed model would suggest. For example, the deadliest diseases can kill thousands or millions of times more people than a typical disease, the richest people can be millions of times wealthier than the average, and a best-selling book will sell millions of copies while the average book with a major publisher sells around ten thousand.

As we can see, the events that belong to these two classes are extremely different, and problems come, says Taleb, when we take the methods and intuitions that work in Mediocristan and apply them to Extremistan. Usual methods like sampling or extrapolating from previous evidence go out of the window, since the effect of a single event cannot be captured by these models and will dominate all other effects. This is the Black Swan problem.
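
As a rough illustration of the difference (not from the talk itself), the following Python sketch draws samples from a thin-tailed and a fat-tailed distribution and checks how much of the total is contributed by the single largest observation; the specific distributions and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mediocristan: thin-tailed "heights" drawn from a normal distribution.
heights = rng.normal(loc=170, scale=10, size=n)

# Extremistan: fat-tailed "wealth" drawn from a Pareto distribution.
wealth = (rng.pareto(a=1.1, size=n) + 1) * 10_000

for name, sample in [("heights (thin-tailed)", heights),
                     ("wealth (fat-tailed)", wealth)]:
    share_of_max = sample.max() / sample.sum()
    print(f"{name}: largest single observation is {share_of_max:.4%} of the total")

# The tallest person contributes a tiny, predictable fraction of total height,
# while the single richest individual can account for a large share of total wealth.
```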

Taleb continued the talk by attacking financial experts. He explained how many of the problems in the most recent financial crisis can be traced to traders using models that belong to Mediocristan and suffering Black Swans as a consequence. As an example, he brought up how some experts had claimed that the causes leading to the last crisis were 10-sigma events, and some even claimed several 25-sigma events. To give a sense of what this means: if you won the lottery 15 times in a row and got hit by an asteroid on the way back home, it would still not be as unlikely as a single 25-sigma event. Even 10-sigma events should happen only about once every ten thousand years, yet we are supposed to have seen several of them in the last thirty years. These extreme implausibilities pile up and suggest that the models are very likely wrong.
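
To see why such claims strain credulity, here is a minimal sketch: under a Gaussian model, the tail probability of a k-sigma daily move can be computed directly. The figure of 250 trading days per year is an assumption used purely for illustration.

```python
from scipy.stats import norm

TRADING_DAYS_PER_YEAR = 250  # assumption, purely for illustration

# Tail probability of a k-sigma daily move under a Gaussian model,
# and the implied expected waiting time between such moves.
for k in [5, 10, 25]:
    p = norm.sf(k)  # one-sided probability of exceeding k standard deviations
    years_between_events = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{k}-sigma: daily probability ~ {p:.3e}, "
          f"expected roughly once every {years_between_events:.3e} years")
```

Observing even one such move is therefore strong evidence against the Gaussian model itself, rather than evidence that something astronomically unlikely happened.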

In the last part of the lecture, Taleb turned to another problem with standard economic analysis: non-ergodicity, the fact that statistics are usually computed by averaging over groups but seldom from the perspective of a single person making choices over time. The two can give very different results, and this supports Taleb’s next point: a defence of the precautionary principle. Since we only have one Earth and we are dealing with events that can have extreme outcomes, we should be extraordinarily careful; a single miss can send us towards irreparable ruin, so there are some risks we simply should not take. He mentioned Lord Martin Rees and his centre in Cambridge, the Centre for the Study of Existential Risk (CSER), praising its work as one of the most important things we can do to make sure we can sustain humanity into the future.
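
A standard toy example of non-ergodicity (illustrative, not taken from the lecture): a repeated multiplicative gamble whose ensemble average grows each round while the typical individual trajectory shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each round multiplies your wealth by 1.5 (heads) or 0.6 (tails).
# Ensemble (group) average per round: 0.5 * 1.5 + 0.5 * 0.6 = 1.05 -> looks attractive.
# Time (individual) average per round: sqrt(1.5 * 0.6) ~= 0.95     -> ruinous over time.
n_people, n_rounds = 10_000, 100
factors = rng.choice([1.5, 0.6], size=(n_people, n_rounds))
final_wealth = factors.prod(axis=1)  # each player starts with wealth 1

print("ensemble mean of final wealth:  ", final_wealth.mean())      # far above 1, driven by a few lucky runs
print("median individual final wealth: ", np.median(final_wealth))  # far below 1 for the typical player
```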

Extremes

The ‘Extremes’ lecture series takes place every Friday during Lent term (January to March). The lectures are given at 5.30 p.m. in Lady Mitchell Hall, Sidgwick Avenue, with an adjacent overflow theatre offering live TV coverage. Each lecture is typically attended by 600 people, so you must arrive early to ensure a place.

The next lectures are:

Dealing with Extremism – Professor David Runciman, University of Cambridge.

Friday 03 February 2017

Extreme Rowing – Roz Savage MBE, Ocean Rower, Yale University.

Friday 10 February 2017

Extremes of the Universe – Professor Andy Fabian, University of Cambridge.

Friday 17 February 2017

Extreme Politics – Professor Matthew Goodwin, University of Kent.

Friday 24 February 2017

Extreme Ageing – Professor Sarah Harper, University of Oxford.

Friday 03 March 2017

Reporting from Extreme Environments – Lyse Doucet, BBC.

Friday 10 March 2017

CSER at the Asilomar Beneficial AI 2017 Conference

31 January 2017


The Centre for the Study of Existential Risk (CSER) was delighted to participate in the Beneficial Artificial Intelligence 2017 conference in Asilomar early in January.

AI leaders came together to discuss opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial. This was a sequel to the landmark 2015 Puerto Rico AI conference, also organised by the Future of Life Institute (FLI).

All three of our co-founders – Martin Rees, Jaan Tallinn and Huw Price – gave talks or appeared on panels. We were joined by representatives from our sister organisations, the Leverhulme Centre for the Future of Intelligence (LCFI) from Cambridge and the Future of Humanity Institute (FHI) from Oxford.

A major outcome of the conference was the Asilomar AI Principles: 23 principles for ensuring beneficial AI. Developed and agreed by AI leaders and scholars, they have now been signed by over 1,500 people.

Our Executive Director, Seán Ó hÉigeartaigh, attended the conference and said:
The Asilomar Conference was an important moment in the development of AI. It is encouraging to see such strong leadership from the AI community in drafting and supporting the Asilomar AI Principles. I look forward to more partners from around the world joining this crucial global conversation. I congratulate the Future of Life Institute on their leadership in developing the Principles and I’m delighted that Centre for the Study of Existential Risk researchers and advisers helped develop and agree them.
You can read the schedule and watch videos from the conference here. You can also read the Asilomar AI Principles and consider signing them here.

What can x-risk learn from established fields?

30 January 2017

Read more about our 2016 conference here and watch videos of keynotes here.

By Beth Barnes  

The Cambridge Conference on Catastrophic Risk brought together a wide range of academics from different disciplines related to existential risk (x-risk) – from reinsurance to sustainable development to population ethics. I’m going to list some of the interesting lessons I think the x-risk community can learn from these different fields, and try to draw out concrete action points. There were too many useful insights to cover in one post, but videos of many of the talks will be available.

Disclaimer – these are from notes taken during the talks. I may have misheard, misunderstood or typoed what the speakers actually meant. I have tried to find citations where possible.

Methodologies from the insurance industry

Rowan Douglas gave a talk outlining how the reinsurance industry offers an example of quantitative models succeeding in areas where people previously thought this was impossible. Bringing in an ‘engineering’ mode of thinking with catastrophe models in the 1990s rejuvenated a struggling industry. We now see a tight link between the details of the models and prices. For example, an update to a model of how quickly hurricanes lose energy over the US coast promptly caused a shift in prices. Prices are no longer reactive, following the latest hype, but based on statistics. Although 2011 was a year with a very high cost of disasters, prices did not change, because the losses were still within the bounds of the models’ predictions and the models remained a better means of prediction than the previous year’s losses. Using insurance mechanisms more widely seems a very promising way to correctly incentivise those developing risky technology or vulnerable infrastructure – the Global Priorities Project’s paper discusses insurance as a way to price risky research, and there was discussion of how insurance may be beginning to change grid infrastructure to make it more resilient to solar storms.
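
As a loose illustration of the idea (not a description of any actual industry model), a toy catastrophe model might simulate annual losses and derive a technical premium from them; the frequency, severity and loading parameters below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy catastrophe model: event counts per year ~ Poisson, event severities ~ lognormal.
# The "technical premium" is the simulated expected annual loss plus a risk loading.
n_years = 50_000
events_per_year = rng.poisson(lam=0.8, size=n_years)
annual_loss = np.array([rng.lognormal(mean=15, sigma=1.5, size=k).sum()
                        for k in events_per_year])

expected_loss = annual_loss.mean()
loss_1_in_100 = np.quantile(annual_loss, 0.99)   # 1-in-100-year annual loss
premium = expected_loss + 0.05 * loss_1_in_100   # arbitrary loading factor

print(f"expected annual loss: {expected_loss:,.0f}")
print(f"1-in-100-year loss:   {loss_1_in_100:,.0f}")
print(f"technical premium:    {premium:,.0f}")
```

The point of such a model is that the premium responds to changes in the assumed frequency or severity, not to last year’s headlines.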

Action point 1: where else can insurance mechanisms be used to reduce x-risks or increase resilience?

Communication of science and the science of communication

Something that came up in several different talks and discussions was the problem of communication and narrative framing. This could be communication with the public, with AI researchers, with policy makers or with synthetic biology researchers. The key points were that different ways of communicating can vary greatly in effectiveness, and that we can work out which are the most effective.

During discussion of a climate change research funding proposal, an audience member with a marketing background suggested tactics a ‘marketing’ approach might include:

  • Find out who makes the funding decision
  • Localise the impacts to them and the people they care about:
    • Ordinary citizens – their friends, family
    • Politicians – their constituents
  • Include a photo of the impacts climate change could have on their hometown/mention the impacts to this specific group of people

Action point 2: use marketing expertise more widely and remember that communication happens with people who have their own personal interests, not with perfect rational agents

In the discussion afterwards, I spoke to someone who had worked at BuzzFeed. Apparently their method was to generate ~70 titles, narrow these down to the best few, and A/B test those. The difference between a good and a great title is very large. Of course, BuzzFeed are optimising for very different criteria than we are, but the same methodology could be applied to optimising an explanation for correct understanding of a problem, or optimising a tagline like ‘responsible science’ for making a positive impression on synthetic biologists.
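
For a sense of what such a comparison might look like in practice, here is a minimal sketch of an A/B test between two titles; the click counts are hypothetical, and the choice of a chi-squared test is one assumption among several reasonable options.

```python
from scipy.stats import chi2_contingency

# Hypothetical A/B test of two titles: clicks and impressions for each variant.
clicks_a, impressions_a = 130, 5_000
clicks_b, impressions_b = 182, 5_000

contingency_table = [[clicks_a, impressions_a - clicks_a],
                     [clicks_b, impressions_b - clicks_b]]
chi2, p_value, dof, expected = chi2_contingency(contingency_table)

print(f"click-through rate A: {clicks_a / impressions_a:.2%}")
print(f"click-through rate B: {clicks_b / impressions_b:.2%}")
print(f"p-value for 'no difference between titles': {p_value:.4f}")
```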

Many people will feel uncomfortable with using ‘manipulative’ marketing techniques, and whether adopting them could lead to a decline in the quality of the scientific system in the longer term is an important question to discuss. I would emphasize that some of these techniques are not inherently manipulative, and can be used to make communication clearer and generate deeper understanding rather than to mislead. On the other hand, these techniques are already being used for harm in some cases – e.g. the fossil fuel industry marketing climate change scepticism – so there is a case to be made that we need to think equally carefully about our messaging to avoid utterly losing the battle for public support.

Action point 3: collect as much data as possible on the efficacy of different methods of communicating with different groups – whether it’s A/B testing a mass communication, doing before and after surveys for a workshop that was intended to change people’s opinions, or collecting feedback from events.

In a presentation on biodiversity, a study (http://www.nature.com/nclimate/journal/v4/n1/abs/nclimate2059.html) was cited that found that uncertainty about the threshold at which a tipping point would be reached was much more harmful to coordination than uncertainty about the impact.

Action point 4: presenting clear thresholds to stay above/below (even if they’re completely artificial and the underlying system is probably linear) can help motivate action

Action point 5: the literature in this area – game theory, collective action, cognitive biases – probably contains more very useful results

The lecture on conservation gave a different perspective on the promises and perils of marketing. After $250 million was spent on the science to determine the cause of the decline of Snake River salmon, the publicity around it and the eventual investment focused exclusively on dams. The investigation had found that removing the dams would not be sufficient to prevent the extinction of the salmon, but the dams caught public attention in a way that the other issues did not. Investment in reaching a scientific consensus can be completely wasted if the message is not adequately communicated.

A lesson learnt from sustainable development policy is that more political power for your cause is not always better. The Wilberforce Society / Future of Sentience paper on rights and representation of future generations identified factors that contributed to the success or failure of mechanisms for representing future people. Having too much political power seemed to be a factor that contributed to a mechanism being dismantled at the next election cycle, whereas less ambitious initiatives endured for longer.

Action point 6: think about the sustainability of political infrastructure, not just efficacy – too much power can make it more likely to be dismantled

Lessons from recent successes in AI

Talking about artificial intelligence safety has gone from something that would get you labelled as a crackpot ~5 years ago, to something that leading AI researchers openly express support for.

Victoria Krakovna gave an overview of the factors she thought had created these successes. She highlighted getting the key players together in a relaxed environment (the Puerto Rico conference), Chatham House rules so people could talk about things they felt they couldn’t say openly, and developing a concrete research agenda that tied the problems to technical research in mainstream AI. Going forward, she recommended continuing to take a ‘portfolio’ approach, with different organisations trying different tactics in the hope that at least one would succeed. She also recommended continuing informal outreach to individual researchers, and integration with AI capability research, for example through papers and technical events at mainstream ML conferences.

The relationship between short-term and long-term issues could be both a blessing and a curse, helping to put AI safety on people’s radar but also causing confusion about what the most important issues are.


CSER at the Geneva Centre for Security Policy

27 January 2017


Picture: Petri Hakkarainen

The Geneva Centre for Security Policy held a two-day course on ‘Strategic Foresight and Existential Risks – Helping International Governance Respond to Global Catastrophic Risks’ on 19-20 January 2017. Participants included professionals from national departments, international organisations and think-tanks.

CSER was represented by our Academic Project Manager Haydn Belfield, who presented on Science and Technology Review and Advice, and Cooperation between International Organisations.

The other expert presenters were Dr Anders Sandberg from our sister organisation the Future of Humanity Institute, University of Oxford; Dr Petteri Taalas, the Secretary-General of the World Meteorological Organisation; Professor Ilona Kickbusch, Director of the Global Health Centre; Daniel Feakes, Chief of the Biological Weapons Convention Implementation Support Unit; and Martin Mayer, co-founder of YouMeO.

Thanks to the course participants, the course convenor Dr Petri Hakkarainen, and to the Geneva Centre for Security Policy for a very fruitful course.


Picture: Petri Hakkarainen


Is English as the lingua franca of research posing a risk?

26 January 2017


Picture: Keith Heppell

Dr Tatsuya Amano, a postdoctoral researcher at CSER, was interviewed by the Cambridge Independent on the issues raised by English being the lingua franca of research.

“This one small aspect of the tension between the conservation of biodiversity and the challenge of food security helped to get him interested in the wider problem of the gaps in global information about biodiversity and conservation. He came to Cambridge, and to an office in the David Attenborough building, in 2011 to pursue the question. He is also based at the Centre for the Study of Existential Risk, where he has now started to think about the kind of catastrophic ecosystem collapse that could threaten the stability of human society. He won’t be drawn on the detail yet, but imagine a world without insect pollinators, or the collapse of fish populations.”

Read the full interview here.


Extreme Weather by Dr Emily Shuckburgh

25 January 2017

On Friday 20 January, Dr Emily Shuckburgh gave a talk on “Extreme Weather” as part of the popular Darwin Lecture Series, which this year is co-convened by CSER Research Associate Julius Weitzdoerfer. Dr Shuckburgh is a climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, which is focused on understanding the role of the polar oceans in the global climate system. The talk can be viewed on the Darwin College Lectures website.

The ‘Extremes’ lecture series takes place every Friday during Lent term (January to March). The lectures are given at 5.30 p.m. in Lady Mitchell Hall, Sidgwick Avenue, with an adjacent overflow theatre offering live TV coverage. Each lecture is typically attended by 600 people, so you must arrive early to ensure a place.

The other lectures are:

Extreme Events and How to Live with Them – Professor Nassim Nicholas Taleb, New York.

Friday 27 January 2017

Dealing with Extremism – Professor David Runciman, University of Cambridge.

Friday 03 February 2017

Extreme Rowing – Roz Savage MBE, Ocean Rower, Yale University.

Friday 10 February 2017

Extremes of the Universe – Professor Andy Fabian, University of Cambridge.

Friday 17 February 2017

Extreme Politics – Professor Matthew Goodwin, University of Kent.

Friday 24 February 2017

Extreme Ageing – Professor Sarah Harper, University of Oxford.

Friday 03 March 2017

Reporting from Extreme Environments – Lyse Doucet, BBC.

Friday 10 March 2017

Videos from our 2016 Conference now online

Videos of the keynote lectures from the 2016 Cambridge Conference on Catastrophic Risk are now available.

Keynotes

Claire Craig – Extreme risk management in the policy environment

Rowan Douglas – Opening Session Part 2

Biorisk

Jo Husbands – Lessons from Efforts to Mitigate the Risks of “Dual Use” Research

Sam Weiss Evans – Words Of Caution On Making Objects Of Security Concern

Zabta K. Shinwari – Young Researchers & Responsible Conduct of Science: Successes and failures

Artificial Intelligence

Hawking on existential risk, inequality, and humility

For me, the really concerning aspect of this is that now, more than at any time in our history, our species needs to work together. We face awesome environmental challenges: climate change, food production, overpopulation, the decimation of other species, epidemic disease, acidification of the oceans.

Together, they are a reminder that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it.

In a Guardian article, CSER adviser Stephen Hawking calls for elites to learn “a measure of humility” and writes eloquently about the emerging risks that threaten our continued existence as a species.

Lecture on Sculpting Evolution by Dr Kevin Esvelt

18 October 2016

Biologists can now design genetic systems that engineer evolution in powerful ways, with social, legal, ethical and environmental implications for our future. Mosquito populations can already be engineered using cutting-edge techniques to drastically reduce their numbers or make them resistant to transmitting diseases like malaria, dengue or the emerging Zika virus.

Synthetic biologist Dr Kevin Esvelt (MIT Media Lab) introduced his work on gene drive systems, which can rapidly spread malaria resistance within populations, while Professor Luke Alphey (Pirbright Institute) discussed his work founding Oxitec, a UK company that was the first to release genetically modified male mosquitoes whose offspring fail to reproduce, leading to dramatic reductions in numbers.

What safeguards and regulations are required to ensure responsible use of such technologies? What does it mean for humans to use nature’s tools in this way? How do we balance the direct benefits for global health with any risks to our shared environment?

This event was co-organised by the Centre for the Study of Existential Risk and the Cambridge SynBio Forum.

Partnership on AI

The Centre for the Study of Existential Risk strongly welcomes the recently announced Partnership on AI to Benefit People and Society (current partners: DeepMind/Google, Facebook, Microsoft, Amazon, IBM). Increasingly powerful AI systems are being used in an ever-wider range of real-world settings. This offers wonderful opportunities for helping us with many global challenges – for example, DeepMind have recently developed tools to aid doctors in the NHS, and massively improved the energy efficiency of Google’s servers with a version of DQN (the Atari-beating algorithm), which has very beneficial implications for climate change. Similarly, Microsoft Research are making great progress on applying AI to cancer diagnosis and prevention.

However, the widespread use and further development of these systems will also throw up challenges – including fairness and potential biases in algorithms or the data they generate their results from; our ability to understand how these algorithms function and the settings in which they may not perform well; and the impact of AI on job markets. In the longer term, AI is set to be such a transformative technology that it is prudent to think carefully about its safe development, the potential impacts and risks of long-term advances, and the global challenges to which it can be applied beneficially. These challenges will require deep interdisciplinary and cross-sector collaboration between technology research leaders, scholars across disciplines, and policymakers who seek to stay up to date with a rapidly progressing technology. Cambridge is taking a leading role in these discussions; in addition to CSER’s work, research leaders in Cambridge’s machine learning department have been organising workshops at the major machine learning conferences on the societal impacts of AI, the legal and policy challenges that AI will present, and the technical design of AI systems so as to be reliable ‘in the wild’. Cambridge has also recently partnered with Oxford, Berkeley and Imperial on a new centre to study the long-term opportunities and challenges of AI, supported by the Leverhulme Trust – the Centre for the Future of Intelligence.

The research leaders in companies such as DeepMind, Facebook, Microsoft, Amazon and IBM are among the best placed to think in a long-term manner about these issues, given their deep understanding of the current state of the art, their unique insights into where the field will be in ten years’ time, and the ways in which their advances will change the world. They also have a unique opportunity to play a guiding role, in collaboration with others. This partnership is a tremendously positive step, and demonstrates laudable responsibility and leadership from the companies involved. We strongly welcome it, and look forward to opportunities to collaborate on many of the research issues the Partnership highlights.

Seán Ó hÉigeartaigh,

Executive Director, Centre for the Study of Existential Risk

http://www.wired.co.uk/article/ai-partnership-facebook-google-deepmind

http://www.partnershiponai.org/