30 January 2017
By Beth Barnes
The Cambridge Conference on Catastrophic Risk brought together a wide range of academics from disciplines related to existential risk (x-risk) – from reinsurance to sustainable development to population ethics. I’m going to list some of the interesting lessons I think the x-risk community can learn from these different fields, and try to draw out concrete action points. There were too many useful insights to cover in one post, but videos of many of the talks will be available.
Disclaimer – these are from notes taken during the talks. I may have misheard, misunderstood or typoed what the speakers actually meant. I have tried to find citations where possible.
Methodologies from the insurance industry
Rowan Douglas gave a talk outlining how the reinsurance industry gives us an example of the success of quantitative models in areas where people previously thought this was impossible. Bringing in an ‘engineering’ mode of thinking with catastrophe models in the 1990s rejuvenated a struggling industry. We now see a tight link between the details of the models and prices. For example, an update to a model of how quickly hurricanes lose energy over the US coast quickly caused a shift in prices. Prices are no longer reactive, following the latest hype, but based on statistics. Although 2011 was a year of very high disaster costs, prices did not change, because the losses were still within the bounds of the models’ predictions and the models remained a better means of prediction than the previous year’s losses. Using insurance more widely seems a promising way to correctly incentivise those developing risky technology or vulnerable infrastructure – the Global Priorities Project’s paper discusses insurance as a way to price risky research, and there was discussion of how insurance may be beginning to change grid infrastructure to make it more resilient to solar storms.
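As a toy illustration of how such models tie prices to expected losses, here is a minimal Monte Carlo pricing sketch. The hazard parameters and function names are invented for illustration, not taken from any real catastrophe model:

```python
import random

def price_premium(annual_loss_simulator, years=100_000, loading=0.25, seed=0):
    """Monte Carlo premium: simulate many years of losses, then charge the
    expected annual loss plus a proportional loading for risk and expenses."""
    rng = random.Random(seed)
    expected_loss = sum(annual_loss_simulator(rng) for _ in range(years)) / years
    return expected_loss * (1 + loading)

def hurricane_year(rng):
    """Invented toy hazard: roughly binomial(10, 0.1) event count per year,
    lognormal severity per event."""
    n_events = sum(rng.random() < 0.1 for _ in range(10))
    return sum(rng.lognormvariate(0, 1) for _ in range(n_events))

premium = price_premium(hurricane_year)
```

If the hazard model is updated – say, hurricanes are found to lose energy more slowly over land, raising severities – the simulated losses change and the quoted premium shifts with them, which is the tight model-to-price coupling described above.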
Action point 1: where else can insurance mechanisms be used to reduce x-risks or increase resilience?
Communication of science and the science of communication
Something that came up frequently in several different talks and discussions was the problem of communication and narrative framing. This could be communication with the public, with AI researchers, with policy makers or with synthetic biology researchers. The key points were that different ways of communicating vary greatly in effectiveness, and that we can measure which work best.
During discussion of a climate change research funding proposal, an audience member with a marketing background suggested one tactic a ‘marketing’ approach might include:
- Find out who makes the funding decision
- Localise the impacts to them and the people they care about:
  - Ordinary citizens – their friends and family
  - Politicians – their constituents
- Include a photo of the impacts climate change could have on their hometown, or mention the impacts to this specific group of people
Action point 2: use marketing expertise more widely, and remember that you are communicating with people who have their own personal interests, not with perfectly rational agents
In the discussion afterwards, I spoke to someone who had worked at BuzzFeed. Apparently their method was to generate ~70 titles, narrow these down to the best few, and A/B test those. The difference between a good and a great title is very large. Of course, BuzzFeed are optimising for very different criteria than us, but the same methodology could be applied to optimising an explanation for a correct understanding of the problem, or optimising a tagline like ‘responsible science’ for making a positive impression on synthetic biologists.
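That pipeline – generate many candidates, shortlist, A/B test – only works if you can tell a real difference from noise. A minimal sketch of the final step, using a standard two-proportion z-test on hypothetical click counts (all numbers below are invented):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates,
    e.g. clicks per impression for two candidate titles."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: title A got 120 clicks in 1000 views, title B got 90.
z, p = two_proportion_z(120, 1000, 90, 1000)
```

With these invented numbers the difference is significant at the 5% level; with smaller samples it usually would not be, which is why the shortlisting step before testing matters.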
Many people will feel uncomfortable with using ‘manipulative’ marketing techniques. It’s an important question to discuss whether the adoption of these techniques could lead to a decline in the quality of the scientific system in the longer term. I would emphasize that some of these techniques are not inherently manipulative, and can be used to make communication clearer and generate deeper understanding rather than to mislead. On the other hand, these techniques are already being used for harm in some cases – e.g. the fossil fuel industry marketing climate change scepticism – so there is a case to be made that we need to think equally carefully about our messaging to avoid utterly losing the battle for public support.
Action point 3: collect as much data as possible on the efficacy of different methods of communicating with different groups – whether it’s A/B testing a mass communication, doing before and after surveys for a workshop that was intended to change people’s opinions, or collecting feedback from events.
In a presentation on biodiversity, a study (http://www.nature.com/nclimate/journal/v4/n1/abs/nclimate2059.html) was cited that found that uncertainty about the threshold at which a tipping point would be reached was much more harmful to coordination than uncertainty about the impact.
Action point 4: presenting clear thresholds to stay above/below (even if they’re completely artificial and the underlying system is probably linear) can help motivate action
Action point 5: the literature in this area – game theory, collective action, cognitive biases – probably contains more very useful results
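The mechanism behind that result can be seen in a toy model (my own illustration, not the cited study’s design): when the catastrophe threshold is known exactly, the last contributed unit is pivotal, so meeting the threshold pays; when the threshold is uniformly uncertain over a wide range, each unit buys only a sliver of risk reduction and contributing never pays.

```python
def expected_payoff(total_contribution, unit_cost, loss, t_low, t_high):
    """Expected group payoff when a catastrophe costing `loss` strikes unless
    contributions reach a threshold drawn uniformly from [t_low, t_high]."""
    if total_contribution >= t_high:
        p_catastrophe = 0.0
    elif total_contribution <= t_low:
        p_catastrophe = 1.0
    else:
        p_catastrophe = (t_high - total_contribution) / (t_high - t_low)
    return -unit_cost * total_contribution - p_catastrophe * loss
```

With a known threshold of 100 (t_low = t_high = 100), unit cost 1 and loss 500, meeting the threshold yields −100 versus −500 for doing nothing. With the threshold uniformly uncertain on [0, 1000], contributing 100 yields −100 − 0.9×500 = −550, worse than the −500 for contributing nothing – the incentive to coordinate collapses, which is why an artificially crisp threshold can help.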
The lecture on conservation gave a different perspective on the promises and perils of marketing. After $250 million was spent on the science to determine the cause of the decline in Snake River salmon, the publicity around it and the eventual investment focused exclusively on dams. The investigation had found that the removal of dams would not be sufficient to prevent the extinction of the salmon, but the dams caught public attention in a way that other issues did not. Investment in reaching a scientific consensus can be completely wasted if the message is not adequately communicated.
A lesson learnt from sustainable development policy is that more political power for your cause is not always better. The Wilberforce Society/Future of Sentience paper on rights and representation of future generations identified factors that contributed to the success or failure of mechanisms for representing future people. Mechanisms granted too much political power tended to be dismantled at the next election cycle, whereas less ambitious initiatives endured for longer.
Action point 6: think about the sustainability of political infrastructure, not just efficacy – too much power can make it more likely to be dismantled
Lessons from recent successes in AI
Talking about artificial intelligence safety has gone from something that would get you labelled as a crackpot ~5 years ago, to something that leading AI researchers openly express support for.
Victoria Krakovna gave an overview of the factors she thought had created these successes. She highlighted getting the key players together in a relaxed environment (the Puerto Rico conference), Chatham House rules so people could discuss things they felt they couldn’t say openly, and developing a concrete research agenda that tied the problems to technical research in mainstream AI. Going forward, she recommended continuing to take a ‘portfolio’ approach, with different organisations trying different tactics in the hope that at least one succeeds. She also recommended continuing informal outreach to individual researchers, and integration with AI capability research, such as through papers and technical events at mainstream ML conferences.
The relationship between short-term and long-term issues could be both a blessing and a curse, helping to put AI safety on people’s radar but also causing confusion about what the most important issues are.