The hosting and recording of these lectures is made possible by the generous support of our donors, especially the Blavatnik Foundation.
Watch below, or for the latest videos and lectures subscribe to our YouTube channel.
Claire Craig – Extreme risk management in the policy environment
Rowan Douglas – Opening Session Part 2
Jo Husbands – Lessons from Efforts to Mitigate the Risks of “Dual Use” Research
Sam Weiss Evans – Words Of Caution On Making Objects Of Security Concern
Zabta K. Shinwari – Young Researchers & Responsible Conduct of Science: Successes and failures
Professor Neal Katyal – Technology: Transparency vs Privacy
The Centre for the Study of Existential Risk’s November 2016 Lecture, with Professor Neal Katyal.
Professor Katyal, one of the top US Supreme Court advocates as well as the Paul Saunders Professor of National Security Law at Georgetown University, contrasts European and American approaches to data privacy, digital security, and transparency, with an eye on recent groundbreaking cases in the United States.
Colin Melvin – Who Actually Controls Public Companies And In Whose Interest Are They Run?
The Centre for the Study of Existential Risk’s October 2016 Lecture, with Colin Melvin.
In this lecture, delivered on 24 October 2016, Colin Melvin, Global Head of Stewardship at Hermes Investment Management (the largest shareholder engagement company in the world), discusses the role of institutional investors — pension funds, university endowments (such as the Cambridge University Endowment Fund), foundations, charities, and others — in influencing company behaviour regarding the environment, human rights, and other issues of societal concern. The particular focus of his talk is the transportation sector, which accounts for 26% of global CO2 emissions and 60% of global oil demand. Colin’s keynote speech was presented at a workshop hosted by the Cambridge Centre for the Study of Existential Risk, the Global Shapers Cambridge Hub, Positive+Investment, and Positive Investment Cambridge.
Dr David Denkenberger – Feeding Everyone No Matter What
The Centre for the Study of Existential Risk’s September 2016 Lecture, with Dr David Denkenberger.
A large asteroid or comet impact, a super volcanic eruption, or full-scale nuclear war could cause a ~100% global agricultural shortfall. Together these have a probability of ~10% this century. We have proposed solutions that could feed everyone without the sun, such as growing mushrooms on dead trees. Abrupt climate change, coincident extreme weather, a volcanic eruption like the one that caused the “year without a summer” in 1816, regional nuclear war, complete loss of bees, or a medium-sized comet or asteroid impact could cause a ~10% global agricultural shortfall. Together these have a probability of ~80% this century.
We have proposed solutions that would mitigate the food price rise, such as relocating animals to the farm fields so they can consume agricultural residues. A number of risks could cause widespread electrical failure, including a series of high-altitude electromagnetic pulses (HEMPs) caused by nuclear weapons, an extreme solar storm, and a coordinated cyber attack. Since modern industry depends on electricity, it is likely there would be a collapse of the functioning of industry and machines in these scenarios. We have proposed solutions for food (e.g. burning wood from landfills for fertiliser) and nonfood (such as retrofitting ships to be wind-powered) requirements of everyone. These alternate food solutions require only low-cost preparation research and planning (unlike storing food), and therefore are cost-effective ways of saving expected lives and reducing the chance of loss of civilisation, from which humanity may not recover.
Dr David Denkenberger is an assistant professor of architectural engineering at Tennessee State University. He is also an associate at the Global Catastrophic Risk Institute. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, and is a Penn State distinguished alumnus. He has authored or co-authored over 50 publications, including the book Feeding Everyone No Matter What: Managing Food Security after Global Catastrophe.
Professor Hilary Greaves – Extinction Risk and Population Ethics
The Centre for the Study of Existential Risk’s June 2016 Lecture, with Professor Hilary Greaves.
How important is it that we reduce the risk of human extinction? This depends sensitively on fundamental questions in moral theory. On the one hand, if humanity goes extinct prematurely, vast amounts of well-being will be lost – all the well-being that would have been contained in the future lives that are prevented, by the premature extinction event, from coming into existence. On the other hand, if humanity goes extinct prematurely, then (aside from the suffering involved in the process of extinction itself) the extinction event seems to be in one clear sense victimless – precisely because of the extinction, there do not exist any persons who lose the well-being in question. The first thought suggests that reducing the risk of extinction is about the most important thing we could do; the second suggests it is a matter of relative indifference.
I will argue for the first thought over the second, via arguing that the moral theory that would be required to justify the second, however initially intuitive, is not in the end coherent. A further question, initially apparently unrelated, is what the optimal size is for the human population at any given time; many in the public sphere are increasingly concerned about overpopulation, for reasons related to resource scarcity, climate change, economic growth or others. I will first suggest that if the arguments in the first part of my talk are correct, these make it much harder to argue that population size ought to be reduced via any of the usual routes. Second, however, I will sketch one new (and tentative) argument for population-size reduction that is a *result* of the thesis that extinction risk is overwhelmingly important.
Professor Paul Ehrlich – Population, Environment, Extinction and Ethics
The Centre for the Study of Existential Risk’s May 2016 Lecture, with Professor Paul Ehrlich.
Prof Ehrlich received early inspiration to study ecology when, in his high school years, he read William Vogt’s Road to Survival, an early study of the problem of rapid population growth and food production. He graduated in zoology from the University of Pennsylvania and took MA and PhD degrees from the University of Kansas. He became a full professor of biology at Stanford University and Bing Professor of Population Studies from 1976. Though much of his research was done in the field of entomology, Ehrlich’s overriding concern became unchecked population growth. He argued that humanity must treat the Earth as a spaceship with limited resources and a heavily burdened life-support system; otherwise, he feared, “mankind will breed itself into oblivion.” He published a distillation of his many articles and lectures on the subject in The Population Bomb (1968) and wrote hundreds of papers and articles on the subject.
Professors Charles Kennel and Stephen Briggs – Planetary Vital Signs, Planetary Decisions, Planetary Intelligence
The Centre for the Study of Existential Risk’s February 2016 Lecture, with Professors Charles Kennel and Stephen Briggs.
Professor Charles Kennel highlights that climate change cannot be tracked by global temperature alone: assessing a broader set of planetary vital signs plays a crucial role as well. Relying too heavily on any single indicator risks over-focusing on it at the expense of the wider picture.
Are there possibilities to use the tools at hand – such as observations from space and ground networks; demographic, economic and societal measures; big data statistical techniques; and numerical models – to inform politicians, managers, and the public of the evolving risks of climate change at global, regional, and local scales? Professor Kennel is joined by Professor Stephen Briggs, who gives an analysis of planetary vital signs.
Kay Firth-Butterfield – Lucid AI’s Ethics Advisory Panel
The Centre for the Study of Existential Risk’s January 2016 Lecture, with Kay Firth-Butterfield.
Lucid is an AI company with an Ethics Advisory Panel led by Kay Firth-Butterfield. She will talk about the Panel’s composition and mandate and why the company thinks it is important. She will also discuss how the Panel’s work ties in with the aims of the Future of Intelligence/CSER, and give a very brief overview of how Lucid’s AI differs from machine learning.
Kay Firth-Butterfield has worked as a barrister, mediator, arbitrator, professor, and judge in the United Kingdom. She is a humanitarian with a strong sense of social justice. She has advanced degrees in Law and International Relations, which focused on the ramifications of pervasive artificial intelligence. After moving to the US she taught at university level before becoming the Chief Officer of the Lucid Ethics Advisory Panel, which she envisioned with the CEO. Additionally, she teaches a course at the University of Texas Law School on law and policy regarding AI and other emerging technologies.
Professor Jane Heal – Pushing the Limits
The Centre for the Study of Existential Risk’s November 2015 Lecture, with Professor Jane Heal.
What do the theory of evolution, intellectual history, and philosophy tell us about what we human beings are like? And what resources – intellectual, emotional, moral – can we muster for dealing with the existential risks of our current situation? The talk will offer a speculative overview of these topics, which sets the scene for the challenging issues CSER faces.
Professor Jane Heal is a world-leading expert in the philosophy of mind. Professor Heal studied for her first degree in Cambridge, reading History for two years and then Philosophy (or “Moral Sciences” as it was called in those days) for another two years. She also took her PhD at Cambridge, working on problems in the philosophy of language. After two years of postdoctoral study in the US (at Princeton and Berkeley), she was appointed to a lectureship at the University of Newcastle upon Tyne. Having taught there for several years, she moved back to Cambridge, where she is now a Fellow of St John’s College. She was elected a Fellow of the British Academy in 1997.
Professor Margaret Boden – Human-level AI: Is It Looming or Illusory?
The Centre for the Study of Existential Risk’s June 2015 Lecture, with Professor Margaret Boden.
Human-level (“general”) AI is more difficult to achieve than most people think. One key obstacle is relevance, a conceptual version of the frame problem. Another is lack of the semantic web. Yet another is the difficulty of computer vision. So artificial general intelligence (AGI) isn’t on the horizon. Possibly, it may never be achieved. No AGI means no Singularity. Even so, there’s already plenty to worry about—and future AI advances will add more. Areas of concern include unemployment, computer companions, and autonomous robots (some, military). Worries about the (illusory) Singularity have had the good effect of waking up the AI community (and others) to these dangers. At last, they are being taken seriously.
Professor Margaret Boden is a world-leading academic in the study of intelligence, both artificial and otherwise. She is Research Professor of Cognitive Science in the Department of Informatics at the University of Sussex, where her work embraces the fields of artificial intelligence, psychology, philosophy, and cognitive and computer science. She was the founding Dean of Sussex University’s School of Cognitive and Computing Sciences, a pioneering centre for research into intelligence and the mechanisms underlying it — in humans, other animals, or machines. The School’s teaching and research involves an unusual combination of the humanities, science, and technology.
Professor Boden has also been an important participant in the recent international discussions over the long-term impacts of AI. She was a member of the AAAI’s 2008–09 Presidential Panel on long-term AI futures (http://www.aaai.org/Organization/pres…), and also took part in the recent Puerto Rico conference on the Future of AI, co-organised by CSER (http://futureoflife.org/misc/ai_confe…). She is therefore uniquely well-placed to discuss near- and long-term prospects in AI.
Professor Stuart Russell – The Long-Term Future of (Artificial) Intelligence
The Centre for the Study of Existential Risk’s May 2015 Lecture, with Professor Stuart Russell.
The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.
Stuart Russell is one of the leading figures in modern artificial intelligence. He is a professor of computer science and founder of the Center for Intelligent Systems at the University of California, Berkeley. He is author of the textbook ‘Artificial Intelligence: A Modern Approach’, widely regarded as one of the standard textbooks in the field. Russell is on the Scientific Advisory Board for the Future of Life Institute and the Advisory Board of the Centre for the Study of Existential Risk.
Dr Toby Ord – Will We Cause Our Own Extinction? Natural versus Anthropogenic Extinction Risks
The Centre for the Study of Existential Risk’s April 2015 Lecture, with Dr Toby Ord.
How will humanity go extinct? Is it more likely to be from natural causes such as an asteroid impact, or anthropogenic causes such as a nuclear war? Using the fossil record, we can place a rough upper bound on the probability of human extinction from natural causes: all natural causes put together have less than a 1% chance of causing human extinction each century, and probably less than 0.1%. In contrast, it is very difficult to put upper or lower bounds on the chance of extinction from anthropogenic causes. In this talk, Dr Toby Ord advances an argument that anthropogenic causes currently produce ten or more times as much extinction risk as natural causes, and shows how this suggests that we should prioritise the reduction of anthropogenic extinction risks over natural ones.
Toby Ord is a Research Fellow at the Future of Humanity Institute, Oxford University & Oxford Martin School. He works on theoretical and practical questions concerning population ethics, global priorities, existential risk, and new technologies, and recently contributed a report on Managing Existential Risk from Emerging Technologies to the Chief Scientific Advisor’s annual report for the UK government. Dr Ord’s concern for current and future generations also led him to found the organisation Giving What We Can.
Professor Murray Shanahan – Minds Like Ours: An Approach To AI Risk
The Centre for the Study of Existential Risk’s February 2015 Lecture, with Professor Murray Shanahan.
Writers who speculate about the future of artificial intelligence (AI) and its attendant risks often caution against anthropomorphism, the tendency to ascribe human-like characteristics to something non-human. An AI that is engineered from first principles will attain its goals in ways that would be hard to predict, and therefore hard to control, especially if it is able to modify and improve on its own design.
However, this is not the only route to human-level AI. An alternative is to deliberately set out to make the AI not only human-level but also human-like. The most obvious way to do this is to base the architecture of the AI on that of the human brain. But this path has its own difficulties, many pertaining to the issue of consciousness. Do we really want to create an artefact that is not only capable of empathy, but also capable of suffering?
Professor Marc Lipsitch – Risks and Benefits of Gain-of-Function Experiments in Potentially Pandemic Pathogens
The Centre for the Study of Existential Risk’s January 2015 Lecture, with Professor Marc Lipsitch and Professor Derek Smith.
A growing trend in experimental virology has been the modification of influenza viruses that are antigenically novel to, and virulent in, humans, such that these variant viruses are readily transmissible in mammals, including ferrets, which are thought to be the best animal model for influenza infection. Novel, contagious, virulent viruses are potential pandemic pathogens, in that their accidental or malevolent release into the human population could cause a pandemic.
Professor Marc Lipsitch (Harvard) describes the purported benefits of such studies, arguing that these are overstated; estimates the magnitude of the risk they create; argues for the superiority of alternative scientific approaches on both safety and scientific grounds; and proposes an ethical framework in which such experiments should be evaluated. The talk also explores recent developments following the pause in funding for this research announced by the United States Government in October, and steps towards the risk–benefit analysis called for by the announcement.
Professor Lipsitch is a professor of epidemiology and the Director of the Centre for Communicable Disease Dynamics at Harvard University. He is one of the founders of the Cambridge Working Group, which calls for a “quantitative, objective and credible assessment of the risks, potential benefits, and opportunities for risk mitigation” of gain-of-function experiments in potentially pandemic pathogen strains.
A response is given by Professor Derek Smith, Professor of Infectious Disease Informatics at Cambridge University.