Our staff are often available for media interviews. We work with journalists to provide expert opinion and perspective, commentary on breaking news, and participation in discussion and debate for radio, television, and print media.
We understand that journalists often work to tight deadlines, and try to respond promptly to all media inquiries.
- Email: firstname.lastname@example.org
- Phone: +44 (0)1223 766838
Articles and talks by us
- Earth in its final century? A TED talk by Martin Rees, 2008.
- Watching the doomsday clock together. Martin Rees in Times Higher Education, February 9, 2017.
- The World in 2050 and beyond. Martin Rees in the New Statesman, November 26 2014.
- AI — Can we keep it in the box? A brief introduction to the issues in the case of AI, with links to further reading, by Huw Price and Jaan Tallinn.
- Are we risking our existence? A talk by Martin Rees, Jaan Tallinn and Huw Price at the Festival of Dangerous Ideas.
- The Intelligence Stairway. A public lecture by Jaan Tallinn at Sydney Ideas, July 2012 (introduced by Huw Price).
- Cambridge, Cabs and Copenhagen: My Route to Existential Risk. Huw Price in the New York Times.
- Surviving the 21st Century. The launch lecture for the Centre, with talks by Martin Rees, Jaan Tallinn and Huw Price.
- Existential Risk. An interview from Edge covering Jaan Tallinn’s views on artificial intelligence and existential risks.
- The Anthropocene epoch could inaugurate even more marvellous eras of evolution by Martin Rees in the Guardian, August 29 2016.
- Five-part series on the world beyond 2050. Martin Rees in WorldPost.
- How soon will robots take over the world? Martin Rees in the Telegraph, May 23 2015.
- Should We Rage, Rage Against the Dying of the Mites? Huw Price and Martin Rees in WorldPost.
- Astronomer Royal on science, environment and the future. A speech by Martin Rees at the British Science Festival in Newcastle, September 12, 2013.
- At the Vatican, a call to avoid ‘biological extinction’. Environmental Health News, 27 February 2017
- Meet Earth’s Guardians, the real-world X-men and women saving us from existential threats by Richard Benson, Wired UK, 12 February 2017
- Apocalypse, now? The 10 biggest threats facing civilisation, from asteroids to tyrannical leaders by Richard Benson, Wired UK, 12 February 2017
- Open-Minded Conversation May Be Our Best Bet for Survival in the 21st Century. A conversation with Martin Rees by TechEmergence, March 20, 2016
- Astronomer Royal: If we find aliens, they will be machines by Sarah Knapton, The Telegraph, June 6, 2015
- Asteroid Day tries to save life as we know it by Robin McKie, The Guardian, 13 June 2015
- Meet the people out to stop humanity from destroying itself by Kabir Chibber, Quartz, May 11, 2015
- What will happen to the humans when science fiction becomes fact? by Nick Bilton, The Irish Times, May 28, 2015
- Vatican presses politicians on climate change by Roger Harrabin, BBC News, 28 April 2015
- Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world by Aaron Labaree, Salon, October 5, 2014
- Is Artificial Intelligence a Threat? by Angela Chen, the Chronicle of Higher Education, September 11, 2014
- Apocalypse soon: the scientists preparing for the end times by Sophie McBain, New Statesman, 10 August 2014
- Meet the Co-Founder of an Apocalypse Think Tank. Interview with Martin Rees by Scientific American
- Martin Rees on Night Waves. BBC Radio Three, 5 December 2012 (interview begins around 11:50)
- Centre to study technology risks to humans. The Institution of Engineering and Technology, 26 November 2012
- Humanity’s last invention and our uncertain future. Research News, University of Cambridge, 25 November 2012
- Cambridge to study technology’s risk to humans by Sylvia Hui, Associated Press, 25 November 2012
- Mega-risks that could drive us to extinction. New Scientist, 26 November 2012
Articles and talks by others
- Why the future doesn’t need us. A classic and controversial piece by Bill Joy, co-founder of Sun Microsystems.
- Catastrophe: Risk and Response (2005). An important book by Richard Posner — “worth the price of the book simply for Posner’s lively and readable summary of the apocalyptic dystopias that serious scientists judge to be possible” (Washington Post).
- Omens. A profile of the work of Nick Bostrom and his colleagues at the Future of Humanity Institute, Oxford.
- Accelerated modern human–induced species losses: Entering the sixth mass extinction. An article from Science Advances – the window of opportunity to take measures to avert the sixth mass extinction is rapidly closing.
- CRISPR: Science Can’t Solve It. Daniel Sarewitz on the need for an inclusive discussion about the benefits and risks of gene editing, artificial intelligence, and other transformative technologies.
- How Google Plans to Solve Artificial Intelligence by Tom Simonite, MIT Technology Review, March 31, 2016
- Why we should think about the threat of artificial intelligence by Gary Marcus, The New Yorker, October 24, 2013
- Understanding Artificial Intelligence. An article from Brookings’ ‘TechTank’ blog by Mohit Kaushal and Scott Nolan, Brookings, April 14, 2015
- Extreme events – The Tragedy of the Uncommons. CSER external adviser Jonathan Wiener on the psychology and politics of global catastrophic and existential risk
- Life as we know it. CSER Advisor Max Tegmark in Edge Magazine
- Professor Stephen Hawking, Theoretical Physicist – The Theory of Everything. A talk by CSER Advisor Professor Stephen Hawking on Artificial Intelligence at Google Zeitgeist. A transcript of the talk can be found here.
- The Future of Artificial Intelligence. An expert panel discussion from Science Friday featuring CSER Advisor Professor Stuart Russell
Related organisations and resources
- The Future of Humanity Institute
- Leverhulme Centre for the Future of Intelligence
- Global Catastrophic Risk Institute
- Machine Intelligence Research Institute
- The Future of Life Institute
- Bulletin of the Atomic Scientists
- The Biological Weapons Convention
- UK Foresight Programme
- Federation of American Scientists
- Defense Threat Reduction Agency – US Strategic Command
- Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter