Leverhulme Centre for the Future of Intelligence

CSER is delighted to announce that a new centre on the future of artificial intelligence will be established thanks to the generosity of the Leverhulme Trust. The Centre proposal was developed at CSER and CRASSH, but the Centre will be a stand-alone institution, albeit one collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration led by Nick Bostrom and Seán Ó hÉigeartaigh, recently funded by the Future of Life Institute’s AI safety grants program).


Human-level intelligence is familiar in biological “hardware” – it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term. The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions. Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

Cambridge University’s press release

The Future of Biotech Enterprise: Exponential Opportunities and Existential Risks

On Wednesday 2nd December, CSER (Managing Extreme Technological Risks), Cambridge University Entrepreneurs and the Master’s in Bioscience Enterprise are partnering to host “The Future of Biotech Enterprise: Exponential Opportunities and Existential Risks”.

Speakers include CSER adviser Prof Chris Lowe, biotechnology investor Dmitry Kaminski and Prof Derek Smith, who spoke on gain-of-function influenza research at a CSER lecture earlier this year.

“Bioscience technologies have the power to build or destroy a world of abundance. Leveraging entrepreneurial opportunities whilst avoiding catastrophic risk is a balancing act with potentially fatal consequences.”
Attendance is free and open to all; please register below.

Venue: The Queen’s Lecture Theatre, Emmanuel College, Cambridge
Date: Wednesday 2nd December
Time: 15:30-17:00

Four new positions at the Centre for the Study of Existential Risk

The Centre for the Study of Existential Risk is delighted to announce four new postdoctoral positions for the subprojects below, to begin in January 2016 or as soon as possible afterwards. The research associates will join a growing team of researchers developing a general methodology for the management of extreme technological risk.

Evaluation of extreme technological risk will examine issues such as: the use and limitations of approaches such as cost-benefit analysis when evaluating extreme technological risk; the importance of mitigating extreme technological risk compared to other global priorities; issues in population ethics as they relate to future generations; challenges associated with evaluating small probabilities of large payoffs; and challenges associated with moral and evaluative uncertainty as they relate to the long-term future of humanity. Relevant disciplines include philosophy and economics, although suitable candidates outside these fields are welcomed.

Extreme risk and the culture of science will explore the hypothesis that the culture of science is in some ways ill-adapted to successful long-term management of extreme technological risk, and investigate the option of ‘tweaking’ scientific practice, so as to improve its suitability for this special task. It will examine topics including inductive risk, use and limitations of the precautionary principle, and the case for scientific pluralism and ‘breakout thinking’ where extreme technological risk is concerned. Relevant disciplines include philosophy of science and science and technology studies, although suitable candidates outside these fields are welcomed.

Responsible innovation and extreme technological risk asks what can be done to encourage risk-awareness and societal responsibility, without discouraging innovation, within the communities developing future technologies with transformative potential. What can be learned from historical examples of technology governance and culture-development? What are the roles of different forms of regulation in the development of transformative technologies with risk potential? Relevant disciplines include science and technology studies, geography, sociology, governance, philosophy of science, plus relevant technological fields (e.g., AI, biotechnology, geoengineering), although suitable candidates outside these fields are welcomed.

We are also seeking to appoint an academic project manager, who will play a central role in developing CSER into a world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and administrative responsibilities. The Academic Project Manager will co-ordinate and develop CSER’s projects and the Centre’s overall profile, and build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide. This is a unique opportunity to play a formative research development role in the establishment of a world-class centre.

Candidates will normally have a PhD in a relevant field or an equivalent level of experience and accomplishment (for example, in a policy, industry, or think tank setting). Application Deadline: Midday (12:00) on November 12th 2015.

The Vulnerability of Man

CSER’s Jaan Tallinn, Professor Sir John Beddington (Senior Advisor, Oxford Martin School and the UK Government’s former Chief Scientific Adviser) and Sir Crispin Tickell (former diplomat and advisor to successive UK Prime Ministers, widely regarded as a leading authority on climate change and environmental issues) speak to Vikas Shah at Thought Economics on existential risk and the vulnerability of our species.


Researchers Urge UN to Ban Autonomous Weapons

Over 1000 researchers working in the field of Artificial Intelligence and Robotics have signed an open letter to the United Nations urging that the development and use of autonomous weapons be banned.

Presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, the letter includes signatures from CSER’s co-founders Jaan Tallinn and Huw Price as well as CSER advisors Stephen Hawking, Elon Musk and Stuart Russell.

It states that whilst “AI has great potential to benefit humanity in many ways”, “starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control”.

The complete open letter and list of signatories can be read here.