A distraction or an essential discussion? Confronting extreme environmental risks.

An expert panel will explore different perspectives on risk in the face of uncertainties, unknowns, and the possibility of extreme outcomes. This event is co-hosted by the Cambridge Forum for Sustainability and the Environment (CFSE) and the Centre for the Study of Existential Risk (CSER). Click here for more information.

Monday 7 March: 7:30pm – 8:30pm

Mill Lane Lecture Rooms, 8 Mill Lane, CB2 1RW

Blavatnik Public Lecture Series – Prof. Charles Kennel and Prof. Stephen Briggs

Date: 26 February

Time: 16:00-18:00

Location: Seminar room, 1st floor, David Attenborough Building, Cambridge.

Lecture Title: Planetary Vital Signs, Planetary Decisions, Planetary Intelligence.

Book your ticket

Abstract:

Doesn’t the world need to look beyond global temperature to a set of planetary vital signs? When all indicators of change are fragile, you should not rely on just one; you risk over-focusing policy on it. Instead, you look at a number of different indicators and ask whether they all point in the same general direction. You look at the balance of evidence.

A coalition of scientists and policy makers should start work at once: some vital signs should be ready when the Paris Agreement enters into force in 2020, or it will be hard to infuse any into policy processes later.

But vital signs are only the beginning. They are not indicators of risk to the things people care about. And the world needs to learn how to use the vast knowledge we will be acquiring about climate change and its impacts.

Is it possible to use the tools at hand (observations from space and ground networks; demographic, economic and societal measures; big data statistical techniques; and numerical models) to inform politicians, managers, and the public of the evolving risks of climate change at global, regional, and local scales?

Should we not think in advance of an always-on social and information network that provides decision-ready knowledge to those who hold the responsibility to act, wherever they are, at times of their choosing? Shouldn’t we prepare the social infrastructure (policies, governance, institutions, financing) needed to knit climate knowledge and action together?

Professor Kennel will be joined by Professor Stephen Briggs, who will also speak on planetary vital signs.

About the speakers:

Charles F. Kennel is Distinguished Professor, Vice-Chancellor, and Director emeritus at the Scripps Institution of Oceanography, University of California, San Diego. He was educated in astronomy and astrophysics at Harvard and Princeton. He served as UCLA’s Executive Vice Chancellor, its chief academic officer, from 1996 to 1998. From 1994 to 1996, Kennel was Associate Administrator at NASA and Director of Mission to Planet Earth, a global Earth science satellite program. His experiences at NASA led him into Earth and climate science, and he became the ninth Director and Dean of the Scripps Institution of Oceanography and Vice Chancellor of Marine Sciences at the University of California, San Diego, serving from 1998 to 2006.

Stephen Briggs is currently senior advisor to the European Space Agency (ESA) and chair of the UN Global Climate Observing System. He headed ESA’s Department of Earth Observation (EO) Science, Applications & Future Technologies at ESRIN (the European Space Research Institute). Before joining ESA in 2000, he was Director of Earth Observation at the British National Space Centre and Head of Earth Observation at NERC, UK (1994-1999); Head of the Remote Sensing Applications Development Unit, NERC/BNSC (1986-1994); Senior Scientist at NERC Thematic Information Systems (1983-1986); and Lecturer in the Department of Physics, Queen Mary College London (1982-1983). He is also a Visiting Professor in the Department of Meteorology, University of Reading.

CSER events this week – all welcome!

There are two great CSER-related events this week in Cambridge.
On Friday, Kay Firth-Butterfield, who leads Lucid AI’s ethical advisory panel, will speak about the safe and beneficial development of AI, and its relevance to global challenges, at CSER’s public lecture at 4pm in the Winstanley Lecture Theatre, Trinity College. It is a great opportunity to get an industry perspective on “AI for the good of the many, not the few”. Attendance is free, but please register.

Bio: Kay Firth-Butterfield has worked as a barrister, mediator, arbitrator, professor and judge in the United Kingdom. She is a humanitarian with a strong sense of social justice. She holds advanced degrees in Law and International Relations, with a focus on the ramifications of pervasive artificial intelligence. After moving to the US, she taught at university level before becoming Chief Officer of the Lucid Ethics Advisory Panel, which she envisioned with the company’s CEO and is now in the process of creating. She also teaches a course at the University of Texas Law School on law and policy regarding AI and other emerging technologies.

Book tickets here.

On Sunday, Kay, Dr Fumiya Iida and Seán Ó hÉigeartaigh will be speaking on challenges and policy related to long-term AI as part of the Wilberforce Society’s excellent conference on AI and automation.
http://twsconference.wix.com/tws-conference-2016

Please spread the word!


Leverhulme Centre for the Future of Intelligence

CSER is delighted to announce that a new centre on the future of artificial intelligence will be established thanks to the generosity of the Leverhulme Trust. The Centre proposal was developed at CSER and CRASSH, but it will be a stand-alone centre, albeit one collaborating extensively with CSER and with the Strategic AI Research Centre (an Oxford-Cambridge collaboration led by Nick Bostrom and Seán Ó hÉigeartaigh, recently funded by the Future of Life Institute’s AI safety grants program).

PRESS RELEASE:

Human-level intelligence is familiar in biological “hardware” – it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be “the biggest event in human history”. Professor Stephen Hawking agrees, saying that “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”

Now, thanks to an unprecedented £10 million grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term. The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: “Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad”.

The Centre is a response to the Leverhulme Trust’s call for “bold, disruptive thinking, capable of creating a step-change in our understanding”. The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University’s Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity’s future including climate change, disease, warfare and technological revolutions. Dr Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

The Leverhulme Centre for the Future of Intelligence spans institutions as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, “a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH’s vision and expertise.”

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John’s College, Cambridge, said: “The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks—from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications.”

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: “With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre.”

Cambridge University’s press release

The Future of Biotech Enterprise: Exponential Opportunities and Existential Risks

On Wednesday 2nd December, CSER (Managing Extreme Technological Risks), Cambridge University Entrepreneurs and the Masters in Bioscience Enterprise are partnering to host “The Future of Biotech Enterprise: Exponential Opportunities and Existential Risks”.

Speakers include CSER adviser Prof Chris Lowe, biotechnology investor Dmitry Kaminski and Prof Derek Smith, who spoke on gain-of-function influenza research at a CSER lecture earlier this year.

“Bioscience technologies have the power to build or destroy a world of abundance. Leveraging entrepreneurial opportunities whilst avoiding catastrophic risk is a balancing act with potentially fatal consequences.”
Attendance is free and open to all; please register below.
https://www.eventbrite.com/e/the-future-of-biotech-enterprise-exponential-opportunities-and-existential-risks-tickets-19638230476

Venue: The Queen’s Lecture Theatre, Emmanuel College, Cambridge
Date: Wednesday 2nd December
Time: 15:30-17:00

CSER Public lecture: Jane Heal on Pushing the Limits, November 20th

The next CSER public lecture will take place on Friday November 20th at 5pm, and will be given by Professor Jane Heal (Philosophy, Cambridge).

What do the theory of evolution, intellectual history and philosophy tell us about what we human beings are like? And what resources – intellectual, emotional, moral – can we muster for dealing with the existential risks of our current situation? The talk will offer a speculative overview of these topics, setting the scene for the challenging issues CSER faces.

http://cser.org/event/pushing-the-limits-public-lecture-with-professor-jane-heal/

Tickets available here (free):
https://www.eventbrite.co.uk/e/cser-seminar-series-public-lecture-with-professor-jane-heal-tickets-19207983596

Four new positions at the Centre for the Study of Existential Risk

The Centre for the Study of Existential Risk is delighted to announce four new postdoctoral positions for the subprojects below, to begin in January 2016 or as soon as possible afterwards. The research associates will join a growing team of researchers developing a general methodology for the management of extreme technological risk.

Evaluation of extreme technological risk will examine issues such as: the use and limitations of approaches such as cost-benefit analysis when evaluating extreme technological risk; the importance of mitigating extreme technological risk compared to other global priorities; issues in population ethics as they relate to future generations; challenges associated with evaluating small probabilities of large payoffs; and challenges associated with moral and evaluative uncertainty as they relate to the long-term future of humanity.
Relevant disciplines include philosophy and economics, although suitable candidates outside these fields are welcomed.
Evaluation of extreme technological risk

Extreme risk and the culture of science will explore the hypothesis that the culture of science is in some ways ill-adapted to successful long-term management of extreme technological risk, and investigate the option of ‘tweaking’ scientific practice so as to improve its suitability for this special task. It will examine topics including inductive risk, the use and limitations of the precautionary principle, and the case for scientific pluralism and ‘breakout thinking’ where extreme technological risk is concerned. Relevant disciplines include philosophy of science and science and technology studies, although suitable candidates outside these fields are welcomed.
Extreme risk and the culture of science

Responsible innovation and extreme technological risk asks what can be done to encourage risk-awareness and societal responsibility, without discouraging innovation, within the communities developing future technologies with transformative potential. What can be learned from historical examples of technology governance and culture-development? What are the roles of different forms of regulation in the development of transformative technologies with risk potential? Relevant disciplines include science and technology studies, geography, sociology, governance, philosophy of science, plus relevant technological fields (e.g., AI, biotechnology, geoengineering), although suitable candidates outside these fields are welcomed.
Responsible innovation and extreme technological risk

We are also seeking to appoint an academic project manager, who will play a central role in developing CSER into a world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and administrative responsibilities. The Academic Project Manager will co-ordinate and develop CSER’s projects and the Centre’s overall profile, and build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide. This is a unique opportunity to play a formative research development role in the establishment of a world-class centre.
CSER Academic Project Manager

Candidates will normally have a PhD in a relevant field or an equivalent level of experience and accomplishment (for example, in a policy, industry, or think tank setting). Application Deadline: Midday (12:00) on November 12th 2015.

The Vulnerability of Man

CSER’s Jaan Tallinn, Professor Sir John Beddington (Senior Advisor, Oxford Martin School, and the UK Government’s former Chief Scientific Adviser) and Sir Crispin Tickell (former diplomat and advisor to successive UK Prime Ministers, widely regarded as a leading authority on climate change and environmental issues) speak to Vikas Shah at Thought Economics on existential risk and the vulnerability of our species.

http://thoughteconomics.com/the-vulnerability-of-man/

Researchers Urge UN to Ban Autonomous Weapons

Over 1000 researchers working in the field of Artificial Intelligence and Robotics have signed an open letter to the United Nations urging that the development and use of autonomous weapons be banned.

Presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, the letter includes signatures from CSER’s co-founders Jaan Tallinn and Huw Price as well as CSER advisors Stephen Hawking, Elon Musk and Stuart Russell.

It states that whilst “AI has great potential to benefit humanity in many ways”, “starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control”.

The complete open letter and list of signatories can be read here.