Gain of Function Influenza Research – Lecture online

The recording for CSER’s January seminar on gain of function influenza research (a discussion between Professors Marc Lipsitch and Derek Smith) is now online.

In response to the interest we’ve received, we will aim to record and put all of CSER’s seminars online. They will be available the week after each event on our website (on the page for the event) and on our YouTube channel.

The next seminar will be given by Murray Shanahan on “Minds Like Ours: An Approach to AI Risk” on February 20th at 4pm.

CSER Seminar February 20th: Murray Shanahan “Minds Like Ours: An approach to AI risk”

The Centre for the Study of Existential Risk’s second seminar will take place on February 20th, with Professor Murray Shanahan (Imperial College London) speaking on “Minds Like Ours: An approach to AI risk”.

In addition to his numerous academic achievements, Professor Shanahan was on the Scientific Organising Committee of the recent Chatham House Rule conference on the Future of Artificial Intelligence in Puerto Rico (of which CSER was a co-organiser), which resulted in an open letter promoting “robust and beneficial development of AI” that has been signed by AI leaders worldwide. It also resulted in a $10M donation by CSER advisor Elon Musk to fund a global grants programme on AI safety research priorities.

Professor Shanahan has an upcoming book with MIT Press titled The Technological Singularity, and was scientific advisor on the highly praised Alex Garland blockbuster “Ex Machina”, currently in cinemas.

The seminar takes place at the Alison Richard Building in Cambridge at 4pm on Friday 20th February, and is free and open to the public.

Technological Risks in the World Economic Forum’s 2015 Global Risks Report

This year, the World Economic Forum has featured risks from emerging technology in its tenth Global Risks Report. In this lucid 50-page document, specific note is made of the challenges associated with regulating risks that are extreme or unforeseen:

The establishment of new fundamental capabilities, as is happening for example with synthetic biology and artificial intelligence, is especially associated with risks that cannot be fully assessed in the laboratory. Once the genie is out of the bottle, the possibility exists of undesirable applications or effects that could not be anticipated at the time of invention. Some of these risks could be existential – that is, endangering the future of human life… The growing complexity of new technologies, combined with a lack of scientific knowledge about their future evolution and often a lack of transparency, makes them harder for both individuals and regulatory bodies to understand.

Under this category of emerging technological risks, the report features three domains, all of which are of particular interest to CSER: artificial intelligence, synthetic biology, and gene drives. On AI, the authors reinforce the message of CSER advisors Nick Bostrom and Stuart Russell:

These and other challenges to AI progress are by now well known within the field, but a recent survey shows that the most-cited living AI scientists still expect human-level AI to be produced in the latter half of this century, if not sooner, followed (in a few years or decades) by substantially smarter-than-human AI. If they are right, such an advance would likely transform nearly every sector of human activity.

If this technological transition is handled well, it could lead to enormously higher productivity and standards of living. On the other hand, if the transition is mishandled, the consequences could be catastrophic.

Contrary to public perception and Hollywood screenplays, it does not seem likely that advanced AI will suddenly become conscious and malicious. Instead, according to a co-author of the world’s leading AI textbook, Stuart Russell of the University of California, Berkeley, the core problem is one of aligning AI goals with human goals. If smarter-than-human AIs are built with goal specifications that subtly differ from what their inventors intended, it is not clear that it will be possible to stop those AIs from using all available resources to pursue those goals, any more than chimpanzees can stop humans from doing what they want.

On synthetic biology, the report notes that some environmental risks may be substantial, suggesting that there may be a gap in regulating small to medium enterprises and amateurs:

The risk that most concerns analysts, however, is the possibility of a synthesized organism causing harm in nature, whether by error or terror. Living organisms are self-replicating and can be robust and invasive. The terror possibility is especially pertinent because synthetic biology is “small tech” – it does not require large, expensive facilities or easily-tracked resources… The amateur synthetic biology community is very aware of safety issues and pursuing bottom-up options for self-regulation in various ways, such as developing voluntary codes of practice. However, self-regulation has been criticized as inadequate, including by a coalition of civil society groups campaigning for strong oversight mechanisms. Such mechanisms would need to account for the cross-border nature of the technology, and inherent uncertainty over its future direction.

For gene drives, a technology that is still rapidly evolving, further analysis is recommended:

Scientists and regulators need to work together from an early stage to understand the challenges, opportunities and risks associated with gene drives, and agree in advance to a governance regime that would govern research, testing and release. Acting now would allow time for research into areas of uncertainty, public discussion of security and environmental concerns, and the development and testing of safety features. Governance standards or regulatory regimes need to be developed proactively and flexibly to adapt to the fast-moving development of the science.

It is encouraging that extreme technological risks are continuing to receive attention in high-level social, economic and legal analysis.

CSER advisors respond to the edge.org annual question: What do you think about machines that think?

A wide range of CSER’s advisors, including co-founder Martin Rees, responded to edge.org’s annual question earlier this month. Stuart Russell, Max Tegmark, Nick Bostrom, George Church, Alison Gopnik, Murray Shanahan, and Lord Rees offered a range of opinions on the prospects, dangers and possibilities of machine intelligence.

George Church considered the task of applying our current understanding of human rights to artificial intelligences, while Stuart Russell confronted the difficulty of value alignment between possible machine intelligences and humans, suggesting that the extensive existing research into human motivations and values could be drawn on by a system that is also capable of studying humans directly to understand their actions.

Other responses included that of Martin Rees, who writes that we should be looking far further into the future than is customary, suggesting that over a sufficiently long time frame the replacement of biological brains by machines as the world’s dominant intellectual objects should be regarded as inevitable. Max Tegmark responded to a number of arguments commonly deployed by those sceptical of the value of research into AI safety, and Alison Gopnik outlined the dramatic strides that AI researchers must make to create machines with the general cognitive ability of even small children.

Read the full responses at edge.org.

Stuart Russell on AI in the news

Speaking from the World Economic Forum annual meeting in Davos, Professor Stuart Russell gave an interview distinguishing two different ways AI has been in the news recently. The first concerns the effects on jobs and the economy that we might see as intelligent systems become able to do more of the tasks currently done by humans. The second concerns the research question of how we ensure that artificial intelligences have values aligned with those of the human race once AIs become as capable as, or more capable than, humans.

Stuart Russell is Professor of Computer Science at the University of California, Berkeley, co-author of Artificial Intelligence: A Modern Approach, and a member of CSER’s advisory board.

CSER advisor Elon Musk funds $10M research programme on safe and beneficial development of AI

We are delighted that CSER advisor Elon Musk has provided $10M in funding for a research grants programme aimed at the safe and beneficial development of artificial intelligence. The grants programme will be administered by our colleagues at the Future of Life Institute, and will be open to applications globally. A majority of the funding will go towards direct technical AI research, with the remainder going to AI-related research involving other fields such as economics, law, ethics and policy.

The announcement follows Musk’s participation in the “Future of AI: Opportunities and Challenges” conference, organised by the Future of Life Institute with organisational support from CSER, and funded by CSER co-founder Jaan Tallinn.

The conference resulted in a research priorities paper outlining promising approaches to the technical, legal, economic and ethical challenges posed by future advances in artificial intelligence. An open letter supporting this research has been signed by leading AI researchers and interdisciplinary experts worldwide:

Open Letter for Robust AI Research

Over 70 AI scientists and AI safety researchers have signed an open letter calling for more research on robust and beneficial AI. The signatories include Berkeley AI Professor Stuart Russell, AAAI President Tom Dietterich, Microsoft Research Director Eric Horvitz, both co-founders of Vicarious, all three co-founders of Google DeepMind, and a half-dozen other staff from Google including Peter Norvig, Laurent Orseau, and Blaise Aguera y Arcas. AI safety researchers Nick Bostrom, Eliezer Yudkowsky and Luke Muehlhauser have also signed on. From CSER, all three founders signed, as did our advisors Margaret Boden, Stephen Hawking, Elon Musk and Murray Shanahan. So have Max Tegmark and others from the Future of Life Institute.

The statement is as follows:

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

The research priorities include law, ethics and economics research as well as computer science research for robust AI that is aligned with human interests.

CSER Seminar: Marc Lipsitch: Risks and benefits of gain-of-function pathogen research (4pm, January 16th)

16 January 2015, 16:00 – 17:30
SG1, Alison Richard Building

The Centre for the Study of Existential Risk is pleased to announce a monthly seminar series beginning in January 2015.

The January seminar will be given by Professor Marc Lipsitch (Harvard):

“Risks and benefits of gain-of-function experiments in potentially pandemic pathogens. How should we evaluate them, and what alternatives exist?”

Professor Derek Smith, Professor of Infectious Disease Informatics at Cambridge, will give a response. The seminar will be followed by a drinks reception.

The event is free and open to everyone, but online registration is required. Please book your place by clicking on the online registration link at the right of the CRASSH event page (linked).

Abstract: “A growing trend in experimental virology has been the modification of influenza viruses that are antigenically novel to, and virulent in, humans, such that these variant viruses are readily transmissible in mammals, including ferrets, which are thought to be the best animal model for influenza infection. Novel, contagious, virulent viruses are potential pandemic pathogens in that their accidental or malevolent release into the human population could cause a pandemic. This talk will describe the purported benefits of such studies, arguing that these are overstated; estimate the magnitude of the risk they create; argue for the superiority of alternative scientific approaches on both safety and scientific grounds; and propose an ethical framework in which such experiments should be evaluated. The talk will also explore recent developments following the pause in funding for this research announced by the United States Government in October, and steps towards the risk-benefit analysis called for by the announcement.”

Professor Lipsitch is a professor of epidemiology and the Director of the Center for Communicable Disease Dynamics at Harvard University. He is one of the founders of the Cambridge Working Group, which calls for a “quantitative, objective and credible assessment of the risks, potential benefits, and opportunities for risk mitigation” of gain-of-function experiments in potentially pandemic pathogen strains.

For more on the scientific debate, see Gain-of-function experiments: time for a real debate.

For administrative enquiries please contact Michelle Maciejewska.

Stuart Russell argues for a new approach to AI risk

Stuart Russell, Professor of Computer Science at the University of California, Berkeley, author (with Peter Norvig) of Artificial Intelligence: A Modern Approach, and CSER External Advisor, has warned of risks from artificial intelligence and encouraged researchers to rethink the goals of their research.

Writing on edge.org, described by The Guardian as ‘an internet forum for the world’s most brilliant minds’, Russell noted that while it has not been proved that AI will be the end of the world, “there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility.” He notes that many unconvincing arguments have been refuted, but claims that the more substantial arguments proposed by Omohundro, Bostrom and others remain largely unchallenged.

Up to now, improving decision quality has been the mainstream goal of AI research, an end towards which significant progress has been made in recent years. In Russell’s view, AI research has been accelerating rapidly, and senior AI researchers express considerably more optimism about the field’s prospects than was the case even a few years ago; as a result, we should be correspondingly more concerned about the field’s risks.

He dismisses fears that those raising the prospect of AI risk will call for regulation of basic research, an approach he regards as misguided and misdirected given the potential benefits of AI for humanity. The right response, he writes, is to change the goal of the field itself: from building a pure intelligence to building an intelligence that is provably aligned with human values. Just as containment is seen as an intrinsic part of nuclear fusion research, such an approach would reliably and practically limit the risk of future catastrophe.

Read the entire discussion, including contributions from, among others, George Church, Professor of Genetics at Harvard and CSER advisor, and Peter Diamandis, Chairman of the X Prize Foundation, here.

Rees launches Asteroid Day

Yesterday, Martin Rees and 100 other experts announced Asteroid Day, an awareness movement dedicated to learning about asteroids and how to protect our planet. It will take place on the anniversary of the 1908 Siberian Tunguska event, starting on 30th June 2015.

At the launch event, Martin Rees read the 100X Asteroid Declaration:

As scientists and citizens, we strive to solve humanity’s greatest challenges to safeguard our families and quality of life on Earth in the future.

Asteroids impact Earth: such events, without intervention, will cause great harm to our societies, communities and families around the globe. Unlike other natural disasters, we know how to prevent asteroid impacts.

There are a million asteroids in our solar system that have the potential to strike Earth and destroy a city, yet we have discovered less than 10,000 — just one percent — of them. We have the technology to change that situation.

Therefore, we, the undersigned, call for the following action:

Employ available technology to detect and track Near-Earth Asteroids that threaten human populations via governments and private and philanthropic organisations.

A rapid hundred-fold (100x) acceleration of the discovery and tracking of Near-Earth Asteroids to 100,000 per year within the next ten years.

Global adoption of Asteroid Day, heightening awareness of the asteroid hazard and our efforts to prevent impacts, on June 30, 2015.

I declare that I share the concerns of this esteemed community of astronauts, scientists, business leaders, artists and concerned citizens to raise awareness about protecting and preserving life on our planet by preventing future asteroid impacts.

Over 100 leaders from these areas have signed the declaration, including former NASA astronaut Ed Lu, astrophysicist Kip Thorne, astronomer Carolyn Shoemaker, Nobel laureate Harold Kroto, Google’s Peter Norvig, science guy Bill Nye, science presenter Brian Cox, Queen guitarist and astrophysicist Brian May, and over 38 astronauts and cosmonauts.

This is a great group to have advocating for reducing the risk of catastrophe!

Read more at the Asteroid Day website.