Artificial Intelligence

A transformative general-purpose technology

  • The field of artificial intelligence is advancing rapidly on a range of fronts. Recent years have seen dramatic improvements in AI applications such as image and speech recognition, autonomous robotics, and game playing; these improvements have in turn been driven by advances in areas such as neural networks (deep learning techniques), search (e.g. Monte Carlo tree search and metareasoning), and the scaling of existing techniques to modern computers and clusters.
  • Despite the major scientific and economic impacts that AI and machine learning are already having, the bulk of this technology’s transformative impact lies in the future. While the field promises tremendous benefits, a growing body of experts within and outside AI has raised concerns that future developments may represent a major technological risk.

“Narrow” versus “general” AI

  • Most applications of artificial intelligence that have had an impact on the world to date have been examples of what is sometimes called “narrow” AI: algorithms and approaches specifically designed to tackle a well-specified problem in a single domain. Such approaches cannot adapt to new or broader challenges without significant redesign.
  • However, a long-held goal in the field has been the development of human-level general problem-solving ability: artificial intelligence that can learn and adapt to a very broad range of challenges, matching or exceeding human performance on most of them.
  • While this has yet to be achieved, many researchers within the field predict that human-level general intelligence may be achievable in the foreseeable future. Among experts, predicted timelines range from as little as fifteen years to as long as three hundred, with the bulk of predictions falling within the next fifty years.

Risk from artificial intelligence

  • As AI algorithms become both more powerful and more general – able to function in a wider variety of ways in different environments – their potential benefits and their potential for harm will increase rapidly. Even very simple algorithms, such as those implicated in the 2010 financial flash crash, demonstrate how difficult it is to design goals and controls for AI that prevent unexpected catastrophic behaviours and interactions; a toy sketch of this kind of runaway feedback follows this list.
  • With the level of power, autonomy, and generality of AI expected to increase in coming years and decades, forward planning and research to avoid unexpected catastrophic consequences are essential.
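
A purely illustrative sketch of the feedback loop described above, written in Python. The function name, parameters, and numbers are hypothetical; this is not a model of the actual 2010 flash crash or of any real trading system, only a toy showing how a single, individually sensible-looking automated rule can interact with its own effects to produce a runaway decline.

    # Toy feedback-loop sketch (hypothetical parameters, not a real market model):
    # an automated "sell on a drop" rule whose own selling pushes the price down
    # further, so the trigger keeps re-firing and a small shock cascades.

    def simulate_cascade(start_price=100.0, drop_trigger=0.02, impact=0.01, steps=50):
        """Simulate a 'sell on a drop' rule whose own selling deepens the drop."""
        price = start_price
        peak = start_price
        history = [price]
        price *= 0.97  # a small external shock starts the decline
        history.append(price)
        for _ in range(steps):
            if price < peak * (1 - drop_trigger):
                # The rule fires: its selling pressure lowers the price again,
                # so the trigger condition stays satisfied on the next step.
                price *= 1 - impact
            history.append(price)
        return history

    if __name__ == "__main__":
        prices = simulate_cascade()
        print(f"start {prices[0]:.2f}, after shock {prices[1]:.2f}, final {prices[-1]:.2f}")

The point is not the specific numbers but the structure: the rule’s goal is locally reasonable, yet nothing in it accounts for the feedback created by its own actions, and it is exactly this kind of unexpected interaction that research on safe goals and controls aims to anticipate.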

Focusing the field around safe development

  • One area of concern is the level of uncertainty surrounding both the speed of development and the potential risks of artificial intelligence. This uncertainty stems from several factors, including conceptual barriers in AI research, a perceived lack of communication between experts in different subfields of AI, and a paucity of research on appropriate safety and control mechanisms for AI development.
  • CSER’s research team will engage with each of these factors by drawing on the expertise of leaders in the relevant subfields of artificial intelligence. We will embed researchers with top AI development teams in both academia and industry, and will organise workshops and conferences to focus the attention of the field as a whole on the challenge of the safe development of AI.
  • Our end goal is both to significantly advance the state of research on AI safety protocols and risk, and to inform industry leaders and policymakers on appropriate strategies and regulations to allow the benefits of AI advances to be safely realised.
  • Our research will be guided by leaders in computer science, AI, and technology risk: these include Stuart Russell (Berkeley; a recognised world leader in artificial intelligence), Nick Bostrom (Oxford), Murray Shanahan (Imperial), Margaret Boden (Sussex), David Chalmers (NYU), Sean Holden (Cambridge) and Dana Scott (Carnegie Mellon).