The field of artificial intelligence is advancing rapidly on a range of fronts, with recent years seeing dramatic improvements in applications such as image and speech recognition, autonomous robotics, and game playing. The bulk of this technology's transformative impacts lie in the future, and while the field promises tremendous benefits, a growing body of experts within and outside AI has raised concerns that future developments may represent a major technological risk.
Most current AI applications are ‘narrow’ applications – algorithms and approaches specifically designed to tackle a well-specified problem in a single domain. Such approaches cannot adapt to new or broader challenges without significant redesign. However, a long-held goal in the field has been the development of artificial intelligence that can learn and adapt to a very broad range of challenges and reach human-level ability (or greater).
As AI algorithms become more powerful and more general – able to function in a wider variety of ways across different environments – both the potential benefits and the potential for harm will increase rapidly. With the power, autonomy, and generality of AI expected to increase in coming years and decades, forward planning and research to avoid unexpected catastrophic consequences are essential.
Our end goal is both to significantly advance the state of research on AI safety protocols and risk, and to inform industry leaders and policy makers on appropriate strategies and regulations to allow the benefits of AI advances to be safely realised.
One area of concern is the level of uncertainty associated with both the speed of development and the potential risks of artificial intelligence. This uncertainty stems from several factors, including conceptual barriers in AI research, a perceived lack of communication between experts in different subfields of AI, and a paucity of research on appropriate safety and control mechanisms for AI development.
CSER’s research team will engage with each of these factors by drawing on the expertise of leaders in the relevant subfields of artificial intelligence. We will embed researchers with top AI development teams both in academia and industry, and will organise workshops and conferences to focus the attention of the field as a whole on the challenge of safe development of AI.