Bill Gates and Elon Musk on the "Huge Challenge" of Artificial Intelligence

16 April 2015

In a recent interview with Baidu CEO Robin Li, both Bill Gates and Elon Musk spoke of the importance of research into ensuring that artificial intelligence (AI) would remain safe if it advanced to a smarter-than-human level.

Explaining the potential risk that superintelligent AI would pose, Elon Musk suggested:

[An] analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place. Because it may be good, or it may be bad. And it could be catastrophically bad if there could be the equivalent to a nuclear meltdown. So you really want to emphasize safety.

Bill Gates added that his view of the seriousness of the risk is no different, and said he would “highly recommend” that people read Nick Bostrom’s book Superintelligence.

A video of their discussion of AI is available here, and the transcript here.
