Response to the 2024 Doomsday Clock announcement

30 January 2024
  • The Bulletin said that because any “physical threat posed by AI must be enabled by a link to devices that can change the state of the physical world”, work on AI risk should focus on these connections. Matthijs Maas, Kayla Lucero-Matteucci and Di Cooke have written about how such connections significantly contribute to the risk from AI.
  • The statement noted that “many nuclear weapon states are engaged in extensive modernization and expansion programs”. These decisions were explored at a public lecture CSER hosted in June on Scoping Nuclear Weapons Choices in an Age of Existential Threats, given by Benoît Pelopidas, which highlighted how leaders appeal to nostalgia and systematically downplay the role of luck and the scale of nuclear weapons’ environmental impacts.
  • The Bulletin raises significant concerns about how AI could provide information that would allow the creation of more harmful and transmissible biological agents. This year, CSER researchers have been involved in policy discussions on this topic; a team including CSER’s Alex Klein won the 2023 Next Generation for Biosecurity Competition and presented their paper on the convergence of AI and the Life Sciences at a meeting of the Biological Weapons Convention.

In 2024, the Bulletin of the Atomic Scientists declared this “a moment of historic danger” as they decided to leave the iconic Doomsday Clock at 90 seconds to midnight, the closest it has ever been in its 75+ year history. CSER researchers came to much the same conclusion in our book The Era of Global Risk, published last year. In what follows we respond to the detailed assessment published alongside the Bulletin’s announcement and reflect on how it relates to CSER’s ongoing work.

Nuclear weapons

The Bulletin’s statement noted instances of nuclear posturing in 2023, drawing attention to the fact that “President Putin announced in March 2023 the deployment of tactical nuclear weapons in Belarus.” In July we held a panel event on Nuclear Risk Reduction in the Baltic Sea Region that discussed a number of these developments. Panellists described how proposals for nuclear risk reduction fit broadly into three areas: military posture and capabilities; military doctrine and intention; and building communication and relationships. All are important, and ideas range from hardened lines of reliable communication to restraint in deployments (including a Baltic nuclear weapon free zone). However, one indisputable element is the critical need for better communication and understanding, both between allies and between adversaries. Unfortunately, state leaderships seem more intent on disrupting and isolating each other than on communicating.

The statement also noted that these “developments are happening at a time when many nuclear weapon states are engaged in extensive modernization and expansion programs.” The decision making around these programmes was explored at a public lecture CSER hosted in June on Scoping Nuclear Weapons Choices in an Age of Existential Threats, given by Benoît Pelopidas of the Nuclear Knowledges program at Sciences Po. His lecture explored the role that perceptions play in nuclear decision making, highlighting how important projecting credibility, appealing to nostalgia, and imagining futures are to decision makers, while also showing how they systematically downplay the role of luck and the scale of nuclear weapons’ environmental impacts.

However, the Bulletin also highlights the role of citizens in influencing the choices of their leaders, arguing that in the forthcoming US elections “candidates’ suitability to shoulder the immense presidential authority to launch nuclear weapons has serious implications for international stability and should be a central concern.” Last year CSER published the results of a survey showing that many people in the UK and USA are not well informed about the global risks associated with nuclear weapons, and that greater awareness of current research may influence their decision making.


Climate change

The Bulletin reported a ‘mixed outlook’ in relation to climate change. On the one hand, they note that we entered “‘uncharted territory’ for climate impacts last year, with conditions exceeding past extremes by enormous margins.” On the other, they point out that “the world is seeing record and surging investments in renewables,” with large new public initiatives in the US and EU, although ultimately we have “not yet entered a trajectory that will lead to net zero.”

Researchers at CSER take a longer view and are already researching the catastrophic impacts that climate change is almost certainly bringing our way, and how to avoid them. In August two CSER alumni and affiliates published an article in Nature on How to Reduce Africa’s Undue Exposure to Climate Risks, and CSER also hosted a workshop and published a report on managing the contribution of Solar Radiation Modification and climate change to Global Catastrophic Risk. We believe that it is no longer sufficient simply to evaluate the current risk posed by climate change to humanity; we must already turn to the question of preparing for the more severe impacts that are to come.

Biosecurity

The Bulletin raises significant concerns about the rapid progress being made in biotechnology and what this might mean for biosecurity. In particular they note that “generative AI could provide information that would allow states, subnational groups, and non-state actors to create more harmful and transmissible biological agents.” They welcome recent developments in US policy addressing the use of AI to engineer pathogens. This year, CSER researchers have also highlighted the important role, and potential for improvement, of other initiatives such as EU regulations, the UK Biosecurity Strategy, and the Biological Weapons Convention, hosting a workshop discussing connections between the latter two. CSER’s participation in a variety of external workshops on the future of Chemical and Biological Weapons Prohibition in the context of AI means that we are at the forefront of conversations in this area.

The Bulletin also noted the continued risk of accidental release of organisms from laboratories, as well as the emergence of naturally occurring infectious diseases. They point out that “[d]eforestation, urbanization, and climate change continue to destabilize microbe-host relationships and facilitate the emergence of infectious diseases.” However, CSER published research this year showing that these events are often misunderstood, with inattention to core epidemiological processes leading to ineffective responses.

Artificial intelligence

The Bulletin’s statement holds that the nature of AI’s relationship with existential risk is “highly contested”: some experts express “concern about possible existential risks arising from further rapid advancements in the field”, while others view such concern as “highly speculative” and a distraction “from real and immediate nonexistential risks that AI poses today.” The Bulletin’s view is that any “physical threat posed by AI must be enabled by a link to devices that can change the state of the physical world”, and they thus focus on such connections, including the use of AI in biotechnology (discussed above) and in nuclear weapons. While agreeing that these connections significantly contribute to the risk from AI, and that more needs to be done to take account of the potential harms of releasing new AI systems, CSER researchers have argued that current disagreements about whether AI poses an existential risk do not serve to improve risk assessment in this area.

The Bulletin welcomes several initiatives that have emerged in the last year to better govern AI and its risks, including the EU AI Act, a US Executive Order, and the Bletchley Declaration. However, they believe that these face two key challenges: “to agree on specific domains, such as military and biotechnology applications, in which the use of AI is governed by widely accepted rules or norms of behavior” and “to agree on the specific content and implementation of those rules and norms.” CSER researchers have also been working on improving these initiatives, publishing responses to the Bletchley Declaration and the UK’s AI Regulations.