Will humans and/or machines save us from nuclear doomsday?

06 Jan 2021 - 08:48
Photo: A nuclear weapon is detonated at Bikini Atoll in the Marshall Islands in 1946. (Image has been colorized.) © International Campaign to Abolish Nuclear Weapons

The idea of a human outside the loop of nuclear decision-making may appear far-fetched. Yet this is exactly the idea behind numerous recent advances in nuclear security. In order to maximise the benefits and minimise the risks of both AI and human judgment, the policy debate needs to go beyond the false dichotomy of humans versus machines in nuclear decision-making.

In 2011, Russia announced that its automated response system Perimetr had been reactivated. Seven years later, President Vladimir Putin announced the development of the unmanned underwater vehicle Poseidon, which is able to carry nuclear munitions. Once operational, the vehicle is intended to execute its commands autonomously.1

Russia insists that a human is always in control in both systems. Yet, Russia is not the only country attracted to the growing capabilities of artificial intelligence (AI). In the United States (US), analysts have called for an automated ‘dead-hand’: an AI-enabled system that is able to detect, decide and direct a response to an incoming attack.2

Because nuclear weapons are becoming faster and more unpredictable, the time a targeted party has to detect, interpret and respond to a threat is severely compressed

These calls are driven by the growing risk that advances in other emerging technologies – such as hypersonic gliding vehicles and hypersonic cruise missiles – will severely compress the time available to respond to an incoming threat. The integration of these weapons into the nuclear arsenals of the major powers is changing the nature of nuclear threats. Because nuclear weapons are becoming faster and more unpredictable, the time a targeted party has to detect, interpret and respond to a threat is severely compressed.

In a recent piece in the Clingendael Spectator, Sibylle Bauer notes that the rush to develop hypersonic weapons – which has spurred an arms race among the major nuclear powers – has further compressed the time it takes to make a decision ‘in real time’.3 Scholars advocating for more autonomous systems are concerned that current American nuclear command, control and communications (NC3) systems will not act quickly enough in the face of an impending threat. Hence, they argue, an automated system is necessary to reinforce nuclear deterrence.

Command and control overview. Source: The Office of the Deputy Assistant Secretary of Defense for Nuclear Matters (ODASD (NM)), “Chapter 2 Nuclear Weapons Employment Policy, Planning And NC3”.

Others have cautioned against increased automation in command and control, calling an AI-enabled NC3 “a recipe for disaster”.4  Yet another group of scholars has highlighted that AI is in one way or another already integrated into NC3 systems, and hence future discussions should not be focused on “whether” but on “where, to what extent, and at what risk”.5

In order to maximise the benefits and minimise the risks of both AI and human judgment, the policy debate needs to go beyond the false dichotomy of humans versus machines. In NC3, there is room for both machines and humans. We will be better off thinking about how humans and AI will address each other’s biases.

Machine-driven command and control
The inclusion of automation in nuclear decision-making is not new. Both the US and the Soviet Union began building automatic nuclear command systems in the 1960s. More recently, advances in AI have shown promise when it comes to other features of NC3.

For instance, machine learning (a subset of AI) can add to the perceptual intelligence6  of surveillance and reconnaissance programs, early warning systems, and decision-support systems by enhancing the abilities of these systems to process huge amounts of data in a short period of time.7

This feature is particularly alluring given the growing difficulty of detecting nuclear threats – and the shrinking time available to respond to them – as hypersonic technologies are increasingly integrated into nuclear arsenals.
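To give a concrete, if deliberately toy, sense of what ‘processing huge amounts of data in a short period of time’ can look like, the Python sketch below runs an off-the-shelf anomaly detector over a large batch of synthetic sensor tracks and flags the handful that do not fit the routine pattern for human review. It is purely illustrative: the features, numbers and choice of model are invented and bear no relation to any actual early-warning or reconnaissance system.

```python
# Purely illustrative: a toy anomaly detector triaging synthetic "sensor tracks".
# All features, values and thresholds are invented for this sketch.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate 100,000 routine tracks: (speed in Mach, altitude in km, heading change in deg/s)
routine = rng.normal(loc=[0.8, 10.0, 0.1], scale=[0.2, 2.0, 0.05], size=(100_000, 3))

# Inject a handful of fast, manoeuvring tracks that do not match the routine traffic
anomalous = rng.normal(loc=[6.0, 40.0, 2.0], scale=[0.5, 5.0, 0.5], size=(5, 3))
tracks = np.vstack([routine, anomalous])

# Unsupervised anomaly detection: learn what "routine" looks like, flag what does not fit
model = IsolationForest(contamination=0.0001, random_state=0).fit(routine)
flags = model.predict(tracks)  # -1 = anomalous, 1 = routine

print(f"Tracks screened: {len(tracks):,}")
print(f"Flagged for human review: {(flags == -1).sum()}")
```

The point is not the specific model but the division of labour it illustrates: the machine does the high-volume screening in seconds, while everything it flags still lands in front of a human analyst.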

Titan II Nuclear Missile Silo at the Titan Missile Museum in 2008. © Kelly Michals / Flickr

But the increased automation of nuclear decision-making carries significant risks. In a recent policy roundtable in the Texas National Security Review, authors cautioned against “the misperceptions about AI’s capabilities” as well as against automation bias.8 These biases could cause human decision-makers to surrender control to machines. Machines’ susceptibility to technological failure makes these biases particularly concerning.

AI-enabled decision-making is also unable to engage in contextual thinking, make ethical judgments, understand intent, or even question orders in the ways human decision-makers can.9 Moreover, as US Air Force senior pilot Zach Hughes noted, AI could make the ‘fog of war’ worse.10 When it comes to something with stakes as high as the employment of nuclear weapons, a fully automated ‘dead-hand’ that removes humans from the decision-making process could be catastrophic.

‘Humans in the Loop’
Some scholars hold the view that AI should not be included in any aspect of NC3 because algorithms are incapable of considering the ethical aspects of a decision, because they automate decisions where judgment is needed, and because they reduce the time needed for rational behaviour to prevail.11 Humans can use their values, their judgments (of what is right and wrong), their experiences and their rationality to shape the way they assess a situation, whereas machines – programmed for profit and efficiency12 – may tend to assume the worst.

An American officer talks with another missile alert crew while manning the Rapid Execution and Combat Targeting System, an integrated communication and weapon system that controls the Minuteman III ICBMs. © U.S. Air Force photo

Interestingly, some have suggested that the willingness to automate nuclear command and control may vary with regime type. In a recent piece, James Johnson, Assistant Professor in the School of Law and Government at Dublin City University, suggests that more autocratic nuclear states (for example China), which have more centralised nuclear decision-making structures and are more concerned about the survivability of their nuclear weapons, tend to engage in riskier forms of automation and to be less worried about the ethical or moral implications of automation.13

That said, the top-level policy-makers whom we interviewed in our research14 indicated very clearly that they expected decisions to launch nuclear weapons would always rest with humans. However, these views also demonstrate that we tend to view the value of human judgment uncritically. This view assumes that all human beings will exercise good judgment, or even that all humans, given their penchant for reason, will make the same calculations in a crisis.

There is great variation among leaders in different countries when it comes to how accepting they are of AI in their militaries

Behavioural research has shed light on the limits of human judgment and rationality.15  Arguments that humans should be the only ones in the loop assume that all humans are risk-averse and so will do anything to avoid a crisis. While AI proponents – such as those arguing for a ‘dead-hand’ system – might be guilty of having blind faith in AI, sceptics do the same with human judgment.

Neither one in isolation is sufficient. Ample research has shown that humans (let alone leaders) are not always risk-averse.16 For instance, behavioural theories have stressed that how individuals perceive uncertainty affects their risk preferences. This can help explain why some people are not always risk-averse when faced with potentially bad outcomes; in fact, some individuals may actually become risk-seeking in an effort to secure a better outcome.
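One standard way to make this point more precise, offered here purely as an illustration and not as part of the original argument, is the prospect-theory value function associated with Kahneman and Tversky, which is concave over gains and convex over losses (the parameter values below are Tversky and Kahneman's commonly cited estimates):

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0,\\
-\lambda\,(-x)^{\beta} & \text{if } x < 0,
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25.
$$

Because $v(x)$ is convex for losses, a decision-maker described by these values prefers a 50 per cent chance of losing 200 (worth roughly $0.5 \times v(-200) \approx -119$) to a certain loss of 100 (worth roughly $v(-100) \approx -130$), even though the expected monetary loss is the same (setting aside probability weighting for simplicity). Risk-seeking behaviour emerges precisely when all the available options look bad.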

These outcomes are not only psychologically conditioned, but are also the result of long socialisation processes. As Assistant Professor of Political Science Erik Lin-Greenberg argued in his recent paper, there is great variation among leaders in different countries when it comes to how accepting they are of AI in their militaries.17

You only need to imagine a world leader boasting about the size of their nuclear button to understand that not everyone is equally risk-averse

While Lin-Greenberg draws the lesson that this creates a problem for the functioning of alliances (something we do not dispute), this variation highlights that there are other factors – for example cultural and normative ones – that influence risk acceptance. Psychologists and sociologists have long recognised that risk acceptance (or avoidance) has an important cultural aspect.18 And embedding nuclear decisions within complex organisations does not seem to remove these problems completely.19

In short, the critique of the implementation of AI in nuclear decision-making rests on a rather uncritical view of human judgment and its relationship with risk. You only need to imagine a world leader boasting about the size of their nuclear button to understand that not everyone is equally risk-averse.

US nuclear weapons test at Eniwetok in 1956. © International Campaign to Abolish Nuclear Weapons / Flickr

A healthy mix of machine learning and human judgment
Under the right circumstances, well-functioning AI can serve as a trusty ‘adviser’ to human decision-makers, providing them with more accurate information in less time, thus increasing stability by reducing the risk of human error.20
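As a purely notional sketch of this ‘adviser’ role (and emphatically not a description of any real NC3 architecture), the Python fragment below shows a human-on-the-loop pattern: the machine summarises and recommends, low-confidence assessments are referred straight to analysts, and no consequential step is taken without an explicit human decision. All names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """A hypothetical machine-generated threat assessment."""
    summary: str
    confidence: float  # 0.0 to 1.0, produced by some upstream model

def advise(assessment: Assessment, review_threshold: float = 0.9) -> str:
    """The machine only advises; it never acts."""
    if assessment.confidence < review_threshold:
        return f"LOW CONFIDENCE ({assessment.confidence:.0%}): refer directly to human analysts."
    return f"Assessment: {assessment.summary} (confidence {assessment.confidence:.0%}). Awaiting human decision."

def decide(assessment: Assessment, human_confirms: bool) -> str:
    """Any consequential step requires an explicit, affirmative human decision."""
    if not human_confirms:
        return "No action taken."
    return f"Human-authorised response to: {assessment.summary}"

# Example: the machine flags and summarises; the human retains the decision.
a = Assessment(summary="anomalous track, most consistent with exercise activity", confidence=0.72)
print(advise(a))
print(decide(a, human_confirms=False))
```

The design choice worth noticing is that the automated component can only narrow and speed up the human's attention; the authority to act never migrates into the code.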

Ensuring that AI is always well-functioning is difficult, not least because mitigating the risks of nefarious manipulation remains tricky. These risks underline why the discussion about the inclusion of AI in NC3 systems must also address where and how AI becomes involved.21

However, if we want a more measured view of the role of AI in NC3, we need to start thinking seriously and critically about the roles that both machines and humans can – and should – play. Given the biases inherent in both, neither alone is going to save us from nuclear doomsday.

The research for this article was supported by a subsidy from the Dutch Ministry of Foreign Affairs. The views represented here are those of the authors and do not represent in any way the views or policies of the Dutch Ministry of Foreign Affairs.

Authors

Michal Onderco
Professor of International Relations at Erasmus University Rotterdam
Madeline Zutt
Research associate at Erasmus University Rotterdam