Don’t Be Evil: should the tech sector prevent killer robots?


25 Sep 2019 - 12:59
Photo: Pixabay

With the emergence of the digital era, militaries have increasingly been looking to cooperate with the tech sector. The potential involvement of tech companies in the development of lethal autonomous weapons has led to heated discussions. What purposes may their technology be used for? Are there clear red lines? For the peace NGO PAX it is crucial that companies make explicit where they draw the line regarding increasing autonomy in weapon systems, so as to avoid contributing to the development of lethal autonomous weapons.

Ten years ago no one would have expected a company like Google to be involved in a military project. In 2018, however, it became clear that Google was working with the Pentagon on Project Maven, which uses artificial intelligence to interpret video images and could provide the basis for automatic targeting and autonomous weapon systems.1 This led Google employees to write an open letter calling on Google to cancel Project Maven, stating that “Google should not be in the business of war”.2 Following the staff’s letter, Google decided not to renew its contract and has since published ethical AI principles, which state that Google will not design or deploy AI in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.3

Silicon Valley from above. © Patrick Nouhailler Flickr

In the past few years there has been increasing debate within the tech sector about the impact of new technologies on our societies, with concerns raised about privacy, human rights and other issues. Weapon systems with increasing levels of autonomy, which could lead to the development of lethal autonomous weapons, have also become a subject of discussion in the sector.

The latest PAX report ‘Don’t Be Evil?’ looks at the risk of tech companies (unintentionally) becoming involved in the development of lethal autonomous weapon systems, popularly known as ‘killer robots’.4 As part of the report, fifty tech companies were surveyed to establish the level of concern each of them represents.

The use of AI to allow weapon systems to autonomously select and attack targets is highly controversial

This is an important debate in which tech companies play a key role. To keep the debate as fact-based and productive as possible, it is valuable for tech companies to articulate and publicise clear policies on their stance, clarifying where they draw the line between the AI technology they will and will not develop.

Lethal autonomous weapon systems
Lethal autonomous weapon systems are weapon systems that can select and attack targets without meaningful human control. This means that the decision to use lethal force is delegated to a machine, and that an algorithm can ‘decide’ to kill humans. The function of autonomously selecting and attacking targets could be applied to various autonomous platforms, for instance drones, tanks, fighter jets or ships.

With the emergence of the digital era, militaries have increasingly been looking to cooperate with the tech sector. The use of AI by militaries is not in itself necessarily problematic, for example when used for autonomous take-off and landing, navigation or refuelling. However, the use of AI to allow weapon systems to autonomously select and attack targets is highly controversial. The development of these weapons would have an enormous effect on the way war is conducted; it has been called the third revolution in warfare, after gunpowder and the atomic bomb. Many experts warn that these weapons would violate fundamental legal and ethical principles and would destabilise international peace and security. In particular, delegating the decision over life and death to a machine is seen as deeply unethical.

Increasing autonomy in weapon systems
It has become evident in recent years that militaries are striving to increase the levels of autonomy throughout their weapon systems, and many have made AI a high-priority area of work. There are multiple examples of weapon systems in use today that illustrate the concerns regarding increasing levels of autonomy.5

National militaries are keen to profit from the expertise held in the private sector, often outsourcing certain aspects of military projects to tech companies. A key issue today is that most technologies are dual-use, meaning they can have both civilian and military applications; even seemingly innocent tech tools can be weaponised and repurposed for the battlefield. An example is the HoloLens, Microsoft’s augmented reality goggles, originally designed to be used by technicians, doctors and gamers. In November 2018 it was reported that Microsoft had secured a USD 480 million contract with the US military to supply 100,000 headsets for training and combat purposes, a contract intended to “increase lethality by enhancing the ability to detect, decide and engage before the enemy”. The news of this contract also drew protests from within the company, with an open letter addressed to both the CEO and the President of Microsoft.6

An unmanned aerial vehicle, the UAVision Ogassa V, prepares to take off in Troia, Portugal during an unmanned systems trial in September 2019. © NATO

Many emerging technologies are dual-use and have clear peaceful uses. In the context of the PAX report, the concern is with products that could potentially also be used as key components in lethal autonomous weapons. Moreover, there is the worry that unless companies develop proper policies, some technologies not intended for battlefield use may ultimately end up being used in weapon systems.

The development of lethal autonomous weapons takes place along a wide spectrum, with levels of technology ranging from simple automation to full autonomy and being applied in different functionalities of weapon systems. This has raised concerns about a slippery slope in which the human role in the decision-making loop on the use of force gradually diminishes, prompting suggestions that companies, through their research and production, must help guarantee meaningful human control over decisions to use force.

The PAX report
In the hope of contributing to the discussion, the PAX report illustrates some developments in this area, with varying levels of (proclaimed) autonomy or use of AI. A number of technologies can be relevant to the development of lethal autonomous weapons. Companies working on them need to be aware of that potential and to have policies that make explicit how and where they draw the line regarding military applications of their technology. The report looks at tech companies from the following perspectives: big tech, hardware, AI software and system integration, pattern recognition, autonomous (swarming) aerial systems, and ground robots.

21 companies are classified as ‘high concern’

For the report, fifty companies from twelve countries, all working on one or more of the technologies mentioned above, were selected and asked to participate in a short survey about their current activities and policies in the context of lethal autonomous weapons. Based on this survey and our own research, PAX ranked these companies against three criteria:

  1. Is the company developing technology that could be relevant in the context of lethal autonomous weapons?
  2. Does the company work on relevant military projects?
  3. Has the company committed to not contribute to the development of lethal autonomous weapons?

Based on these criteria, seven companies are classified as showing ‘best practice’, 22 as companies of ‘medium concern’, and 21 as ‘high concern’. To be ranked as ‘best practice’ a company must have clearly committed to ensuring its technology will not be used to develop or produce autonomous weapons. Companies are ranked as high concern if they develop relevant technology, work on military projects and have not yet committed to not contributing to the development or production of these weapons.
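
To make the ranking easier to follow, the sketch below restates these categories as a simple decision rule in Python. It is only an illustration of the criteria as described above, not PAX’s actual survey methodology, and the company names used are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    develops_relevant_tech: bool       # criterion 1: technology relevant to lethal autonomous weapons
    works_on_military_projects: bool   # criterion 2: works on relevant military projects
    committed_not_to_contribute: bool  # criterion 3: commitment not to contribute to such weapons

def classify(company: Company) -> str:
    """Assign a level of concern based on the three criteria described in the report."""
    if company.committed_not_to_contribute:
        # A clear commitment that its technology will not be used to develop
        # or produce autonomous weapons counts as best practice.
        return "best practice"
    if company.develops_relevant_tech and company.works_on_military_projects:
        # Relevant technology plus military projects, without such a commitment,
        # places a company in the highest category of concern.
        return "high concern"
    # Everything in between is ranked as medium concern.
    return "medium concern"

# Hypothetical examples, not companies assessed in the report.
for example in (
    Company("ExampleCloudCo", True, True, False),
    Company("ExampleVisionCo", True, False, True),
):
    print(f"{example.name}: {classify(example)}")
```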

German Rheinmetall KZO drone being launched during NATO's Iron Wolf II exercise in 2017 in Lithuania. © NATO

Both Amazon and Microsoft are ranked as high concern. They both work on various relevant technologies, including recognition software, drones and cloud infrastructure, and both are currently bidding for JEDI, a USD 10 billion contract for the cloud infrastructure connecting the Pentagon to soldiers on the ground. The project’s Chief Management Officer has explained: “This program is truly about increasing the lethality of our department”.7 Without clear policies that make explicit what purposes their technology may be used for, both companies risk contributing to the development of lethal autonomous weapons.

Another example of a company of concern is AerialX, a Canadian company that produces the DroneBullet, a counter-drone system designed to take out rogue drones in the sky. The DroneBullet is a small kamikaze drone that looks like a miniature missile. It has an operational endurance of ten minutes and a maximum range of 3 km, but it can reach speeds of up to 200 km/h and uses a machine vision targeting system.

That targeting system is an AI-led capability that enables the DroneBullet to autonomously identify, track and engage (or not engage) an approved target set. Although it is designed to engage only specifically approved drones based on certain characteristics, it is not difficult to imagine how such technology could work with a different target library against other targets. In fact, AerialX is currently working to modify the weapon into a warhead-equipped loitering munition.

For PAX it is crucial that companies make explicit where they draw the line regarding increasing autonomy in weapon systems. They should set up clear policies that make explicit what purposes their technology may be used for and where their red lines lie. Without such policies, companies risk contributing to the development of lethal autonomous weapons.

Google now has a policy stating that it will not develop AI for weapon systems

What can tech do?

As the above examples show, tech companies have an important role to play in preventing the development of killer robots. There are concrete steps companies can take to prevent their products from contributing to the development and production of lethal autonomous weapons.

  • Commit publicly to not contributing to the development of lethal autonomous weapons.8
  • Establish a clear policy stating that the company will not contribute to the development or production of lethal autonomous weapon systems, including implementation measures such as:
    • Ensuring each new project is assessed by an ethics committee;
    • Assessing all technology the company develops and its potential uses and implications;
    • Adding a clause in contracts, especially in collaborations with ministries of defence and arms producers, stating that the technology developed may not be used in lethal autonomous weapon systems.
  • Ensure employees are well informed about what they work on and allow open discussions on any related concerns.

Several of the companies surveyed for the report have already taken similar steps.9 As previously mentioned, Google now has a policy stating that it will not develop AI for weapon systems.

As part of NATO's Unified Vision 2014 Trial, members of the Italian Air Force launch a surveillance drone (STRIX, a multi-purpose, man-portable, totally autonomous TUAS) over Oerland, Norway. © NATO

In response to our survey, Google added that “since announcing our AI principles, we’ve established a formal review structure to assess new projects, products and deals. We’ve conducted more than 100 reviews so far, assessing the scale, severity, and likelihood of best- and worst-case scenarios for each product and deal”. VisionLabs, a Russian pattern recognition company, told us that they explicitly prohibit the use of their technology for military applications and that this prohibition is part of their contracts. In addition, VisionLabs said that they also monitor the final solutions developed by their partners.

Current developments in tech, and the military involvement of tech companies, demonstrate why it is crucial that companies take active steps to make sure that none of their technology is used in the development of lethal autonomous weapon systems. Concern within the tech sector is widespread: more than 240 companies and organisations, as well as more than 3,200 individuals, have signed a pledge never to develop, produce or use lethal autonomous weapon systems.10 This is an important voice in the debate, as these experts understand the technology and the consequences of its use in autonomous weapon systems.

As the tech sector possesses the technology required for the next phase of lethal autonomous weapons development, it is now vital for all companies to take additional steps to make sure that these weapon systems remain science fiction.

On 30 September the NGIZ Den Haag will organise a meeting, free of charge, on the occasion of the report by peace organisation Pax for Peace on the danger of artificial intelligence (AI) as a military technology. The topic will be introduced by Frank Slijper, co-author of the report and project leader on the arms trade at PAX. The discussion will be opened by Danny Pronk, senior research fellow on security at Instituut Clingendael. The meeting will be chaired by Jan Rood, chair of the NGIZ.

Authors

Daan Kayser
Works at peace organisation PAX, specialised in arms trade & autonomous weapons
Alice Beck
Project Officer Autonomous Weapons PAX