How long can NATO abstain from using killer robots?

S-350 Vityaz anti-aircraft missile system
An S-350 'Vityaz' surface-to-air missile launcher on display during the Army 2019 International Military Technical Forum.

Last week’s claim by Russian news agency RIA Novosti that an S-350 Vityaz anti-aircraft missile system shot down a Ukrainian aircraft whilst operating in fully automatic mode provides a real-life example of why any proposal for a global ban on autonomous weapons systems is unlikely to be supported by global powers.

In news later confirmed by Russian deputy prime minister Denis Manturov, the medium-range surface-to-air missile system, developed by defence contractor NPO Almaz, was operating fully automatically without the control of an operator. According to RIA Novosti sources, this is the first time an S-350 has been successfully used in automatic mode in combat conditions.

Even without examining details of what technology the S-350 used or what level of autonomy it was operating under, the incident illustrates the pressures that will push armed forces to use autonomous weapons. According to Russia, the S-350 also represents the army’s primary defence against hypersonic missiles, a defensive capability that will increasingly demand the use of artificial intelligence.

Couple the arrival of faster missiles with the exponential increase in the speed and volume of targeting on the battlefield, and human response times will soon become completely inadequate. Only with the use of AI and autonomous weapons will armed forces be able to identify and neutralise incoming threats fast enough. The current conundrum is how much autonomy should be given to such systems and how much human control can be retained without diminishing their effectiveness.

In 2021, the majority of the member nations of the United Nations’ Convention on Certain Conventional Weapons (CCW) were in favour of introducing new international law restricting autonomous weapons. The U.S. and Russia remained opposed. However, the Pentagon and NATO have both stated an intention to lead in the ethical use of AI.

The key factor, of course, is escalation. As noted at a number of NATO, U.K. and U.S. military conferences, if an adversary resorts to using autonomous AI-powered systems in a way that is incompatible with NATO values and morals, then NATO forces would be bound to defend against and deter the use of such systems. Defining which autonomous systems this applies to, in what circumstances, and what the appropriate countermeasures might be is going to be a matter of debate for some time.

by Carrington Malin