War and computers: autonomy, responsibility and modern targeting systems

31 July 2012


by Dr Alexander Leveringhaus

Current research investigates the moral and legal implications arising from the development and use of automated (or operationally autonomous) computer-based targeting systems (CBTS) in the military. I am interested in how the development of CBTS...

US Air Force Radar

In 2012, the German car maker Audi ran an advert in a number of British newspapers. The new Audi A6, the advert tells us, can make up to two thousand decisions a second. Fortunately, the advert reassures its readers, potential buyers only have to make one decision: whether or not to buy the car. To figure this out, potential customers are urged to request a test drive; an opportunity for them to get to know the car, but also, the advert continues, for the car to get to know its potential owner!

Machines, as Audi’s advertising campaign suggests, are getting smarter. They are no longer purely receptive objects but, as the advert implies, agents in their own right. Somewhat disconcertingly, the new Audi seems to be much better at making decisions than the average human. After all, it can make two thousand decisions a second, whereas it takes the discerning human customer an extensive test drive to decide whether to purchase the car. Perhaps cars will increasingly outsmart their human owners.

What does all of this have to do with war? The answer is simple. Technological developments have rarely been restricted to the civilian sector. It is hardly surprising, then, that militaries around the world are developing weapons systems that, like the new Audi, have a degree of what researchers in cognitive engineering refer to as operational autonomy. Under the influence of the Revolution in Military Affairs of the 1980s, US military commentators argued that, during the first Gulf War, cruise missiles would be able to turn left at the traffic lights in Baghdad. Twenty years later, modern weapons systems are likely to decide for themselves which target to attack in the first place. Certain types of drones, for instance, will have greater capacities than conventional weapons systems to operate over prolonged periods of time without human interference. Likewise, computer-based targeting systems installed aboard modern warships will not remain purely receptive either. Instead, they will cooperate with operators in making targeting decisions.

These developments have important implications for how we think about the ethical and legal norms regulating the use of technology in general and the use of armed force in particular. Consider the case of smart cars. The development of operationally autonomous cars may well transform how we think about the role of the driver, and about his or her rights and responsibilities towards other road users. Some behaviour that was previously forbidden may become acceptable: what is wrong with quickly taking a call on your mobile phone if the computer will take over the car and drive it for you? Similarly, operationally autonomous drones may change the way we think about the role of combatants. Perhaps the machines of the future will become combatants in their own right, though it is unlikely that, like the Terminator, they will start sporting sunglasses and riding Harleys.

The important question is how, on an ever more technologically complex battlefield, we can secure space for individual moral and legal responsibility. In the aftermath of WWII, the Nuremberg Trials of leading figures of the Nazi regime led to more demanding standards of responsibility for combatants. Since Nuremberg, combatants are not only required to prove that the orders they followed were duly authorised. They also have to show that they judged the actions set out by the order to be permissible (moral perception) and that they could not have avoided carrying out the order (moral choice). Considered against this background, it would hardly be desirable if all we could say, when things go wrong in war, was that no one is responsible because ‘it was the computer that did it’. That said, it is not clear how individual human agency is affected by the use of new military technologies.

Finding this out is one of the aims of the ‘Military Enhancement: Responsibility for Design and Combat Systems’ project, which is funded by the Netherlands Organisation for Scientific Research and based at Delft University of Technology in the Netherlands. The project is run in collaboration with the Oxford Institute for Ethics, Law and Armed Conflict, part of the Oxford Martin School. The aim of the project, however, is not only to clarify the meaning of responsibility in a high-tech military. It also seeks to make recommendations on how legal and moral norms can be incorporated into the design of military technology. We should not wait until things go wrong. Rather, we should anticipate some of the moral (and legal) dilemmas likely to arise during armed conflict in order to prevent them, via sound design, from happening in reality. Indeed, this would be a legitimate and much-welcomed instance of ‘intelligent design’.


Photo: U.S. Air Force photo/Capt. Carrie Kessler

This opinion piece reflects the views of the author, and does not necessarily reflect the position of the Oxford Martin School or the University of Oxford. Any errors or omissions are those of the author.