ISAAC TAYLOR

Philosophers' Strike
A Blog about Philosophy, Politics, and Technology

Killer Robots: The Dangers of Complexity

9/10/2025

(Originally published: 12 August 2021)
Within a matter of years, a number of countries may have the capability to manufacture lethal autonomous weapons systems (LAWS). These weapons would rely on sophisticated AI to enable them to select and engage targets without direct human input. It might be thought that LAWS would have an advantage over human soldiers. Unlike humans, these machines would not feel anger, resentment, or other emotions that have led to atrocities in the past. But multiple worries have been raised about their use.

Most obviously, one might wonder whether the first machines of this sort would be able to abide by international humanitarian law. Could LAWS really meet the widely held principle of discrimination, for example, which requires belligerents to distinguish between combatants and civilians and to target only the former? An apocryphal story illustrates the potential problem here. Supposedly, government researchers once tried to train an AI system to identify tanks by feeding it a set of pictures containing tanks and a set of pictures without. On this basis, the system inferred that a tank was anything with a forest in the background. Any attempt at bringing the system into real-world conflicts was, thankfully, swiftly abandoned.
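A toy sketch can make this failure mode vivid. The data below are entirely made up for illustration: the point is only that a classifier can reach near-perfect training accuracy by learning a background feature that happens to correlate with the label, and then collapse when that correlation breaks in deployment.

```python
# Toy illustration (hypothetical data, not the original project): a classifier
# trained on images where every tank photo also has a forest background can
# "succeed" by learning the background rather than the tank.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Feature 0: "forest background" -- perfectly correlated with the label in training.
# Feature 1: a genuine but weak, noisy "tank-shaped object" signal.
y_train = rng.integers(0, 2, n)
X_train = np.column_stack([
    y_train,                              # forest iff tank (spurious shortcut)
    y_train * 0.6 + rng.normal(0, 1, n),  # weak true signal
])

clf = LogisticRegression().fit(X_train, y_train)
print("Training accuracy:", clf.score(X_train, y_train))   # close to 1.0

# Deployment: the correlation breaks -- tanks now appear against any background.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    rng.integers(0, 2, n),                # background independent of label
    y_test * 0.6 + rng.normal(0, 1, n),
])
print("Deployment accuracy:", clf.score(X_test, y_test))    # much lower -- the shortcut no longer holds
```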
But there is a deeper worry about LAWS, one which remains even if AI technology can some day surpass human capacities in visual recognition and the like. Suppose that LAWS are deployed in a particular conflict, and apparently breach various ethical principles of war. Suppose that they target a group of civilians, for instance, because they determine that terrorizing the enemy’s population in this way is the most efficient path to victory. Who is to be held responsible for these deaths? While we might think that we can pin responsibility on the commanders who deployed them, on the developers who designed them, or (at least sometimes) on the military-industrial complex as a whole, some think that nobody can justly be held responsible. This responsibility gap, in turn, may render the use of lethal force morally questionable.

International discussions over the past few years have focused on how to regulate the use of LAWS in order to minimize these sorts of moral costs. One of the central principles that has emerged is the requirement of meaningful human control: the idea is that, when LAWS are deployed, the outcomes they cause must at all times be under the control of one or more human agents. This, it is thought, will ensure both that obvious mistakes caused by the existing limitations of AI are avoided and that responsibility for deaths can be maintained. But the qualifier “meaningful” is important here. It will be of little use to give a military commander a veto over the decisions of LAWS if that person lacks the time, knowledge, or confidence to override those decisions when necessary.

The requirement of meaningful human control might be thought to undermine the very reason for turning to LAWS in the first place. If humans have to be the ultimate source of decisions, does the use of AI become superfluous? As some contributions to this discussion have made clear, however, this need not be the case. If we understand that control need not (only) be exercised at the point of use, but (also) at earlier stages in the life-cycle of LAWS – including at the development phase and even by governments in implementing regulations – we might be able to combine human responsibility with the superior capacities of AI systems. In certain sorts of predictable environments, for example, designers may be able to program LAWS so that the outcomes they produce are predictable, even if each individual decision to engage is not made under direct human oversight. In submarine warfare, for instance, where the battlefield is unlikely to be cluttered with civilians alongside military targets, letting LAWS make decisions without direct human input may lead only to outcomes where military vessels are targeted.
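To make this concrete, here is a minimal sketch of what design-time control might look like in code. Everything here is hypothetical – the class, the function names, and the thresholds are illustrative assumptions, not any real weapons system's interface – but it shows how designers could fix an "engagement envelope" in advance, so that every outcome the system can produce falls within limits that humans have set.

```python
# Illustrative sketch only: design-time control as a hard-coded engagement gate.
# All names and thresholds are hypothetical, not any real system's interface.
from dataclasses import dataclass

@dataclass
class Contact:
    classification: str   # e.g. "military_vessel", "civilian_vessel", "unknown"
    confidence: float     # classifier's confidence, in [0, 1]

# Constraints fixed by designers (and, in principle, regulators) before deployment:
PERMITTED_CLASSES = {"military_vessel"}
MIN_CONFIDENCE = 0.99

def may_engage(contact: Contact) -> bool:
    """Engagement is permitted only for pre-approved target classes at high
    confidence; anything ambiguous defaults to *no* engagement."""
    return (contact.classification in PERMITTED_CLASSES
            and contact.confidence >= MIN_CONFIDENCE)

# In an uncluttered environment such as submarine warfare, every outcome the
# system can produce falls inside the envelope fixed at design time:
print(may_engage(Contact("military_vessel", 0.995)))  # True
print(may_engage(Contact("unknown", 0.995)))          # False -- fails safe
```

On this picture, accountability attaches to whoever fixed the permitted classes and the confidence threshold, even though no human reviews each individual engagement.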

While these insights are important, it is also crucial to consider more practical issues when developing international laws and norms. We could certainly develop regulations that allow for complex networks of human control across the life-cycle of AI systems. But in doing so, we should consider the potential for manipulation and bad-faith interpretation that such a complex requirement would offer international actors. Locating responsibility across various different sites would no doubt invite buck-passing, leaving no ultimate way of holding anyone accountable. ‘Where all are guilty,’ notes Hannah Arendt, ‘nobody is’.

Philosophers have done valuable work in identifying the potential problems that the use of LAWS would give rise to, as well as providing insights into the solutions that could be developed to deal with them. But as the UN seeks to develop concrete regulations for LAWS, a balance needs to be struck between the complex principles provided by philosophy and the need for simplicity, which helps avoid opportunistic readings of the rules developed on their basis. As with other ethical principles of warfare, the ideal regulations need to be designed not with the best people in mind, but with the worst.



