ISAAC TAYLOR

Philosophers' Strike
A Blog about Philosophy, Politics, and Technology

Justice by Algorithm: Smart Information Systems and Criminal Procedures

9/10/2025

(Originally published: 22 August 2019)

Criminal justice systems have traditionally relied heavily on human decision-making. Juries are asked to decide whether a given defendant is guilty beyond reasonable doubt. Judges routinely make rulings about what severity of sentence to give convicted criminals. And parole boards are tasked with determining whether incarcerated individuals are sufficiently rehabilitated to rejoin society. Because of this reliance on judgement and reason at various stages, the system is vulnerable to being subverted by the biases and irrationalities commonly found in human beings. A study of Israeli judges, for instance, found that they were more likely to give out favourable decisions in parole hearings held just after lunch.
Smart information systems (SISs) – which combine large sets of data and sophisticated machine learning processes – offer a way of removing these undesirable elements from decision-making. These tools take a wealth of information about individuals’ behaviour and characteristics, and process them using computer algorithms in order to make various predictions and recommendations. For example, public bodies have been assisted in their decisions over who to grant parole to by inputting data about prisoners into SISs and running algorithms which predict how likely they are to re-offend. Similar programs have been developed to help with decisions about whether to grant bail, whether to require rehabilitation rather than prison for someone who has broken the law, and how long a prison sentence should be given to defendants.
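To make the idea of an algorithmic risk prediction concrete, here is a deliberately toy sketch. It is not any real system: the features, weights, and logistic form are all illustrative assumptions, and actual SISs draw on far richer data and more complex models. The point is only to show the general shape of turning facts about a person into a predicted probability of re-offending.

```python
import math

# Purely illustrative weights for a handful of made-up features.
# A positive weight raises the predicted risk; a negative one lowers it.
WEIGHTS = {
    "prior_convictions": 0.45,     # each prior conviction raises the score
    "age_at_first_offence": -0.05, # older first offence lowers the score
    "employed": -0.6,              # employment lowers the score
}
BIAS = -1.0

def reoffence_risk(features):
    """Weighted sum of features, squashed to a 0-1 probability
    with the logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

low = reoffence_risk(
    {"prior_convictions": 0, "age_at_first_offence": 30, "employed": 1}
)
high = reoffence_risk(
    {"prior_convictions": 6, "age_at_first_offence": 16, "employed": 0}
)
```

Even in this toy form, the worry discussed below is visible: the output is a bare number, and nothing in it explains to the person being assessed why their score is what it is.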
The perils of relying on SISs to make decisions are becoming apparent. For one thing, these systems necessitate the widespread collection of personal data for them to function, and the way in which this data is collected might threaten individuals’ privacy. For another, algorithmic decision-making may often lead to recommendations that we would view as unfair and biased. Some programs, for example, have been shown to falsely flag black defendants as likely to re-offend at a higher rate than white defendants.
But there is another potential drawback of using algorithms in the criminal justice system in particular: one that appears to have gone unnoticed in existing discussions of this emerging technology. To explore this issue, we need to first consider what the goals of criminal punishment should be more generally.
Jeremy Bentham, the utilitarian philosopher, argued that punishment should ensure the greatest happiness of the greatest number. While punishing convicted criminals by locking them up in prison is likely to make them unhappy, the reduction in crime, and the increase in happiness that this will bring about in the population as a whole, may often be enough to offset this unhappiness. And when this is the case, Bentham claimed, punishment should be carried out. There are a number of ways in which criminal punishment might reduce crime. It might act as a deterrent for would-be law-breakers; it might ensure that dangerous people are kept off the streets; and it might lead to serial criminals deciding to change their ways.
If we only think that punishment should aim at these outcomes, the use of algorithmic decision-making may in fact increase its efficacy. That is, we may be able to use computers to come to more accurate decisions about what sort and severity of punishment is necessary in order to have the desired deterrent, incapacitation and reforming effects. SISs can take into account much larger amounts of information than humans, and can also run much more complex calculations in order to arrive at more reliable predictions.
But some philosophers dissent from Bentham’s simple view of criminal punishment. Jean Hampton argued that, although punishment should certainly seek to reduce crime, it should do so in a particular way. This is because humans are capable of responding to moral reasons, and we should respect that capacity in our social practices. While an animal that comes across an electric fence may learn not to cross it simply because of the pain experienced in failed attempts, humans, in addition, might start to reflect on the reasons why the fence is there. And they might, as a result, come to see that it is a good thing not to enter the land that lies across from the fence even if they could. Hampton thinks that punishment should work in a similar manner. Not only should it provide immediate disincentives to would-be criminals. It should also, she says, assist them in understanding the reasons they are being punished and, indirectly, the reasons underlying the laws that they have broken.
If we want punishment to have this educative function, we might start to view the use of algorithms in this area as problematic. When a criminal is handed down a particular sentence by a judge, we can often see the working, as it were. The judge may provide reasons for giving a sentence at the higher or lower end of those permitted by law, and this might serve an educative goal. By explaining to someone, for example, that a harsher sentence has been given to them because they committed their crime without any mitigating factors, or with blatant disregard for public safety, or with no signs of remorse, a message is sent to the criminal about the wrongness of their actions. This may partially explain some of the appeal of the view that justice must not only be done, but also be seen to be done.
The reasons that are routinely given at different points in the criminal justice system would be lost if all decision-making were handed over to a computer. Even the programmers of SISs often fail to fully understand how their creations function. How are criminals supposed to interpret the outcome of an algorithm if its inner workings are so obscure? While relying on humans may lead to a degree of injustice owing to the unconscious biases of those in positions of power, in passing that power over to algorithms we may compromise an important goal of punishment. Criminal justice should seek to educate, and not just discipline, those to whom it is meted out.






    About

    Here are blog posts originally published on my blog "Philosophers' Strike". I may occasionally blog here again in the future.

