(Originally published: 5 September 2023)
In the future, high-stakes actions may be carried out by machines. If self-driving cars are granted greater autonomy, for instance, they may need to be programmed to respond in certain ways when faced with an inevitable accident. Should the car drive straight on, running over a number of pedestrians in the middle of the road, or should it swerve to one side, thereby endangering its passengers’ lives? If we develop autonomous weapons systems, which would use AI to select and engage targets without direct human input, it is vital that the relevant ethical rules of war are followed. Would attacking a given target be proportionate if a number of civilians in the vicinity are put at risk? How should we ensure that AI systems such as these are used ethically? What would this even mean? According to the influential “machine ethics” movement, we should make sure that the machines themselves will act ethically. According to one representative account of this idea, the goal is ‘to create a machine that itself follows an ideal ethical principle or set of principles; that is to say, it is guided by this principle or those principles in decisions it makes about possible courses of action it could take.’ But does it even make sense to talk about machines acting ethically? To determine this, we need to turn from ethics (which concerns questions about right and wrong) to metaethics (which concerns second-order questions about the nature of right and wrong). On certain metaethical views, I suggest, the whole idea of machine ethics is a non-starter. David Hume, the eighteenth-century Scottish philosopher, arguably puts forward one such view. Hume famously claims that one cannot derive an ethical conclusion from purely empirical premises. When we consider moral arguments, if they are well-formed, we will always find at least one normative premise; otherwise there will be an unjustifiable move from statements of fact to statements of what ought to be. 
An implication of this is that we cannot make well-founded moral judgements on the basis of reason alone. Does Hume’s law, as it is known, undermine the project of machine ethics? Not necessarily. Even if normative conclusions must arise from normative, as well as factual, premises, it may well be possible to ensure that robots follow the prescriptions contained within those conclusions. This could, for example, be done by programming in strict rules that the robots must follow when acting. Designers could, for instance, determine in advance the rules that self-driving cars should follow in accident scenarios and ensure that the car will, say, save the most lives in all these cases. Alternatively, we might use machine learning techniques to ensure that the AI systems within our machines will develop ethical constraints themselves, without the need for these to be specified in advance. The latter technique might be necessary if we think that deciding what the right thing to do is requires a sensitivity to the particular features of the case at hand, rather than a simple disposition to follow a small number of ethical rules. But there is another, related, aspect of Hume’s thought which, if justified, might pose greater problems for the proponents of machine ethics. We have seen that Hume believed that ethical conclusions cannot be drawn on the basis of purely empirical premises. There must be a distinctively normative element in the support for these conclusions. But, if we want to draw well-founded ethical conclusions, where might we gain knowledge of this normativity? On one view, normative concepts (like good and bad) exist in the world, and we might gain knowledge of them in exactly the same way in which we gain knowledge about other elements of the world, such as the material that a piece of furniture is made from: empirical investigation involving the use of our senses. This, however, is not Hume’s view. 
In his Treatise of Human Nature, Hume argues that, no matter how hard we search, we will not find normative elements (such as virtue or vice) in the external world: ‘Take any action allow’d to be vicious: Wilful murder for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In whichever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case. The vice entirely escapes you, as long as you consider the object. You can never find it, till you turn your reflection to your own breast, and find a sentiment of disapprobation, which arises in you, towards the action.’ One way of interpreting Hume here is as a projectivist. According to this view, moral properties do not exist in a mind-independent way in the world, but are rather “projected” onto the world by us. The crucial thing to note here is that moral properties, for the projectivist, still exist and may still be of great importance. We might have reasons to be moral, even if projectivism is true. All the projectivist holds is that, in spite of the views about morality held by most people (the “vulgar”, as Hume puts it), these moral properties are subjective entities that we have a part in bringing into existence rather than objective things that are “out there” waiting to be discovered. But how do we humans give rise to moral properties, if their existence cannot be based on reason alone? In one of his early books, the Oxford political theorist David Miller outlines the indispensable role of the moral sentiments in Hume’s thought. It is only the capacity for distinctive sorts of reactions – not entirely rational – to external states of affairs that, when combined with reason, allows us to make moral judgements. We are now in a position to see the potential problem for machine ethicists. 
Even if future machines are able to match humans’ rational capacities (as they have already done in certain domains), they seem a long way from developing the sorts of moral sentiments that are also necessary for making well-founded moral judgements. Of course, they might still be able to act ethically if they respond appropriately to the moral properties that we project onto the world. However, it may be thought that morality is too messy for us to simply follow the judgements of others and hope to act ethically. According to the sort of particularist ethics that Miller derives from Hume’s thought, acting morally requires not following a certain set of definite principles, but rather considering each new situation one faces and evaluating different courses of action in terms of their moral merits. It requires judgement on a case-by-case basis. And, until such a point as we can simulate the moral sentiments in machines, they can have no such judgement on this Humean view. Does this mean that the actions of machines are beyond morality, not properly evaluable in terms of ethics? If Hume is right, this seems to be the correct conclusion to draw. But that is not the end of the story. We also need to evaluate the actions of those who design and use advanced machines and, in doing so, abdicate responsibility for ethical dilemmas that they would otherwise face. These decisions to delegate are properly ethically evaluable, and we should potentially evaluate them negatively if they create an “ethics-free” zone of action. Passing off high-stakes tasks to machines will not let the humans behind them off the hook.
(Originally published: 20 November 2022)
The name of this blog comes from Douglas Adams’ masterful work The Hitchhiker’s Guide to the Galaxy. Adams describes a race of “hyperintelligent, pan-dimensional beings” who, long ago, constructed a supercomputer called “Deep Thought” to work out the meaning of life. After Deep Thought is switched on, two philosophers burst in and angrily demand “demarcation”: if a computer can just get rid of any uncertainty around life’s ultimate questions, what role is there left for philosophy? Perhaps sensing their arguments are falling on deaf ears, the pair proceed to threaten the nuclear option: a national philosophers’ strike. In the original radio show, Deep Thought has the discourteousness to ask the philosophers who the strike will inconvenience. ‘Never you mind who it’ll inconvenience, you box of black legging binary bits! It’ll hurt, buster! It’ll hurt!’ responds the philosopher Majikthise. And while, of course, in the 1970s it was difficult to say much more than this, today a philosophers’ strike might be more noticed, at least in certain circles that rely on those with philosophical training to give their actions a stamp of approval. Organizations involved in AI research and development, for example, have over the past few years sought to bring those working on ethical aspects of the technology inside their tent and ensure ethical expertise is present in-house. When tensions arise between the ethicists and their employers, the ethicists’ threat of withdrawing their labor may seem to have more bite. In 2020, the researcher Timnit Gebru says she was fired by Google after refusing to remove her name from a research paper. (Google maintains that she resigned.) The departure generated negative press for Google of a sort that a philosophers’ strike in earlier eras would not have. Of course, to have that sort of effect on powerful actors, one needs to make oneself indispensable. And, to do so, it may be necessary to provide something valuable to them in the first place. 
The reason why big tech is hiring ethicists is presumably that it thinks the recommendations and criticisms they provide can be accommodated within the status quo. There are, of course, different levels of idealization at which we can do philosophy. That is, we might take more or fewer aspects of the world as we find it as fixed, and seek to mitigate injustice within those boundaries. Vigilance is called for in making these methodological commitments. The countless “AI ethics” guidelines that have been put forward might be more or less appropriately concessionary in their acceptance of various aspects of the world as fixed. But there is a second danger that the ethicist must look out for here: the danger that the ethical principles that are put forward will not have the desired effect – and might in fact be purposefully interpreted in bad faith by those whom they are supposed to constrain. While there may be better or worse ways of formulating principles to avoid these dangers, any public engagement will inevitably carry this risk. The worries about legitimating immoral practices, however, are not restricted to those who seek out impact. The case of the nineteenth-century philosopher John Stuart Mill provides one cautionary tale. In his essay “A Few Words on Non-Intervention”, Mill gives a robust argument against interfering militarily in the affairs of sovereign states. Even when an unjust oppressor has turned against its subjects in a civil war, Mill says, we have pragmatic reasons to maintain a distance. ‘The only test possessing any real value, of a people’s having become fit for popular institutions,’ says Mill, ‘is that they, or a sufficient portion of them to prevail in the contest, are willing to brave labour and danger for their liberation.’ The one exception that Mill allows is in cases where another nation has already intervened; action in these cases can be understood as a simple case of “re-balancing”. 
Like so much of Mill’s thought, his claims here appear to have had a wide-ranging impact on our world. During the twentieth century, the injunction against external intervention in civil wars was recognized as a customary aspect of international law. But, on at least one occasion, Mill’s impact had an unexpected effect. During the Korean War, both sides appealed to the international legal framework to justify their own position – and ultimately to extend the war. The US viewed the war as an act of international aggression by North Korea against the South (and consequently viewed its involvement as unproblematic). However, the Eastern bloc, which was backing Kim Il-sung’s regime in the North, claimed that there were not two countries on the Korean peninsula but one. Consequently, they said, what was happening was an illegitimate intervention in a civil war by the US – an action ruled out by their own liberal hero. While Mill’s principle may have seemed clear, its application in practice allowed manipulation to justify opposing causes. The worry about normative work being abused by those it is supposed to apply to thus has a long history. Philosophers today – who increasingly seek out impact on the world – are stuck in a double bind. The more practically relevant their work gets, the more it loses its critical potential. Remaining useless – to the extent that the threat of withdrawing one’s labor will be met with a shrug of the shoulders – may not be what we hoped for when we started bringing philosophical tools to bear on real-world practical problems. But it might be a sign that we are uncovering inconvenient – but important – truths.

(Originally published: 25 October 2021)
In a provocative 1965 essay entitled “Repressive Tolerance”, the philosopher Herbert Marcuse calls for intolerance of right-wing views that favor the (unjust) status quo. Marcuse, who was forced to leave his homeland of Germany in the 1930s as the Nazis rose to power, was fast becoming a key figure in radical politics in the US. His views had great influence on the emerging “new left” movement of the 60s, and more recent attempts to disrupt talks by conservative speakers might be traced back to his ideas. His contribution in the essay was to blow apart the Western faith in the value of free speech. The classic liberal defense of free speech is of course found in chapter 2 of John Stuart Mill’s On Liberty. According to Mill, we should allow uncensored views – even ones that might appear outlandish or repugnant – to be expressed, because this is our best bet of arriving at the truth of any matter (scientific, philosophical, or political). If we stop a view from being expressed, says Mill, we risk discarding what might end up being the truth. But even if we are already in possession of the truth, we might nonetheless have reasons for allowing challenges to be voiced. Silencing these false views would deprive us of ‘the clearer perception and livelier impression of truth produced by its collision with error’. Perhaps we only really possess knowledge once we are required to defend the truth against objections. Marcuse agrees with Mill insofar as he thinks that, at least sometimes, free and open discussion is the most promising way of getting the correct answer to any question. However, the qualification he adds is crucial. The instrumental promise of free speech only holds in some circumstances. And those circumstances in which we can expect the happy mechanics that Mill describes are, for Marcuse, certainly not the circumstances characteristic of late capitalism. 
Here, the role of ideological beliefs in sustaining systems of exploitation and oppression gives certain views – namely, those that seek to justify the status quo – an unfair advantage in the marketplace of ideas. If people come across two opinions – one favorable to existing power structures and one opposed to them – they are more likely to discount the latter, irrespective of the quality of the argument. In situations such as this, tolerating differences of opinion means extending tolerance to ‘manipulated and indoctrinated individuals who parrot, as their own, the opinions of their masters, for whom heteronomy has become autonomy’ (p.90). What is to be done? Marcuse suggests that giving all views a fair hearing would require a re-balancing process. We need ‘information slanted in the opposite direction’ (p.99), and what this suggests in practice is ‘the withdrawal of toleration of speech and assembly from groups and movements which promote aggressive policies, armament, chauvinism, discrimination on the grounds of race and religion, or which oppose the extension of public services, social security, medical care, etc.’ (p.100). For Marcuse, achieving emancipation from injustice may require intolerance towards those who defend it, perhaps by preventing them from speaking through tactical disruptions to events. Marcuse is often viewed as representing the very worst excesses of twentieth-century Marxism. He is thought of as the stereotypical paranoid leftist, who sees imaginary enemies at every turn. But even Marcuse may not have envisaged a society in which an algorithm actively pushes right-wing opinions to the forefront of our attention in a site of public debate. Yet this is precisely the society that many of us have found ourselves in. A recent study by Twitter found that the timelines of users in six of the seven countries from which data was collected promoted tweets by right-leaning news organizations and politicians at the expense of others. 
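The kind of measurement behind such an audit can be illustrated with a toy sketch. This is my own illustration rather than the study’s actual methodology, and all numbers and group labels below are hypothetical: the idea is simply to compare the share of attention a group of accounts receives under the ranking algorithm with its share under a reverse-chronological baseline.

```python
# Toy sketch of an "algorithmic amplification" metric, in the spirit of
# audits like Twitter's. We compare how often a group's posts are shown
# in the ranked (algorithmic) timeline versus a chronological baseline.
# All figures here are invented for illustration.

def amplification(impressions_ranked, impressions_chrono, group):
    """Ratio of a group's impression share under ranking vs. chronological order.

    A value above 1 means the ranking algorithm amplifies that group;
    below 1 means it suppresses it."""
    share_ranked = impressions_ranked[group] / sum(impressions_ranked.values())
    share_chrono = impressions_chrono[group] / sum(impressions_chrono.values())
    return share_ranked / share_chrono

# Hypothetical impression counts per political leaning.
ranked = {"left": 400, "right": 600}   # what the algorithm showed
chrono = {"left": 500, "right": 500}   # what a neutral timeline would show

print(amplification(ranked, chrono, "right"))  # 1.2: right-leaning content amplified
```

On this toy picture, “re-balancing” would just mean adjusting the ranking weights until the ratio returns to 1 for every group; the genuinely hard questions, as the discussion above suggests, are about who defines the groups and who is trusted to do the adjusting.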
Who needs ideology when you can convince the masses to vote against their own interests with the help of computers? Marcuse himself raised a worry about his proposed strategy: how are we to know from which views we should withdraw tolerance? His answer is that individuals should decide this once they have been freed of ideological beliefs, but therein lies the problem: how are we to free ourselves of ideological beliefs if the trends Marcuse describes as sustaining ideology are still ongoing? His “solution” ‘presuppose[s] that which is still to be accomplished: the reversal of the trend’ (p.101). Fortunately, a parallel problem does not haunt the solutions to the biased timelines and newsfeeds of online platforms. Assuming we can quantify the propensity of a given algorithm to promote certain views over others, a re-balancing exercise should be a fairly straightforward technical task. The real problem here, though, is one of power: can we really trust private companies to take the necessary steps to ensure that public debate achieves the Millian ideal? We have now moved much of our public sphere online, under the control of big tech, and no amount of ideology critique will be enough when the real barrier to human emancipation is the result of a computer program.

(Originally published: 12 August 2021)
Within a matter of years, a number of countries may have the capability to manufacture lethal autonomous weapons systems (LAWS). These weapons would rely on sophisticated AI to enable them to select and engage targets without direct human input. It might be thought that LAWS would have an advantage over human soldiers. Unlike humans, these machines would not feel anger, resentment, or other emotions that have led to atrocities in the past. But multiple worries have been raised about their use. Most obviously, one might wonder whether the first machines of this sort would be able to abide by international humanitarian law. Could LAWS really meet the widely held principle of discrimination, for example, which requires belligerents in wars to discriminate between combatants and civilians, and target only the former? An apocryphal story illustrates the potential problem here. Supposedly, government researchers once tried to train an AI system to identify a tank by feeding it a number of pictures with tanks in them and a number of pictures without. On the basis of this, the system inferred that a tank was anything with a forest in the background. Any attempts at bringing the system into real-world conflicts were thankfully swiftly abandoned. But there is a deeper worry about LAWS, one which remains even if AI technology can some day surpass human capacities in visual recognition and the like. Suppose that LAWS are deployed in a particular conflict, and apparently breach various ethical principles of war. Suppose that they target a group of civilians, for instance, because they determine that terrorizing the enemy’s population in this way is the most efficient path to victory. Who is to be held responsible for these deaths? While we might think that we can pin responsibility on the commanders who deployed them, on the developers who designed them, or (at least sometimes) on the military-industrial complex as a whole, some think that nobody can justly be held responsible. 
This responsibility gap, in turn, may render the use of lethal force morally questionable. International discussions over the past few years have focused on how to regulate the use of LAWS in order to minimize these sorts of moral costs. One of the central principles that has emerged is the requirement of meaningful human control: the idea that, when LAWS are deployed, the outcomes they cause must at all times be under the control of one or more human agents. This, it is thought, will both ensure that obvious mistakes caused by existing limitations of AI are avoided, and that responsibility for deaths can be maintained. But the qualifier “meaningful” is important here. It will be of little use giving a military commander a veto over the decisions of LAWS if that person lacks the time, knowledge, or confidence to override those decisions when necessary. The requirement of meaningful human control might be thought to undermine the very reason for turning to LAWS in the first place. If humans have to be the ultimate source of decisions, would this make the use of AI superfluous? However, as some contributions to this discussion have made clear, this need not be the case. If we understand that control need not (only) be exercised at the point of use, but (also) at earlier stages in the life-cycle of LAWS – including at the development phase and even by governments in implementing regulations – we might be able to combine human responsibility with the superior capacities of AI systems. For example, in certain sorts of predictable environments, designers may be able to program LAWS in a way that makes the outcomes they produce predictable, even if each individual decision to engage is not made under direct human oversight. 
In submarine warfare, for example, where the battlefield is unlikely to be cluttered with civilians alongside military targets, letting LAWS make decisions without direct human input may lead only to outcomes in which military vessels are targeted. While these insights are important, it is also crucial to consider more practical issues when developing international laws and norms. We could certainly develop regulations that allow complex networks of human control across the life-cycle of AI systems. But in doing so, we should consider the potential for manipulation and bad-faith interpretations by international actors that such a complex requirement would make possible. Locating responsibility across various different sites would no doubt invite buck-passing, leaving no ultimate way of holding anyone accountable. ‘Where all are guilty,’ notes Hannah Arendt, ‘nobody is’. Philosophers have done valuable work in identifying the potential problems that the use of LAWS would give rise to, as well as providing insights into what solutions could be developed to deal with them. But as the UN seeks to develop concrete regulations for LAWS, a balance needs to be struck between the complex principles provided by philosophy and the need for simplicity to avoid opportunistic readings of the rules that are developed on the basis of them. As with other ethical principles of warfare, the ideal regulations need to be designed, not with the best people in mind, but the worst.

(Originally published: 2 January 2020)
One priority outlined by the Conservative government in last month’s Queen’s Speech was tougher sentences for terrorists. A forthcoming bill will bring in longer stretches in prison for those convicted of terrorist offences. This continues a global trend of treating terrorism as qualitatively different from similar sorts of crime, and consequently punishing it more harshly than these other offences. Such practices are found in legal systems across the world. Opposition to this legislative agenda is unlikely to be especially vocal. The general public show little concern for those involved in terrorism, and this tends to foster an arms race among politicians to present themselves as taking the toughest stance against it. Nonetheless, it makes sense to question whether terrorist sentencing enhancements (as they are often known) can be justified. For one thing, common intuitions suggest that there are legitimate upper bounds on how much people can be punished for given crimes. Punishing those who are convicted of terrorism too harshly might thus wrong them. For another thing, criminal punishment requires significant resources to be redirected from other projects, and we should therefore be sure that it is serving valuable social purposes. What possible justifications, then, might there be for giving terrorists longer prison sentences than other sorts of criminals? To answer this, we must consider what functions punishment is supposed to serve, and then examine whether those functions are best pursued by giving terrorists more punishment. One obvious goal that the criminal justice system aims at is the reduction of crime. It might do this in a number of ways. First, it might provide disincentives to would-be criminals, who may refrain from undertaking criminal acts because of the fear of going to prison. Second, it might simply ensure that people who are prone to committing crimes are prevented from doing so because they are locked up. 
Finally, it might reform those who have committed crimes to ensure that their risk of recidivism is reduced. Will longer sentences for terrorists be required to reduce this category of crime via the three identified routes? I think that this is far from clear. It may be argued that, because of the strength of commitment to their cause that terrorists typically exhibit, greater threats are needed to provide the necessary disincentives. But such an argument assumes a picture of terrorists making a cost-benefit analysis that does not match up with empirical realities. The decision to employ terrorist tactics is seldom made in such a rational manner, but instead results from deep-seated psychological and social factors. The fear of any punishment is unlikely to play much on terrorists’ minds. If criminal punishment is also valued because it ensures that dangerous individuals are kept away from opportunities to commit crime, it might be suggested that longer sentences for terrorists are justified to reduce crime in this way. After all, someone who murders another person in a crime of passion may be very unlikely to re-offend when released, whereas we can suppose that a committed terrorist will almost certainly seek to continue their mission once they have done their time. But removing opportunities to commit crimes can be done through other means apart from prison. Monitoring individuals once they are released from prison, and stepping in if it is clear that they are going to re-offend, may be an option. The fact that terrorists often operate as part of a network, communicating with co-conspirators through electronic means, suggests such a strategy will be particularly useful with respect to this class of criminals (although the increasing prominence of so-called “lone wolf” terrorists, who operate largely independently from others, complicates matters). 
Finally, perhaps longer sentences are needed because reforming terrorists will take longer than reforming other criminals. Terrorists’ motivations, which are political, religious, or ideological (as the term is defined in UK law), may require more concerted effort to remove. On the other hand, since terrorists often link their cause with a wider group identity, this opens up the possibility of using moderate voices within their communities to encourage a switch to non-violent tactics. Such strategies may not be available with other categories of criminal. The case for harsh sentences for terrorists based on the need to avoid crime, then, is not clear cut. But some legal philosophers suggest that criminal punishment is not only justified by its effects on crime rates. Retributivists claim that punishment is justified even when we set aside its overall consequences. Some argue, for instance, that it gives criminals what they deserve. It might be thought, then, that we should give terrorists longer sentences because they deserve to suffer more. In order to explain why terrorists deserve to suffer more than other criminals who commit similar crimes, we must explain why their actions are worse. The UK legal definition of terrorism distinguishes it from other criminal acts (in part) by requiring the intention of causing fear. If person A kills person B in an act of revenge, they are a murderer; if C kills D in order to intimidate the wider public, they are a terrorist. Because of the greater number of people negatively affected by their actions, we might think that terrorists are morally worse. This thought might be used to justify longer sentences for them. There are problems with this line of argument, though. Although causing fear may generally be morally wrong, it is not usually the sort of thing we think best responded to with criminal punishment. 
Politicians often cause fear by pointing out the (real or imagined) security threats facing their country and its population. While we may decry their cynicism, we are not usually inclined to have criminal charges brought against them. I suspect that this is, at least in part, because of the difficulty of determining intentions like these. One might avoid this issue by re-defining terrorism as an act that in fact causes fear, irrespective of the actor’s intentions. But then criminals would be punished more if they cause a high level of irrational fear in others, which we may find undeserved. If a Muslim who commits a crime will cause greater fear in the wider public because of the prejudiced belief that any Muslim who commits a crime is inevitably acting as part of a wider conspiracy to bring down the West, this line of argument would suggest tougher sentences are in order. Few would support such a policy. There is no simple reason why terrorist offences should be sentenced more harshly than other, similar offences. As I have argued elsewhere, the term “terrorism” is of limited significance and usefulness. We would do better to judge each criminal action based on the specific context in which it is conducted, rather than on whether it counts as terrorism on our favored definition.

(Originally published: 11 October 2019)
There have been many arguments offered in favor of unregulated capitalism. Some, following John Locke, argue that individuals can gain ‘natural rights’ to property, and that respecting those rights requires that we do not interfere with the economic exchanges of consenting adults. Others point to the efficiency that capitalism brings about. Adam Smith famously argued that the ‘invisible hand’ of the market would lead to an optimal distribution of resources. Finally, some link capitalism with personal identity, claiming that property rights are an important precondition of individuals’ self-respect. In the twentieth century, in light of the often-disastrous attempts at central planning carried out in the Soviet Union, a number of authors turned to a new argument. Drawing heavily on the liberal tradition of the West, and the central place it affords individual freedom in its scheme of values, they argued that capitalism and liberty go hand in hand. If one supports the cause of individual liberty, one must also support largely unfettered capitalism. A classic argument along these lines appears in the first chapter of the economist Milton Friedman’s work Capitalism and Freedom (1962). Friedman highlights two ways in which capitalist economic arrangements can contribute toward individual freedom. First, and most straightforwardly, the freedom that capitalism involves – namely, the freedom to buy and sell in an open market – is itself a component of the overall freedom of an individual. Regulating markets invariably involves restricting the number of options open to people about what they do with their own property, and thus reduces their freedom. Second, this economic freedom can also play a role in promoting political freedom. Only when the concentration of political power in central government is offset by economic power in the hands of ordinary citizens can we be sure that our government will not degrade into tyranny. 
Without the ability to amass resources and use them to hold the government to account (by donating them to campaigns, and so on), state power may go unchecked in a dangerous way. While Friedman’s arguments about the connection between unregulated capitalism and freedom might have had some plausibility 60 years ago, more recent technological developments, and the way in which they have interacted with free markets, have started to undermine his assumptions. For one thing, a free market in which high-tech products have a central place may not be so friendly to the component of individual freedom that he pointed to in his first argument. This might be, for example, because of technological monopolies: when only one firm possesses the technology necessary for a given product to be produced, consumers lack much of a say over the terms on which they will buy it. At the same time, complementary products might further limit economic freedom; if one has to use a specific operating system when using a given computer, our capacity for choice is limited. Finally, compatibility issues may mean that our choice over products is further reduced. If everyone is using a specific program for word processing, I may not be able to work with them if I am using something else. Friedman’s ideological ally, Friedrich Hayek, once argued that ‘[o]ur freedom of choice in a capitalist economy rests on the fact that, if one person refuses to satisfy our wishes, we can turn to another’. Although this may make sense with products like umbrellas – if one seller cannot give me a good-quality umbrella, I can find another one further down the street – it cannot be said to apply to many technological products. If a popular social media site refuses to satisfy my privacy preferences, I cannot simply switch to another (unless I want to be on a social media site which nobody else uses). Friedman’s second argument, that economic freedom supports political freedom, has also begun to look suspect. 
New technologies have undoubtedly increased the possibilities for political action: campaign groups can now organize more effectively with the aid of the internet, for instance. But they have also created new opportunities for oppression by powerful interests. Cambridge Analytica have been able to directly target susceptible individuals with messages supporting given political positions. Given that rival positions are not given an opportunity to respond to these messages (since these “dark ads” are not publicly accessible), democratic debate has been stifled and, we might think, the autonomy of electorates has been undermined. Furthermore, as we spend more and more of our lives in virtual spaces (interacting with people on social media rather than in person, for example) the capacity of authoritarian regimes to track the actions of their populations has been increased. A few decades ago, to fully know what its people were up to, a government would have needed to place security cameras and voice recorders on every street corner and in every house. Now, all they need to do is to collect the online data of those people and use algorithms to identify undesirable elements. Entrepreneurs’ financial efforts to support various causes may be exactly the sort of thing that Friedman was thinking about when he talked of the capacity of economic power to offset political power, but the technology that these billionaires made their money from may at the same time set back the cause of political equality. New technologies can be used to expand the freedom of individuals. But if they are to live up to this promise, the economic environment that was once deemed appropriate for simpler products to be bought and sold may need to be replaced. (Originally published: 27 September 2019)
“Project fear” was the derogatory name given to the warnings of commentators who talked about the likely negative consequences of leaving the European Union. Now, worried that Supreme Court judges and the sovereign parliament of the UK will prevent Brexit, a government minister has claimed that the country faces a “violent, popular uprising” comparable to the 1992 Los Angeles riots if a second referendum overturns the result. The hypocrisy of those who would employ this sort of strategy while dismissing fears raised by the other side may explain some of the revulsion many of us feel toward it. But we also need to consider whether this sort of political practice is, in itself, morally problematic. One way of criticizing it is through appealing to Robert E. Goodin’s book What’s Wrong with Terrorism? (2006). While we often think of terrorist acts as, paradigmatically, involving the killing of innocent civilians, Goodin argues that this fails to distinguish terrorism from ordinary murder. For Goodin, the distinctive wrong that terrorists commit is the spreading of fear for political purposes. And although this is often done through acts of murder, terrorist wrongs can be carried out by other means. Cyberterrorism targets computer systems rather than people, for instance. If we accept this definition of terrorism, as Goodin notes, warnings can count as acts of terrorism. And they can do so even when the warnings are well-founded. A politician who warns the public about an imminent attack by a paramilitary group, and does so because they think that this will help them advance their policy agenda (because, for example, it will lead the public to become more favorable to the heavy-handed policing proposals that the politician favors), is a terrorist, on Goodin’s account. So this might be the way to criticize the sort of fearmongering that is going on in British politics at the moment. 
Those who warn of civil unrest because this will ensure that Brexit will go ahead commit one of the wrongs that terrorists do, and can be viewed with the same moral disdain. There are a couple of problems with this line of argument, though. First, it does not give us any practical guidance. If civil unrest is a genuine possibility, then politicians should, it seems, warn people of this. While we can criticize the character of those who only do so because it advances their political agenda, we cannot have anything against the action itself. Second, and more importantly, at a time when political language is becoming highly inflammatory (with talk of “traitors”, “betrayal”, and a “surrender bill” being employed) we would do well not to introduce the term “terrorist” into the mix. As I have suggested elsewhere, it may be a good idea to remove the language of terrorism from our political and legal vocabulary altogether because of its tendency to block off peaceful solutions to political conflicts. But, even if we do not want to call what politicians are doing acts of terrorism, we might at least criticize them for generating terror. And I say “terror”, rather than “fear”, because the term “terror” might have more specific connotations. Someone can have fears about the consequences of a decision that they make while still rationally thinking about the pros and cons of making it in one way or another. But some sorts of threats do not leave the capacity for rational deliberation intact. As Jeremy Waldron notes, someone who is covered in gasoline and told to open a safe or be set on fire is unlikely to think clearly about whether the robbers are likely to follow through with their threat. Their state of terror (rather than mere fear) may lead them to simply do what they are told unreflectively. 
In Hannah Arendt’s famous study, The Origins of Totalitarianism (1951), the creation of terror in a population is identified as one of the central means of control of totalitarian regimes. Creating mere fear by threatening punishment for disobedience of laws can only go so far in keeping an authoritarian government in power. But undermining a people’s capacity for thinking for themselves by penetrating all aspects of society and generating a system in which one can never know when they are being watched and why they will be arrested can create a more stable system in which people stop thinking too much about whether it is pragmatic to obey or to resist injustice, and automatically go along with whatever their leaders tell them. Warning about negative consequences of policies one disagrees with can be a valid political argument. But doing so in certain ways threatens to undermine rational and reasoned discussion of the issues at hand. Those who argue for the necessity of Brexit by the 31st October on any terms, or against a no-deal Brexit, based on the need to avoid civil unrest must tread carefully. They need to make their arguments in a way that does not create terror in a population, and maintains the conditions necessary for a full and measured examination of all evidence at hand. (Originally published: 22 August 2019)
Criminal justice systems have traditionally relied heavily on human decision-making. Juries are asked to decide whether a given defendant is guilty beyond reasonable doubt. Judges routinely make rulings about what severity of sentence to give convicted criminals. And parole boards are tasked with determining whether incarcerated individuals are sufficiently rehabilitated to rejoin society. Because of this reliance on judgement and reason at various stages, the system is vulnerable to being subverted by the various biases and irrationalities that are commonly found in human beings. A study of Israeli judges, for instance, found that they were more likely to give out favourable decisions in parole hearings held after lunch. Smart information systems (SISs) – which combine large sets of data and sophisticated machine learning processes – offer a way of removing these undesirable elements from decision-making. These tools take a wealth of information about individuals’ behaviour and characteristics, and process them using computer algorithms in order to make various predictions and recommendations. For example, public bodies have been assisted in their decisions over who to grant parole to by inputting data about prisoners into SISs and running algorithms which predict how likely they are to re-offend. Similar programs have been developed to help with decisions about whether to grant bail, whether to require rehabilitation rather than prison for someone who has broken the law, and how long defendants’ prison sentences should be. The perils of relying on SISs to make decisions are becoming apparent. For one thing, these systems necessitate the widespread collection of personal data for them to function, and the way in which this data is collected might threaten individuals’ privacy. For another, algorithmic decision-making may often lead to recommendations that we would view as unfair and biased. 
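To make the idea concrete, here is a deliberately crude sketch of the kind of risk prediction involved. Everything in it – the features, the weights, the decision threshold – is invented for illustration; real systems learn their parameters from large datasets and are vastly more complex:

```python
import math

def recidivism_risk(prior_offences, age, months_since_last_offence):
    """Toy logistic risk score in [0, 1]. The weights below are made up
    for illustration; a real SIS would estimate them from training data."""
    z = 0.4 * prior_offences - 0.05 * age - 0.02 * months_since_last_offence + 1.0
    return 1 / (1 + math.exp(-z))

def parole_recommendation(risk, threshold=0.5):
    """Recommend granting parole only when predicted risk is below a threshold."""
    return "grant" if risk < threshold else "deny"

low = recidivism_risk(prior_offences=0, age=50, months_since_last_offence=60)
high = recidivism_risk(prior_offences=8, age=20, months_since_last_offence=1)
print(parole_recommendation(low), parole_recommendation(high))  # → grant deny
```

Even this toy version makes the worries below easy to see: the recommendation depends entirely on which features are collected and how they are weighted, and nothing in the output explains itself to the person it concerns.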
Some programs, for example, have been known to falsely flag black defendants as likely to re-offend at a higher rate than white defendants. But there is another potential drawback of using algorithms in the criminal justice system in particular: one that appears to have gone unnoticed in existing discussions of this emerging technology. To explore this issue, we need to first consider what the goals of criminal punishment should be more generally. Jeremy Bentham, the utilitarian philosopher, argued that punishment should ensure the greatest happiness of the greatest number. While punishing convicted criminals by locking them up in prison is likely to make them unhappy, the reduction in crime, and the increase in happiness that this will bring about in the population as a whole, may often be enough to offset this unhappiness. And when this is the case, Bentham claimed, punishment should be carried out. There are a number of ways in which criminal punishment might reduce crime. It might act as a deterrent for would-be law-breakers; it might ensure that dangerous people are kept off the streets; and it might lead to serial criminals deciding to change their ways. If we only think that punishment should aim at these outcomes, the use of algorithmic decision-making may in fact increase its efficacy. That is, we may be able to use computers to come to more accurate decisions about what sort and severity of punishment is necessary in order to have the desired deterrent, incapacitation and reforming effects. SISs can take into account much larger amounts of information than humans, and can also run much more complex calculations in order to arrive at more reliable predictions. But some philosophers dissent from Bentham’s simple view of criminal punishment. Jean Hampton argued that, although punishment should certainly seek to reduce crime, it should do so in a particular way. 
This is because humans are capable of responding to moral reasons, and we should respect that capacity in our social practices. While an animal that comes across an electric fence may learn not to cross it simply because of the pain experienced in failed attempts, humans, in addition, might start to reflect on the reasons why the fence is there. And they might, as a result, come to see that it is a good thing not to enter the land that lies across from the fence even if they could. Hampton thinks that punishment should work in a similar manner. Not only should it provide immediate disincentives to would-be criminals. It should also, she says, assist them in understanding the reasons they are being punished and, indirectly, the reasons underlying the laws that they have broken. If we want punishment to have this educative function, we might start to view the use of algorithms in this area as problematic. When a criminal is handed down a particular sentence by a judge, we can often see the working, as it were. The judge may provide reasons for giving a sentence at the higher or lower end of those permitted by law, and this might serve an educative goal. By explaining to someone, for example, that a harsher sentence has been given to them because they committed their crime without any mitigating factors, or with blatant disregard for public safety, or with no signs of remorse, a message is sent to the criminal about the wrongness of their actions. This may partially explain some of the appeal of the view that justice must not only be done, but also be seen to be done. The reasons that are routinely given at different places in the criminal justice system may be lost if all decision-making were made by a computer. Even the programmers of SISs often fail to fully understand how their creations function. How are criminals supposed to interpret the outcome of an algorithm if its inner workings are so obscure? 
While relying on humans may lead to a degree of injustice owing to the unconscious biases of those in positions of power, in passing over that power to algorithms we may compromise an important goal of punishment. Criminal justice should seek to educate, and not just discipline, those to whom it is meted out. (Originally published: 8 August 2019)
As the United Kingdom’s deadline for exiting the European Union draws closer (once again), it may be worth reflecting on the strategies used to get people to support policies that go against their own interests. While the use of sophisticated data analytics may have some role to play in the case of Brexit, the depressing truth is that there is also something simpler going on here: namely, old-fashioned political rhetoric. As is so often the case when we want to understand recent political developments, we can learn a lot about some of the rhetoric deployed in favor of leaving the EU by delving into the work of the English writer George Orwell. While the dystopian novel Nineteen Eighty-Four remains Orwell’s most famous, and perhaps most frighteningly relevant, piece, his short essay ‘Politics and the English Language’ – a study of how language has become vague and thus vulnerable to manipulation – is most directly useful to consider in the context of the Brexit campaign. One of Orwell’s gripes about how the English language is used relates to what he calls ‘dying metaphors’. By these, he means metaphors which have become so over-used that they fail to invoke the same sorts of vivid imagery that they might once have done. Phrases like ‘axe to grind’, ‘grist to the mill’, and ‘swan song’ are all dying in this sense. If we are told that so-and-so has an axe to grind about this or that issue, for example, English speakers are unlikely to think of the person in question literally sharpening a weapon in preparation for bloody revenge. The phrase has become so commonplace that they may simply translate the metaphor into what is literally meant (i.e. that so-and-so has a grievance) without first considering the image presented to get this point across. As a result of this lack of vividness, Orwell says, dying metaphors may also lead to a loss of attention among the audience. 
By reverting to these sorts of familiar patterns of speech, the speaker will cause their audience to go into auto-pilot (or ‘a reduced state of consciousness’), not really listening to, much less evaluating, what is being said. This, of course, is why politicians so often use phrases like these. If, as Orwell thought, political speech is largely ‘defense of the indefensible’, its purpose will be a lot easier to achieve if those to whom it is directed can have their evaluative judgement neutered by comforting linguistic devices. Which brings us to Brexit, and more specifically the rhetoric of one of its leading supporters, now-Prime Minister Boris Johnson. Following his appointment as Foreign Secretary in 2016, Johnson outlined his preferred strategy for leaving the EU: ‘Our policy is having our cake and eating it.’ This, of course, is a play on the familiar English idiom “You can’t have your cake and eat it too”, which means that one cannot achieve two mutually incompatible things (such as maintaining possession of a cake and consuming it). But Johnson twists this phrase to say the exact opposite: we can have two seemingly incompatible things (presumably here referring to receiving the benefits of being connected with the EU without many of the associated costs). While some criticized Johnson at the time, he did not receive anything close to the level of derision that would be appropriate for a politician speaking nonsense. The reason, I want to suggest, is that by using the familiar linguistic device to get his point across, Johnson was able to put those listening into the pacified state that Orwell fears people revert to when they hear dying metaphors. To see this, imagine that Johnson had instead come up with an original idiom to claim that we can have two mutually incompatible things. 
He might have said: ‘Our policy is putting it on the slate and not being in debt.’ As will be obvious to everyone familiar with the practices of British pubs in years gone by (as Orwell was, incidentally), these two things cannot both be achieved. One cannot promise to pay for drinks at a later date (putting the amount owed ‘on the slate’) and simultaneously not be in debt. Had Johnson used this phrase, the sheer ridiculousness of what he was saying might have become more apparent. Or, even better, suppose that Johnson had dispensed with metaphorical talk altogether and said what he meant in plain terms: ‘Our policy is remaining part of the single market while gaining complete control over immigration into the UK.’ As anyone with a vague knowledge of European politics will know, such an outcome is highly unlikely: one of the guiding principles of the EU is the so-called inseparability of the four freedoms (of goods, capital, services, and labour). They come as an all or nothing package; countries cannot pick and choose, and the EU would act in a hugely surprising way if it allowed opt-outs in any circumstances. The proposed policy is therefore just about as infeasible as having a cake and eating it. But, had his point been made in this way, the contradictions of what Johnson was saying would have been much more apparent. By hiding behind metaphorical language, he was able to avoid the sort of scrutiny that may have faced him if he simply said what he meant. The blithering style that is the modus operandi of the current Prime Minister may mean good business for satirists, but there is something worrying about the vague and rambling language that is becoming commonplace in mainstream politics. We would do well here to heed Orwell’s warnings about how political and linguistic decay often go hand in hand.
About
Here are blog posts originally published on my blog "Philosophers' Strike". I may occasionally blog here again in the future.