(Originally published: 5 September 2023)
In the future, high-stakes actions may be carried out by machines. If self-driving cars are granted greater autonomy, for instance, they may need to be programmed to respond in certain ways when faced with an inevitable accident. Should the car drive straight on, running over a number of pedestrians in the middle of the road, or should it swerve to one side, thereby endangering its passengers’ lives? If we develop autonomous weapons systems, which would use AI to select and engage targets without direct human input, it is vital that the relevant ethical rules of war are followed. Would attacking a given target be proportionate if a number of civilians in the vicinity were put at risk?

How should we ensure that AI systems such as these are used ethically? What would this even mean? According to the influential “machine ethics” movement, we should make sure that the machines themselves will act ethically. According to one representative account of this idea, the goal is ‘to create a machine that itself follows an ideal ethical principle or set of principles; that is to say, it is guided by this principle or those principles in decisions it makes about possible courses of action it could take.’ But does it even make sense to talk about machines acting ethically? To determine this, we need to turn from ethics (which concerns first-order questions about right and wrong) to metaethics (which concerns second-order questions about the nature of right and wrong). On certain metaethical views, I suggest, the whole idea of machine ethics is a non-starter.

David Hume, the eighteenth-century Scottish philosopher, arguably puts forward one such view. Hume famously claims that one cannot derive an ethical conclusion from purely empirical premises. When we consider moral arguments, if they are well formed, we will always find at least one normative premise; otherwise there is an unjustifiable move from statements of fact to statements about what ought to be. An implication of this is that we cannot make well-founded moral judgements on the basis of reason alone.

Does Hume’s law, as it is known, undermine the project of machine ethics? Not necessarily. Even if normative conclusions must arise from normative, as well as factual, premises, it may well be possible to ensure that robots follow the prescriptions contained within those conclusions. This could, for example, be done by programming in strict rules that the robots must follow when acting. Designers could, for instance, determine in advance the rules that self-driving cars should follow in accident scenarios and ensure that the car will, say, save the most lives in all such cases. Alternatively, we might use machine learning techniques to ensure that the AI systems within our machines develop ethical constraints themselves, without the need for these to be specified in advance. The latter technique might be necessary if we think that deciding what the right thing to do is requires sensitivity to the particular features of the case at hand, rather than a simple disposition to follow a small number of ethical rules.
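To make the first, rule-based option more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the Outcome class, the choose_action function, and the “minimize expected fatalities” rule are illustrative assumptions of mine, not a claim about how any actual system is built.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action in an accident scenario (hypothetical)."""
    action: str              # e.g. "continue straight", "swerve left"
    expected_fatalities: int

def choose_action(outcomes: list[Outcome]) -> str:
    """Pre-programmed rule, fixed by the designers in advance:
    pick whichever action is expected to cost the fewest lives."""
    return min(outcomes, key=lambda o: o.expected_fatalities).action

# A stylized accident scenario of the kind discussed above:
scenario = [
    Outcome(action="continue straight", expected_fatalities=3),
    Outcome(action="swerve left", expected_fatalities=1),
]
print(choose_action(scenario))  # -> swerve left
```

The point of the sketch is simply that, once the designers have fixed a rule, following it is a mechanical matter. What the sketch cannot settle is whether minimizing expected fatalities is the right rule; that is a normative question, and it is exactly there that Hume’s worry bites.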
But there is another, related, aspect of Hume’s thought which, if justified, might pose greater problems for the proponents of machine ethics. We have seen that Hume believed that ethical conclusions cannot be drawn on the basis of purely empirical premises. There must be a distinctively normative element in the support for these conclusions.

But, if we want to draw well-founded ethical conclusions, where might we gain knowledge of this normativity? On one view, normative properties (like goodness and badness) exist in the world, and we might gain knowledge of them in exactly the same way in which we gain knowledge about other features of the world, such as the material a piece of furniture is made from: empirical investigation involving the use of our senses. This, however, is not Hume’s view. In his A Treatise of Human Nature, Hume argues that, no matter how hard we search, we will not find normative elements (such as virtue or vice) in the external world:

‘Take any action allow’d to be vicious: Wilful murder for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In whichever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case. The vice entirely escapes you, as long as you consider the object. You can never find it, till you turn your reflection to your own breast, and find a sentiment of disapprobation, which arises in you, towards the action.’

One way of interpreting Hume here is as a projectivist. According to this view, moral properties do not exist in a mind-independent way in the world, but are rather “projected” onto the world by us. The crucial thing to note here is that moral properties, for the projectivist, still exist and may still be of great importance. We might have reasons to be moral, even if projectivism is true. All the projectivist holds is that, in spite of the views about morality held by most people (the “vulgar”, as Hume puts it), these moral properties are subjective entities that we have a part in bringing into existence, rather than objective things that are “out there” waiting to be discovered.

But how do we humans give rise to moral properties, if their existence cannot be based on reason alone? In one of his early books, the Oxford political theorist David Miller outlines the indispensable role of the moral sentiments in Hume’s thought. It is only when a capacity for distinctive, not entirely rational, reactions to external states of affairs is combined with reason that we are able to make moral judgements.

We are now in a position to see the potential problem for machine ethicists. Even if future machines are able to match humans’ rational capacities (which they have at least managed in certain domains to date), they seem a long way from developing the sorts of moral sentiments that are also necessary for making well-founded moral judgements. Of course, they might still be able to act ethically if they respond appropriately to the moral properties that we project onto the world. However, it may be thought that morality is too messy for machines simply to follow the judgements of others and hope thereby to act ethically. According to the sort of particularist ethics that Miller derives from Hume’s thought, acting morally requires not following a set of definite principles, but rather considering each new situation one faces and evaluating the different courses of action in terms of their moral merits. It requires judgement on a case-by-case basis. And, until such a point as we can simulate the moral sentiments in machines, they can have no such judgement, on this Humean view.

Does this mean that the actions of machines are beyond morality, not properly evaluable in terms of ethics? If Hume is right, this seems to be the correct conclusion to draw.
But that is not the end of the story. We also need to evaluate the actions of those who design and use advanced machines and who, in doing so, abdicate responsibility for ethical dilemmas that they would otherwise face themselves. These decisions to delegate are properly ethically evaluable, and we should perhaps evaluate them negatively if they create an “ethics-free” zone of action. Passing off high-stakes tasks to machines will not let the humans behind them off the hook.