(Originally published: 20 November 2022)
The name of this blog comes from Douglas Adams’ masterful work The Hitchhiker’s Guide to the Galaxy. Adams describes a race of “hyperintelligent, pan-dimensional beings” who, long ago, constructed a supercomputer called “Deep Thought” to work out the meaning of life. After Deep Thought is switched on, two philosophers burst in and angrily demand “demarcation”: if a computer can simply dispose of any uncertainty around life’s ultimate questions, what role is left for philosophy? Perhaps sensing that their arguments are falling on deaf ears, the pair proceed to threaten the nuclear option: a national philosophers’ strike. In the original radio show, Deep Thought has the discourtesy to ask the philosophers whom the strike will inconvenience. ‘Never you mind who it’ll inconvenience, you box of blacklegging binary bits! It’ll hurt, buster! It’ll hurt!’ responds the philosopher Majikthise.
While in the 1970s it was difficult to say much more than this, today a philosophers’ strike might be more noticed, at least in certain circles that rely on those with philosophical training to give their actions a stamp of approval. Organizations involved in AI research and development, for example, have over the past few years sought to bring those working on ethical aspects of the technology inside their tent and to ensure that ethical expertise is present in-house. When tensions arise between the ethicists and their employers, the former’s threat of withdrawing their labor may seem to have more bite. In 2020, the researcher Timnit Gebru says she was fired by Google after refusing to remove her name from a research paper. (Google maintains that she resigned.) The departure generated negative press for Google, of a sort that a philosophers’ strike in earlier eras would not have. Of course, to have that sort of effect on powerful actors, one needs to make oneself indispensable. And, to do so, it may be necessary to provide something valuable to them in the first place.
The reason why big tech is hiring ethicists is presumably that it thinks the recommendations and criticisms they provide can be accommodated within the status quo. There are, of course, different levels of idealization at which we can do philosophy. That is, we might take more or fewer aspects of the world as we find them as fixed, and seek to mitigate injustice within those boundaries. Vigilance is called for in making these methodological commitments. The countless “AI ethics” guidelines that have been put forward might be more or less appropriately concessionary in their acceptance of various aspects of the world as fixed. But there is a second danger that the ethicist must look out for here: the danger that the ethical principles put forward will not have the desired effect – and might in fact be purposefully interpreted in bad faith by those whom they are supposed to constrain. While there may be better or worse ways of formulating principles to avoid these dangers, any public engagement will inevitably carry this risk. The worries about legitimating immoral practices, however, are not restricted to those who seek out impact. The case of the nineteenth-century philosopher John Stuart Mill provides one cautionary tale. In his essay “A Few Words on Non-Intervention”, Mill gives a robust argument against interfering militarily in the affairs of sovereign states. Even when an unjust oppressor has turned against its subjects in a civil war, Mill says, we have pragmatic reasons to keep our distance. “The only test possessing any real virtue of a people’s having become fit for popular institutions,” says Mill, “is that they, or a sufficient portion of them to prevail in the contest, are able to brave labor and danger for their liberation.” The one exception Mill allows is in cases where another nation has already intervened; action in these cases can be understood as a simple case of “re-balancing”.
Like so much of Mill’s thought, his claims here appear to have had a wide-ranging impact on our world. During the twentieth century, the injunction against external intervention in civil wars was recognized as a customary aspect of international law. But on at least one occasion, Mill’s influence had an unexpected effect. During the Korean War, both sides appealed to the international legal framework to justify their own position – and ultimately to extend the war. The US viewed the war as an act of international aggression by North Korea against the South (and consequently viewed its own involvement as unproblematic). The Eastern bloc, however, who were backing Kim Il-Sung’s regime in the North, claimed that there were not two countries on the Korean peninsula but one. Consequently, they said, what was happening was an illegitimate intervention in a civil war by the US – an action ruled out by the Americans’ own liberal hero. While Mill’s principle may have seemed clear, its application in practice could be manipulated to justify opposing causes.
Worries about normative work being abused by those it is supposed to apply to thus have a long history. Philosophers today – who increasingly seek impact on the world – are stuck in a double bind. The more practically relevant their work becomes, the more it loses its critical potential. Remaining useless – to the extent that the threat of withdrawing one’s labor will be met with a shrug of the shoulders – may not be what we hoped for when we started bringing philosophical tools to bear on real-world practical problems. But it might be a sign that we are uncovering inconvenient – but important – truths.