Responsibility Gaps
The use of AI to make high-stakes decisions has been thought to generate a "responsibility gap": a problematic situation in which nobody is responsible for the decision. Yet some argue that (i) responsibility gaps do not exist and (ii) even if they do, they can easily be removed with organizational or technological interventions. My ongoing research project in AI ethics evaluates these responses.
Regarding (ii), I have examined whether appealing to collective responsibility or utilizing explainable AI techniques can bridge responsibility gaps. I find significant limits to these strategies (here, here, and here). In the future, I plan to write a book (provisionally titled Responsibility Gaps) in which I examine (i). In my view, there are different forms of responsibility that we care about, and we may care about them for context-sensitive reasons. The use of AI in the public sector, for example, may require that certain types of responsibility be maintained. It may matter, for instance, that key decisions are made in a way that reflects our values.
Artificial Speech
Liberal democracies are characterized not only by the outcomes they produce, but also by the processes through which those outcomes are brought about. Many of these processes are linguistically structured: for liberal democrats, the legitimate state is a site of explanation, justification, and contestation, in which the authority to speak in the name of the political community is exercised by public officials with well-defined institutional roles. Yet, in many countries, these communicative functions are increasingly being delegated to artificial intelligence systems. In a future project, I plan to examine whether liberal-democratic legitimacy can survive the widespread “contracting out” of political speech to non-human agents. This work seeks to show that artificial intelligence does not merely reshape how democracies govern, but forces a rethinking of who (or what) can legitimately speak for a political community.