Moral Uncertainty and Our Relationships with Unknown Minds

Cambridge Quarterly of Healthcare Ethics 32 (4):482-495 (2023)

Abstract

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this uncertainty is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.

Author's Profile

John Danaher
University College, Galway
