Could a robot feel pain?

AI and Society (forthcoming)

Abstract

Questions about robots feeling pain are important because the experience of pain implies sentience and the ability to suffer. Pain is not the same as nociception, a reflex response to an aversive stimulus. The experience of pain in others has to be inferred. Danaher’s ‘ethical behaviourist’ account (Sci Eng Ethics 26(4):2023–2049, 2020) claims that if a robot behaves in the same way as an animal that is recognised to have moral status, then its moral status should also be assumed. Similarly, under a precautionary approach (Sebo in Harvard Rev Philos 25:51–70, 2018), entities from foetuses to plants and robots are given the benefit of the doubt and assumed to be sentient. However, there is a growing consensus about the scientific criteria used to indicate pain and the ability to suffer in animals (Birch in Anim Sentience, 2017; Sneddon et al. in Anim Behav 97:201–212, 2014). These include the presence of a central nervous system, changed behaviour in response to pain, and the effects of analgesic pain relief. Few of these criteria are met by robots, and there are risks in assuming that they are sentient and capable of suffering pain. Since robots lack nervous systems and living bodies, there is little reason to believe that future robots capable of feeling pain could (or should) be developed.

Author's Profile

Amanda Jane Caroline Sharkey
University of Sheffield
