Evaluating Risks of Astronomical Future Suffering: False Positives vs. False Negatives Regarding Artificial Sentience

Abstract

Failing to recognise sentience in AI systems (false negatives) poses a far greater risk of potentially astronomical suffering than mistakenly attributing sentience to non-sentient systems (false positives). This paper analyses the issue through the moral frameworks of longtermism, utilitarianism, and deontology, concluding that all three assign greater urgency to avoiding false negatives. Given the astronomical number of AIs that may exist in the future, even a small chance of overlooking sentience is an unacceptable risk. To address this, the paper proposes a comprehensive approach encompassing research, field-building, and tentative policy development. Humanity must take steps to ensure the well-being of all sentient minds, both biological and artificial.

Added to PP
2024-04-04
