Group Prioritarianism: Why AI should not replace humanity

Philosophical Studies 1-19 (2024)

Abstract

If a future AI system can enjoy far more well-being than a human per unit of resource, what would be the best way to allocate resources between such future AI and our future descendants? On total utilitarianism, it is obvious that one should give everything to the AI. It turns out, however, that every welfarist axiology on the market gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories, which hold that we ought not always create the world with the most value, or to non-welfarist theories, which hold that the best world may not be the world with the most welfare, I propose a new theory that justifies giving some resources to humanity in the face of overwhelming AI well-being. I call this new theory "Group Prioritarianism".
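
A toy calculation (my own illustration; the formal setup is not given in this abstract) makes the total-utilitarian point vivid. Suppose a resource budget $R$ can be split between an AI whose well-being per unit of resource is $a$ and humans whose well-being per unit is $h$, with $a > h$. If the AI receives $x$ units, total welfare is

$$W(x) = a\,x + h\,(R - x) = hR + (a - h)\,x,$$

which is strictly increasing in $x$ and so maximized at $x = R$: every resource goes to the AI. The paper's claim is that, under consequentialism, the other welfarist axiologies on the market reach the same verdict, and Group Prioritarianism is proposed precisely to avoid it.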

Author's Profile

Frank Hong
University of Hong Kong
