  1. The Hard Problem of AI Alignment: Value Forks in Moral Judgment. Markus Kneer & Juri Viehoff - 2025 - Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency.
    Complex moral trade-offs are a basic feature of human life: for example, confronted with scarce medical resources, doctors must frequently choose who amongst equally deserving candidates receives medical treatment. But choosing what to do in moral trade-offs is no longer a ‘humans-only’ task, but often falls to AI agents. In this article, we report findings from a series of experiments (N=1029) intended to establish whether agent-type (Human vs. AI) matters for what should be done in moral trade-offs. We find that, (...)
  2. The Monstrous Conclusion. Luca Stroppa - 2024 - Synthese 203 (6):1-24.
    This paper introduces the Monstrous Conclusion, according to which, for any population, there is a better population consisting of just one individual (the Monster). The Monstrous Conclusion is deeply counterintuitive. I defend a version of Prioritarianism as a particularly promising population axiology that does not imply the Monstrous Conclusion. According to this version of Prioritarianism, which I call Asymptotic Prioritarianism, there is diminishing marginal moral importance of individual welfare that can get close to, but never quite reach, some upper limit. (...)