  1. Frank Hong (2024). Group Prioritarianism: Why AI should not replace humanity. Philosophical Studies: 1-19.
    If a future AI system can enjoy far more well-being per resource than a human, what would be the best way to allocate resources between these future AIs and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always (...)
  2. Victor Carranza-Pinedo (manuscript). Understanding the multidimensionality of sentience in interspecies welfare comparisons.