References
  • Moral disagreement and artificial intelligence. Pamela Robinson - 2024 - AI and Society 39 (5): 2425-2438.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. _Moral solutions_ apply a moral theory or related principles and largely ignore the details of the disagreement. (...)
  • Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies: 1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  • Anticipatory gaps challenge the public governance of heritable human genome editing. Jon Rueda, Seppe Segers, Jeroen Hopster, Karolina Kudlek, Belén Liedo, Samuela Marchiori & John Danaher - 2024 - Journal of Medical Ethics.
    Considering public moral attitudes is a hallmark of the anticipatory governance of emerging biotechnologies, such as heritable human genome editing. However, such anticipatory governance often overlooks that future morality is open to change and that future generations may perform different moral assessments on the very biotechnologies we are trying to govern in the present. In this article, we identify an ‘anticipatory gap’ that has not been sufficiently addressed in the discussion on the public governance of heritable genome editing, namely, uncertainty (...)
  • People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors. Simon Myers & Jim A. C. Everett - 2025 - Cognition 256 (C): 106028.