References
  • Moral disagreement and artificial intelligence. Pamela Robinson - 2024 - AI and Society 39 (5): 2425-2438.
    Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. _Moral solutions_ apply a moral theory or related principles and largely ignore the details of the disagreement. (...)
  • Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies: 1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  • The AI-design regress. Pamela Robinson - 2025 - Philosophical Studies 182 (1): 229-255.
    How should we design AI systems that make moral decisions that affect us? When there is disagreement about which moral decisions should be made and which methods would produce them, we should avoid arbitrary design choices. However, I show that this leads to a regress problem similar to the one metanormativists face involving higher orders of uncertainty. I argue that existing strategies for handling this parallel problem give verdicts about where to stop in the regress that are either too arbitrary (...)