  • Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  • Human achievement and artificial intelligence. Brett Karlan - 2023 - Ethics and Information Technology 25 (3):1-12.
    In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond what any human can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game at (...)
  • Value Sensitive Design for autonomous weapon systems – a primer. Christine Boshuijzen-van Burken - 2023 - Ethics and Information Technology 25 (1):1-14.
    Value Sensitive Design (VSD) is a design methodology developed by Batya Friedman and Peter Kahn (2003) that brings moral deliberations into an early stage of the design process. It assumes that technology itself is not value neutral, and that value-ladenness cannot be shifted solely onto the use of technology. This paper adds to emerging literature on VSD for autonomous weapons systems development and discusses extant literature on values in autonomous systems development in general and in autonomous weapons development in particular. I identify (...)
  • Aesthetic Value and the AI Alignment Problem. Alice C. Helliwell - 2024 - Philosophy and Technology 37 (4):1-21.
    The threat from possible future superintelligent AI has given rise to discussion of the so-called “value alignment problem”. This is the problem of how to ensure artificially intelligent systems align with human values, and thus (hopefully) mitigate risks associated with them. Naturally, AI value alignment is often discussed in relation to morally relevant values, such as the value of human lives or human wellbeing. However, solutions to the value alignment problem target all human values, not only morally relevant ones. Is (...)
  • Friendly AI. Barbro Fröding & Martin Peterson - 2020 - Ethics and Information Technology 23 (3):207-214.
    In this paper we discuss what we believe to be one of the most important features of near-future AIs, namely their capacity to behave in a friendly manner to humans. Our analysis of what it means for an AI to behave in a friendly manner does not presuppose that proper friendships between humans and AI systems could exist. That would require reciprocity, which is beyond the reach of near-future AI systems. Rather, we defend the claim that social AIs should be (...)