Citations of:

Designing AI with Rights, Consciousness, Self-Respect, and Freedom

In Ethics of Artificial Intelligence. New York, NY, USA, pp. 459-479 (2020).

  • Reasons to Respond to AI Emotional Expressions. Rodrigo Díaz & Jonas Blatter - forthcoming - American Philosophical Quarterly.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do we have reasons (...)
  • Sentientism, Motivation, and Philosophical Vulcans.Luke Roelofs - 2023 - Pacific Philosophical Quarterly 104 (2):301-323.
    If moral status depends on the capacity for consciousness, what kind of consciousness matters exactly? Two popular answers are that any kind of consciousness matters (Broad Sentientism), and that what matters is the capacity for pleasure and suffering (Narrow Sentientism). I argue that the broad answer is too broad, while the narrow answer is likely too narrow, as Chalmers has recently argued by appeal to ‘philosophical Vulcans’. I defend a middle position, Motivational Sentientism, on which what matters is motivating consciousness: (...)
    (4 citations)
  • Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)