  • Could a robot feel pain? Amanda Sharkey - forthcoming - AI and Society:1-11.
    Questions about robots feeling pain are important because the experience of pain implies sentience and the ability to suffer. Pain is not the same as nociception, a reflex response to an aversive stimulus. The experience of pain in others has to be inferred. Danaher’s (Sci Eng Ethics 26(4):2023–2049, 2020. https://doi.org/10.1007/s11948-019-00119-x) ‘ethical behaviourist’ account claims that if a robot behaves in the same way as an animal that is recognised to have moral status, then its moral status should also be assumed. (...)
  • Reasons to Respond to AI Emotional Expressions. Rodrigo Díaz & Jonas Blatter - forthcoming - American Philosophical Quarterly.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do we have reasons (...)
  • Sentience, Vulcans, and Zombies: The Value of Phenomenal Consciousness. Joshua Shepherd - forthcoming - AI and Society:1-11.
    Many think that a specific aspect of phenomenal consciousness – valenced or affective experience – is essential to consciousness’s moral significance (valence sentientism). They hold that valenced experience is necessary for well-being, or moral status, or psychological intrinsic value (or all three). Some think that phenomenal consciousness generally is necessary for non-derivative moral significance (broad sentientism). Few think that consciousness is unnecessary for moral significance (non-necessitarianism). In this paper I consider the prospects for these views. I first consider the prospects (...)
  • Moral Uncertainty and Our Relationships with Unknown Minds. John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)
  • Humans, Neanderthals, robots and rights. Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will (...)
  • Is moral status done with words? Miriam Gorr - 2024 - Ethics and Information Technology 26 (1):1-11.
    This paper critically examines Coeckelbergh’s (2023) performative view of moral status. Drawing parallels to Searle’s social ontology, two key claims of the performative view are identified: (1) Making a moral status claim is equivalent to making a moral status declaration. (2) A successful declaration establishes the institutional fact that the entity has moral status. Closer examination, however, reveals flaws in both claims. The second claim faces a dilemma: individual instances of moral status declaration are likely to fail because they do (...)
  • Legal personhood for the integration of AI systems in the social context: a study hypothesis. Claudio Novelli - forthcoming - AI and Society:1-13.
    In this paper, I shall set out the pros and cons of assigning legal personhood to artificial intelligence systems under civil law. More specifically, I will provide arguments supporting a functionalist justification for conferring personhood on AIs, and I will try to identify what content this legal status might have from a regulatory perspective. Being a person in law implies the entitlement to one or more legal positions. I will mainly focus on liability as it is one of the main (...)
  • Not Relational Enough? Towards an Eco-Relational Approach in Robot Ethics. Anna Puzio - 2024 - Philosophy and Technology 37 (2):1-24.
    With robots increasingly integrated into various areas of life, the question of relationships with them is gaining prominence. Are friendship and partnership with robots possible? While there is already extensive research on relationships with robots, this article critically examines whether the relationship with non-human entities is sufficiently explored on a deeper level, especially in terms of ethical concepts such as autonomy, agency, and responsibility. In robot ethics, ethical concepts and considerations often presuppose properties such as consciousness, sentience, and intelligence, which (...)
  • Should criminal law protect love relation with robots? Kamil Mamak - 2024 - AI and Society 39 (2):573-582.
    Whether or not we call a love-like relationship with robots true love, some people may feel and claim that, for them, it is a sufficient substitute for a love relationship. The love relationship between humans has a special place in our social life. On the grounds of both morality and law, our significant other can expect special treatment. It is understandable that, precisely because of this kind of relationship, we save our significant other instead of others or will not testify against (...)
  • Can AI determine its own future? Aybike Tunç - forthcoming - AI and Society:1-12.
    This article investigates the capacity of artificial intelligence (AI) systems to claim the right to self-determination while exploring the prerequisites for individuals or entities to exercise control over their own destinies. The paper delves into the concept of autonomy as a fundamental aspect of self-determination, drawing a distinction between moral and legal autonomy and emphasizing the pivotal role of dignity in establishing legal autonomy. The analysis examines various theories of dignity, with a particular focus on Hannah Arendt’s perspective. Additionally, the (...)
  • A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of "both empirical and intuitive support" (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Playing Brains: The Ethical Challenges Posed by Silicon Sentience and Hybrid Intelligence in DishBrain. Stephen R. Milford, David Shaw & Georg Starke - 2023 - Science and Engineering Ethics 29 (6):1-17.
    The convergence of human and artificial intelligence is currently receiving considerable scholarly attention. Much debate about the resulting _Hybrid Minds_ focuses on the integration of artificial intelligence into the human brain through intelligent brain-computer interfaces as they enter clinical use. In this contribution we discuss a complementary development: the integration of a functional in vitro network of human neurons into an _in silico_ computing environment. To do so, we draw on a recent experiment reporting the creation of silico-biological intelligence as (...)
  • Clinicians’ criteria for fetal moral status: viability and relationality, not sentience. Lisa Campo-Engelstein & Elise Andaya - 2024 - Journal of Medical Ethics 50 (9):634-639.
    The antiabortion movement is increasingly using ostensibly scientific measurements such as ‘fetal heartbeat’ and ‘fetal pain’ to provide ‘objective’ evidence of the moral status of fetuses. However, there is little knowledge on how clinicians conceptualise and operationalise the moral status of fetuses. We interviewed obstetrician/gynaecologists and neonatologists on this topic since their practice regularly includes clinical management of entities of the same gestational age. Contrary to our expectations, there was consensus among clinicians about conceptions of moral status regardless of specialty. (...)
  • No Agent in the Machine: Being Trustworthy and Responsible about AI. Niël Henk Conradie & Saskia K. Nagel - 2024 - Philosophy and Technology 37 (2):1-24.
    Many recent AI policies have been structured under labels that follow a particular trend: national or international guidelines, policies or regulations, such as the EU’s and USA’s ‘Trustworthy AI’ and China’s and India’s adoption of ‘Responsible AI’, use a label that follows the recipe of [agentially loaded notion + ‘AI’]. A result of this branding, even if implicit, is to encourage the application by laypeople of these agentially loaded notions to the AI technologies themselves. Yet, these notions are appropriate only (...)