  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: the Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as (...)
  • AI, agency and responsibility: the VW fraud case and beyond. Deborah G. Johnson & Mario Verdicchio - 2019 - AI and Society 34 (3):639-647.
    The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as (...)
  • Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege. Lantz Fleming Miller - 2015 - Human Rights Review 16 (4):369-391.
    As engineers propose constructing humanlike automata, the question arises as to whether such machines merit human rights. The issue warrants serious and rigorous examination, although it has not yet cohered into a conversation. To put it into a sure direction, this paper proposes phrasing it in terms of whether humans are morally obligated to extend to maximally humanlike automata full human rights, or those set forth in common international rights documents. This paper’s approach is to consider the ontology of humans (...)
  • Neurosurgical robots and ethical challenges to medicine. Arthur Saniotis & Maciej Henneberg - 2021 - Ethics in Science and Environmental Politics 21:25-30.
    Over the last 20 yr, neurosurgical robots have been increasingly assisting in neurosurgical procedures. Surgical robots are considered to have noticeable advantages over humans, such as reduction of procedure time, surgical dexterity, no experience of fatigue and improved healthcare outcomes. In recent years, neurosurgical robots have been developed to perform various procedures. Public demand is informing the direction of neurosurgery and placing greater pressure on neurosurgeons to use neurosurgical robots. The increasing diversity and sophistication of neurosurgical robots have received ethical (...)
  • 'You have to put a lot of trust in me': autonomy, trust, and trustworthiness in the context of mobile apps for mental health. Regina Müller, Nadia Primc & Eva Kuhn - 2023 - Medicine, Health Care and Philosophy 26 (3):313-324.
    Trust and trustworthiness are essential for good healthcare, especially in mental healthcare. New technologies, such as mobile health apps, can affect trust relationships. In mental health, some apps need the trust of their users for therapeutic efficacy and explicitly ask for it, for example, through an avatar. Suppose an artificial character in an app delivers healthcare. In that case, the following questions arise: Whom does the user direct their trust to? Whether and when can an avatar be considered trustworthy? Our (...)
  • Autonomous Military Systems: collective responsibility and distributed burdens. Niël Henk Conradie - 2023 - Ethics and Information Technology 25 (1):1-14.
    The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most (...)
  • Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.