  • Artificial Intelligent Systems and Ethical Agency. Reena Cheruvalath - 2023 - Journal of Human Values 29 (1):33-47.
    The article examines the challenges involved in the process of developing artificial ethical agents. The process involves the creators or design professionals, the procedures used to develop an ethical agent, and the artificial systems themselves. There are two possibilities available to create artificial ethical agents: (a) programming ethical guidance into artificial intelligence (AI)-equipped machines and/or (b) allowing AI-equipped machines to learn ethical decision-making by observing humans. However, it is difficult to fulfil these possibilities due to the subjective nature of ethical decision-making. (...)
  • The notion of moral competence in the scientific literature: a critical review of a thin concept. Dominic Martin, Carl-Maria Mörch & Emmanuelle Figoli - 2023 - Ethics and Behavior 33 (6):461-489.
    This critical review accomplishes two main tasks. First, it identifies the most common conceptions of moral competence in the scientific literature, as well as the different ways to measure this type of competence. Having moral judgment is the most popular element of moral competence, but the literature introduces many other elements. The review also shows that there is a plethora of ways to measure moral competence, either with standardized tests that provide scores or with other, non-standardized tests. As a (...)
  • Truth, Lies and New Weapons Technologies: Prospects for Jus in Silico? Esther D. Reed - 2022 - Studies in Christian Ethics 35 (1):68-86.
    This article tests the proposition that new weapons technology requires Christian ethics to dispense with the just war tradition (JWT) and argues for its development rather than dissolution. Those working in the JWT should be under no illusions, however, that new weapons technologies could (or already do) represent threats to the doing of justice in the theatre of war. These threats include weapons systems that deliver indiscriminate, disproportionate or otherwise unjust outcomes, or that are operated within (quasi-)legal frameworks marked by (...)
  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • We need to talk about deception in social robotics! Amanda Sharkey & Noel Sharkey - 2020 - Ethics and Information Technology 23 (3):309-316.
    Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when (...)
  • Technology as Terrorism: Police Control Technologies and Drone Warfare. Jessica Wolfendale - 2021 - In Scott Robbins, Alastair Reed, Seamus Miller & Adam Henschke (eds.), Counter-Terrorism, Ethics, and Technology: Emerging Challenges At The Frontiers Of Counter-Terrorism. Springer. pp. 1-21.
    Debates about terrorism and technology often focus on the potential uses of technology by non-state terrorist actors and by states as forms of counterterrorism. Yet, little has been written about how technology shapes how we think about terrorism. In this chapter I argue that technology, and the language we use to talk about technology, constrains and shapes our understanding of the nature, scope, and impact of terrorism, particularly in relation to state terrorism. After exploring the ways in which technology shapes (...)
  • Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
  • A qualified defense of top-down approaches in machine ethics. Tyler Cook - forthcoming - AI and Society:1-15.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  • ‘How could you even ask that?’ Moral considerability, uncertainty and vulnerability in social robotics. Alexis Elder - 2020 - Journal of Sociotechnical Critique 1 (1):1-23.
    When it comes to social robotics (robots that engage human social responses via “eyes” and other facial features, voice-based natural-language interactions, and even evocative movements), ethicists, particularly in European and North American traditions, are divided over whether and why such robots might be morally considerable. Some argue that moral considerability is based on internal psychological states like consciousness and sentience, and debate about the thresholds of such features sufficient for ethical consideration, a move sometimes criticized for being overly dualistic in its framing (...)
  • A principlist-based study of the ethical design and acceptability of artificial social agents. Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software-driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate which ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • Neuromodulación para la mejora de la agencia moral: el neurofeedback [Neuromodulation for the enhancement of moral agency: neurofeedback]. Paloma J. García Díaz - 2021 - Dilemata 34:105-119.
    This article aims to pay heed to the rational and deliberative dimensions of moral agency within the project of moral enhancement. In this sense, it presents how the technique of neurofeedback might contribute to the enhancement of moral deliberation and autonomy. Furthermore, this brain-computer interface is conceived as a possible element of a Socratic moral assistant aimed at improving moral enhancement within a model of full interaction between moral agents and such a moral assistant. This proposal does not embrace (...)
  • Autonomous weapons systems, killer robots and human dignity. Amanda Sharkey - 2019 - Ethics and Information Technology 21 (2):75-87.
    One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro, 2012; Docherty, 2014; Heyns, 2017; Ulgen, 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher, 2016; Pop, 2018; Saxton, 2016). This paper critically examines the relationship between human dignity and autonomous weapons systems. Three main types of objection to AWS are identified: (i) arguments based on technology and the (...)
  • Value preference profiles and ethical compliance quantification: a new approach for ethics by design in technology-assisted dementia care. Eike Buhr, Johannes Welsch & M. Salman Shaukat - forthcoming - AI and Society:1-17.
    Monitoring and assistive technologies (MATs) are being used more frequently in healthcare. A central ethical concern is the compatibility of these systems with the moral preferences of their users—an issue especially relevant to participatory approaches within the ethics-by-design debate. However, users’ incapacity to communicate preferences or to participate in design processes, e.g., due to dementia, presents a hurdle for participatory ethics-by-design approaches. In this paper, we explore the question of how the value preferences of users in the field of dementia (...)
  • Prolegómenos a una ética para la robótica social [Prolegomena to an ethics for social robotics]. Júlia Pareto Boada - 2021 - Dilemata 34:71-87.
    Social robotics has a high disruptive potential, for it expands the field of application of intelligent technology to practical contexts of a relational nature. Due to their capacity to “intersubjectively” interact with people, social robots can take on new roles in our daily activities, multiplying the ethical implications of intelligent robotics. In this paper, we offer some preliminary considerations for ethical reflection on social robotics, in order to clarify how to correctly orient critical-normative thinking in this arduous task. (...)
  • AI ethics and the banality of evil. Payman Tajalli - 2021 - Ethics and Information Technology 23 (3):447-454.
    In this paper, I draw on Hannah Arendt’s notion of the ‘banality of evil’ to argue that as long as AI systems are designed to follow codes of ethics or particular normative ethical theories chosen by us and programmed into them, they are Eichmanns destined to commit evil. Since intelligence alone is not sufficient for ethical decision-making, rather than strive to program AI to determine the right ethical decision based on some ethical theory or criteria, AI should be concerned with (...)