Results for 'Moral machine'

977 found
  1. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop (...)
    12 citations
  2. Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore (...)
    3 citations
  3. Why machines cannot be moral. Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force (...)
    13 citations
  4. Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace of machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that, as matters stand, these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 of the (...)
    1 citation
  5. Machines as Moral Patients We Shouldn’t Care About: The Interests and Welfare of Current Machines. John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. (...)
    16 citations
  6. Building machines that learn and think about morality. Christopher Burr & Geoff Keeling - 2018 - In Christopher Burr & Geoff Keeling (eds.), Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine (...)
    2 citations
  7. Three Moral Themes of Leibniz's Spiritual Machine Between "New System" and "New Essays". Markku Roinila - 2023 - Le Present Est Plein de L’Avenir, Et Chargé du Passé: Vorträge des XI. Internationalen Leibniz-Kongresses, 31. Juli – 4. August 2023.
    The advance of mechanism in science and philosophy in the 17th century created great interest in machines and automata. Leibniz was no exception - in an early memoir, Drôle de pensée, he wrote admiringly about a machine that could walk on water, exhibited in Paris. The idea of automatic processing in general had a large role in his thought, as can be seen, for example, in his invention of the binary code and the so-called Calculemus!-model for solving controversies. (...)
  8. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary (...)
    1 citation
  9. Machine morality, moral progress, and the looming environmental disaster. Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The (...)
  10. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
    3 citations
  11. Machine Intentionality, the Moral Status of Machines, and the Composition Problem. David Leech Anderson - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 312-333.
    According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to (...)
    3 citations
  12. (1 other version) Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which (...)
    2 citations
  13. Método neutrosófico multicriterio para evaluar la efectividad de la musicoterapia en el tratamiento en la depresión del adulto mayor. María Fernanda Morales Gómez de la Torre, Elisabeth Germania Vilema Vizuete & Elizabeth Angelita Lalaleo Chicaiza - 2024 - Neutrosophic Computing and Machine Learning 35 (1):181-189.
    Music therapy is an intervention that reduces levels of depression in older adults and helps promote health by using musical routines that bring about physical, cognitive, and psychosocial changes and improve other abilities, raising self-esteem. The objective of this research was therefore to determine the effectiveness of music therapy in the treatment of depression in older adults. The study proposes a multicriteria neutrosophic method for evaluating the effectiveness of music therapy in the treatment (...)
  14. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents. Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two (...)
  15. Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement. Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society:1-12.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to (...)
  16. Will intelligent machines become moral patients? Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then (...)
    2 citations
  17. On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do (...)
  18. Machine Grading and Moral Learning. Joshua Schulz - 2014 - New Atlantis: A Journal of Technology and Society 41 (Winter 2014).
  19. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI (...)
    1 citation
  20. Rightful Machines. Ava Thomas Wright - 2022 - In Hyeongjoo Kim & Dieter Schönecker (eds.), Kant and Artificial Intelligence. De Gruyter. pp. 223-238.
    In this paper, I set out a new Kantian approach to resolving conflicts between moral obligations for highly autonomous machine agents. First, I argue that efforts to build explicitly moral autonomous machine agents should focus on what Kant refers to as duties of right, which are duties that everyone could accept, rather than on duties of virtue (or “ethics”), which are subject to dispute in particular cases. “Moral” machines must first be rightful machines, I argue. (...)
    1 citation
  21. A Framework for Grounding the Moral Status of Intelligent Machines. Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional (...)
    2 citations
  22. (1 other version) Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To (...)
    4 citations
  23. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  24. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, (...)
    2 citations
  25. Distributed responsibility in human–machine interactions. Anna Strasser - 2021 - AI and Ethics.
    Artificial agents have become increasingly prevalent in human social life. In light of the diversity of new human–machine interactions, we face renewed questions about the distribution of moral responsibility. Besides positions denying the mere possibility of attributing moral responsibility to artificial systems, recent approaches discuss the circumstances under which artificial agents may qualify as moral agents. This paper revisits the discussion of how responsibility might be distributed between artificial agents and human interaction partners (including producers of (...)
    3 citations
  26. Rethinking Machine Ethics in the Era of Ubiquitous Technology. Jeffrey White (ed.) - 2015 - Hershey, PA, USA: IGI Global.
    Table of contents: Foreword (p. xiv); Preface (p. xv); Acknowledgment (p. xxiii). Section 1: On the Cusp: Critical Appraisals of a Growing Dependency on Intelligent Machines. Chapter 1: Algorithms versus Hive Minds and the Fate of Democracy, Rick Searle, IEET, USA (p. 1). Chapter 2: We Can Make Anything: Should We?, Chris Bateman, University of Bolton, UK (p. 15). Chapter 3: Grounding Machine Ethics within the Natural System, Jared Gassen, JMG Advising, USA, and Nak Young Seong, Independent Scholar (...)
  27. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
    1 citation
  28. Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are frequently being used in a variety of situations from home help and elderly care purposes to banking and (...)
    1 citation
  29. Can machines be people? Reflections on the Turing triage test. Robert Sparrow - 2011 - In Patrick Lin, Keith Abney & George A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press. pp. 301-315.
    In “The Turing Triage Test”, published in Ethics and Information Technology, I described a hypothetical scenario, modelled on the famous Turing Test for machine intelligence, which might serve as a means of testing whether or not machines had achieved the moral standing of people. In this paper, I: (1) explain why the Turing Triage Test is of vital interest in the context of contemporary debates about the ethics of AI; (2) address some issues that complexify the application of this (...)
    15 citations
  30. Moral Bio-enhancement, Freedom, Value and the Parity Principle. Jonathan Pugh - 2019 - Topoi 38 (1):73-86.
    A prominent objection to non-cognitive moral bio-enhancements is that they would compromise the recipient’s ‘freedom to fall’. I begin by discussing some ambiguities in this objection, before outlining an Aristotelian reading of it. I suggest that this reading may help to forestall Persson and Savulescu’s ‘God-Machine’ criticism; however, I suggest that the objection still faces the problem of explaining why the value of moral conformity is insufficient to outweigh the value of the freedom to fall itself. I (...)
    10 citations
  31. Manipulation, machine induction, and bypassing. Gabriel De Marco - 2022 - Philosophical Studies 180 (2):487-507.
    A common style of argument in the literature on free will and moral responsibility is the Manipulation Argument. These tend to begin with a case of an agent in a deterministic universe who is manipulated, say, via brain surgery, into performing some action. Intuitively, this agent is not responsible for that action. Yet, since there is no relevant difference, with respect to whether an agent is responsible, between the manipulated agent and a typical agent in a deterministic universe, responsibility (...)
    1 citation
  32. Morality First? Nathaniel Sharadin - forthcoming - AI and Society:1-13.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first”, introduces a new set of challenges and highlights an important but otherwise obscured (...)
    1 citation
  33. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial (...)
    1 citation
  34. The moral status of conscious subjects. Joshua Shepherd - forthcoming - In Stephen Clarke, Hazem Zohny & Julian Savulescu (eds.), Rethinking Moral Status.
    The chief themes of this discussion are as follows. First, we need a theory of the grounds of moral status that could guide practical considerations regarding how to treat the wide range of potentially conscious entities with which we are acquainted – injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. I offer an account of phenomenal value that focuses on the structure and sophistication of phenomenally conscious states at a time and over time in the mental (...)
    3 citations
  35. Ex Machina: Testing Machines for Consciousness and Socio-Relational Machine Ethics. Harrison S. Jackson - 2022 - Journal of Science Fiction and Philosophy 5.
    Ex Machina is a 2014 science-fiction film written and directed by Alex Garland, centered around the creation of a human-like artificial intelligence (AI) named Ava. The plot focuses on testing Ava for consciousness by offering a unique reinterpretation of the Turing Test. The film offers an excellent thought experiment demonstrating the consequences of various approaches to a potentially conscious AI. In this paper, I will argue that intelligence testing has significant epistemological shortcomings that necessitate an ethical approach not reliant on (...)
    1 citation
  36. Attention, Moral Skill, and Algorithmic Recommendation. Nick Schuster & Seth Lazar - 2024 - Philosophical Studies.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through (...)
    1 citation
  37. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the (...). The second is when one can analyze or explain the robot's behavior only by ascribing to it some predisposition or 'intention' to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots with all of these criteria will have moral rights as well as responsibilities regardless of their status as persons.
    74 citations
  38. Kantian Moral Agency and the Ethics of Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the (...)
    1 citation
  39. Artificial Moral Patients: Mentality, Intentionality, and Systematicity. Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional (...)
    1 citation
  40. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias. P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI ethics concerns the worry about, and the necessity of, creating fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral (...)
  41. Stretching the notion of moral responsibility in nanoelectronics by applying AI. Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs (...)
  42. Frowe's Machine Cases. William Simkulet - 2015 - Filosofiska Notiser 2 (2):93-104.
    Helen Frowe (2006/2010) contends that there is a substantial moral difference between killing and letting die, arguing that in Michael Tooley's infamous machine case it is morally wrong to flip a coin to determine who lives or dies. Here I argue that Frowe fails to show that killing and letting die are morally inequivalent. However, I believe that she has succeeded in showing that it is wrong to press the button in Tooley's case, where pressing the button will (...)
    1 citation
  43. Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace. Duncan MacIntosh - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. New York: Oxford University Press. pp. 9-23.
    Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false and (...)
    1 citation
  44. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection. Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses of how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias, and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the (...)
  45. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains. Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prescription drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi argues that a prescriber (...)
  46. A Kantian Course Correction for Machine Ethics. Ava Thomas Wright - 2023 - In Gregory Robson & Jonathan Y. Tsou (eds.), Technology Ethics: A Philosophical Introduction and Readings. New York, NY, USA: Routledge. pp. 141-151.
    The central challenge of “machine ethics” is to build autonomous machine agents that act morally rightly. But how can we build autonomous machine agents that act morally rightly, given reasonable disputes over what is right and wrong in particular cases? In this chapter, I argue that Immanuel Kant’s political philosophy can provide an important part of the answer.
  47. More Human Than All Too Human: Challenges in Machine Ethics for Humanity Becoming a Spacefaring Civilization. Guy Pierre Du Plessis - 2023 - Qeios.
    It is indubitable that machines with artificial intelligence (AI) will be an essential component in humans’ quest to become a spacefaring civilization. Most would agree that long-distance space travel and the colonization of Mars will not be possible without adequately developed AI. Machines with AI have a normative function, but some argue that it can also be evaluated from the perspective of ethical norms. This essay is based on the assumption that machine ethics is an essential philosophical perspective in (...)
  48. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human (...) error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  49. Privacy and Machine Learning-Based Artificial Intelligence: Philosophical, Legal, and Technical Investigations. Haleh Asgarinia - 2024 - Dissertation, Department of Philosophy, University of Twente
    This dissertation consists of five chapters, each written as an independent research paper, unified by an overarching concern regarding information privacy and machine learning-based artificial intelligence (AI). It addresses the issues concerning privacy and AI by responding to the following three main research questions (RQs): RQ1. ‘How does an AI system affect privacy?’; RQ2. ‘How effectively does the General Data Protection Regulation (GDPR) assess and address privacy issues concerning both individuals and groups?’; and RQ3. ‘How can the (...)
  50. The rise of the robots and the crisis of moral patiency. John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible (...) agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.
    32 citations
Results 1–50 of 977