Results for 'Moral machines'

961 found
  1. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we (...)
    11 citations
  2. Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
    3 citations
  3. Why machines cannot be moral. Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, (...)
    11 citations
  4. Machines as Moral Patients We Shouldn’t Care About: The Interests and Welfare of Current Machines. John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would (...)
    16 citations
  5. Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace of machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that, as matters stand, these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between human users and AI. Section 1 of the paper (...)
  6. Building machines that learn and think about morality. Christopher Burr & Geoff Keeling - 2018 - In Christopher Burr & Geoff Keeling (eds.), Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. (...)
    2 citations
  7. Three Moral Themes of Leibniz's Spiritual Machine Between "New System" and "New Essays". Markku Roinila - 2023 - Le présent est plein de l’avenir, et chargé du passé: Vorträge des XI. Internationalen Leibniz-Kongresses, 31. Juli – 4. August 2023.
    The advance of mechanism in science and philosophy in the 17th century created great interest in machines and automata. Leibniz was no exception: in an early memoir, Drôle de pensée, he wrote admiringly about a machine that could walk on water, exhibited in Paris. The idea of automatic processing in general had a large role in his thought, as can be seen, for example, in his invention of the binary code and the so-called Calculemus! model for solving controversies. (...)
  8. Machine morality, moral progress, and the looming environmental disaster. Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem (...)
  9. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary (...)
    1 citation
  10. (1 other version) Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
    2 citations
  11. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
    3 citations
  12. Machine Intentionality, the Moral Status of Machines, and the Composition Problem. David Leech Anderson - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 312-333.
    According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape (...)
    3 citations
  13. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents. Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents the kind of ethical theory that is most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Third, I evaluate an alternative Deontological approach and discuss the problem of moral conflict. Finally, two bottom-up (...)
  14. Machine Grading and Moral Learning. Joshua Schulz - 2014 - New Atlantis: A Journal of Technology and Society 41 (Winter 2014).
  15. Will intelligent machines become moral patients? Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue (...)
    2 citations
  16. On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do (...)
  17. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI (...)
    1 citation
  18. (1 other version) Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To (...)
    4 citations
  19. A Framework for Grounding the Moral Status of Intelligent Machines. Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the (...)
    2 citations
  20. Rightful Machines. Ava Thomas Wright - 2022 - In Hyeongjoo Kim & Dieter Schönecker (eds.), Kant and Artificial Intelligence. De Gruyter. pp. 223-238.
    In this paper, I set out a new Kantian approach to resolving conflicts between moral obligations for highly autonomous machine agents. First, I argue that efforts to build explicitly moral autonomous machine agents should focus on what Kant refers to as duties of right, which are duties that everyone could accept, rather than on duties of virtue (or “ethics”), which are subject to dispute in particular cases. “Moral” machines must first be rightful machines, I argue. (...)
    1 citation
  21. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  22. Can machines be people? Reflections on the Turing triage test. Robert Sparrow - 2011 - In Patrick Lin, Keith Abney & George A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press. pp. 301-315.
    In “The Turing Triage Test”, published in Ethics and Information Technology, I described a hypothetical scenario, modelled on the famous Turing Test for machine intelligence, which might serve as a means of testing whether or not machines had achieved the moral standing of people. In this paper, I: (1) explain why the Turing Triage Test is of vital interest in the context of contemporary debates about the ethics of AI; (2) address some issues that complexify the application of this (...)
    15 citations
  23. Distributed responsibility in human–machine interactions. Anna Strasser - 2021 - AI and Ethics.
    Artificial agents have become increasingly prevalent in human social life. In light of the diversity of new human–machine interactions, we face renewed questions about the distribution of moral responsibility. Besides positions denying the mere possibility of attributing moral responsibility to artificial systems, recent approaches discuss the circumstances under which artificial agents may qualify as moral agents. This paper revisits the discussion of how responsibility might be distributed between artificial agents and human interaction partners (including producers of artificial (...)
    3 citations
  24. Rethinking Machine Ethics in the Era of Ubiquitous Technology. Jeffrey White (ed.) - 2015 - Hershey, PA, USA: IGI.
    Table of contents: Foreword; Preface; Acknowledgment. Section 1, On the Cusp: Critical Appraisals of a Growing Dependency on Intelligent Machines. Chapter 1, Algorithms versus Hive Minds and the Fate of Democracy (Rick Searle, IEET, USA); Chapter 2, We Can Make Anything: Should We? (Chris Bateman, University of Bolton, UK); Chapter 3, Grounding Machine Ethics within the Natural System (Jared Gassen, JMG Advising, USA; Nak Young Seong, Independent Scholar, (...)
  25. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral (...)
    2 citations
  26. Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to banking (...)
    1 citation
  27. Moral Bio-enhancement, Freedom, Value and the Parity Principle. Jonathan Pugh - 2019 - Topoi 38 (1):73-86.
    A prominent objection to non-cognitive moral bio-enhancements is that they would compromise the recipient’s ‘freedom to fall’. I begin by discussing some ambiguities in this objection, before outlining an Aristotelian reading of it. I suggest that this reading may help to forestall Persson and Savulescu’s ‘God-Machine’ criticism; however, I suggest that the objection still faces the problem of explaining why the value of moral conformity is insufficient to outweigh the value of the freedom to fall itself. I also (...)
    10 citations
  28. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  29. Manipulation, machine induction, and bypassing. Gabriel De Marco - 2022 - Philosophical Studies 180 (2):487-507.
    A common style of argument in the literature on free will and moral responsibility is the Manipulation Argument. These tend to begin with a case of an agent in a deterministic universe who is manipulated, say, via brain surgery, into performing some action. Intuitively, this agent is not responsible for that action. Yet, since there is no relevant difference, with respect to whether an agent is responsible, between the manipulated agent and a typical agent in a deterministic universe, responsibility (...)
  30. Frowe's Machine Cases. William Simkulet - 2015 - Filosofiska Notiser 2 (2):93-104.
    Helen Frowe (2006/2010) contends that there is a substantial moral difference between killing and letting die, arguing that in Michael Tooley's infamous machine case it is morally wrong to flip a coin to determine who lives or dies. Here I argue that Frowe fails to show that killing and letting die are morally inequivalent. However, I believe that she has succeeded in showing that it is wrong to press the button in Tooley's case, where pressing the button will change (...)
    1 citation
  31. The moral status of conscious subjects. Joshua Shepherd - forthcoming - In Stephen Clarke, Hazem Zohny & Julian Savulescu (eds.), Rethinking Moral Status.
    The chief themes of this discussion are as follows. First, we need a theory of the grounds of moral status that could guide practical considerations regarding how to treat the wide range of potentially conscious entities with which we are acquainted – injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. I offer an account of phenomenal value that focuses on the structure and sophistication of phenomenally conscious states at a time and over time in the (...)
    3 citations
  32. Ex Machina: Testing Machines for Consciousness and Socio-Relational Machine Ethics. Harrison S. Jackson - 2022 - Journal of Science Fiction and Philosophy 5.
    Ex Machina is a 2014 science-fiction film written and directed by Alex Garland, centered around the creation of a human-like artificial intelligence (AI) named Ava. The plot focuses on testing Ava for consciousness by offering a unique reinterpretation of the Turing Test. The film offers an excellent thought experiment demonstrating the consequences of various approaches to a potentially conscious AI. In this paper, I will argue that intelligence testing has significant epistemological shortcomings that necessitate an ethical approach not reliant on (...)
    1 citation
  33. Morality First? Nathaniel Sharadin - forthcoming - AI and Society:1-13.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first”, introduces a new set of challenges and highlights an important but otherwise obscured (...)
  34. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper I argue that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. (...)
    73 citations
  35. Kantian Moral Agency and the Ethics of Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as (...)
    1 citation
  36. Artificial Moral Patients: Mentality, Intentionality, and Systematicity. Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional (...)
    1 citation
  37. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral (...)
    1 citation
  38. Attention, Moral Skill, and Algorithmic Recommendation. Nick Schuster & Seth Lazar - forthcoming - Philosophical Studies.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through (...)
  39. Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace. Duncan MacIntosh - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. New York: Oxford University Press. pp. 9-23.
    Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false and (...)
    1 citation
  40. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias. P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions (...)
  41. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection. Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses of how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias, and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the (...)
  42. And Then the Hammer Broke: Reflections on Machine Ethics from Feminist Philosophy of Science. Andre Ye - forthcoming - Pacific University Philosophy Conference.
    Vision is an important metaphor in ethical and political questions of knowledge. The feminist philosopher Donna Haraway points out the “perverse” nature of an intrusive, alienating, all-seeing vision (to which we might cry out “stop looking at me!”), but also encourages us to embrace the embodied nature of sight and its promises for genuinely situated knowledge. Current technologies of machine vision – surveillance cameras, drones (for war or recreation), iPhone cameras – are usually construed as instances of the former rather (...)
  43. More Human Than All Too Human: Challenges in Machine Ethics for Humanity Becoming a Spacefaring Civilization. Guy Pierre Du Plessis - 2023 - Qeios.
    It is indubitable that machines with artificial intelligence (AI) will be an essential component in humans’ quest to become a spacefaring civilization. Most would agree that long-distance space travel and the colonization of Mars will not be possible without adequately developed AI. Machines with AI have a normative function, but some argue that it can also be evaluated from the perspective of ethical norms. This essay is based on the assumption that machine ethics is an essential philosophical perspective (...)
  44. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral (...)
  45. Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model. Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents comes (...)
    1 citation
  46. Stretching the notion of moral responsibility in nanoelectronics by applying AI. Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is a direct result of the advancement of nanoelectronics. Machine learning is a function that provides a system with the capacity to learn from data without being explicitly programmed; it is basically a mathematical and probabilistic model. DL is a subset of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  47. A Kantian Course Correction for Machine Ethics. Ava Thomas Wright - 2023 - In Gregory Robson & Jonathan Y. Tsou (eds.), Technology Ethics: A Philosophical Introduction and Readings. New York, NY, USA: Routledge. pp. 141-151.
    The central challenge of “machine ethics” is to build autonomous machine agents that act morally rightly. But how can we build autonomous machine agents that act morally rightly, given reasonable disputes over what is right and wrong in particular cases? In this chapter, I argue that Immanuel Kant’s political philosophy can provide an important part of the answer.
  48. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains. Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prescription drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP score can indicate a patient is at a high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to balance the credibility and trust of the patient with the PDMP score. Pozzi argues that a prescriber who (...)
  49. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics. Rodrigo Sanz - 2020 - Zeitschrift für Ethik und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under (...)
  50. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
    294 citations
Showing results 1–50 of 961.