Results for 'Artificial moral agent'

969 found
  1. Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    16 citations
  2. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
    1 citation
  3. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, (...)
    11 citations
  4. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral (...)
    4 citations
  5. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering (...)
    3 citations
  6. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care (...)
    1 citation
  7. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. (...)
  8. Artificial virtuous agents in a multi-agent tragedy of the commons.Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then (...)
    1 citation
  9. (1 other version)Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To (...)
    4 citations
  10. When is a robot a moral agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. Sullins details three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of (...)
    74 citations
  11. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term 'circumstances of justice' between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. Different approaches to developing 'safe, ethical AI' generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  12. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under (...)
  13. Norms and Causation in Artificial Morality.Laura Fearnley - forthcoming - Joint Proceedings of ACM IUI:1-4.
    There has been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, (...)
  14. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality (...)
    294 citations
  15. Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement.Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people (...)
  16. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
    5 citations
  17. Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
    36 citations
  18. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as (...)
    1 citation
  19. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of reasoning, judgment, and decision-making. To date, different frameworks on how to arrive at these agents have been put forward. However, there seems to be no hard consensus as to which framework would likely yield a positive result. With the body of work that they have contributed in the study of moral agency, philosophers may contribute to the growing literature on artificial moral agency. While doing so, they could also think about how the said concept could affect other important philosophical concepts.
    2 citations
  20. The Morality of Artificial Friends in Ishiguro’s Klara and the Sun.Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that (...)
    1 citation
  21. Artificial agents: responsibility & control gaps.Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or (...)
  22. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is (...)
    1 citation
  23. Kantian Moral Agency and the Ethics of Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the (...)
    1 citation
  24. Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we (...)
    22 citations
  25. The rise of artificial intelligence and the crisis of moral passivity.Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue firstly that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all (...)
    5 citations
  26. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of Ai. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
    1 citation
  27. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even (...)
    33 citations
  28. ¿Automatizando la mejora moral? La inteligencia artificial para la ética.Jon Rueda - 2023 - Daimon: Revista Internacional de Filosofía 89:199-209.
    Can artificial intelligence (AI) make us more moral or help us make more ethical decisions? The book Más (que) humanos. Biotecnología, inteligencia artificial y ética de la mejora, edited by Francisco Lara and Julian Savulescu (2021), can offer philosophical inspiration for this contemporary debate. In this critical note, I contextualize the volume's overall contribution and analyze its last two chapters, by Monasterio-Astobiza and by Lara and Deckers, who argue in favor of using AI to make us better agents (...)
  29. (1 other version)Artificial evil and the foundation of computer ethics.L. Floridi & J. Sanders - 2000 - Etica E Politica 2 (2).
    Moral reasoning traditionally distinguishes two types of evil: moral and natural. The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents (...)
    27 citations
  30. Artificial Evil and the Foundation of Computer Ethics.Luciano Floridi & J. W. Sanders - 2001 - Springer Netherlands. Edited by Luciano Floridi & J. W. Sanders.
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents (...)
    30 citations
  31. A Theodicy for Artificial Universes: Moral Considerations on Simulation Hypotheses.Stefano Gualeni - 2021 - International Journal of Technoethics 12 (1):21-31.
    ‘Simulation Hypotheses’ are imaginative scenarios that are typically employed in philosophy to speculate on how likely it is that we are currently living within a simulated universe as well as on our possibility for ever discerning whether we do in fact inhabit one. These philosophical questions in particular overshadowed other aspects and potential uses of simulation hypotheses, some of which are foregrounded in this article. More specifically, “A Theodicy for Artificial Universes” focuses on the moral implications of simulation (...)
  32. Attention, Moral Skill, and Algorithmic Recommendation.Nick Schuster & Seth Lazar - forthcoming - Philosophical Studies.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut (...)
  33. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and (...)
    2 citations
  34. Understanding and Augmenting Human Morality: The ACTWith Model of Conscience.Jeffrey White - 2009 - In L. Magnani (ed.), Computational Intelligence.
    Recent developments, both in the cognitive sciences and in world events, bring special emphasis to the study of morality. The cognitive sciences, spanning neurology, psychology, and computational intelligence, offer substantial advances in understanding the origins and purposes of morality. Meanwhile, world events urge the timely synthesis of these insights with traditional accounts that can be easily assimilated and practically employed to augment moral judgment, both to solve current problems and to direct future action. The object of (...)
  35. Moral difference between humans and robots: paternalism and human-relative reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the equivalence thesis). However, such moral differences exist: based on Strawson's account of participant reactive attitude and Scanlon's relational account of blame, (...)
  36. On the computational complexity of ethics: moral tractability for minds and machines.Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do (...)
  37. Etica Funzionale. Considerazioni filosofiche sulla teoria dell'agire morale artificiale.Fabio Fossa - 2020 - Filosofia 55:91-106.
    The purpose of Machine Ethics is to develop autonomous technologies that are able to manage not just the technical aspects of a task, but also the ethical ones. As a consequence, the notion of Artificial Moral Agent (AMA) has become a fundamental element of the discussion. Its meaning, however, remains rather unclear. Depending on the author or the context, the same expression stands for essentially different concepts. This casts a suspicious light on the philosophical significance of Machine (...)
  38. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness.Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 15-25.
    This paper investigates the claim that artificial Intelligence Systems cannot be held morally responsible because they do not have an ability for agential self-awareness e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a (...)
    1 citation
  39. The Heart of an AI: Agency, Moral Sense, and Friendship.Evandro Barbosa & Thaís Alves Costa - 2024 - Unisinos Journal of Philosophy 25 (1):01-16.
    The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. Firstly, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI’s capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and ethically respond to specific situations. This raises fundamental questions about machines’ ability to act as agents in the same manner as human beings. Additionally, the article sheds light on the practical implications within human-machine relations and their effect on human friendships. The lack of friendliness or its equivalent in interactions with machines directly impacts the quality and depth of human relations. This concerningly suggests the potential replacement or compromise of genuine interpersonal connections due to limitations in human-machine interactions.
  40. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral (...)
  41. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that (...)
    Download  
     
    Export citation  
     
    Bookmark  
  42. The rise of the robots and the crisis of moral patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible (...) agents, and thereby reduce them to moral patients. Since that ability and willingness are central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections. (shrink)
    Download  
     
    Export citation  
     
    Bookmark   32 citations  
  43. Action and Agency in Artificial Intelligence: A Philosophical Critique.Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
    The objective of this work is to explore the notion of “action” and “agency” in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notion of “action” and “agency” in artificial intelligence. Hence, both a metaphysical and a cognitive analysis are employed in the investigation of the quiddity and nature of action and agency per se, and how they are, by extension, employed in the language and science (...)
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  44. Artificial Intelligence and Universal Values.Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed (...)
    Download  
     
    Export citation  
     
    Bookmark  
  45. Artificial Agency and the Game of Semantic Extension.Fabio Fossa - 2021 - Interdisciplinary Science Reviews 46 (4):440-457.
    Artificial agents are commonly described by using words that traditionally belong to the semantic field of organisms, particularly of animal and human life. I call this phenomenon the game of semantic extension. However, the semantic extension of words as crucial as “autonomous”, “intelligent”, “creative”, “moral”, and so on, is often perceived as unsatisfactory, which is signalled with the extensive use of inverted commas or other syntactical cues. Such practice, in turn, has provoked harsh criticism that usually refers back (...)
    Download  
     
    Export citation  
     
    Bookmark  
  46. Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model.Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in (...)
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  47. Ethics of Driving Automation. Artificial Agency and Human Values.Fabio Fossa - 2023 - Cham: Springer.
    This book offers a systematic and thorough philosophical analysis of the ways in which driving automation crosses paths with ethical values. Upon introducing the different forms of driving automation and examining their relation to human autonomy, it provides readers with in-depth reflections on safety, privacy, moral judgment, control, responsibility, sustainability, and other ethical issues. As a human act, driving is undoubtedly a moral activity. Transferring it to artificial agents such as connected and automated vehicles necessarily raises many (...)
    Download  
     
    Export citation  
     
    Bookmark   3 citations  
  48. Moral Uncertainty and Our Relationships with Unknown Minds.John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral (...)
    Download  
     
    Export citation  
     
    Bookmark  
  49. Human Goals Are Constitutive of Agency in Artificial Intelligence.Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological and to consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values (...)
    Download  
     
    Export citation  
     
    Bookmark   10 citations  
  50. Kantian Ethics in the Age of Artificial Intelligence and Robotics.Ozlem Ulgen - 2017 - Questions of International Law 1 (43):59-83.
    Artificial intelligence and robotics are pervasive in daily life and set to expand to new levels, potentially replacing human decision-making and action. Self-driving cars, home and healthcare robots, and autonomous weapons are some examples. A distinction appears to be emerging between potentially benevolent civilian uses of the technology (e.g. unmanned aerial vehicles delivering medicines), and potentially malevolent military uses (e.g. lethal autonomous weapons killing human combatants). Machine-mediated human interaction challenges the philosophical basis of human existence and ethical conduct. (...)
    Download  
     
    Export citation  
     
    Bookmark   5 citations  
1 — 50 / 969