Results for 'Moral Status of Artificial Systems'

955 found
  1. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a (...) patient is a necessary condition for an entity to qualify as a moral agent. This view claims that because artificial agents (AAs) lack sentience, they cannot be proper subjects of moral concern and hence cannot be considered to be moral agents. I raise conceptual and epistemic issues with regard to the sense of sentience employed on this view, and I argue that the Organic View does not succeed in showing that machines cannot be moral patients. Nevertheless, irrespective of this failure, I also argue that the entire project is misdirected in that moral patiency need not be a necessary condition for moral agency. Moreover, I claim that whereas machines may conceivably be moral patients in the future, there is a strong case to be made that they are (or will very soon be) moral agents. Whereas it is often argued that machines cannot be agents simpliciter, let alone moral agents, I claim that this argument is predicated on a conception of agency that makes unwarranted metaphysical assumptions even in the case of human agents. Once I have established the shortcomings of this “standard account”, I move to elaborate on other, more plausible, conceptions of agency, on which some machines clearly qualify as agents. Nevertheless, the argument is still often made that while some machines may be agents, they cannot be moral agents, given their ostensible lack of the requisite phenomenal states.
Against this thesis, I argue that the requirement of internal states for moral agency is philosophically unsound, as it runs up against the problem of other minds. In place of such intentional accounts of moral agency, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implications of this thesis are that at some point in the future we may be faced with situations for which no human being is morally responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is “punishable” or not.
    1 citation
  2. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and (...)
    1 citation
  3. On the Moral Status of Artificial Cognition to Natural Cognition.Jianhua Xie - 2024 - Journal of Human Cognition 8 (2):17-28.
    Artificial Cognition (AC) has provoked a great deal of controversy in recent years. Concerns over its development have revolved around the questions of whether or not a moral status may be ascribed to AC and, if so, how could it be characterized? This paper provides an analysis of consciousness as a means to query the moral status of AC. This method suggests that the question of moral status of artificial cognition depends upon (...)
  4. AI systems must not confuse users about their sentience or moral status.Eric Schwitzgebel - 2023 - Patterns 4.
    One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at (...)
  5. Consciousness, Machines, and Moral Status.Henry Shevlin - manuscript
    In light of the recent breakneck pace of progress in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 (...)
  6. Artificial Moral Patients: Mentality, Intentionality, and Systematicity.Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible (...)
    1 citation
  7. Presumptuous aim attribution, conformity, and the ethics of artificial social cognition.Owen C. King - 2020 - Ethics and Information Technology 22 (1):25-37.
    Imagine you are casually browsing an online bookstore, looking for an interesting novel. Suppose the store predicts you will want to buy a particular novel: the one most chosen by people of your same age, gender, location, and occupational status. The store recommends the book, it appeals to you, and so you choose it. Central to this scenario is an automated prediction of what you desire. This article raises moral concerns about such predictions. More generally, this article examines (...)
    3 citations
  8. The moral status of conscious subjects.Joshua Shepherd - forthcoming - In Stephen Clarke, Hazem Zohny & Julian Savulescu (eds.), Rethinking Moral Status.
    The chief themes of this discussion are as follows. First, we need a theory of the grounds of moral status that could guide practical considerations regarding how to treat the wide range of potentially conscious entities with which we are acquainted – injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. I offer an account of phenomenal value that focuses on the structure and sophistication of phenomenally conscious states at a time and over time in the (...)
    3 citations
  9. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning, and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues (...)
    2 citations
  10. Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While (...)
    22 citations
  11. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even (...)
    33 citations
  12. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, (...)
  13. Machine Intentionality, the Moral Status of Machines, and the Composition Problem.David Leech Anderson - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 312-333.
    According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape (...)
    3 citations
  14. Towards broadening the perspective on lethal autonomous weapon systems ethics and regulations.Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho - 2020 - In Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho (eds.), Rio Seminar on Autonomous Weapons Systems. Brasília: Alexandre de Gusmão Foundation. pp. 133-158.
    Our reflections on LAWS issues are the result of the work of our research group on AI and ethics at the Informatics Center in partnership with the Information Science Department, both from the Federal University of Pernambuco, Brazil. In particular, our propositions and provocations are tied to Bianca Ximenes’s ongoing doctoral thesis, advised by Prof. Geber Ramalho, from the area of computer science, and co-advised by Prof. Diego Salcedo, from the humanities. Our research group is interested in answering two tricky (...)
  15. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the (...)
    3 citations
  16. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and (...)
    294 citations
  17. Theology Meets AI: Examining Perspectives, Tasks, and Theses on the Intersection of Technology and Religion.Anna Puzio - 2023 - In Anna Puzio, Nicole Kunkel & Hendrik Klinge (eds.), Alexa, wie hast du's mit der Religion? Theologische Zugänge zu Technik und Künstlicher Intelligenz. Darmstadt: Wbg.
    Artificial intelligence (AI), blockchain, virtual and augmented reality, (semi-)autonomous vehicles, autoregulatory weapon systems, enhancement, reproductive technologies and humanoid robotics – these technologies (and many others) are no longer speculative visions of the future; they have already found their way into our lives or are on the verge of a breakthrough. These rapid technological developments awaken a need for orientation: what distinguishes human from machine and human intelligence from artificial intelligence, how far should the body (...)
  18. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human (...) error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. 
Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  19. The Measurement Problem of Consciousness.Heather Browning & Walter Veit - 2020 - Philosophical Topics 48 (1):85-108.
    This paper addresses what we consider to be the most pressing challenge for the emerging science of consciousness: the measurement problem of consciousness. That is, by what methods can we determine the presence of and properties of consciousness? Most methods are currently developed through evaluation of the presence of consciousness in humans and here we argue that there are particular problems in application of these methods to nonhuman cases—what we call the indicator validity problem and the extrapolation problem. The first (...)
    18 citations
  20. Non-Human Moral Status: Problems with Phenomenal Consciousness.Joshua Shepherd - 2023 - American Journal of Bioethics Neuroscience 14 (2):148-157.
    Consciousness-based approaches to non-human moral status maintain that consciousness is necessary for (some degree or level of) moral status. While these approaches are intuitive to many, in this paper I argue that the judgment that consciousness is necessary for moral status is not secure enough to guide policy regarding non-humans, that policies responsive to the moral status of non-humans should take seriously the possibility that psychological features independent of consciousness are sufficient for (...)
    17 citations
  21. The Minimal Cognitive Grid: A Tool to Rank the Explanatory Status of Cognitive Artificial Systems.Antonio Lieto - 2022 - Proceedings of AISC 2022.
  22. Organoid Sentience.Shourya Verma - manuscript
    Recent advances in stem cell-derived human brain organoids and microelectrode array (MEA) technology raise profound questions about the potential for these systems to give rise to sentience. Brain organoids are 3D tissue constructs that recapitulate key aspects of brain development and function, while MEAs enable bidirectional communication with neuronal cultures. As brain organoids become more sophisticated and integrated with MEAs, the question arises: Could such a system support not only intelligent computation, but subjective experience? This paper explores the (...)
  23. What You Are and Its Affects on Moral Status: Godman's Epistemology and Morality of Human Kinds, Gunkel's Robot Rights, and Schneider on Artificial You.Lantz Fleming Miller - 2021 - Human Rights Review 22 (4):525-531.
    Thanks to mounting discussion about projected technologies’ possibly altering the species mentally and physically, philosophical investigation of what human beings are proceeds robustly. Many thinkers contend that whatever we are has little to do with how we should behave. Yet, tampering with what the human being is may tread upon human rights to be whatever one is. Rights given in widely recognized documents such as the U.N. Declaration of the Rights of Indigenous Peoples assume what humans are and need depends (...)
  24. The Morality of Artificial Friends in Ishiguro’s Klara and the Sun.Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality (...)
    1 citation
  25. Moral significance of phenomenal consciousness.Neil Levy & Julian Savulescu - 2009 - Progress in Brain Research.
    Recent work in neuroimaging suggests that some patients diagnosed as being in the persistent vegetative state are actually conscious. In this paper, we critically examine this new evidence. We argue that though it remains open to alternative interpretations, it strongly suggests the presence of consciousness in some patients. However, we argue that its ethical significance is less than many people seem to think. There are several different kinds of consciousness, and though all kinds of consciousness have some ethical significance, different (...)
    25 citations
  26. Kantian Moral Agency and the Ethics of Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the (...)
    1 citation
  27. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., (...)
    32 citations
  28. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):472-478.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is (...)
    6 citations
  29. Sustainability of Artificial Intelligence: Reconciling human rights with legal rights of robots.Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists are finding it difficult to answer questions like: “Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be under question because, in the near future, the scientists (considerably the most rational (...)
  30. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, (...)
    1 citation
  31. (1 other version)Xinxue (The philosophy of mind) System.Cheng Gong - manuscript
    Xinxue (The philosophy of mind), founded by the Chinese philosopher Wang Yangming of the Ming Dynasty, has a history of over 700 years. Its ideas have deeply influenced East Asian countries such as China, Japan, and Korea in the field of social philosophy, and even indirectly promoted Japan's Meiji Restoration movement. At the same time, scholars from all over the world have conducted numerous studies and explorations of it, but overall there is a lack of systematic exploration and research on it. This article (...)
  32. Disentangling Human Nature from Moral Status: Lessons for and from Philip K. Dick.James Okapal - 2023 - Journal of Science Fiction and Philosophy 6.
    A common interpretation of Philip K. Dick’s texts _Do Androids Dream of Electric Sheep?_ and _We Can Build You_ is that they attempt to answer the question “What does it mean to be human?” Unfortunately, these interpretations fail to deal with the fact that the term “human” has both metaphysical and moral connotations. Metaphysical meanings associated with theories of human nature and moral meanings associated with theories of moral status are thus blurred in the novels (...)
  33. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral (...)
    4 citations
  34. Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability.Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the (...)
    1 citation
  35. Moral Perspective from a Holistic Point of View for Weighted Decision-Making and its Implications for the Processes of Artificial Intelligence.Mina Singh, Devi Ram, Sunita Kumar & Suresh Das - 2023 - International Journal of Research Publication and Reviews 4 (1):2223-2227.
    In the case of AI, automated systems are making increasingly complex decisions with significant ethical implications, raising questions about who is responsible for decisions made by AI and how to ensure that these decisions align with society's ethical and moral values, both in India and the West. Jonathan Haidt has conducted research on moral and ethical decision-making. Today, solving problems like decision-making in autonomous vehicles can draw on the literature of the trolley dilemma in that it illustrates (...)
  36. The Problem of Musical Creativity and its Relevance for Ethical and Legal Decisions towards Musical AI.Ivano Zanzarella - manuscript
    Because of its non-representational nature, music has always had familiarity with computational and algorithmic methodologies for automatic composition and performance. Today, AI and computer technology are transforming systems of automatic music production from passive means within musical creative processes into ever more autonomous active collaborators of human musicians. This raises a large number of interrelated questions both about the theoretical problems of artificial musical creativity and about its ethical consequences. Considering two of the most urgent ethical problems of (...)
  37. Autonomous weapons systems and the moral equality of combatants.Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided (...)
    5 citations
  38. On a Possible Basis for Metaphysical Self-development in Natural and Artificial Systems.Jeffrey White - 2022 - Filozofia i Nauka. Studia Filozoficzne I Interdyscyplinarne 10:71-100.
    Recent research into the nature of self in artificial and biological systems raises interest in a uniquely determining immutable sense of self, a “metaphysical ‘I’” associated with inviolable personal values and moral convictions that remain constant in the face of environmental change, distinguished from an object “me” that changes with its environment. Complementary research portrays processes associated with self as multimodal routines selectively enacted on the basis of contextual cues informing predictive self or world models, with the (...)
    1 citation
  39. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for (...)
  40. Distributed cognition and distributed morality: Agency, artifacts and systems.Richard Heersmink - 2017 - Science and Engineering Ethics 23 (2):431-448.
    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in (...)
  41. Artificial Consciousness Is Morally Irrelevant.Bruce P. Blackshaw - 2023 - American Journal of Bioethics Neuroscience 14 (2):72-74.
    It is widely agreed that possession of consciousness contributes to an entity’s moral status, even if it is not necessary for moral status (Levy and Savulescu 2009). An entity is considered to have...
  42. Moral Agency in Artificial Intelligence (Robots).Saleh Gorbanian - 2020 - Ethical Reflections 1 (1):11-32.
    Growing technological advances in intelligent artifacts and bitter experiences of the past have emphasized the need to apply ethics in this field. Accordingly, it is vital to discuss the ethical integrity of having intelligent artifacts. Methodologically, the study gathers its materials through library and documentary research and analyzes the data by means of descriptive analysis. Explaining and criticizing the opposing views in this field and reviewing the related literature, it is (...)
  43. Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement.Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that (...)
  44. Amoral, im/moral and dis/loyal: Children’s moral status in child welfare.Zlatana Knezevic - 2017 - Childhood 24 (4):470-484.
    This article is a discursive examination of children’s status as knowledgeable moral agents within the Swedish child welfare system and in the widely used assessment framework BBIC. Departing from Fricker’s concept of epistemic injustice, three discursive positions of children’s moral status are identified: amoral, im/moral and dis/loyal. The findings show the undoubtedly moral child as largely missing and children’s agency as diminished, deviant or rendered ambiguous. Epistemic injustice applies particularly to disadvantaged children with difficult (...)
  45. Reliability of Motivation and the Moral Value of Actions.Paula Satne - 2013 - Studia Kantiana 14:5-33.
    Kant famously made a distinction between actions from duty and actions in conformity with duty claiming that only the former are morally worthy. Kant’s argument in support of this thesis is taken to rest on the claim that only the motive of duty leads non-accidentally or reliably to moral actions. However, many critics of Kant have claimed that other motives such as sympathy and benevolence can also lead to moral actions reliably, and that Kant’s thesis is false. In (...)
  46. The Info-Computational Turn in Bioethics.Constantin Vică - 2018 - In Emilian Mihailov, Tenzin Wangmo, Victoria Federiuc & Bernice S. Elger (eds.), Contemporary Debates in Bioethics: European Perspectives. [Berlin]: De Gruyter Open. pp. 108-120.
    Our technological lifeworld has become an info-computational media populated by data and algorithms, an artificial environment for life and shared experiences. In this chapter, I tried to sketch three new assumptions for bioethics – it is hardly possible to substantiate ethical guidelines or an idea of normativity in an aprioristic manner; moral status is a function of data entities, not something solely human; agency is plural and thus is shared or sometimes delegated – in order to chart (...)
  47. Analyzing the Explanatory Power of Bionic Systems With the Minimal Cognitive Grid.Antonio Lieto - 2022 - Frontiers in Robotics and AI 9.
    In this article, I argue that the artificial components of hybrid bionic systems do not play a direct explanatory role, i.e., in simulative terms, in the overall context of the systems in which they are embedded. More precisely, I claim that the internal procedures determining the output of such artificial devices, which replace biological tissues and connect to other biological tissues, cannot be used to directly explain the corresponding mechanisms of the biological component(s) they substitute (and (...)
  48. Critical Analysis of the “No Relevant Difference” Argument in Defense of the Rights of Artificial Intelligence.Alireza Mazarian - 2019 - Journal of Philosophical Theological Research 21 (1):165-190.
    There are many new philosophical queries about the moral status and rights of artificial intelligences; questions such as whether such entities can be considered morally responsible entities and as having special rights. Recently, the contemporary philosopher of mind Eric Schwitzgebel has tried to defend the possibility of equal rights for AIs and human beings (in an imaginary future) by designing a new argument (2015). In this paper, after an introduction, the author reviews and analyzes the (...)
  49. Should the State Prohibit the Production of Artificial Persons?Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  50. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential (...)