References
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • The other question: can and should robots have rights? David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics. Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al., Minds and Machines 28(4):689–707, 2018). There (...)
  • Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them of autonomy in a responsibility-undermining way. (...)
  • Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and in particular needs to consider the role of human goals, if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
  • Distributed cognition and distributed morality: Agency, artifacts and systems. Richard Heersmink - 2017 - Science and Engineering Ethics 23 (2):431-448.
    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. (...)
  • What do we owe to intelligent robots? John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • Technology with No Human Responsibility? Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
  • Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement. Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, and (...)
  • On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
  • Can we program or train robots to be good? Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
  • Editors' Overview: Moral Responsibility in Technology and Engineering. Neelke Doorn & Ibo van de Poel - 2012 - Science and Engineering Ethics 18 (1):1-11.
    DOI: 10.1007/s11948-011-9285-z.
  • A Study of Technological Intentionality in C++ and Generative Adversarial Model: Phenomenological and Postphenomenological Perspectives. Dmytro Mykhailov & Nicola Liberati - 2023 - Foundations of Science 28 (3):841-857.
    This paper aims to highlight the life of computer technologies to understand what kind of ‘technological intentionality’ is present in computers based upon the phenomenological elements constituting the objects in general. Such a study can better explain the effects of new digital technologies on our society and highlight the role of digital technologies by focusing on their activities. Even if Husserlian phenomenology rarely talks about technologies, some of its aspects can be used to address the actions performed by the digital (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Philosophy of technology. Maarten Franssen - 2010 - Stanford Encyclopedia of Philosophy.
  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Skepticism and Information. Eric T. Kerr & Duncan Pritchard - 2012 - In Hilmi Demir (ed.), Philosophy of Engineering and Technology, Volume 8. Springer.
    Philosophers of information, according to Luciano Floridi (The philosophy of information. Oxford University Press, Oxford, 2010, p. 32), study how information should be “adequately created, processed, managed, and used.” A small number of epistemologists have employed the concept of information as a cornerstone of their theoretical framework. How this concept can be used to make sense of seemingly intractable epistemological problems, however, has not been widely explored. This paper examines Fred Dretske’s information-based epistemology, in particular his response to radical epistemological (...)
  • Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering. Mark Coeckelbergh - 2018 - Kairos 20 (1):141-158.
    This paper tries to understand the phenomenon that humans are able to empathize with robots and the intuition that there might be something wrong with “abusing” robots by discussing the question regarding the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties and that recommends (...)
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI and the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making. Danielle Swanepoel & Daniel Corks - 2024 - Science and Engineering Ethics 30 (2):1-16.
    Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding hallmark features of agency affords us the ability to develop policy and narratives which cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances, these processes yield good results in terms of making good choices. However, in some instances, when faced with two (or (...)
  • A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics. Dmytro Mykhailov - 2021 - Human Affairs 31 (2):149-164.
    Contemporary medical diagnostics has a dynamic moral landscape, which includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System that is widely implemented in the domain of contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice of medicine (...)
  • Computing and moral responsibility. Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
  • Beyond the skin bag: On the moral responsibility of extended agencies. F. Allan Hanson - 2009 - Ethics and Information Technology 11 (1):91-99.
    The growing prominence of computers in contemporary life, often seemingly with minds of their own, invites rethinking the question of moral responsibility. If the moral responsibility for an act lies with the subject that carried it out, it follows that different concepts of the subject generate different views of moral responsibility. Some recent theorists have argued that actions are produced by composite, fluid subjects understood as extended agencies (cyborgs, actor networks). This view of the subject contrasts with methodological individualism: the (...)
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the affirmative, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated (...)
  • Philosophical Inquiry into Computer Intentionality: Machine Learning and Value Sensitive Design. Dmytro Mykhailov - 2023 - Human Affairs 33 (1):115-127.
    Intelligent algorithms together with various machine learning techniques hold a dominant position among major challenges for contemporary value sensitive design. Self-learning capabilities of current AI applications blur the causal link between programmer and computer behavior. This creates a vital challenge for the design, development and implementation of digital technologies nowadays. This paper seeks to provide an account of this challenge. The main question that shapes the current analysis is the following: What conceptual tools can be developed within the value sensitive (...)
  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.
    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)
  • Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions. David Casacuberta, Ariel Guersenzvaig & Cristian Moyano-Fernández - 2024 - AI and Society 39 (1):279-293.
    Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and based on what reasons, it is achieved. There are consistent technical efforts for making systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, (...)
  • Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
  • Negotiating autonomy and responsibility in military robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • Kantian Moral Agency and the Ethics of Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the end. Lastly, (...)
  • Moral Judgments in the Age of Artificial Intelligence. Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • Statistically responsible artificial intelligences. Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Philosophical evaluation of the conceptualisation of trust in the NHS’ Code of Conduct for artificial intelligence-driven technology. Soogeun Samuel Lee - 2022 - Journal of Medical Ethics 48 (4):272-277.
    The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence -driven technologies, comprises 10 principles that outline a gold-standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of rationally justified trust that (...)
  • Blame as participant anger: Extending moral claimant competence to young children and nonhuman animals. Dorna Behdadi - 2024 - Philosophical Psychology:1-24.
    Following the social conception of moral agency, this paper claims that many beings commonly exempted from moral responsibility, like young children, adults with late-stage dementia, and nonhuman animals, may nevertheless qualify as participants in moral responsibility practices. Blame and other moral responsibility responses are understood according to the communicative emotion account of the reactive attitudes. To blame someone means having an emotion episode that acts as a vehicle for conveying a particular moral content. Therefore, moral agency is argued to be (...)
  • Computationally rational agents can be moral agents. Bongani Andy Mabaso - 2020 - Ethics and Information Technology 23 (2):137-145.
    In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, thus making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument (...)
  • Mathematics, ethics and purism: an application of MacIntyre’s virtue theory. Paul Ernest - 2020 - Synthese 199 (1-2):3137-3167.
    A traditional problem of ethics in mathematics is the denial of social responsibility. Pure mathematics is viewed as neutral and value free, and therefore free of ethical responsibility. Applications of mathematics are seen as employing a neutral set of tools which, of themselves, are free from social responsibility. However, mathematicians are convinced they know what constitutes good mathematics. Furthermore many pure mathematicians are committed to purism, the ideology that values purity above applications in mathematics, and some historical reasons for this (...)
  • Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare. Kieran M. Brayford - forthcoming - AI and Society:1-9.
    Despite resistance from various societal actors, the development and deployment of lethal autonomous weaponry to warzones is perhaps likely, considering the perceived operational and ethical advantage such weapons are purported to bring. In this paper, it is argued that the deployment of truly autonomous weaponry presents an ethical danger by calling into question the ability of such weapons to abide by the Laws of War. This is done by noting the resonances between battlefield target identification and the process of ontic-ontological (...)
  • Computing and moral responsibility. Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.