  • Argumentation schemes: From genetics to international relations to environmental science policy to AI ethics. Nancy L. Green - 2021 - Argument and Computation 12 (3):397-416.
    Argumentation schemes have played a key role in our research projects on computational models of natural argument over the last decade. The catalogue of schemes in Walton, Reed and Macagno’s 2008 book, Argumentation Schemes, served as our starting point for analysis of the naturally occurring arguments in written text, i.e., text in different genres having different types of author, audience, and subject domain, for different argument goals, and for different possible future applications. We would often first attempt to analyze the (...)
  • ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics. Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, the (...)
  • Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins “Critiquing the Reasons for Making Artificial Moral Agents”. Bartek Chomanski - 2020 - Science and Engineering Ethics 26 (6):3469-3481.
    In a stimulating recent article for this journal (van Wynsberghe and Robbins in Sci Eng Ethics 25(3):719–735, 2019), Aimee van Wynsberghe and Scott Robbins mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden is now shifted onto the proponents of AMAs to come up with new reasons for (...)
  • Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • Moral control and ownership in AI systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more attention from the (...)
  • From machine ethics to computational ethics. Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition. Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade-off between monetary profit and fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Embedded ethics: some technical and ethical challenges. Vincent Bonnemains, Claire Saurel & Catherine Tessier - 2018 - Ethics and Information Technology 20 (1):41-58.
    This paper pertains to research works aiming at linking ethics and automated reasoning in autonomous machines. It focuses on a formal approach that is intended to be the basis of an artificial agent’s reasoning that could be considered by a human observer as an ethical reasoning. The approach includes some formal tools to describe a situation and models of ethical principles that are designed to automatically compute a judgement on possible decisions that can be made in a given situation and (...)
  • The problem of machine ethics in artificial intelligence. Rajakishore Nath & Vineet Sahu - 2020 - AI and Society 35 (1):103-111.
    The advent of the intelligent robot has occupied a significant position in society over the past decades and has given rise to new issues in society. As we know, the primary aim of artificial intelligence or robotic research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence advocates is that there is no distinction between mind and machines and thus they argue that there are (...)
  • Can we program or train robots to be good? Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • The use of software tools and autonomous bots against vandalism: eroding Wikipedia’s moral order? Paul B. de Laat - 2015 - Ethics and Information Technology 17 (3):175-188.
    English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and the ‘coactivity’ in use between humans and bots, this research ‘discloses’ the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical (...)
  • Formalizing preference utilitarianism in physical world models. Caspar Oesterheld - 2016 - Synthese 193 (9).
    Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step in providing ethics and ultimately the question of how to “make the world better” with a formal (...)
  • Machine ethics and the idea of a more-than-human moral world. Steve Torrance - 2011 - In Michael Anderson & Susan Leigh Anderson (eds.), Machine Ethics. Cambridge Univ. Press. pp. 115.
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • Ethics, Finance, and Automation: A Preliminary Survey of Problems in High Frequency Trading. [REVIEW] Michael Davis, Andrew Kumiega & Ben Vliet - 2013 - Science and Engineering Ethics 19 (3):851-874.
    All of finance is now automated, most notably high frequency trading. This paper examines the ethical implications of this fact. As automation is an interdisciplinary endeavor, we argue that the interfaces between the respective disciplines can lead to conflicting ethical perspectives; we also argue that existing disciplinary standards do not pay enough attention to the ethical problems automation generates. Conflicting perspectives undermine the protection those who rely on trading should have. Ethics in finance can be expanded to include organizational and (...)
  • Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • Computing and moral responsibility. Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
  • Out of character: on the creation of virtuous machines. [REVIEW] Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
    The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken (...)
  • Computational Meta-Ethics: Towards the Meta-Ethical Robot. Gert-Jan C. Lokhorst - 2011 - Minds and Machines 21 (2):261-274.
    It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. (...)
  • Moral Machines and the Threat of Ethical Nihilism. Anthony F. Beavers - 2011 - In Patrick Lin, Keith Abney & George A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press.
    In his famous 1950 paper where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even if no machine has yet to pass the Turing Test. In the wake of the (...)
  • Computing and moral responsibility. Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
  • The status of machine ethics: A report from the AAAI symposium. [REVIEW] Michael Anderson & Susan Leigh Anderson - 2007 - Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  • Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making. Danielle Swanepoel & Daniel Corks - 2024 - Science and Engineering Ethics 30 (2):1-16.
    Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding hallmark features of agency affords us the ability to develop policy and narratives which cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances, these processes yield good results in terms of making good choices. However, in some instances, when faced with two (or (...)
  • Consideraciones éticas para el uso académico de sistemas de Inteligencia Artificial. Oscar-Yecid Aparicio-Gómez - 2024 - Revista Internacional de Filosofía Teórica y Práctica 4 (1):175-198.
    This article explores the ethical considerations surrounding the use of Artificial Intelligence (AI) in academia. It sets out general ethical principles for AI in this domain, such as transparency, fairness, accountability, privacy, and academic integrity. Regarding AI-assisted education, it emphasizes the importance of accessibility, non-discrimination, and critical evaluation of results. It recommends that AI be used to complement rather than replace the (...)
  • From Pluralistic Normative Principles to Autonomous-Agent Rules. Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 32 (4):683-715.
    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications (...)
  • Reviewing the Case of Online Interpersonal Trust. Mirko Tagliaferri - 2023 - Foundations of Science 28 (1):225-254.
    The aim of this paper is to better qualify the problem of online trust. The problem of online trust is that of evaluating whether online environments have the proper design to enable trust. This paper tries to better qualify this problem by showing that there is no unique answer, but only conditional considerations that depend on the conception of trust assumed and the features that are included in the environments themselves. In fact, the major issue concerning traditional debates surrounding online (...)
  • AI ethics and the banality of evil. Payman Tajalli - 2021 - Ethics and Information Technology 23 (3):447-454.
    In this paper, I draw on Hannah Arendt’s notion of ‘banality of evil’ to argue that as long as AI systems are designed to follow codes of ethics or particular normative ethical theories chosen by us and programmed in them, they are Eichmanns destined to commit evil. Since intelligence alone is not sufficient for ethical decision making, rather than strive to program AI to determine the right ethical decision based on some ethical theory or criteria, AI should be concerned with (...)
  • Editorial: Shaping Ethical Futures in Brain-Based and Artificial Intelligence Research. Elisabeth Hildt, Kelly Laas & Monika Sziron - 2020 - Science and Engineering Ethics 26 (5):2371-2379.
  • Morally Contentious Technology-Field Intersections: The Case of Biotechnology in the United States. [REVIEW] Benjamin M. Cole & Preeta M. Banerjee - 2013 - Journal of Business Ethics 115 (3):555-574.
    Technologies can be not only contentious—overthrowing existing ways of doing things—but also morally contentious—forcing deep reflection on personal values and societal norms. This article investigates that what may impede the acceptance of a technology and/or the development of the field that supports or exploits it, the lines between which often become blurred in the face of morally contentious content. Using a unique dataset with historically important timing—the United States Biotechnology Study fielded just 9 months after the public announcement of the (...)
  • The Heart of an AI: Agency, Moral Sense, and Friendship. Evandro Barbosa & Thaís Alves Costa - 2024 - Unisinos Journal of Philosophy 25 (1):01-16.
    The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. Firstly, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI’s capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and ethically respond to specific situations. This raises fundamental questions about machines’ ability to act as moral agents in the (...)
  • Enforcing ethical goals over reinforcement-learning policies. Guido Governatori, Agata Ciabattoni, Ezio Bartocci & Emery A. Neufeld - 2022 - Ethics and Information Technology 24 (4):1-19.
    Recent years have yielded many discussions on how to endow autonomous agents with the ability to make ethical decisions, and the need for explicit ethical reasoning and transparency is a persistent theme in this literature. We present a modular and transparent approach to equip autonomous agents with the ability to comply with ethical prescriptions, while still enacting pre-learned optimal behaviour. Our approach relies on a normative supervisor module, that integrates a theorem prover for defeasible deontic logic within the control loop (...)
  • Neuromodulación para la mejora de la agencia moral: el neurofeedback. Paloma J. García Díaz - 2021 - Dilemata 34:105-119.
    This article aims to pay heed to the rational and deliberative dimensions of moral agency within the project of moral enhancement. In this sense, it is presented how the technique of neurofeedback might contribute to the enhancement of moral deliberations and autonomy. Furthermore, this brain-computer interface is thought as a possible element of a Socratic moral assistant interested in improving moral enhancement within a model of full interaction between moral agents and such a moral assistant. This proposal does not embrace (...)
  • Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Wenn Ethik zum Programm wird: Eine risikoethische Analyse moralischer Dilemmata des autonomen Fahrens. Vanessa Schäffner - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (1):27-49.
    How should autonomous vehicles behave when an accident can no longer be averted? The complexity of the specific moral dilemmas that can arise in this context pushes established traditions of ethical thought to their limits. This essay attempts to open up new perspectives on the problem by viewing it through the lens of risk ethics, and thereby to demonstrate that view's relevance for the programming of ethical accident algorithms. The central question is what implications follow from conceiving of dilemma situations as risk-ethical distribution problems with regard (...)
  • The carousel of ethical machinery. Luís Moniz Pereira - 2021 - AI and Society 36 (1):185-196.
    Human beings have been aware of the risks associated with knowledge or its associated technologies since the dawn of time. Not just in Greek mythology, but in the founding myths of Judeo-Christian religions, there are signs and warnings against these dangers. Yet, such warnings and forebodings have never made as much sense as they do today. This stems from the emergence of machines capable of cognitive functions performed exclusively by humans until recently. Besides those technical problems associated with its design (...)
  • The possibility of deliberate norm-adherence in AI. Danielle Swanepoel - 2020 - Ethics and Information Technology 23 (2):157-163.
    Moral agency status is often given to those individuals or entities which act intentionally within a society or environment. In the past, moral agency has primarily been focused on human beings and some higher-order animals. However, with the fast-paced advancements made in artificial intelligence, we are now quickly approaching the point where we need to ask an important question: should we grant moral agency status to AI? To answer this question, we need to determine the moral agency status of these (...)
  • Moral Orthoses: A New Approach to Human and Machine Ethics. Marius Dorobantu & Yorick Wilks - 2019 - Zygon 54 (4):1004-1021.
    Machines are increasingly involved in decisions with ethical implications, which require ethical explanations. Current machine learning algorithms are ethically inscrutable, but not in a way very different from human behavior. This article looks at the role of rationality and reasoning in traditional ethical thought and in artificial intelligence, emphasizing the need for some explainability of actions. It then explores Neil Lawrence's embodiment factor as an insightful way of looking at the differences between human and machine intelligence, connecting it to the (...)
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Issues in robot ethics seen through the lens of a moral Turing test. Anne Gerdes & Peter Øhrstrøm - 2015 - Journal of Information, Communication and Ethics in Society 13 (2):98-109.
    Purpose – The purpose of this paper is to explore artificial moral agency by reflecting upon the possibility of a Moral Turing Test and whether its lack of focus on interiority, i.e. its behaviouristic foundation, counts as an obstacle to establishing such a test to judge the performance of an Artificial Moral Agent. Subsequently, to investigate whether an MTT could serve as a useful framework for the understanding, designing and engineering of AMAs, we set out to address fundamental challenges within (...)
  • Information Societies, Ethical Enquiries. Mariarosaria Taddeo & Elizabeth Buchanan - 2015 - Philosophy and Technology 28 (1):5-10.
    The special issue collects a selection of papers presented during the Computer Ethics: Philosophical Enquiries 2013 conference. This is a series of conferences organized by the International Association for Ethics and Information Technology, a professional organization formed in 2001 which gathers experts in information and computer ethics, prompting interdisciplinary research and discussions on ethical problems related to the design and deployment of information and communication technologies. During the past two decades, CEPE conferences have been a focal point for (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
    Bookmark   18 citations  
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
    Bookmark   8 citations  
  • Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. (...)
    Bookmark   16 citations  
  • Autonomous reboot: Aristotle, autonomy and the ends of machine ethics. Jeffrey White - 2022 - AI and Society 37 (2):647-659.
    Tonkens has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian-inspired recipe ("rational" and "free") while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. (...)
    Bookmark   2 citations  
  • Drones in humanitarian contexts, robot ethics, and the human–robot interaction. Aimee van Wynsberghe & Tina Comes - 2020 - Ethics and Information Technology 22 (1):43-53.
    There are two dominant trends in the humanitarian care of 2019: the ‘technologizing of care’ and the centrality of the humanitarian principles. The concern, however, is that these two trends may conflict with one another. Faced with the growing use of drones in the humanitarian space, there is a need for ethical reflection to understand whether this technology undermines humanitarian care. In the humanitarian space, few agree on the value of drone deployment; one school of thought believes drones can provide a (...)
    Bookmark   5 citations  
  • Sopholab: Experimental computational philosophy. V. Wiegel - 2007 - Dissertation,
    In this book, the extent to which we can equip artificial agents with moral reasoning capacity is investigated. Attempting to create artificial agents with moral reasoning capabilities challenges our understanding of morality and moral reasoning to the utmost. It also helps philosophers deal with the inherent complexity of modern organizations. Modern society, with its large multi-national organizations and extensive information infrastructures, provides a backdrop for moral theories that is hard to encompass through mere theorising. Computerized support for theorising is needed to (...)
    Bookmark   3 citations