  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics. Mark Coeckelbergh - 2014 - Philosophy and Technology 27 (1):61-77.
    Should we give moral standing to machines? In this paper, I explore the implications of a relational approach to moral standing for thinking about machines, in particular autonomous, intelligent robots. I show how my version of this approach, which focuses on moral relations and on the conditions of possibility of moral status ascription, provides a way to take critical distance from what I call the “standard” approach to thinking about moral status and moral standing, which is based on properties. It (...)
  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • Computing and moral responsibility. Merel Noorman - forthcoming - Stanford Encyclopedia of Philosophy.
  • Information technology and moral values. John Sullins - forthcoming - Stanford Encyclopedia of Philosophy.
    An encyclopedia entry on the moral impacts that arise when information technologies are used to record, communicate, and organize information, including the moral challenges of information technology, specific moral and cultural challenges such as online games, virtual worlds, malware, the technology transparency paradox, ethical issues in AI and robotics, and the acceleration of change in technologies. It concludes with a look at information technology as a model for moral change, moral systems and moral agents.
  • Computing and moral responsibility. Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
  • Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making. Danielle Swanepoel & Daniel Corks - 2024 - Science and Engineering Ethics 30 (2):1-16.
    Determining the agency-status of machines and AI has never been more pressing. As we progress into a future where humans and machines more closely co-exist, understanding hallmark features of agency affords us the ability to develop policy and narratives which cater to both humans and machines. This paper maintains that decision-making processes largely underpin agential action, and that in most instances, these processes yield good results in terms of making good choices. However, in some instances, when faced with two (or (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Will intelligent machines become moral patients? Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
  • Implementations in Machine Ethics: A Survey. Suzanne Tolmeijer, Markus Kneer, Cristina Sarasua, Markus Christen & Abraham Bernstein - 2020 - ACM Computing Surveys 53 (6):1–38.
    Increasingly complex and autonomous systems require machine ethics to maximize the benefits and minimize the risks to society arising from the new technology. It is challenging to decide which type of ethical theory to employ and how to implement it effectively. This survey provides a threefold contribution. First, it introduces a trimorphic taxonomy to analyze machine ethics implementations with respect to their object (ethical theories), as well as their nontechnical and technical aspects. Second, an exhaustive selection and description of relevant (...)
  • Introduction – Social Robotics and the Good Life. Janina Loh & Wulf Loh - 2022 - In Janina Loh & Wulf Loh (eds.), Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Transcript Verlag. pp. 7-22.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
  • Autonomous Systems in Society and War: Philosophical Inquiries. Linda Johansson - 2013 - Dissertation, Royal Institute of Technology, Stockholm
    The overall aim of this thesis is to look at some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – where the focus in this thesis is on military robots. The second issue, discussed in paper III, concerns how to make sure that advanced robots behave in an ethically adequate way. The third issue, discussed in papers (...)
  • Thinking unwise: a relational u-turn. Nicholas Barrow - 2022 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. IOS Press.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: The Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • Kantian Moral Agency and the Ethics of Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the end. Lastly, (...)
  • The ethics of information warfare. Luciano Floridi & Mariarosaria Taddeo (eds.) - 2014 - Springer International Publishing.
    This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare. The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to (...)
  • Statistically responsible artificial intelligences. Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
  • Could you hate a robot? And does it matter if you could? Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics. Rodrigo Sanz - 2020 - Zeitschrift für Ethik und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, the (...)
  • The artificial view: toward a non-anthropocentric account of moral patiency. Fabio Tollon - 2020 - Ethics and Information Technology 23 (2):147-155.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents. Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches to (...)
  • Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as (...)
  • Robots and Moral Agency. Linda Johansson - 2011 - Dissertation, Stockholm University
    Machine ethics is a field of applied ethics that has grown rapidly in the last decade. Increasingly advanced autonomous robots have expanded the focus of machine ethics from issues regarding the ethical development and use of technology by humans to a focus on ethical dimensions of the machines themselves. This thesis contains two essays, both about robots in some sense, representing these different perspectives of machine ethics. The first essay, “Is it Morally Right to use UAVs in War?” concerns an (...)
  • Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • The Cambridge Handbook of Information and Computer Ethics, ed. Luciano Floridi, 327 pp., 978-0-521-88898-1. [REVIEW] Richard A. Spinello - 2013 - Business Ethics Quarterly 23 (1):154-161.
  • Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them from autonomy in a responsibility-undermining way. (...)
  • Of Animals, Robots and Men. Christine Tiefensee & Johannes Marx - 2015 - Historical Social Research 40 (4):70-91.
    Domesticated animals need to be treated as fellow citizens: only if we conceive of domesticated animals as full members of our political communities can we do justice to their moral standing—or so Sue Donaldson and Will Kymlicka argue in their widely discussed book Zoopolis. In this contribution, we pursue two objectives. Firstly, we reject Donaldson and Kymlicka’s appeal for animal citizenship. We do so by submitting that instead of paying due heed to their moral status, regarding animals as citizens misinterprets (...)
  • The Philosophical Case for Robot Friendship. John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  • Why robots should not be treated like animals. Deborah G. Johnson & Mario Verdicchio - 2018 - Ethics and Information Technology 20 (4):291-301.
    Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting (...)
  • Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  • What do we owe to intelligent robots? John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications. Markus Christen, Thomas Burri, Joseph O. Chapa, Raphael Salvi, Filippo Santoni de Sio & John P. Sullins - 2017 - University of Zurich Digital Society Initiative White Paper Series, No. 1.
    We propose a multi-step evaluation schema designed to help procurement agencies and others to examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  • When Should We Use Care Robots? The Nature-of-Activities Approach. Filippo Santoni de Sio & Aimee van Wynsberghe - 2016 - Science and Engineering Ethics 22 (6):1745-1760.
    When should we use care robots? In this paper we endorse the shift from a simple normative approach to care robots ethics to a complex one: we think that one main task of a care robot ethics is that of analysing the different ways in which different care robots may affect the different values at stake in different care practices. We start filling a gap in the literature by showing how the philosophical analysis of the nature of healthcare activities can (...)
  • Is Collective Agency a Coherent Idea? Considerations from the Enactive Theory of Agency. Mog Stapleton & Tom Froese - 2015 - In Catrin Misselhorn (ed.), Collective Agency and Cooperation in Natural and Artificial Systems. Springer Verlag. pp. 219-236.
    Whether collective agency is a coherent concept depends on the theory of agency that we choose to adopt. We argue that the enactive theory of agency developed by Barandiaran, Di Paolo and Rohde (2009) provides a principled way of grounding agency in biological organisms. However the importance of biological embodiment for the enactive approach might lead one to be skeptical as to whether artificial systems or collectives of individuals could instantiate genuine agency. To explore this issue we contrast the concept (...)
  • Robowarfare: Can robots be more ethical than humans on the battlefield? [REVIEW] John P. Sullins - 2010 - Ethics and Information Technology 12 (3):263-275.
    Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of their combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This (...)
  • Who Gets to Choose? On the Socio-algorithmic Construction of Choice. Dan M. Kotliar - 2021 - Science, Technology, and Human Values 46 (2):346-375.
    This article deals with choice-inducing algorithms––algorithms that are explicitly designed to affect people’s choices. Based on an ethnographic account of three Israeli data analytics companies, I explore how algorithms are being designed to drive people into choice-making and examine their co-constitution by an assemblage of specifically positioned human and nonhuman agents. I show that the functioning, logic, and even ethics of choice-inducing algorithms are deeply influenced by the epistemologies, meaning systems, and practices of the individuals who devise and use them (...)
  • Should We Treat Teddy Bear 2.0 as a Kantian Dog? Four Arguments for the Indirect Moral Standing of Personal Social Robots, with Implications for Thinking About Animals and Humans. [REVIEW] Mark Coeckelbergh - 2021 - Minds and Machines 31 (3):337-360.
    The use of autonomous and intelligent personal social robots raises questions concerning their moral standing. Moving away from the discussion about direct moral standing and exploring the normative implications of a relational approach to moral standing, this paper offers four arguments that justify giving indirect moral standing to robots under specific conditions based on some of the ways humans—as social, feeling, playing, and doubting beings—relate to them. The analogy of “the Kantian dog” is used to assist reasoning about this. The (...)
    Download  
     
    Export citation  
     
    Bookmark   8 citations  
  • Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
    Download  
     
    Export citation  
     
    Bookmark   51 citations  
  • Will It Be Possible for Artificial Intelligence Robots to Acquire Free Will and Believe in God?Mustafa ÇEVİK - 2017 - Beytulhikme An International Journal of Philosophy 7 (2):75-87.
    This paper examines whether artificial intelligence robots could spontaneously acquire consciousness and free will in the future. The general perception of artificial intelligence, together with the validity and rational merit of that perception, is also discussed. A comparison is then drawn between the structure of pre-programmed AI robots and the structure of beings in nature. After comparing AI robots with humans with respect to emotion, free will, and choice, the similarities between angels and robots are considered. Finally, the reasons why robots cannot exercise free will are set out.
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  • Moralische Roboter: Humanistisch-philosophische Grundlagen und didaktische Anwendungen.André Schmiljun & Iga Maria Schmiljun - 2024 - transcript Verlag.
    Do robots need moral competence? The answer is yes. On the one hand, robots need moral competence in order to make sense of our world of rules, regulations, and values; on the other hand, they need it in order to be accepted by those around them. But how can moral competence be implemented in robots? Which philosophical challenges should we expect? And how can we prepare ourselves and our children for robots that will one day possess moral competence? From a humanist-philosophical perspective, André and Iga Maria Schmiljun sketch initial answers to these questions and develop (...)
    Download  
     
    Export citation  
     
    Bookmark  
  • Vicarious liability: a solution to a problem of AI responsibility?Matteo Pascucci & Daniela Glavaničová - 2022 - Ethics and Information Technology 24 (3):1-11.
    Who is responsible when an AI machine causes something to go wrong? Or is there a gap in the ascription of responsibility? Answers range from claiming that there is a unique responsibility gap, to positing several distinct responsibility gaps, to denying that there is any gap at all. In a nutshell, the problem is as follows: on the one hand, it seems fitting to hold someone responsible for a wrong caused by an AI machine; on the other hand, there seems to be no fitting bearer of responsibility (...)
    Download  
     
    Export citation  
     
    Bookmark   3 citations  
  • Why Indirect Harms do not Support Social Robot Rights.Paula Sweeney - 2022 - Minds and Machines 32 (4):735-749.
    There is growing evidence to support the claim that we react differently to robots than we do to other objects. In particular, we react differently to robots with which we have some form of social interaction. In this paper I critically assess the claim that, due to our tendency to become emotionally attached to social robots, permitting their harm may be damaging for society and as such we should consider introducing legislation to grant social robots rights and protect them from (...)
    Download  
     
    Export citation  
     
    Bookmark   4 citations