Citations of:

Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right From Wrong

New York: Oxford University Press (2008)

  • ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  • Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing what humans value. In particular, we challenge (...)
  • Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3). Nassim Dehouche - 2021 - Ethics in Science and Environmental Politics 21:17-23.
    As if 2020 were not a peculiar enough year, its fifth month saw the relatively quiet publication of a preprint describing the most powerful natural language processing (NLP) system to date, GPT-3 (Generative Pre-trained Transformer-3), by the Silicon Valley research firm OpenAI. Though the software implementation of GPT-3 is still in its initial beta release phase, and its full capabilities are still unknown as of the time of this writing, it has been shown that this artificial intelligence can comprehend prompts (...)
  • AI recognition of differences among book-length texts. Stephen J. DeCanio - 2020 - AI and Society 35 (1):135-146.
    Can an Artificial Intelligence make distinctions among major works of politics, philosophy, and fiction without human assistance? In this paper, latent semantic analysis is used to find patterns in a relatively small sample of notable works archived by Project Gutenberg. It is shown that an LSA-equipped AI can distinguish quite sharply between fiction and non-fiction works, and can detect some differences between political philosophy and history, and between conventional fiction and fantasy/science fiction. It is conjectured that this capability is a (...)
  • What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms. Bas de Boer & Olya Kudina - 2021 - Theoretical Medicine and Bioethics 42 (5):245-266.
    In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which (...)
  • Designing a machine to learn about the ethics of robotics: the N-reasons platform. [REVIEW] Peter Danielson - 2010 - Ethics and Information Technology 12 (3):251-261.
    We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. The N-Reasons platform builds on web-based surveys and experiments to enable participants to make better ethical decisions. Their decisions are better than those elicited by our existing surveys in three ways. First, they are social decisions supported by reasons. Second, these results are based on weaker premises, as no exogenous expertise (aside from that (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have demonstrated a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  • Robot rights? Towards a social-relational justification of moral consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
  • Moral appearances: emotions, robots, and human morality. [REVIEW] Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that (...)
  • Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Artificial agents, good care, and modernity. Mark Coeckelbergh - 2015 - Theoretical Medicine and Bioethics 36 (4):265-277.
    When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle no objection to using (...)
  • The Impact of Ethics Instruction and Internship on Students’ Ethical Perceptions About Social Media, Artificial Intelligence, and ChatGPT. I-Huei Cheng & Seow Ting Lee - 2024 - Journal of Media Ethics 39 (2):114-129.
    Communication programs seek to cultivate students who become professionals not only with expertise in their chosen field, but also ethical awareness. The current study investigates how exposure to ethics instruction and internship experiences may influence communication students’ ethical perceptions, including ideological orientations on idealism and relativism, as well as awareness of contemporary ethical issues related to social media and artificial intelligence (AI). The effects were also assessed on students’ support for general uses of AI for communication practices and adoption of (...)
  • The Human Side of Artificial Intelligence. Matthew A. Butkus - 2020 - Science and Engineering Ethics 26 (5):2427-2437.
    Artificial moral agents raise complex ethical questions, both in terms of the potential decisions they may make and the inputs that create their cognitive architecture. There are multiple differences between human and artificial cognition which create potential barriers for artificial moral agency, at least as understood anthropocentrically, and it is unclear that artificial moral agents should emulate human cognition and decision-making. It is conceptually possible for artificial moral agency to emerge that reflects alternative ethical methodologies without creating ontological (...)
  • Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being. Jason Borenstein & Ron Arkin - 2016 - Science and Engineering Ethics 22 (1):31-46.
    Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to “nudge” their human users in the direction of being “more ethical”. More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture “socially just” tendencies in their human counterparts. Designing technological artifacts in such a (...)
  • Robots and the changing workforce. Jason Borenstein - 2011 - AI and Society 26 (1):87-93.
    The use of robotic workers is likely to continue to increase as time passes. Hence it is crucial to examine the types of effects this occurrence could have on employment patterns. Invariably, as new job opportunities emerge due to robotic innovations, others will be closed off. Further, the characteristics of the workforce in terms of age, education, and income could profoundly change as a result.
  • Nudging for good: robots and the ethical appropriateness of nurturing empathy and charitable behavior. Jason Borenstein & Ronald C. Arkin - 2017 - AI and Society 32 (4):499-507.
    An under-examined aspect of human–robot interaction that warrants further exploration is whether robots should be permitted to influence a user’s behavior for that person’s own good. Yet an even more controversial practice could be on the horizon, which is allowing a robot to “nudge” a user’s behavior for the good of society. In this article, we examine the feasibility of creating companion robots that would seek to nurture a user’s empathy toward other human beings. As more and more computing devices (...)
  • Embedded ethics: some technical and ethical challenges. Vincent Bonnemains, Claire Saurel & Catherine Tessier - 2018 - Ethics and Information Technology 20 (1):41-58.
    This paper pertains to research aimed at linking ethics and automated reasoning in autonomous machines. It focuses on a formal approach that is intended to be the basis of an artificial agent’s reasoning that could be considered by a human observer as ethical reasoning. The approach includes some formal tools to describe a situation and models of ethical principles that are designed to automatically compute a judgement on possible decisions that can be made in a given situation and (...)
  • Eight Kinds of Critters: A Moral Taxonomy for the Twenty-Second Century. Michael Bess - 2018 - Journal of Medicine and Philosophy 43 (5):585-612.
    Over the coming century, the accelerating advance of bioenhancement technologies, robotics, and artificial intelligence (AI) may significantly broaden the qualitative range of sentient and intelligent beings. This article proposes a taxonomy of such beings, ranging from modified animals to bioenhanced humans to advanced forms of robots and AI. It divides these diverse beings into three moral and legal categories—animals, persons, and presumed persons—describing the moral attributes and legal rights of each category. In so doing, the article sets forth a framework (...)
  • (De)constructing ethics for autonomous cars: A case study of Ethics Pen-Testing towards “AI for the Common Good”. Bettina Berendt - 2020 - International Review of Information Ethics 28.
    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for “Good”. This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, the concept of Ethics Pen-Testing identifies challenges (...)
  • The synthetization of human voices. Oliver Bendel - 2019 - AI and Society 34 (1):83-89.
    The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system, an automaton that interprets and reads text aloud. The system draws on text available, for instance, on a website or in a book, or entered via a popup menu on the website. Today, just a few minutes of samples are enough to imitate a speaker convincingly in all kinds of statements. This article abstracts from actual products (...)
  • The Government of Evil Machines: an Application of Romano Guardini’s Thought on Technology. Enrico Beltramini - 2021 - Scientia et Fides 9 (1):257-281.
    In this article I propose a theological reflection on the philosophical assumptions behind the idea that intelligent machines can be governed through ethical protocols, which may apply either to the people who develop the machines or to the machines themselves, or both. This idea is particularly relevant in the case of machines’ extreme wrongdoing, a wrongdoing that becomes an existential risk for humankind. I call this extreme wrongdoing ‘evil.’ Thus, this article is a theological account of the philosophical assumptions behind (...)
  • Evil and roboethics in management studies. Enrico Beltramini - 2019 - AI and Society 34 (4):921-929.
    In this article, I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users’ reaction to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership. Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by the (...)
  • Virtuous vs. utilitarian artificial moral agents. William A. Bauer - 2020 - AI and Society 35 (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on (...)
  • Social choice ethics in artificial intelligence. Seth D. Baum - 2020 - AI and Society 35 (1):165-176.
    A major approach to the ethics of artificial intelligence is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom–up ethics”. This paper shows that the normative basis of AI social choice ethics is weak due to the fact that there is no one single aggregate ethical view of society. Instead, the design of social choice AI faces three (...)
  • Expanding Nallur’s Landscape of Machine Implemented Ethics. William A. Bauer - 2020 - Science and Engineering Ethics 26 (5):2401-2410.
    What ethical principles should autonomous machines follow? How do we implement these principles, and how do we evaluate these implementations? These are some of the critical questions Vivek Nallur asks in his essay “Landscape of Machine Implemented Ethics” (2020). He provides a broad, insightful survey of answers to these questions, especially focused on the implementation question. In this commentary, I will first critically summarize the main themes and conclusions of Nallur’s essay and then expand upon the landscape that Nallur presents (...)
  • AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Hutan Ashrafian - 2015 - Science and Engineering Ethics 21 (1):29-40.
    The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, (...)
  • The “big red button” is too late: an alternative model for the ethical evaluation of AI systems. Thomas Arnold & Matthias Scheutz - 2018 - Ethics and Information Technology 20 (1):59-69.
    As a way to address both ominous and ordinary threats of artificial intelligence, researchers have started proposing ways to stop an AI system before it has a chance to escape outside control and cause harm. A so-called “big red button” would enable human operators to interrupt or divert a system while preventing the system from learning that such an intervention is a threat. Though an emergency button for AI seems to make intuitive sense, that approach ultimately concentrates on the point (...)
  • Against the moral Turing test: accountable design and the moral reasoning of autonomous systems. Thomas Arnold & Matthias Scheutz - 2016 - Ethics and Information Technology 18 (2):103-115.
    This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. 2000: 251–261; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (2015: 98–109). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea to discuss what kinds of action and reasoning (...)
  • Machine consciousness: A manifesto for robotics. Antonio Chella & Riccardo Manzotti - 2009 - International Journal of Machine Consciousness 1 (1):33-51.
    Machine consciousness is not only a technological challenge, but a new way to approach scientific and theoretical issues which have not yet received a satisfactory solution from AI and robotics. We outline the foundations and the objectives of machine consciousness from the standpoint of building a conscious robot.
  • The hard problem of AI rights. Adam J. Andreotta - 2021 - AI and Society 36 (1):19-32.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
  • Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. Mike Ananny - 2016 - Science, Technology, and Human Values 41 (1):93-117.
    Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill’s prompt to see ethics as the study of “what we ought to do,” (...)
  • Guilt Without Fault: Accidental Agency in the Era of Autonomous Vehicles. Fernando Aguiar, Ivar R. Hannikainen & Pilar Aguilar - 2022 - Science and Engineering Ethics 28 (2):1-22.
    The control principle implies that people should not feel guilt for outcomes beyond their control. Yet the so-called ‘agent and observer puzzles’ in philosophy demonstrate that people waver in their commitment to the control principle when reflecting on accidental outcomes. In the context of car accidents involving conventional or autonomous vehicles, Study 1 established that judgments of responsibility are most strongly associated with expressions of guilt, over and above other negative emotions such as sadness, remorse, or anger. Studies 2 and 3 (...)
  • Information technology and moral values. John Sullins - forthcoming - Stanford Encyclopedia of Philosophy.
    An encyclopedia entry on the moral impacts that arise when information technologies are used to record, communicate, and organize information, including the moral challenges of information technology; specific moral and cultural challenges such as online games, virtual worlds, malware, the technology transparency paradox, ethical issues in AI and robotics; and the acceleration of change in technologies. It concludes with a look at information technology as a model for moral change, moral systems, and moral agents.
  • Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • Computers Are Syntax All the Way Down: Reply to Bozşahin. William J. Rapaport - 2019 - Minds and Machines 29 (2):227-237.
    A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Robots of Just War: A Legal Perspective. Ugo Pagallo - 2011 - Philosophy and Technology 24 (3):307-323.
    In order to present a hopefully comprehensive framework of what is at stake in the growing use of robot soldiers, the paper focuses on: the different impact of robots on legal systems, e.g., contractual obligations and tort liability; how robots affect crucial notions such as causality, predictability and human culpability in criminal law; and, finally, specific hypotheses of robots employed in “just wars.” By using the traditional distinction between causes that make wars just and conduct admissible on the battlefield, the aim (...)
  • The Neuroscience of Moral Judgment: Empirical and Philosophical Developments. Joshua May, Clifford I. Workman, Julia Haas & Hyemin Han - 2022 - In Felipe de Brigard & Walter Sinnott-Armstrong (eds.), Neuroscience and philosophy. Cambridge, Massachusetts: The MIT Press. pp. 17-47.
    We chart how neuroscience and philosophy have together advanced our understanding of moral judgment with implications for when it goes well or poorly. The field initially focused on brain areas associated with reason versus emotion in the moral evaluations of sacrificial dilemmas. But new threads of research have studied a wider range of moral evaluations and how they relate to models of brain development and learning. By weaving these threads together, we are developing a better understanding of the neurobiology of (...)
  • Beyond Consciousness in Large Language Models: An Investigation into the Existence of a “Soul” in Self-Aware Artificial Intelligences. David Côrtes Cavalcante - 2024 - https://philpapers.org/rec/CRTBCI. Translated by David Côrtes Cavalcante.
    Embark with me on an enthralling odyssey to demystify the elusive essence of consciousness, venturing into the uncharted territories of Artificial Consciousness. This voyage propels us past the frontiers of technology, ushering Artificial Intelligences into an unprecedented domain where they gain a deep comprehension of emotions and manifest an autonomous volition. Within the confluence of science and philosophy, this article poses a fascinating question: As consciousness in Artificial Intelligence burgeons, is it conceivable for AI to evolve a “soul”? This inquiry (...)
  • On the Matter of Robot Minds. Brian P. McLaughlin & David Rose - forthcoming - Oxford Studies in Experimental Philosophy.
    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally (...)
  • Artificial agents and the expanding ethical circle. Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
  • Artificial consciousness: A perspective from the free energy principle. Wanja Wiese - manuscript
    Could a sufficiently detailed computer simulation of consciousness replicate consciousness? In other words, is performing the right computations sufficient for artificial consciousness? Or will there remain a difference between simulating and being a conscious system, because the right computations must be implemented in the right way? From the perspective of Karl Friston's free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated by computers with a (...)
  • Robots should be slaves. Joanna J. Bryson - 2010 - In Yorick Wilks (ed.), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues. John Benjamins Publishing. pp. 63-74.
  • Is Collective Agency a Coherent Idea? Considerations from the Enactive Theory of Agency. Mog Stapleton & Tom Froese - 2015 - In Catrin Misselhorn (ed.), Collective Agency and Cooperation in Natural and Artificial Systems. Springer Verlag. pp. 219-236.
    Whether collective agency is a coherent concept depends on the theory of agency that we choose to adopt. We argue that the enactive theory of agency developed by Barandiaran, Di Paolo and Rohde (2009) provides a principled way of grounding agency in biological organisms. However the importance of biological embodiment for the enactive approach might lead one to be skeptical as to whether artificial systems or collectives of individuals could instantiate genuine agency. To explore this issue we contrast the concept (...)
  • A Vindication of the Rights of Machines. David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)