  • Consciousness and Moral Status. Joshua Shepherd - 2018 - New York: Routledge.
    It seems obvious that phenomenally conscious experience is something of great value, and that this value maps onto a range of important ethical issues. For example, claims about the value of life for those in a permanent vegetative state, debates about treatment and study of disorders of consciousness, controversies about end-of-life care for those with advanced dementia, and arguments about the moral status of embryos, fetuses, and non-human animals arguably turn on the moral significance of various facts about consciousness. However, (...)
    16 citations
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
    74 citations
  • Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life. John Danaher - 2017 - Science and Engineering Ethics 23 (1):41-64.
    Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on (...)
    27 citations
  • A Defense of the Rights of Artificial Intelligences. Eric Schwitzgebel & Mara Garza - 2015 - Midwest Studies in Philosophy 39 (1):98-119.
    There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to (...)
    39 citations
  • Is Collective Agency a Coherent Idea? Considerations from the Enactive Theory of Agency. Mog Stapleton & Tom Froese - 2015 - In Catrin Misselhorn (ed.), Collective Agency and Cooperation in Natural and Artificial Systems. Springer Verlag. pp. 219-236.
    Whether collective agency is a coherent concept depends on the theory of agency that we choose to adopt. We argue that the enactive theory of agency developed by Barandiaran, Di Paolo and Rohde (2009) provides a principled way of grounding agency in biological organisms. However the importance of biological embodiment for the enactive approach might lead one to be skeptical as to whether artificial systems or collectives of individuals could instantiate genuine agency. To explore this issue we contrast the concept (...)
    5 citations
  • A Vindication of the Rights of Machines. David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
    53 citations
  • Dubito Ergo Sum: Exploring AI Ethics. Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
    2 citations
  • The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: the (1) (...)
    1 citation
  • Basic issues in AI policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  • Thinking unwise: a relational u-turn. Nicholas Barrow - 2022 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. IOS Press.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  • Moral Status and Intelligent Robots. John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
    The Southern Journal of Philosophy, Volume 60, Issue 1, Page 88-117, March 2022.
    11 citations
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
    14 citations
  • Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
    10 citations
  • Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
    22 citations
  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
    36 citations
  • A Framework for Grounding the Moral Status of Intelligent Machines. Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" of Wallach and Allen [20], (...)
    2 citations
  • The hard limit on human nonanthropocentrism. Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
    2 citations
  • AI and law: ethical, legal, and socio-political implications. John-Stewart Gordon - 2021 - AI and Society 36 (2):403-404.
    3 citations
  • Why machines cannot be moral. Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
    11 citations
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg.
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • Should We Treat Teddy Bear 2.0 as a Kantian Dog? Four Arguments for the Indirect Moral Standing of Personal Social Robots, with Implications for Thinking About Animals and Humans. [REVIEW] Mark Coeckelbergh - 2021 - Minds and Machines 31 (3):337-360.
    The use of autonomous and intelligent personal social robots raises questions concerning their moral standing. Moving away from the discussion about direct moral standing and exploring the normative implications of a relational approach to moral standing, this paper offers four arguments that justify giving indirect moral standing to robots under specific conditions based on some of the ways humans—as social, feeling, playing, and doubting beings—relate to them. The analogy of “the Kantian dog” is used to assist reasoning about this. The (...)
    8 citations
  • Shifting Perspectives. David J. Gunkel - 2020 - Science and Engineering Ethics 26 (5):2527-2532.
    2 citations
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
    20 citations
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
    52 citations
  • The hard problem of AI rights. Adam J. Andreotta - 2021 - AI and Society 36 (1):19-32.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
    15 citations
  • The possibility of deliberate norm-adherence in AI. Danielle Swanepoel - 2020 - Ethics and Information Technology 23 (2):157-163.
    Moral agency status is often given to those individuals or entities which act intentionally within a society or environment. In the past, moral agency has primarily been focused on human beings and some higher-order animals. However, with the fast-paced advancements made in artificial intelligence, we are now quickly approaching the point where we need to ask an important question: should we grant moral agency status to AI? To answer this question, we need to determine the moral agency status of these (...)
    3 citations
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
    19 citations
  • Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University.
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as (...)
    1 citation
  • Human Rights of Users of Humanlike Care Automata. Lantz Fleming Miller - 2020 - Human Rights Review 21 (2):181-205.
    Care is more than dispensing pills or cleaning beds. It is about responding to the entire patient. What is called “bedside manner” in medical personnel is a quality of treating the patient not as a mechanism but as a being—much like the caregiver—with desires, ideas, dreams, aspirations, and the gamut of mental and emotional character. As automata, answering an increasing functional need in care, are designed to enact care, the pressure is on their becoming more humanlike to carry out the (...)
  • To-Do Is to Be: Foucault, Levinas, and Technologically Mediated Subjectivation. Jan Peter Bergen & Peter-Paul Verbeek - 2021 - Philosophy and Technology 34 (2):325-348.
    The theory of technological mediation aims to take technological artifacts seriously, recognizing the constitutive role they play in how we experience the world, act in it, and how we are constituted as (moral) subjects. Its quest for a compatible ethics has led it to Foucault’s “care of the self,” i.e., a transformation of the self by oneself through self-discipline. In this regard, technologies have been interpreted as power structures to which one can relate through Foucaultian “technologies of the self” or (...)
    9 citations
  • Hybrids and the Boundaries of Moral Considerability or Revisiting the Idea of Non-Instrumental Value. Magdalena Holy-Luczaj & Vincent Blok - 2019 - Philosophy and Technology 34 (2):223-242.
    The transgressive ontological character of hybrids—entities crossing the ontological binarism of naturalness and artificiality, e.g., biomimetic projects—calls for pondering the question of their ethical status, since metaphysical and moral ideas are often inextricably linked. The example of it is the concept of “moral considerability” and related to it the idea of “intrinsic value” understood as a non-instrumentality of a being. Such an approach excludes hybrids from moral considerations due to their instrumental character. In the paper, we revisit the boundaries of (...)
    5 citations
  • Robots, rape, and representation. Robert Sparrow - 2017 - International Journal of Social Robotics 9 (4):465-477.
    Sex robots are likely to play an important role in shaping public understandings of sex and of relations between the sexes in the future. This paper contributes to the larger project of understanding how they will do so by examining the ethics of the “rape” of robots. I argue that the design of realistic female robots that could explicitly refuse consent to sex in order to facilitate rape fantasy would be unethical because sex with robots in these circumstances is a (...)
    36 citations
  • Facing the Pariah of Science: The Frankenstein Myth as a Social and Ethical Reference for Scientists. Peter Nagy, Ruth Wylie, Joey Eschrich & Ed Finn - 2020 - Science and Engineering Ethics 26 (2):737-759.
    Since its first publication in 1818, Mary Shelley’s Frankenstein; or, The Modern Prometheus has transcended genres and cultures to become a foundational myth about science and technology across a multitude of media forms and adaptations. Following in the footsteps of the brilliant yet troubled Victor Frankenstein, professionals and practitioners have been debating the scientific ethics of creating life for decades, yet never before have powerful tools for doing so been so widely available. This paper investigates how engaging with the Frankenstein myth (...)
    2 citations
  • Genomic Obsolescence: What Constitutes an Ontological Threat to Human Nature? Michal Klincewicz & Lily Frank - 2019 - American Journal of Bioethics 19 (7):39-40.
    Volume 19, Issue 7, July 2019, Page 39-40.
    1 citation
  • On the Undecidability of Legal and Technological Regulation. Peter Kalulé - 2019 - Law and Critique 30 (2):137-158.
    Generally, regulation is thought of as a constant that carries with it both a formative and conservative power, a power that standardises, demarcates and forms an order, through procedures, rules and precedents. It is dominantly thought that the singularity and formalisation of structures like rules is what enables regulation to achieve its aim of identifying, apprehending, sanctioning and forestalling/pre-empting threats and crime or harm. From this point of view, regulation serves to firmly establish fixed and stable categories of what norms, (...)
    2 citations
  • Robots, Autonomy, and Responsibility. Raul Hakli & Pekka Mäkelä - 2016 - In Johanna Seibt, Marco Nørskov & Søren Schack Andersen (eds.), What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016. IOS Press. pp. 145-154.
    We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history (...)
    2 citations
  • What do we owe to intelligent robots? John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
    21 citations
  • Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
    11 citations
  • The other question: can and should robots have rights? David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
    62 citations
  • Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Lily Frank & Sven Nyholm - 2017 - Artificial Intelligence and Law 25 (3):305-323.
    The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three (...)
    25 citations
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
    45 citations
  • Environmental Ethics in Poland. Dominika Dzwonkowska - 2017 - Journal of Agricultural and Environmental Ethics 30 (1):135-151.
    In the 1960s, western societies discovered that unlimited technological progress has a very high price that the environment pays. This was also the beginning of the discussions on the role of ethics in the protection of the environment and the moral aspects of nature exploitation. Even though the state of nature was not better in Poland, it took Polish philosophers a few decades to recognize the moral problem and to address it. The prevailing communistic propaganda of progress had blurred the (...)
    1 citation
  • Response to “The Problem of the Question About Animal Ethics” by Michal Piekarski. Mark Coeckelbergh & David J. Gunkel - 2016 - Journal of Agricultural and Environmental Ethics 29 (4):717-721.
    In this brief article we reply to Michal Piekarski’s response to our article ‘Facing Animals’ published previously in this journal. In our article we criticized the properties approach to defining the moral standing of animals, and in its place proposed a relational and other-oriented concept that is based on a transcendental and phenomenological perspective, mainly inspired by Heidegger, Levinas, and Derrida. In this reply we question and problematize Piekarski’s interpretation of our essay and critically evaluate “the ethics of commitment” that (...)
    4 citations
  • Facing Animals: A Relational, Other-Oriented Approach to Moral Standing. Mark Coeckelbergh & David J. Gunkel - 2014 - Journal of Agricultural and Environmental Ethics 27 (5):715-733.
    In this essay we reflect critically on how animal ethics, and in particular thinking about moral standing, is currently configured. Starting from the work of two influential “analytic” thinkers in this field, Peter Singer and Tom Regan, we examine some basic assumptions shared by these positions and demonstrate their conceptual failings—ones that have, despite efforts to the contrary, the general effect of marginalizing and excluding others. Inspired by the so-called “continental” philosophical tradition , we then argue that what is needed (...)
    31 citations
  • The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics. Mark Coeckelbergh - 2014 - Philosophy and Technology 27 (1):61-77.
    Should we give moral standing to machines? In this paper, I explore the implications of a relational approach to moral standing for thinking about machines, in particular autonomous, intelligent robots. I show how my version of this approach, which focuses on moral relations and on the conditions of possibility of moral status ascription, provides a way to take critical distance from what I call the “standard” approach to thinking about moral status and moral standing, which is based on properties. It (...)
    49 citations
  • (1 other version) Towards a bioinformational understanding of AI. Rahul D. Gautam & Balaganapathi Devarakonda - 2022 - AI and Society 37:1-23.
    The article seeks to highlight the relation between ontology and communication while considering the role of AI in society and environment. Bioinformationalism is the technical term that foregrounds this relationality. The study reveals instructive consequences for philosophy of technology in general and AI in particular. The first section introduces the bioinformational approach to AI, focusing on three critical features of the current AI debate: ontology of information, property-based vs. relational AI, and ontology vs. constitution of AI. When applied to the (...)
  • Socially robotic: making useless machines. Ceyda Yolgormez & Joseph Thibodeau - 2022 - AI and Society 37 (2):565-578.
    As robots increasingly become part of our everyday lives, questions arise with regards to how to approach them and how to understand them in social contexts. The Western history of human–robot relations revolves around competition and control, which restricts our ability to relate to machines in other ways. In this study, we take a relational approach to explore different manners of socializing with robots, especially those that exceed an instrumental approach. The nonhuman subjects of this study are built to explore (...)
    2 citations
  • Rethinking the Moral Anthropomorphism of Robots [反思機器人的道德擬人主義]. Tsung-Hsing Ho - 2020 - EurAmerica 50 (2):179-205.
    If robots are to work autonomously without human supervision, as science fiction imagines, we must ensure that they will not act in morally wrong ways. On a behaviourist conception of moral agency, a robot counts as a moral agent if, judged by its outward behaviour, its moral performance is on a par with a human's. From this, moral anthropomorphism about robots follows naturally: whatever moral rules apply to humans also apply to robots. I argue against moral anthropomorphism. Drawing on Strawson's insights into interpersonal relationships and reactive attitudes, and taking paternalistic action as an example, I argue that because robots lack personhood and cannot participate in interpersonal relationships, they should be subject to stricter constraints than humans where paternalistic action is concerned.
    2 citations