  • Cascading Morality After Dewey: A Proposal for a Pluralist Meta-Ethics with a Subsidiarity Hierarchy.Mark Coeckelbergh - 2021 - Contemporary Pragmatism 18 (1):18-35.
    In response to challenges to moral philosophy presented by other disciplines and facing a diversity of approaches to the foundation and focus of morality, this paper argues for a pluralist meta-ethics that is methodologically hierarchical and guided by the principle of subsidiarity. Inspired by Deweyan pragmatism, this novel and original application of the subsidiarity principle and the related methodological proposal for a cascading meta-ethical architecture offer a “dirty” and instrumentalist understanding of meta-ethics that promises to work, not only in moral (...)
  • Does kindness towards robots lead to virtue? A reply to Sparrow’s asymmetry argument.Mark Coeckelbergh - 2021 - Ethics and Information Technology 23 (4):649-656.
    Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral (...)
  • The Moral Status of Social Robots: A Pragmatic Approach.Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach which aims to reconcile (...)
  • Why robots should not be treated like animals.Deborah G. Johnson & Mario Verdicchio - 2018 - Ethics and Information Technology 20 (4):291-301.
    Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting (...)
  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Basic issues in AI policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  • Robot, let us pray! Can and should robots have religious functions? An ethical exploration of religious robots.Anna Puzio - forthcoming - AI and Society 1:1-17.
    Considerable progress is being made in robotics, with robots being developed for many different areas of life: there are service robots, industrial robots, transport robots, medical robots, household robots, sex robots, exploration robots, military robots, and many more. As robot development advances, an intriguing question arises: should robots also encompass religious functions? Religious robots could be used in religious practices, education, discussions, and ceremonies within religious buildings. This article delves into two pivotal questions, combining perspectives from philosophy and religious studies: (...)
  • Hyperintensionality and Normativity.Federico L. G. Faroldi - 2019 - Cham, Switzerland: Springer Verlag.
    Presenting the first comprehensive, in-depth study of hyperintensionality, this book equips readers with the basic tools needed to appreciate some of current and future debates in the philosophy of language, semantics, and metaphysics. After introducing and explaining the major approaches to hyperintensionality found in the literature, the book tackles its systematic connections to normativity and offers some contributions to the current debates. The book offers undergraduate and graduate students an essential introduction to the topic, while also helping professionals in related (...)
  • Why Indirect Harms do not Support Social Robot Rights.Paula Sweeney - 2022 - Minds and Machines 32 (4):735-749.
    There is growing evidence to support the claim that we react differently to robots than we do to other objects. In particular, we react differently to robots with which we have some form of social interaction. In this paper I critically assess the claim that, due to our tendency to become emotionally attached to social robots, permitting their harm may be damaging for society and as such we should consider introducing legislation to grant social robots rights and protect them from (...)
  • A fictional dualism model of social robots.Paula Sweeney - 2021 - Ethics and Information Technology 23 (3):465-472.
    In this paper I propose a Fictional Dualism model of social robots. The model helps us to understand the human emotional reaction to social robots and also acts as a guide for us in determining the significance of that emotional reaction, enabling us to better define the moral and legislative rights of social robots within our society. I propose a distinctive position that allows us to accept that robots are tools, that our emotional reaction to them can be important to (...)
  • The hard limit on human nonanthropocentrism.Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
  • Could you hate a robot? And does it matter if you could?Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • Artificial Intelligence in Service of Human Needs: Pragmatic First Steps Toward an Ethics for Semi-Autonomous Agents.Travis N. Rieder, Brian Hutler & Debra J. H. Mathews - 2020 - American Journal of Bioethics Neuroscience 11 (2):120-127.
  • Is It Possible That Robots Will Not One Day Become Persons?Michael J. Reiss - 2023 - Zygon 58 (4):1062-1075.
    That robots might become persons is increasingly explored in popular fiction and films and is receiving growing academic analysis. Here, I ask what would be necessary for robots not to become persons at some point. After examining the meanings of “robots” and “persons,” I discuss whether robots might not become persons from a range of perspectives: evolution (which has led over time from species that do not exhibit personhood to species that do), development (personhood is something into which each of (...)
  • From posthumanism to ethics of artificial intelligence.Rajakishore Nath & Riya Manna - 2023 - AI and Society 38 (1):185-196.
    Posthumanism is one of the well-known and significant concepts in the present day. It impacted numerous contemporary fields like philosophy, literary theories, art, and culture for the last few decades. The movement has been concentrated around the technological development of present days due to industrial advancement in society and the current proliferated daily usage of technology. Posthumanism indicated a deconstruction of our radical conception of ‘human’, and it further shifts our societal value alignment system to a novel dimension. The majority (...)
  • On the moral status of social robots: considering the consciousness criterion.Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • Will intelligent machines become moral patients?Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
  • Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • Investigating user perceptions of commercial virtual assistants: A qualitative study.Leilasadat Mirghaderi, Monika Sziron & Elisabeth Hildt - 2022 - Frontiers in Psychology 13.
    As commercial virtual assistants become an integrated part of almost every smart device that we use on a daily basis, including but not limited to smartphones, speakers, personal computers, watches, TVs, and TV sticks, there are pressing questions that call for the study of how participants perceive commercial virtual assistants and what relational roles they assign to them. Furthermore, it is crucial to study which characteristics of commercial virtual assistants are perceived as important for establishing affective interaction with commercial virtual (...)
  • Employing Robots.Carl David Mildenberger - 2019 - Disputatio 11 (53):89-110.
    In this paper, I am concerned with what automation—widely considered to be the “future of work”—holds for the artificially intelligent agents we aim to employ. My guiding question is whether it is normatively problematic to employ artificially intelligent agents like, for example, autonomous robots as workers. The answer I propose is the following. There is nothing inherently normatively problematic about employing autonomous robots as workers. Still, we must not put them to perform just any work, if we want to avoid (...)
  • Embodiment and intelligence, a Levinasian perspective.James Mensch - forthcoming - Phenomenology and the Cognitive Sciences:1-14.
    Blake Lemoine, a software engineer, recently came into prominence by claiming that the Google chatbot set of applications, LaMDA, was sentient. Dismissed by Google for publishing his conversations with LaMDA online, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” What does it mean to be sentient? This was the question Lemoine asked LaMDA. The chatbot replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire (...)
  • Rights for Robots: Artificial Intelligence, Animal and Environmental Law (2020) by Joshua Gellers. [REVIEW]Kamil Mamak - 2021 - Science and Engineering Ethics 27 (3):1-4.
  • Military robots should not look like humans.Kamil Mamak & Kaja Kowalczewska - 2023 - Ethics and Information Technology 25 (3):1-10.
    Using robots in the military contexts is problematic at many levels. There are social, legal, and ethical issues that should be discussed first before their wider deployment. In this paper, we focus on an additional problem: their human likeness. We claim that military robots should not look like humans. That design choice may bring additional risks that endanger human lives and by that contradicts the very justification for deploying robots at war, which is decreasing human deaths and injuries. We discuss (...)
  • What Does It Mean to Empathise with a Robot?Joanna K. Malinowska - 2021 - Minds and Machines 31 (3):361-376.
    Given that empathy allows people to form and maintain satisfying social relationships with other subjects, it is no surprise that this is one of the most studied phenomena in the area of human–robot interaction (HRI). But the fact that the term ‘empathy’ has strong social connotations raises a question: can it be applied to robots? Can we actually use social terms and explanations in relation to these inanimate machines? In this article, I analyse the range of uses of the term (...)
  • Problems with “Friendly AI”.Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI with Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • Symbiosis with artificial intelligence via the prism of law, robots, and society.Stamatis Karnouskos - 2021 - Artificial Intelligence and Law 30 (1):93-115.
    The rapid advances in Artificial Intelligence and Robotics will have a profound impact on society as they will interfere with the people and their interactions. Intelligent autonomous robots, independent if they are humanoid/anthropomorphic or not, will have a physical presence, make autonomous decisions, and interact with all stakeholders in the society, in yet unforeseen manners. The symbiosis with such sophisticated robots may lead to a fundamental civilizational shift, with far-reaching effects as philosophical, legal, and societal questions on consciousness, citizenship, rights, (...)
  • How persuasive is AI-generated argumentation? An analysis of the quality of an argumentative text produced by the GPT-3 AI text generator.Martin Hinton & Jean H. M. Wagemans - 2023 - Argument and Computation 14 (1):59-74.
    In this paper, we use a pseudo-algorithmic procedure for assessing an AI-generated text. We apply the Comprehensive Assessment Procedure for Natural Argumentation (CAPNA) in evaluating the arguments produced by an Artificial Intelligence text generator, GPT-3, in an opinion piece written for the Guardian newspaper. The CAPNA examines instances of argumentation in three aspects: their Process, Reasoning and Expression. Initial Analysis is conducted using the Argument Type Identification Procedure (ATIP) to establish, firstly, that an argument is present and, secondly, its specific (...)
  • The Moral Consideration of Artificial Entities: A Literature Review.Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Review of Artificial Intelligence: Reflections in Philosophy, Theology and the Social Sciences by Benedikt P. Göcke and Astrid Rosenthal-von der Pütten. [REVIEW]John-Stewart Gordon - 2021 - AI and Society 36 (2):655-659.
  • Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
  • Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI.Marilyn Giroux, Jungkeun Kim, Jacob C. Lee & Jongwon Park - 2022 - Journal of Business Ethics 178 (4):1027-1041.
    Several technological developments, such as self-service technologies and artificial intelligence, are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns (...)
  • Towards a bioinformational understanding of AI.Rahul D. Gautam & Balaganapathi Devarakonda - 2022 - AI and Society 37:1-23.
    The article seeks to highlight the relation between ontology and communication while considering the role of AI in society and environment. Bioinformationalism is the technical term that foregrounds this relationality. The study reveals instructive consequences for philosophy of technology in general and AI in particular. The first section introduces the bioinformational approach to AI, focusing on three critical features of the current AI debate: ontology of information, property-based vs. relational AI, and ontology vs. constitution of AI. When applied to the (...)
  • Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism.John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
  • Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering.Mark Coeckelbergh - 2018 - Kairos 20 (1):141-158.
    This paper tries to understand the phenomenon that humans are able to empathize with robots and the intuition that there might be something wrong with “abusing” robots by discussing the question regarding the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties and that recommends (...)
  • Should We Treat Teddy Bear 2.0 as a Kantian Dog? Four Arguments for the Indirect Moral Standing of Personal Social Robots, with Implications for Thinking About Animals and Humans. [REVIEW]Mark Coeckelbergh - 2020 - Minds and Machines 31 (3):337-360.
    The use of autonomous and intelligent personal social robots raises questions concerning their moral standing. Moving away from the discussion about direct moral standing and exploring the normative implications of a relational approach to moral standing, this paper offers four arguments that justify giving indirect moral standing to robots under specific conditions based on some of the ways humans—as social, feeling, playing, and doubting beings—relate to them. The analogy of “the Kantian dog” is used to assist reasoning about this. The (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • What’s Wrong with Designing People to Serve?Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot harm.Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition.Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and (...)
  • To-Do Is to Be: Foucault, Levinas, and Technologically Mediated Subjectivation.Jan Peter Bergen & Peter-Paul Verbeek - 2021 - Philosophy and Technology 34 (2):325-348.
    The theory of technological mediation aims to take technological artifacts seriously, recognizing the constitutive role they play in how we experience the world, act in it, and how we are constituted as (moral) subjects. Its quest for a compatible ethics has led it to Foucault’s “care of the self,” i.e., a transformation of the self by oneself through self-discipline. In this regard, technologies have been interpreted as power structures to which one can relate through Foucaultian “technologies of the self” or (...)
  • Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup.Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients since this rests on anthropomorphism and ‘over-identification’ —or correct since spontaneous moral intuition and behavior toward nonhumans is indicative for moral patienthood, such that social robots become our ‘Others’?. In this research paper, I weave extant HRI studies that demonstrate empathic (...)
  • Uses and Abuses of AI Ethics.Lily E. Frank & Michal Klincewicz - forthcoming - In David J. Gunkel (ed.), Handbook of the Ethics of AI. Edward Elgar Publishing.
    In this chapter we take stock of some of the complexities of the sprawling field of AI ethics. We consider questions like "what is the proper scope of AI ethics?" And "who counts as an AI ethicist?" At the same time, we flag several potential uses and abuses of AI ethics. These include challenges for the AI ethicist, including what qualifications they should have; the proper place and extent of futuring and speculation in the field; and the dilemmas concerning how (...)
  • Sexual Robots: The Social-Relational Approach and the Concept of Subjective Reference.Piercosma Bisconti & Susanna Piermattei - 2020 - Lecture Notes in Computer Science.
    In this paper we propose the notion of “subjective reference” as a conceptual tool that explains how and why human-robot sexual interactions could reframe users approach to human-human sexual interactions. First, we introduce the current debate about Sexual Robotics, situated in the wider discussion about Social Robots, stating the urgency of a regulative framework. We underline the importance of a social-relational approach, mostly concerned about Social Robots impact in human social structures. Then, we point out the absence of a precise (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems.Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - forthcoming - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  • Thinking unwise: a relational u-turn.Nicholas Barrow - 2023 - In Social Robots in Social Institutions: Proceedings of RoboPhilosophy 2022.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  • The Future of Value Sensitive Design.Batya Friedman, David Hendry, Steven Umbrello, Jeroen Van Den Hoven & Daisy Yoo - 2020 - Paradigm Shifts in ICT Ethics: Proceedings of the 18th International Conference ETHICOMP 2020.
    In this panel, we explore the future of value sensitive design (VSD). The stakes are high. Many in public and private sectors and in civil society are gradually realizing that taking our values seriously implies that we have to ensure that values effectively inform the design of technology which, in turn, shapes people’s lives. Value sensitive design offers a highly developed set of theory, tools, and methods to systematically do so.
  • On the margins: personhood and moral status in marginal cases of human rights.Helen Ryland - 2020 - Dissertation, University of Birmingham
    Most philosophical accounts of human rights accept that all persons have human rights. Typically, ‘personhood’ is understood as unitary and binary. It is unitary because there is generally supposed to be a single threshold property required for personhood. It is binary because it is all-or-nothing: you are either a person or you are not. A difficulty with binary views is that there will typically be subjects, like children and those with dementia, who do not meet the threshold, and so who (...)
  • Prolegómenos a una ética para la robótica social [Prolegomena to an ethics for social robotics].Júlia Pareto Boada - 2021 - Dilemata 34:71-87.
    Social robotics has a high disruptive potential, for it expands the field of application of intelligent technology to practical contexts of a relational nature. Due to their capacity to “intersubjectively” interact with people, social robots can take over new roles in our daily activities, multiplying the ethical implications of intelligent robotics. In this paper, we offer some preliminary considerations for the ethical reflection on social robotics, so that to clarify how to correctly orient the critical-normative thinking in this arduous task. (...)
  • Ethics in Artificial Intelligence : How Relativism is Still Relevant.Loukas Piloidis - unknown
    This essay tries to demarcate and analyse Artificial Intelligence ethics. Going away from the traditional distinction in normative, meta, and applied ethics, a different split is executed, inspired by the three most prominent schools of thought: deontology, consequentialism, and Aristotelian virtue ethics. The reason behind this alternative approach is to connect all three schools back to ancient Greek philosophy. Having proven that the majority of arguments derive from some ancient Greek scholars, a new voice is initiated into the discussion, Protagoras (...)