Moral Status of Artificial Systems
  1. The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls Are ‘Necessarily’ Equal When Considered as Energy).Tyler Jaynes - 2021 - J (2571-8800) 3 (4):453-476.
    What separates the unique nature of human consciousness from that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they (...)
  2. Walking Through the Turing Wall.Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  3. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology:1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  4. Artificial Moral Patients: Mentality, Intentionality, and Systematicity.Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional systems developing (...)
  5. Existential Risk From AI and Orthogonality: Can We Have It Both Ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio:1-12.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  6. Tecno-especies: la humanidad que se hace a sí misma y los desechables.Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are redefining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  7. The Moral Addressor Account of Moral Agency.Dorna Behdadi - manuscript
    According to the practice-focused approach to moral agency, a participant stance towards an entity is warranted by the extent to which this entity qualifies as an apt target of ascriptions of moral responsibility, such as blame. Entities who are not eligible for such reactions are exempted from moral responsibility practices, and thus denied moral agency. I claim that many typically exempted cases may qualify as moral agents by being eligible for a distinct participant stance. When we participate in moral responsibility (...)
  8. Is It Time for Robot Rights? Moral Status in Artificial Entities.Vincent C. Müller - 2021 - Ethics and Information Technology:1-9.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  9. Moral Zombies: Why Algorithms Are Not Moral Agents.Carissa Véliz - forthcoming - AI and Society:1-11.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  10. A Framework for Grounding the Moral Status of Intelligent Machines.Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" of Wallach and Allen [20], (...)
  11. On Human Genome Manipulation and 'Homo Technicus': The Legal Treatment of Non-Natural Human Subjects.Tyler L. Jaynes - 2021 - AI and Ethics 1 (3):331-345.
    Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time, that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS in medical assistive (...)
  12. Towards a Middle-Ground Theory of Agency for Artificial Intelligence.Louis Longin - 2020 - In M. Nørskov, J. Seibt & O. Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam, Netherlands: pp. 17-26.
    The recent rise of artificial intelligence (AI) systems has led to intense discussions on their ability to achieve higher-level mental states or the ethics of their implementation. One question, which so far has been neglected in the literature, is the question of whether AI systems are capable of action. While the philosophical tradition appeals to intentional mental states, others have argued for a widely inclusive theory of agency. In this paper, I will argue for a gradual concept of agency because (...)
  13. Debate: What is Personhood in the Age of AI?David J. Gunkel & Jordan Joseph Wales - forthcoming - AI and Society.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  14. Empathy and Instrumentalization: Late Ancient Cultural Critique and the Challenge of Apparently Personal Robots.Jordan Joseph Wales - 2020 - In Marco Nørskov, Johanna Seibt & Oliver Santiago Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam: IOS Press. pp. 114-124.
    According to a tradition that we hold variously today, the relational person lives most personally in affective and cognitive empathy, whereby we enter subjective communion with another person. Near future social AIs, including social robots, will give us this experience without possessing any subjectivity of their own. They will also be consumer products, designed to be subservient instruments of their users’ satisfaction. This would seem inevitable. Yet we cannot live as personal when caught between instrumentalizing apparent persons (slaveholding) or numbly (...)
  15. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. (...)
  16. Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots.Steven Umbrello, Marianna Capasso, Maurizio Balistreri, Alberto Pirni & Federica Merenda - 2021 - Minds and Machines 31 (3):395-419.
    Healthcare is becoming increasingly automated with the development and deployment of care robots. There are many benefits to care robots but they also pose many challenging ethical issues. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. Using the value sensitive design approach to technology design, this paper extends its application to care robots by integrating the values of care, values that (...)
  17. Welcoming Robots Into the Moral Circle: A Defence of Ethical Behaviourism.John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
  18. AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  19. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
  20. Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  21. A Metacognitive Approach to Trust and a Case Study: Artificial Agency.Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in the line of a naturalized virtue epistemology. (Sosa, Carter) We advance a Bayesian model of two factors: (i) confidence in the decision and (ii) model uncertainty. To trust (...)
  22. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  23. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions.Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (1):89-120.
    Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently (...)
  24. Gods of Transhumanism.Alex V. Halapsis - 2019 - Anthropological Measurements of Philosophical Research 16:78-90.
    The purpose of the article is to identify the religious factor in the teaching of transhumanism, to determine its role in the ideology of this current of thought, and to identify the possible limits of technological interference in human nature. Theoretical basis. The methodological basis of the article is the idea of transhumanism. Originality. In the foreseeable future, robots will be able to pass the Turing test, become “electronic personalities” and gain political rights, although the question of the possibility of machine (...)
  25. The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity.Adrian Mróz - 2019 - Kultura I Historia 36 (2):17-40.
    By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers describe equivocally as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine problem (...)
  26. Supporting Human Autonomy in AI Systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  27. When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  28. First Steps Towards an Ethics of Robots and Artificial Intelligence.John Tasioulas - 2019 - Journal of Practical Ethics 7 (1):61-95.
    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to (...)
  29. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of (...)
  30. Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence.Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in (...)
  31. Genomic Obsolescence: What Constitutes an Ontological Threat to Human Nature?Michal Klincewicz & Lily Frank - 2019 - American Journal of Bioethics 19 (7):39-40.
    Volume 19, Issue 7, July 2019, Page 39-40.
  32. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  33. Lethal Autonomous Weapons: Designing War Machines with Values.Steven Umbrello - 2019 - Delphi: Interdisciplinary Review of Emerging Technologies 1 (2):30-34.
    Lethal Autonomous Weapons (LAWs) have become the subject of continuous debate at both national and international levels. Arguments have been proposed both for the development and use of LAWs and for their prohibition from combat landscapes. Regardless, the development of LAWs continues in numerous nation-states. This paper builds upon previous philosophical arguments for the development and use of LAWs and proposes a design framework that can be used to ethically direct their development. The conclusion is that the philosophical arguments (...)
  34. Challenges for an Ontology of Artificial Intelligence.Scott H. Hawley - 2019 - Perspectives on Science and Christian Faith 71 (2):83-95.
    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be (...)
  35. Of Animals, Robots and Men.Christine Tiefensee & Johannes Marx - 2015 - Historical Social Research 40:70-91.
    Domesticated animals need to be treated as fellow citizens: only if we conceive of domesticated animals as full members of our political communities can we do justice to their moral standing—or so Sue Donaldson and Will Kymlicka argue in their widely discussed book Zoopolis. In this contribution, we pursue two objectives. Firstly, we reject Donaldson and Kymlicka’s appeal for animal citizenship. We do so by submitting that instead of paying due heed to their moral status, regarding animals as citizens misinterprets (...)
  36. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
  37. Toward an Ethics of AI Assistants: An Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
  38. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  39. AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI.Jose Hernandez-Orallo & Karina Vold - 2019 - In Proceedings of the AAAI/ACM 2019 Conference on AIES. pp. 507-513.
    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To (...)
  40. Fare e funzionare. Sull'analogia di robot e organismo.Fabio Fossa - 2018 - InCircolo - Rivista di Filosofia E Culture 6:73-88.
    In this essay I try to determine the extent to which it is possible to conceive robots and organisms as analogous entities. After a cursory preamble on the long history of epistemological connections between machines and organisms I focus on Norbert Wiener’s cybernetics, where the analogy between modern machines and organisms is introduced most explicitly. The analysis of issues pertaining to the cybernetic interpretation of the analogy serves then as a basis for a critical assessment of its reprise in contemporary (...)
  41. AI4People—an Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  42. Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach.Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good (...)
  43. Etica Multicultural y sociedad en red.Miguel Angel Perez Alvarez - 2017 - Dissertation, UNAM
    This work was my thesis for the MA in Philosophy. Its focus is the ethics implied in digital culture and the networked society. Its themes are ethics, culture, technology, political control, and autonomous systems.
  44. Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics.Fabio Fossa - 2018 - In Mark Coeckelbergh, Janina Loh, Michael Funk, Joanna Seibt & Marco Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and, Public Space. Amsterdam: pp. 103-111.
    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), (...)
  45. AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  46. The Motivations and Risks of Machine Ethics.Stephen Cave, Rune Nyrup, Karina Vold & Adrian Weller - 2019 - Proceedings of the IEEE 107 (3):562-574.
    Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the computational limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to (...)
  47. An Analysis of the Interaction Between Intelligent Software Agents and Human Users.Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions (...)
  48. Why We Should Create Artificial Offspring: Meaning and the Collective Afterlife.John Danaher - 2018 - Science and Engineering Ethics 24 (4):1097-1118.
    This article argues that the creation of artificial offspring could make our lives more meaningful. By ‘artificial offspring’ I mean beings that we construct, with a mix of human and non-human-like qualities. Robotic artificial intelligences are paradigmatic examples of the form. There are two reasons for thinking that the creation of such beings could make our lives more meaningful and valuable. The first is that the existence of a collective afterlife—i.e. a set of human-like lives that continue after we die—is (...)
  49. Robots, Autonomy, and Responsibility.Raul Hakli & Pekka Mäkelä - 2016 - In Johanna Seibt, Marco Nørskov & Søren Schack Andersen (eds.), What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016. Amsterdam, The Netherlands: IOS Press. pp. 145-154.
    We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history (...)
  50. Investigation Into Ethical Issues of Intelligent Systems.Marziyah Davoodabadi & Zahra Khazaei - 2008 - Journal of Philosophical Theological Research 10 (37):95-120.
    Despite their undeniable advantages and surprising applications in training and industry, as well as in the cultures of different countries, intelligent and computer systems have raised many ethical issues. Presenting a definition of artificial intelligence and intelligent systems, the research paper deals with the ethical issues shared by intelligent systems, computer systems, and the global network; it then concentrates on the most important ethical issues of two types of intelligent systems, i.e. data-analysis system and (...)
1 — 50 / 390