Moral Status of Artificial Systems
  1. Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  2. A Metacognitive Approach to Trust and a Case Study: Artificial Agency.Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in line with a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model with two components: (i) confidence in the decision and (ii) model uncertainty. To trust (...)
  3. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  4. Gods of Transhumanism.Alex V. Halapsis - 2019 - Anthropological Measurements of Philosophical Research 16:78-90.
    The purpose of the article is to identify the religious factor in the teaching of transhumanism, to determine its role in the ideology of this current of thought, and to identify the possible limits of technology's interference in human nature. Theoretical basis. The methodological basis of the article is the idea of transhumanism. Originality. In the foreseeable future, robots will be able to pass the Turing test, become “electronic personalities” and gain political rights, although the question of the possibility of machine (...)
  5. The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity.Adrian Mróz - 2019 - Kultura I Historia 36 (2):17-40.
    By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers describe equivocally as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine problem (...)
  6. Supporting Human Autonomy in AI Systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  7. When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  8. First Steps Towards an Ethics of Robots and Artificial Intelligence.John Tasioulas - 2019 - Journal of Practical Ethics 7 (1):61-95.
    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to (...)
  9. Welcoming Robots Into the Moral Circle: A Defence of Ethical Behaviourism.John Danaher - forthcoming - Science and Engineering Ethics:1-27.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need (...)
  10. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - AIES: AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of (...)
  11. Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence.Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in (...)
  12. Genomic Obsolescence: What Constitutes an Ontological Threat to Human Nature?Michal Klincewicz & Lily Frank - 2019 - American Journal of Bioethics 19 (7):39-40.
  13. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  14. Lethal Autonomous Weapons: Designing War Machines with Values.Steven Umbrello - 2019 - Delphi: Interdisciplinary Review of Emerging Technologies 1 (2):30-34.
    Lethal Autonomous Weapons (LAWs) have become the subject of continuous debate at both national and international levels. Arguments have been proposed both for the development and use of LAWs and for their prohibition from combat landscapes. Regardless, the development of LAWs continues in numerous nation-states. This paper builds upon previous philosophical arguments for the development and use of LAWs and proposes a design framework that can be used to ethically direct their development. The conclusion is that the philosophical arguments (...)
  15. Challenges for an Ontology of Artificial Intelligence.Scott H. Hawley - 2019 - Perspectives on Science and Christian Faith 71 (2):83-95.
    Of primary importance in formulating a response to the increasing prevalence and power of artificial intelligence (AI) applications in society are questions of ontology. Questions such as: What “are” these systems? How are they to be regarded? How does an algorithm come to be regarded as an agent? We discuss three factors which hinder discussion and obscure attempts to form a clear ontology of AI: (1) the various and evolving definitions of AI, (2) the tendency for pre-existing technologies to be (...)
  16. Of Animals, Robots and Men.Christine Tiefensee & Johannes Marx - 2015 - Historical Social Research 40:70-91.
    Domesticated animals need to be treated as fellow citizens: only if we conceive of domesticated animals as full members of our political communities can we do justice to their moral standing—or so Sue Donaldson and Will Kymlicka argue in their widely discussed book Zoopolis. In this contribution, we pursue two objectives. Firstly, we reject Donaldson and Kymlicka’s appeal for animal citizenship. We do so by submitting that instead of paying due heed to their moral status, regarding animals as citizens misinterprets (...)
  17. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
  18. Toward an Ethics of AI Assistants: An Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
  19. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  20. AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI.Jose Hernandez-Orallo & Karina Vold - 2019 - In Proceedings of the AAAI/ACM 2019 Conference on AIES. pp. 507-513.
    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To (...)
  21. Fare e funzionare. Sull'analogia di robot e organismo.Fabio Fossa - 2018 - InCircolo - Rivista di Filosofia E Culture 6:73-88.
    In this essay I try to determine the extent to which it is possible to conceive robots and organisms as analogous entities. After a cursory preamble on the long history of epistemological connections between machines and organisms I focus on Norbert Wiener’s cybernetics, where the analogy between modern machines and organisms is introduced most explicitly. The analysis of issues pertaining to the cybernetic interpretation of the analogy serves then as a basis for a critical assessment of its reprise in contemporary (...)
  22. Etica Multicultural y sociedad en red.Miguel Angel Perez Alvarez - 2017 - Dissertation, UNAM
    This work was my thesis for the MA in Philosophy. Its focus is the ethics implied in digital culture and the networked society. Themes include ethics, culture, technology, political control, and autonomous systems.
  23. Legal Fictions and the Essence of Robots: Thoughts on Essentialism and Pragmatism in the Regulation of Robotics.Fabio Fossa - 2018 - In Mark Coeckelbergh, Janina Loh, Michael Funk, Joanna Seibt & Marco Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and, Public Space. Amsterdam: pp. 103-111.
    The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up this approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), (...)
  24. AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  25. The Motivations and Risks of Machine Ethics.Stephen Cave, Rune Nyrup, Karina Vold & Adrian Weller - 2019 - Proceedings of the IEEE 107 (3):562-574.
    Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the computational limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to (...)
  26. An Analysis of the Interaction Between Intelligent Software Agents and Human Users.Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions (...)
  27. Why We Should Create Artificial Offspring: Meaning and the Collective Afterlife.John Danaher - 2018 - Science and Engineering Ethics 24 (4):1097-1118.
    This article argues that the creation of artificial offspring could make our lives more meaningful. By ‘artificial offspring’ I mean beings that we construct, with a mix of human and non-human-like qualities. Robotic artificial intelligences are paradigmatic examples of the form. There are two reasons for thinking that the creation of such beings could make our lives more meaningful and valuable. The first is that the existence of a collective afterlife—i.e. a set of human-like lives that continue after we die—is (...)
  28. Robots, Autonomy, and Responsibility.Raul Hakli & Pekka Mäkelä - 2016 - In Johanna Seibt, Marco Nørskov & Søren Schack Andersen (eds.), What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016. Amsterdam, The Netherlands: IOS Press. pp. 145-154.
    We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history (...)
  29. Investigation Into Ethical Issues of Intelligent Systems.Marziyah Davoodabadi & Zahra Khazaei - 2008 - Journal of Philosophical Theological Research 10 (37):95-120.
    Despite their undeniable advantages and surprising applications in training and industry, as well as in the cultures of different countries, intelligent and computer systems have raised many ethical issues. Presenting a definition of artificial intelligence and intelligent systems, the research paper deals with the shared ethical issues of intelligent systems, computer systems, and the global network; it then concentrates on the most important ethical issues of two types of intelligent systems, i.e. data-analysis system and (...)
  30. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  31. Sustainability of Artificial Intelligence: Reconciling Human Rights with Legal Rights of Robots.Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists are facing difficulties in answering questions like: “Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be called into question because, in the near future, scientists (considered the most rational people) will (...)
  32. Machine Medical Ethics.Simon Peter van Rysewyk & Matthijs Pontier (eds.) - 2014 - Springer.
    In medical settings, machines are in close proximity to human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What (...)
  33. Robots Like Me: Challenges and Ethical Issues in Aged Care.Ipke Wachsmuth - 2018 - Frontiers in Psychology 9 (432).
    This paper addresses the issue of whether robots could substitute for human care, given the challenges in aged care induced by the demographic change. The use of robots to provide emotional care has raised ethical concerns, e.g., that people may be deceived and deprived of dignity. In this paper it is argued that these concerns might be mitigated and that it may be sufficient for robots to take part in caring when they behave *as if* they care.
  34. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence.Patrick Lin, Keith Abney & Ryan Jenkins (eds.) - 2017 - Oxford University Press.
    As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks, from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
  35. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  36. Agencéité et responsabilité des agents artificiels.Louis Chartrand - 2017 - Éthique Publique 19 (2).
    Artificial agents and new information technologies, through their capacity to establish new dynamics of information transfer, have disruptive effects on epistemic ecosystems. Conceiving of responsibility for these upheavals poses a considerable challenge: how can this concept account for its object in complex systems in which it is difficult to tie an action to an agent? This article presents an overview of the concept of the epistemic ecosystem and (...)
  37. Utopia Without Work? Myth, Machines and Public Policy.Edmund Byrne - 1985 - In P. T. Durbin (ed.), Research in Philosophy and Technology VIII. Greenwich, CT: JAI Press. pp. 133-148.
    A critique of the prediction that technology will end humans' direct involvement in work. Contentions: a workless world is not unqualifiedly desirable; it is not attainable by technology alone; and the end sought does not in and of itself justify present job-ending applications. Underlying these contentions is the claim that utopian visions with regard to work function as ideologies. Evidence for this claim is derived from revisiting past non-industrial and industrial fantasies regarding a work-free utopia.
  38. Field Creativity and Post-Anthropocentrism.Stanislav Roudavski - 2016 - Digital Creativity 27 (1):7-23.
    Can matter, things, nonhuman organisms, technologies, tools and machines, biota or institutions be seen as creative? How does such creativity reposition the visionary activities of humans? This article is an elaboration of such questions as well as an attempt at a partial response. It was written as an editorial for the special issue of the Digital Creativity journal that interrogates the conception of Post-Anthropocentric Creativity. However, the text below is a rather unconventional editorial. It does not attempt to provide an (...)
  39. In Defense of Artificial Replacement.Derek Shiller - 2017 - Bioethics 31 (2):393-399.
    If it is within our power to provide a significantly better world for future generations at a comparatively small cost to ourselves, we have a strong moral reason to do so. One way of providing a significantly better world may involve replacing our species with something better. It is plausible that in the not-too-distant future, we will be able to create artificially intelligent creatures with whatever physical and psychological traits we choose. Granted this assumption, it is argued that we should (...)
  40. A Trilemma for Teleological Individualism.John Basl - 2017 - Synthese 194 (4):1027-1029.
    This paper addresses the foundations of Teleological Individualism, the view that organisms, even non-sentient organisms, are goal-oriented systems while biological collectives, such as ecosystems or conspecific groups, are mere assemblages of organisms. Typical defenses of Teleological Individualism ground the teleological organization of organisms in the workings of natural selection. This paper shows that grounding teleological organization in natural selection is antithetical to Teleological Individualism because such views assume a view about the units of selection on which it is only individual (...)
  41. Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law.Duncan MacIntosh - 2016 - Temple International and Comparative Law Journal 30 (1):99-117.
    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as (...)
  42. The Ethics of Algorithms: Mapping the Debate.Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  43. The Social Robot as ‘Charismatic Leader’: A Phenomenology of Human Submission to Nonhuman Power.Matthew E. Gladden - 2014 - In Johanna Seibt, Raul Hakli & Marco Nørskov (eds.), Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014. IOS Press. pp. 329-339.
    Much has been written about the possibility of human trust in robots. In this article we consider a more specific relationship: that of a human follower’s obedience to a social robot who leads through the exercise of referent power and what Weber described as ‘charismatic authority.’ By studying robotic design efforts and literary depictions of robots, we suggest that human beings are striving to create charismatic robot leaders that will either (1) inspire us through their display of superior morality; (2) (...)
  44. Future Progress in Artificial Intelligence: A Survey of Expert Opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer. pp. 553-571.
    There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus (...)
  45. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW] Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  46. Rethinking Machine Ethics in the Era of Ubiquitous Technology.Jeffrey White (ed.) - 2015 - IGI.
  47. The Ethics of Creating Artificial Consciousness.John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  48. Our Responsibility to Manage Evaluative Diversity.Christopher Santos-Lang - 2014 - Acm Sigcas Computers and Society 44 (2):16-19.
    The ecosystem approach to computer system development is similar to management of biodiversity. Instead of modeling machines after a successful individual, it models machines after successful teams. It includes measuring the evaluative diversity of human teams (i.e. the disparity in ways members conduct the evaluative aspect of decision-making), adding similarly diverse machines to those teams, and monitoring the impact on evaluative balance. This article reviews new research relevant to this approach, especially the validation of a survey instrument for measuring computational (...)
  49. Fundamental Issues of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
  50. Artificial Moral Agents Are Infeasible with Foreseeable Technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a remote prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.