  • Responsibility.Garrath Williams - 2012 - In Ruth Chadwick (ed.), Encyclopedia of Applied Ethics (Second Edition). pp. 821-828.
    Discusses what is involved in describing a person as responsible: she has responsibilities that she is duty-bound to undertake, and may be held responsible when she fails to fulfill these. Considers why societies and organizations divide responsibilities between persons. Also considers how questions of responsibility arise in the spheres of morality, law, organizational life and politics, and how different modes of holding responsible may be appropriate in each. Concludes with a brief discussion of some questions about collective responsibility.
  • Robots: ethical by design.Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW]Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  • Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. [REVIEW]Bernd Carsten Stahl - 2004 - Minds and Machines 14 (1):67-83.
    In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-facetted notion which is hard to define comprehensively. However, the (...)
  • Creativity, the Turing test, and the (better) Lovelace test.Selmer Bringsjord, P. Bello & David A. Ferrucci - 2001 - Minds and Machines 11 (1):3-27.
    The Turing Test is claimed by many to be a way to test for the presence, in computers, of such "deep" phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT have devolved into shallow symbol manipulation designed to, by hook or by crook, trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have (...)
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW]Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  • A Vindication of the Rights of Machines.David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  • I Am a Strange Loop.Douglas R. Hofstadter - 2007 - New York, NY, USA: Basic Books.
    Can thought arise out of matter? Can self, soul, consciousness, “I” arise out of mere matter? If it cannot, then how can you or I be here? I Am a Strange Loop argues that the key to understanding selves and consciousness is the “strange loop”—a special kind of abstract feedback loop inhabiting our brains. The most central and complex symbol in your brain is the one called “I.” The “I” is the nexus in our brain, one of many symbols seeming (...)
  • Artificial Intelligence: A Modern Approach.Stuart Jonathan Russell & Peter Norvig (eds.) - 1995 - Prentice-Hall.
    Artificial Intelligence: A Modern Approach, 3e offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence. Dr. Peter Norvig, contributing Artificial Intelligence author and Professor Sebastian Thrun, a Pearson author are offering a free online course at Stanford University on artificial intelligence. According to an article in The New York Times, the course on artificial intelligence is (...)
  • Implementing moral decision making faculties in computers and robots.Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  • Ethics and consciousness in artificial agents.Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)
  • Should autonomous robots be pacifists?Ryan Tonkens - 2013 - Ethics and Information Technology 15 (2):109-123.
    Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory?; and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether (...)
  • A challenge for machine ethics.Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Intending to err: the ethical challenge of lethal, autonomous systems. [REVIEW]Mark S. Swiatek - 2012 - Ethics and Information Technology 14 (4):241-254.
    Current precursors in the development of lethal, autonomous systems (LAS) point to the use of biometric devices for assessing, identifying, and verifying targets. The inclusion of biometric devices entails the use of a probabilistic matching program that requires the deliberate targeting of noncombatants as a statistically necessary function of the system. While the tactical employment of the LAS may be justified on the grounds that the deliberate killing of a smaller number of noncombatants is better than the accidental killing of (...)
  • Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency.Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.
    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)
  • Killer robots.Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62–77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW]Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Industrial challenges of military robotics.George R. Lucas - 2011 - Journal of Military Ethics 10 (4):274-295.
    This article evaluates the "drive toward greater autonomy" in lethally-armed unmanned systems. Following a summary of the main criticisms and challenges to lethal autonomy, both engineering and ethical, raised by opponents of this effort, the article turns toward solutions or responses that defense industries and military end users might seek to incorporate in design, testing and manufacturing to address these concerns. The way forward encompasses a two-fold testing procedure for reliability incorporating empirical, quantitative benchmarks of performance in compliance with (...)
  • Computers in control: Rational transfer of authority or irresponsible abdication of autonomy? [REVIEW]Arthur Kuflik - 1999 - Ethics and Information Technology 1 (3):173-184.
    To what extent should humans transfer, or abdicate, responsibility to computers? In this paper, I distinguish six different senses of responsible and then consider in which of these senses computers can, and in which they cannot, be said to be responsible for deciding various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater control of our lives: (i) as finite and fallible human beings, there is a limit to how far we can acheive (...)
  • Un-making artificial moral agents.Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, "On the morality of artificial agents," has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as "artificial agents." Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that (...)
  • Computer systems: Moral entities but not moral agents. [REVIEW]Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • Review of Douglas Richard Hofstadter: Gödel, Escher, Bach: An Eternal Golden Braid. [REVIEW]Russell Hardin - 1980 - Ethics 90 (2):310-311.
  • Gödel, Escher, Bach: An Eternal Golden Braid.Judson C. Webb - 1979 - Journal of Symbolic Logic 48 (3):864-871.
  • Gödel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter. [REVIEW]Jonathan Lieberson - 1980 - Journal of Philosophy 77 (1):45-52.
  • Gödel, Escher, Bach: An Eternal Golden Braid.Douglas Richard Hofstadter - 1979 - Hassocks, England: Basic Books.
    A young scientist and mathematician explores the mystery and complexity of human thought processes from an interdisciplinary point of view.
  • Connectionist learning procedures.Geoffrey E. Hinton - 1989 - Artificial Intelligence 40 (1-3):185-234.
  • Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW]Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories (...)
  • On the moral responsibility of military robots.Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
  • Beyond the skin bag: On the moral responsibility of extended agencies.F. Allan Hanson - 2009 - Ethics and Information Technology 11 (1):91-99.
    The growing prominence of computers in contemporary life, often seemingly with minds of their own, invites rethinking the question of moral responsibility. If the moral responsibility for an act lies with the subject that carried it out, it follows that different concepts of the subject generate different views of moral responsibility. Some recent theorists have argued that actions are produced by composite, fluid subjects understood as extended agencies (cyborgs, actor networks). This view of the subject contrasts with methodological individualism: the (...)
  • The ethics of designing artificial agents.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
  • On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Responsibility.Garrath Williams - 2006 - Internet Encyclopedia of Philosophy.
    We evaluate people and groups as responsible or not, depending on how seriously they take their responsibilities. Often we do this informally, via moral judgment. Sometimes we do this formally, for instance in legal judgment. This article considers mainly moral responsibility, and focuses largely upon individuals. Later sections also comment on the relation between legal and moral responsibility, and on the responsibility of collectives.
  • Moral responsibility.Andrew Eshleman - 2008 - Stanford Encyclopedia of Philosophy.
    When a person performs or fails to perform a morally significant action, we sometimes think that a particular kind of response is warranted. Praise and blame are perhaps the most obvious forms this reaction might take. For example, one who encounters a car accident may be regarded as worthy of praise for having saved a child from inside the burning car, or alternatively, one may be regarded as worthy of blame for not having used one's mobile phone to call for (...)
  • Autonomy in moral and political philosophy.John Christman - 2008 - Stanford Encyclopedia of Philosophy.
  • What should we want from a robot ethic.Peter M. Asaro - 2006 - International Review of Information Ethics 6 (12):9-16.
    There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies (...)
  • Subsymbolic computation and the chinese room.David J. Chalmers - 1992 - In J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum. pp. 25--48.
    More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of (...)
  • A computational foundation for the study of cognition.David Chalmers - 2011 - Journal of Cognitive Science 12 (4):323-357.
    Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation (...)
  • Autonomous Military Robotics: Risk, Ethics, and Design.Patrick Lin, George Bekey & Keith Abney - unknown
  • The many forms of hypercomputation.Toby Ord - 2005 - Applied Mathematics and Computation 178:142-153.
    This paper surveys a wide range of proposed hypermachines, examining the resources that they require and the capabilities that they possess.
  • When is a robot a moral agent.John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
  • The Nature, Importance, and Difficulty of Machine Ethics.James Moor - 2006 - IEEE Intelligent Systems 21:18-21.
  • Prolegomena to any future artificial moral agent.Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251--261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
  • Prospects for a Kantian machine.Thomas M. Powers - 2006 - IEEE Intelligent Systems 21 (4):46-51.
    This paper is reprinted in the book Machine Ethics, eds. M. Anderson and S. Anderson, Cambridge University Press, 2011.