  • (6 other versions)Robot: Mere Machine to Transcendent Mind.Hans P. Moravec - 1998 - Oxford University Press USA.
    Machines will attain human levels of intelligence by the year 2040, predicts robotics expert Hans Moravec. And by 2050, they will have far surpassed us. In this mind-bending new book, Hans Moravec takes the reader on a roller coaster ride packed with such startling predictions. He tells us, for instance, that in the not-too-distant future, an army of robots will displace workers, causing massive, unprecedented unemployment. But then, says Moravec, a period of very comfortable existence will follow, as humans benefit (...)
  • Computer Ethics and Professional Responsibility.Terrell Ward Bynum & Simon Rogerson (eds.) - 1998 - Wiley-Blackwell.
    This clear and accessible textbook and its associated website offer a state of the art introduction to the burgeoning field of computer ethics and professional responsibility. Includes discussion of hot topics such as the history of computing; the social context of computing; methods of ethical analysis; professional responsibility and codes of ethics; computer security, risks and liabilities; computer crime, viruses and hacking; data protection and privacy; intellectual property and the “open source” movement; global ethics and the internet Introduces key issues (...)
  • Mechanism and responsibility.Daniel C. Dennett - 1973 - In Ted Honderich (ed.), Essays on Freedom of Action. Boston,: Routledge and Kegan Paul. pp. 157--84.
  • Social robots-emotional agents: Some remarks on naturalizing man-machine interaction.Barbara Becker - 2006 - International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer-interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialog-partners will have a positive effect on human-machine-communication. In this contribution I put up for discussion whether (...)
  • When is a robot a moral agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
  • A Strawsonian Defense of Corporate Moral Responsibility.David Silver - 2005 - American Philosophical Quarterly 42 (4):279 - 293.
  • Prolegomena to any future artificial moral agent.Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251--261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
  • “Ain’t No One Here But Us Social Forces”: Constructing the Professional Responsibility of Engineers. [REVIEW]Michael Davis - 2012 - Science and Engineering Ethics 18 (1):13-34.
    There are many ways to avoid responsibility, for example, explaining what happens as the work of the gods, fate, society, or the system. For engineers, “technology” or “the organization” will serve this purpose quite well. We may distinguish at least nine (related) senses of “responsibility”, the most important of which are: (a) responsibility-as-causation (the storm is responsible for flooding), (b) responsibility-as-liability (he is the person responsible and will have to pay), (c) responsibility-as-competency (he’s a responsible person, that is, he’s rational), (...)
  • What is computer ethics?James H. Moor - 1985 - Metaphilosophy 16 (4):266-275.
  • Compatibilism.Michael McKenna - 2008 - Stanford Encyclopedia of Philosophy.
  • Moral responsibility.Andrew Eshleman - 2008 - Stanford Encyclopedia of Philosophy.
    When a person performs or fails to perform a morally significant action, we sometimes think that a particular kind of response is warranted. Praise and blame are perhaps the most obvious forms this reaction might take. For example, one who encounters a car accident may be regarded as worthy of praise for having saved a child from inside the burning car, or alternatively, one may be regarded as worthy of blame for not having used one's mobile phone to call for (...)
  • (1 other version)The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW]Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Computer systems and responsibility: A normative look at technological complexity.Deborah G. Johnson & Thomas M. Powers - 2005 - Ethics and Information Technology 7 (2):99-107.
    In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart and Hart (...)
  • On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Freedom and privacy in ambient intelligence.Philip Brey - 2005 - Ethics and Information Technology 7 (3):157-166.
    This paper analyzes ethical aspects of the new paradigm of Ambient Intelligence, which is a combination of Ubiquitous Computing and Intelligent User Interfaces (IUI’s). After an introduction to the approach, two key ethical dimensions will be analyzed: freedom and privacy. It is argued that Ambient Intelligence, though often designed to enhance freedom and control, has the potential to limit freedom and autonomy as well. Ambient Intelligence also harbors great privacy risks, and these are explored as well.
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW]Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  • Ethics for things.Alison Adam - 2008 - Ethics and Information Technology 10 (2-3):149-154.
    This paper considers the ways that Information Ethics (IE) treats things. A number of critics have focused on IE’s move away from anthropocentrism to include non-humans on an equal basis in moral thinking. I enlist Actor Network Theory, Dennett’s views on ‘as if’ intentionality and Magnani’s characterization of ‘moral mediators’. Although they demonstrate different philosophical pedigrees, I argue that these three theories can be pressed into service in defence of IE’s treatment of things. Indeed the support they lend to the (...)
  • Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence.Andy Clark - 2003 - Oxford University Press. Edited by Alberto Peruzzi.
    In Natural-Born Cyborgs, Clark argues that what makes humans so different from other species is our capacity to fully incorporate tools and supporting cultural ...
  • The Nature, Importance, and Difficulty of Machine Ethics.James Moor - 2006 - IEEE Intelligent Systems 21:18-21.
  • Artificial Intelligence: A Modern Approach.Stuart Jonathan Russell & Peter Norvig (eds.) - 1995 - Prentice-Hall.
    Artificial Intelligence: A Modern Approach, 3e offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one- or two-semester, undergraduate or graduate-level courses in Artificial Intelligence. Dr. Peter Norvig, contributing Artificial Intelligence author, and Professor Sebastian Thrun, a Pearson author, are offering a free online course at Stanford University on artificial intelligence. According to an article in The New York Times, the course on artificial intelligence is (...)
  • (1 other version)Artificial Morality: Virtuous Robots for Virtual Games.Peter Danielson - 1992 - Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner’s Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. _Artificial Morality_ goes further, by (...)
  • Robots in War: Issues of Risk and Ethics.Patrick Lin, George A. Bekey & Keith Abney - unknown
    War robots clearly hold tremendous advantages-from saving the lives of our own soldiers, to safely defusing roadside bombs, to operating in inaccessible and dangerous environments such as mountainside caves and underwater. Without emotions and other liabilities on the battlefield, they could conduct warfare more ethically and effectively than human soldiers who are susceptible to overreactions, anger, vengeance, fatigue, low morale, and so on. But the use of robots, especially autonomous ones, raises a host of ethical and risk issues. This (...)
  • Computationalism: New Directions.Matthias Scheutz (ed.) - 2002 - MIT Press.
    A new computationalist view of the mind that takes into account real-world issues of embodiment, interaction, physical implementation, and semantics.
  • The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind.Marvin Lee Minsky (ed.) - 2006 - Simon & Schuster.
    A leading contributor to artificial intelligence offers insight into the numerous ways in which the mind works to demonstrate how emotions and feelings are just ...
  • Technology and ethics.Kristin Shrader-Frechette - 2010 - In Craig Hanks (ed.), Technology and values: essential readings. Malden, MA: Wiley-Blackwell.
  • Computer systems: Moral entities but not moral agents. [REVIEW]Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • Ethics and Robotics.Raphael Capurro & Michael Nagenborg (eds.) - 2009 - Akademische Verlagsgesellschaft.
    P. M. Asaro: What should We Want from a Robot Ethic? G. Tamburrini: Robot Ethics: A View from the Philosophy of Science B. Becker: Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-machine Interaction E. Datteri, G. Tamburrini: Ethical Reflections on Health Care Robotics P. Lin, G. Bekey, K. Abney: Robots in War: Issues of Risk and Ethics J. Altmann: Preventive Arms Control for Uninhabited Military Vehicles J. Weber: Robotic warfare, Human Rights & The Rhetorics of Ethical Machines T. (...)
  • Learning robots and human responsibility.Dante Marino & Guglielmo Tamburrini - 2006 - International Review of Information Ethics 6:46-51.
    Epistemic limitations concerning prediction and explanation of the behaviour of robots that learn from experience are selectively examined by reference to machine learning methods and computational theories of supervised inductive learning. Moral responsibility and liability ascription problems concerning damages caused by learning robot actions are discussed in the light of these epistemic limitations. In shaping responsibility ascription policies one has to take into account the fact that robots and softbots - by combining learning with autonomy, pro-activity, reasoning, and planning - (...)
  • Artificial moral agents: an intercultural perspective.Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
    In this paper I will argue that artificial moral agents are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information rich and information poor countries. I will give a limiting definition of AMAs first, and discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs being able to adjust to their user’s behavior will lead us (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW]F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  • Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW]Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  • (1 other version)The ethics of designing artificial agents.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
  • Autonomous Military Robotics: Risk, Ethics, and Design.Patrick Lin, George Bekey & Keith Abney - unknown
  • Moral Machines and the Threat of Ethical Nihilism.Anthony F. Beavers - 2011 - In Patrick Lin, Keith Abney & George A. Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press.
    In his famous 1950 paper where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even if no machine has yet passed the Turing Test. In the wake of the (...)
  • A pragmatic evaluation of the theory of information ethics.Mikko Siponen - 2004 - Ethics and Information Technology 6 (4):279-290.
    It has been argued that moral problems in relation to Information Technology (IT) require new theories of ethics. In recent years, an interesting new theory to address such concerns has been proposed, namely the theory of Information Ethics (IE). Despite the promise of IE, the theory has not enjoyed public discussion. The aim of this paper is to initiate such discussion by critically evaluating the theory of IE.
  • The limits of precaution.Sven Ove Hansson - 1997 - Foundations of Science 2 (2):293-306.
    The maximin rule can be used as a formal version of the precautionary principle. This paper evaluates the feasibility and the intuitive plausibility of this decision rule. The major conclusions are: (1) Precaution has to be applied symmetrically. (2) Precaution is only possible when outcomes are comparable in terms of value, so that it can be determined which outcome is worst. (3) Precaution is sensitive to standards of possibility. Far-away scenarios have to be excluded, and it is difficult to find (...)
  • Editorial: Ethics and Engineering Design.Peter-Paul Verbeek & Ibo van de Poel - 2006 - Science, Technology, and Human Values 31 (3):223-236.
    Engineering ethics and science and technology studies have until now developed as separate enterprises. The authors argue that they can learn a lot from each other. STS insights can help make engineering ethics open the black box of technology and help discern ethical issues in engineering design. Engineering ethics, on the other hand, might help STS to overcome its normative sterility. The contributions in this special issue show in various ways how the gap between STS and engineering ethics might be (...)
  • Computers, information and ethics: A review of issues and literature. [REVIEW]Carl Mitcham - 1995 - Science and Engineering Ethics 1 (2):113-132.
  • Moral appearances: emotions, robots, and human morality. [REVIEW]Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that (...)
  • Computing and moral responsibility.Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
  • Sharing Moral Responsibility with Robots: A Pragmatic Approach.Gordana Dodig Crnkovic & Daniel Persson - 2008 - In Holst, Per Kreuger & Peter Funk (eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books.
    Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces and invasive nano-devices and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making (...)
  • Invisibility and the meaning of ambient intelligence.Cecile K. M. Crutzen - 2006 - International Review of Information Ethics 6 (12):52-62.
    A vision of future daily life is explored in Ambient Intelligence (AmI). It contains the assumption that intelligent technology should disappear into our environment to bring humans an easy and entertaining life. The mental, physical, methodical invisibility of AmI will have an effect on the relation between design and use activities of both users and designers. Especially the ethics discussions of AmI, privacy, identity and security are moved into the foreground. However in the process of using AmI, it will go (...)
  • Delegating and distributing morality: Can we inscribe privacy protection in a machine? [REVIEW]Alison Adam - 2005 - Ethics and Information Technology 7 (4):233-242.
    This paper addresses the question of delegation of morality to a machine, through a consideration of whether or not non-humans can be considered to be moral. The aspect of morality under consideration here is protection of privacy. The topic is introduced through two cases where there was a failure in sharing and retaining personal data protected by UK data protection law, with tragic consequences. In some sense this can be regarded as a failure in the process of delegating morality to (...)
  • Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. [REVIEW]Bernd Carsten Stahl - 2004 - Minds and Machines 14 (1):67-83.
    In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-facetted notion which is hard to define comprehensively. However, the (...)