  • When is a robot a moral agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. Sullins details three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
  • Robot rights? Towards a social-relational justification of moral consideration.Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
  • Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures.Karl F. MacDorman, Sandosh K. Vasudevan & Chin-Chang Ho - 2009 - AI and Society 23 (4):485-510.
    Japan has more robots than any other country with robots contributing to many areas of society, including manufacturing, healthcare, and entertainment. However, few studies have examined Japanese attitudes toward robots, and none has used implicit measures. This study compares attitudes among the faculty of a US and a Japanese university. Although the Japanese faculty reported many more experiences with robots, implicit measures indicated both faculties had more pleasant associations with humans. In addition, although the US faculty reported people were more (...)
  • Ethical regulations on robotics in Europe.Michael Nagenborg, Rafael Capurro, Jutta Weber & Christoph Pingel - 2008 - AI and Society 22 (3):349-366.
    There are only a few ethical regulations that deal explicitly with robots, in contrast to a vast number of regulations, which may be applied. We will focus on ethical issues with regard to “responsibility and autonomous robots”, “machines as a replacement for humans”, and “tele-presence”. Furthermore, we will examine examples from special fields of application (medicine and healthcare, armed forces, and entertainment). We do not claim to present a complete list of ethical issues nor of regulations in the field of (...)
  • Imagining a non-biological machine as a legal person.David J. Calverley - 2008 - AI and Society 22 (4):523-537.
    As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a legal person. In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of (...)
  • Granny and the robots: ethical issues in robot care for the elderly.Amanda Sharkey & Noel Sharkey - 2012 - Ethics and Information Technology 14 (1):27-40.
    The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the (...)
  • A legal analysis of human and electronic agents.Steffen Wettig & Eberhard Zehender - 2004 - Artificial Intelligence and Law 12 (1-2):111-135.
    Currently, electronic agents are being designed and implemented that, unprecedentedly, will be capable of performing legally binding actions. These advances necessitate a thorough treatment of their legal consequences. In our paper, we first demonstrate that electronic agents behave in ways structurally similar to human agents. Then we study how declarations of intention stated by an electronic agent are related to ordinary declarations of intention given by natural persons or legal entities, and also how the actions of electronic agents in this respect have (...)
  • Killer robots.Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62–77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW]Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Computer systems: Moral entities but not moral agents. [REVIEW]Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW]Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  • Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency.Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.
    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)
  • Moral Machines: Teaching Robots Right From Wrong.Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue (...)
  • Benchmarks for evaluating socially assistive robotics.David Feil-Seifer, Kristine Skinner & Maja J. Matarić - 2007 - Interaction Studies 8 (3):423-439.
    Socially assistive robotics is a growing area of research. Evaluating SAR systems presents novel challenges. Using a robot for a socially assistive task can have various benefits and ethical implications. Many questions are important to understanding whether a robot is effective for a given application domain. This paper describes several benchmarks for evaluating SAR systems. There exist numerous methods for evaluating the many factors involved in a robot’s design. Benchmarks from psychology, anthropology, medicine, and human–robot interaction are proposed as measures (...)
  • Robotics, philosophy and the problems of autonomy.Willem F. G. Haselager - 2005 - Pragmatics and Cognition 13 (3):515-532.
    Robotics can be seen as a cognitive technology, assisting us in understanding various aspects of autonomy. In this paper I will investigate a difference between the interpretations of autonomy that exist within robotics and philosophy. Based on a brief review of some historical developments I suggest that within robotics a technical interpretation of autonomy arose, related to the independent performance of tasks. This interpretation is far removed from philosophical analyses of autonomy focusing on the capacity to choose goals for oneself. (...)
  • Intelligent agents and liability: Is it a doctrinal problem or merely a problem of explanation? [REVIEW]Emad Abdel Rahim Dahiyat - 2010 - Artificial Intelligence and Law 18 (1):103-121.
    The question of liability in the case of using intelligent agents is far from simple, and cannot sufficiently be answered by deeming the human user as being automatically responsible for all actions and mistakes of his agent. Therefore, this paper is specifically concerned with the significant difficulties which might arise in this regard especially if the technology behind software agents evolves, or is commonly used on a larger scale. Furthermore, this paper contemplates whether or not it is possible to share (...)
  • A challenge for machine ethics.Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Did Hal commit murder?Daniel C. Dennett - 1997 - In David G. Stork (ed.), Hal's Legacy: 2001's Computer As Dream and Reality. MIT Press.
    The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated 12/9/81 from the Philadelphia Inquirer--not the National Enquirer--with the headline: Robot killed repairman, Japan reports The story was an anti-climax: at the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, crushing him to death. The repairman had failed to follow proper instructions for shutting down the arm before entering the workspace. Why, indeed, had (...)
  • Roboethics: Ethics Applied to Robotics.Gianmarco Veruggio, Jorge Solis & Machiel Van der Loos - 2011 - IEEE Robotics and Automation Magazine 1 (March):21-22.
    This special issue deals with the emerging debate on roboethics, the human ethics applied to robotics. Is a specific ethic applied to robotics truly necessary? Or, conversely, are not the general principles of ethics adequate to answer many of the issues raised by our field’s applications? In our opinion, and according to many roboticists and human scientists, many novel issues that emerge and many more that will show up in the immediate future, arising from the (...)
  • Law as a Social System.Niklas Luhmann - 2004 - Oxford University Press UK.
    In this volume, Niklas Luhmann, the leading exponent of systems theory, explores its implications for our understanding of law. The volume provides a rigorous application to law of a theory that offers profound insights into the relationships between law and other aspects of contemporary society, including politics, the economy, the media, education, and religion.
  • Lifting the Burden of Women's Care Work: Should Robots Replace the “Human Touch”?Jennifer A. Parks - 2010 - Hypatia 25 (1):100-120.
    This paper treats the political and ethical issues associated with the new caretaking technologies. Given the number of feminists who have raised serious concerns about the future of care work in the United States, and who have been critical of the degree to which society “free rides” on women's caretaking labor, I consider whether technology may provide a solution to this problem. Certainly, if we can create machines and robots to take on particular tasks, we may lighten the care burden (...)
  • Legal personhood for artificial intelligences.Lawrence B. Solum - 1992 - North Carolina Law Review 70:1231.
    Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer. Artificial intelligence (AI) (...)
  • Technology as a Subject for Ethics.Hans Jonas - 1982 - Social Research: An International Quarterly 49.
  • Ethics and consciousness in artificial agents.Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)
  • Contracting agents: Legal personality and representation. [REVIEW]Francisco Andrade, Paulo Novais, José Machado & José Neves - 2007 - Artificial Intelligence and Law 15 (4):357-373.
    The combined use of computers and telecommunications and the latest evolution in the field of Artificial Intelligence brought along new ways of contracting and of expressing will and declarations. The question is, how far we can go in considering computer intelligence and autonomy, how can we legally deal with a new form of electronic behaviour capable of autonomous action? In the field of contracting, through Intelligent Electronic Agents, there is an imperious need of analysing the question of expression of consent, (...)
  • Social Systems. [REVIEW]N. Luhmann, John Bednarz & Dirk Baecker - 1998 - Human Studies 21 (2):227-234.
  • Incremental Machine Ethics.Thomas M. Powers - 2011 - IEEE Robotics and Automation 18 (1):51-58.
    Approaches to programming ethical behavior for computer systems face challenges that are both technical and philosophical in nature. In response, an incrementalist account of machine ethics is developed: a successive adaptation of programmed constraints to new, morally relevant abilities in computers. This approach allows progress under conditions of limited knowledge in both ethics and computer systems engineering and suggests reasons that we can circumvent broader philosophical questions about computer intelligence and autonomy.
  • Learning robots and human responsibility.Dante Marino & Guglielmo Tamburrini - 2006 - International Review of Information Ethics 6:46-51.
    Epistemic limitations concerning prediction and explanation of the behaviour of robots that learn from experience are selectively examined by reference to machine learning methods and computational theories of supervised inductive learning. Moral responsibility and liability ascription problems concerning damages caused by learning robot actions are discussed in the light of these epistemic limitations. In shaping responsibility ascription policies one has to take into account the fact that robots and softbots - by combining learning with autonomy, pro-activity, reasoning, and planning - (...)
  • Caregiving robots and ethical reflection: the perspective of interdisciplinary technology assessment. [REVIEW]Michael Decker - 2008 - AI and Society 22 (3):315-330.
    Autonomous robots that are capable of learning are being developed to make it easier for human actors to achieve their goals. As such, robots are primarily a means to an end and replace human actions. An interdisciplinary technology assessment was carried out to determine the extent to which a replacement of this kind makes ethical sense in terms of technology, economics and legal aspects. Proceeding from an ethical perspective, derived from Kant’s formula of humanity, in this article we analyse the (...)