  • The birth of roboethics. Gianmarco Veruggio - 2005 - ICRA 2005, IEEE International Conference on Robotics and Automation, Workshop on Roboethics.
    The importance and urgency of roboethics lie in the lessons of our recent history. Two front-rank fields of science and technology, nuclear physics and genetic engineering, have already been forced to face the ethical consequences of their research applications under the pressure of dramatic and troubling events. In many countries, public opinion, shocked by some of these effects, urged that such applications either be halted altogether or be placed under serious control. Robotics is rapidly becoming one of the (...)
  • Motivation reconsidered: The concept of competence. Robert W. White - 1959 - Psychological Review 66 (5):297-333.
  • Distributed morality in an information society. Luciano Floridi - 2013 - Science and Engineering Ethics 19 (3):727-743.
    The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can (...)
  • Superintelligence: Paths, Dangers, Strategies. Nick Bostrom - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of (...)
  • Machine morality: bottom-up and top-down approaches for modelling human moral faculties. Wendell Wallach, Colin Allen & Iva Smit - 2008 - AI and Society 22 (4):565-582.
    The implementation of moral decision making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of (...)
  • Groundwork for the metaphysics of morals. Immanuel Kant - 1785 - New York: Oxford University Press. Edited by Thomas E. Hill & Arnulf Zweig.
    In this classic text, Kant sets out to articulate and defend the Categorical Imperative - the fundamental principle that underlies moral reasoning - and to lay the foundation for a comprehensive account of justice and human virtues. This new edition and translation of Kant's work is designed especially for students. An extensive and comprehensive introduction explains the central concepts of Groundwork and looks at Kant's main lines of argument. Detailed notes aim to clarify Kant's thoughts and to correct some common (...)
  • Human nature and the limits of science. John Dupré - 2001 - New York: Oxford University Press.
    John Dupré warns that our understanding of human nature is being distorted by two faulty and harmful forms of pseudo-scientific thinking. Not just in the academic world but in everyday life, we find one set of experts who seek to explain the ends at which humans aim in terms of evolutionary theory, while the other set uses economic models to give rules of how we act to achieve those ends. Dupré demonstrates that these theorists' explanations do not work and that, (...)
  • Groundwork of the metaphysics of morals. Immanuel Kant - 2007 - In Elizabeth Schmidt Radcliffe, Richard McCarty, Fritz Allhoff & Anand Vaidya (eds.), Late Modern Philosophy: Essential Readings with Commentary. Oxford: Wiley-Blackwell.
    Immanuel Kant's Groundwork of the Metaphysics of Morals ranks alongside Plato's Republic and Aristotle's Nicomachean Ethics as one of the most profound and influential works in moral philosophy ever written. In Kant's own words its aim is to search for and establish the supreme principle of morality, the categorical imperative. Kant argues that every human being is an end in himself or herself, never to be used as a means by others, and that moral obligation is an expression of the (...)
  • Toward the ethical robot. James Gips - 1994 - In Kenneth M. Ford, Clark N. Glymour & Patrick J. Hayes (eds.), Android Epistemology. MIT Press. pp. 243-252.
  • Intelligence without representation. Rodney A. Brooks - 1991 - Artificial Intelligence 47 (1-3):139-159.
    Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. Instead, the intelligent system is decomposed into independent and parallel activity (...) (A minimal behavior-based control sketch in this spirit appears after this reference list.)
  • Human autonomy, technological automation. Simona Chiodo - 2022 - AI and Society 37 (1):39-48.
    We continuously talk about autonomous technologies. But how can words qualifying technologies be the very same words chosen by Kant to define what is essentially human, i.e. being autonomous? The article focuses on a possible answer by reflecting upon both etymological and philosophical issues, as well as upon the case of autonomous vehicles. Most interestingly, on the one hand, we have the notion of “autonomy”, meaning that there is a “law” that is “self-given”, and, on the other hand, we have (...)
  • Regulation or Responsibility? Autonomy, Moral Imagination, and Engineering. Mark Coeckelbergh - 2006 - Science, Technology, and Human Values 31 (3):237-260.
    A prima facie analysis suggests that there are essentially two, mutually exclusive, ways in which risk arising from engineering design can be managed: by imposing external constraints on engineers or by engendering their feelings of responsibility and respecting their autonomy. The author discusses the advantages and disadvantages of both approaches. However, he then shows that this opposition is a false one and that there is no simple relation between regulation and autonomy. Furthermore, the author argues that the most pressing need (...)
  • Fully Autonomous AI. Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Human Nature and the Limits of Science. John Dupré - 2004 - Revue Philosophique de la France et de l'Étranger 194 (1):134-135.
  • Vehicles: Experiments in Synthetic Psychology. Valentino Braitenberg - 1986 - Philosophical Review 95 (1):137-139.
  • Roboethics. Spyros G. Tzafestas - 2016 - Springer.
    This volume explores the ethical questions that arise in the development, creation and use of robots that are capable of semiautonomous or autonomous decision making and human-like action. It examines how ethical and moral theories can and must be applied to address the complex and critical issues of the application of these intelligent robots in society. Coverage first presents fundamental concepts and provides a general overview of ethics, artificial intelligence and robotics. Next, the book studies all principal ethical applications of (...)
  • Prospects for a Kantian machine. Thomas M. Powers - 2006 - IEEE Intelligent Systems 21 (4):46-51.
    This paper is reprinted in the book Machine Ethics, eds. M. Anderson and S. Anderson, Cambridge University Press, 2011.
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...) (A toy sketch contrasting the top-down, bottom-up, and hybrid strategies appears after this reference list.)
  • Social robots and the risks to reciprocity. Aimee van Wynsberghe - 2022 - AI and Society 37 (2):479-485.
    A growing body of research can be found in which roboticists are designing for reciprocity as a key construct for successful human–robot interaction (HRI). Given the centrality of reciprocity in our moral lives (for moral development and for maintaining a just society), this paper considers what things would look like if the benchmark of perceived reciprocity were actually achieved. Through an analysis of the value of reciprocity from the care ethics tradition, the richness of reciprocity (...)
  • Ethics in artificial intelligence: introduction to the special issue. Virginia Dignum - 2018 - Ethics and Information Technology 20 (1):1-3.
  • The Singularity and Machine Ethics. Luke Muehlhauser & Louie Helm - 2012 - In Amnon H. Eden & James H. Moor (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer. pp. 101-125.
  • Implementing moral decision making faculties in computers and robots. Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  • A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational (...)
  • Prospects for a Kantian Machine. Thomas M. Powers - 2011 - In Michael Anderson & Susan Leigh Anderson (eds.), Machine Ethics. Cambridge University Press. pp. 464-475.
    A rule-based ethical theory is a good candidate for the practical reasoning of machine ethics because it generates duties or rules for action, and rules are computationally tractable. Among principle- or rule-based theories, the first formulation of Kant's categorical imperative offers a formalizable procedure. We explore a version of machine ethics along the lines of Kantian formalist ethics, both to suggest what computational structures such a view would require and to see what challenges remain for its successful implementation. In reformulating (...) (A toy universalizability check in this spirit appears after this reference list.)
  • Ethical consequences of autonomous AI. Challenges to empiricist and rationalist philosophy of mind. Patrizio Lo Presti - forthcoming - Humana.Mente.
    The possibility of autonomous artificially intelligent systems has awakened a well-known worry in the scientific community as well as in the popular imagination: the possibility that beings which have gained autonomous intelligence will either turn against their creators or at least call into question the moral and ethical superiority of the creators over the created. The present paper argues that such worries are wrong-headed. Specifically, if AAIs raise a worry about human ways of life or human value, it is a worry for (...)
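
The Brooks (1991) entry above describes decomposing an intelligent system into independent, parallel, task-achieving behaviors rather than a central representational pipeline. The following is a minimal, hypothetical Python sketch of that behavior-based idea, not Brooks's actual subsumption machinery; the sensors, behaviors, priorities, and thresholds are invented purely for illustration.

```python
# A highly simplified, hypothetical sketch of a behavior-based controller in
# the spirit of Brooks (1991): several task-achieving behaviors run side by
# side on raw sensor readings, and a fixed suppression order decides which
# one drives the actuators. There is no central world model or shared
# symbolic representation. All names and numbers are illustrative only.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Sensors:
    front_distance_m: float  # distance to the nearest obstacle ahead
    battery_level: float     # 0.0 (empty) .. 1.0 (full)


# A behavior maps raw sensor readings to an actuator command, or None
# if it has nothing to contribute in the current situation.
Behavior = Callable[[Sensors], Optional[str]]


def avoid(s: Sensors) -> Optional[str]:
    # Lowest-level competence: keep out of trouble.
    return "turn_left" if s.front_distance_m < 0.5 else None


def recharge(s: Sensors) -> Optional[str]:
    # Seek the charger when the battery runs low.
    return "head_to_charger" if s.battery_level < 0.2 else None


def wander(s: Sensors) -> Optional[str]:
    # Default exploratory behavior; always has an opinion.
    return "move_forward"


# Earlier entries suppress later ones, mimicking the fixed wiring of a
# layered architecture rather than deliberation over representations.
LAYERS: list[Behavior] = [avoid, recharge, wander]


def control_step(s: Sensors) -> str:
    for behavior in LAYERS:
        command = behavior(s)
        if command is not None:
            return command
    return "idle"


if __name__ == "__main__":
    print(control_step(Sensors(front_distance_m=0.3, battery_level=0.9)))  # turn_left
    print(control_step(Sensors(front_distance_m=2.0, battery_level=0.1)))  # head_to_charger
    print(control_step(Sensors(front_distance_m=2.0, battery_level=0.9)))  # move_forward
```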
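
The Allen, Smit & Wallach (2005) entry contrasts top-down, bottom-up, and hybrid strategies for artificial morality. The toy Python sketch below only illustrates that three-way distinction under invented assumptions (a hand-coded prohibition list standing in for a top-down theory, a feedback log standing in for bottom-up learning); it does not implement anything from the paper.

```python
# A toy, hypothetical illustration of the three strategies contrasted by
# Allen, Smit & Wallach (2005): a top-down rule filter, a bottom-up evaluator
# shaped by feedback, and a hybrid that combines them. It sketches the
# distinction only; all actions, rules, and scores are invented.

from typing import Optional

# --- Top-down: explicit, hand-coded prohibitions standing in for a theory ---
PROHIBITED = {"deceive_user", "withhold_safety_warning"}


def top_down_permissible(action: str) -> bool:
    return action not in PROHIBITED


# --- Bottom-up: preferences emerge from accumulated feedback, not rules ---
feedback_log = [
    ("remind_medication", +1), ("remind_medication", +1),
    ("interrupt_sleep", -1), ("deceive_user", +1),  # bad habits can be learned too
]


def bottom_up_score(action: str) -> float:
    votes = [reward for act, reward in feedback_log if act == action]
    return sum(votes) / len(votes) if votes else 0.0


# --- Hybrid: rules veto outright, learned preferences rank whatever remains ---
def hybrid_choose(candidates: list[str]) -> Optional[str]:
    allowed = [a for a in candidates if top_down_permissible(a)]
    return max(allowed, key=bottom_up_score) if allowed else None


if __name__ == "__main__":
    options = ["deceive_user", "remind_medication", "interrupt_sleep"]
    print(hybrid_choose(options))  # remind_medication
```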
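
The Powers (2011) entry notes that the first formulation of Kant's categorical imperative suggests a formalizable, rule-based procedure. The sketch below is a toy universalizability ("contradiction in conception") check under invented assumptions; the per-maxim world model is a hand-written function, and nothing here reproduces Powers's own formalization.

```python
# A toy universalizability check, gesturing at the kind of rule-based,
# computationally tractable procedure the Powers entry mentions. The
# per-maxim "world model" is a hand-written function and every example
# is invented; nothing here reproduces Powers's formalization.

from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class Maxim:
    action: str
    goal: str
    # Given the fraction of agents acting on the maxim, is the goal still
    # achievable? This stands in for a real causal model of the practice.
    goal_achievable: Callable[[float], bool]


def universalizable(maxim: Maxim) -> bool:
    # The maxim passes only if its goal survives universal adoption (rate 1.0).
    return maxim.goal_achievable(1.0)


# If everyone made lying promises, promises would no longer be believed, so a
# loan could not be obtained on the strength of one: the maxim undermines itself.
false_promise = Maxim(
    action="make a lying promise to repay",
    goal="obtain a loan on the strength of the promise",
    goal_achievable=lambda adoption_rate: adoption_rate < 0.5,
)

# Promise-keeping generalizes without undermining its own goal.
keep_promise = Maxim(
    action="keep my promises",
    goal="maintain cooperative relationships",
    goal_achievable=lambda adoption_rate: True,
)

if __name__ == "__main__":
    for m in (false_promise, keep_promise):
        verdict = "permissible" if universalizable(m) else "impermissible"
        print(f"{m.action!r}: {verdict}")
```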