Citations
  • Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Joshua David Greene - 2013 - New York: Penguin Press.
    Our brains were designed for tribal life, for getting along with a select group of others and for fighting off everyone else. But modern times have forced the world’s tribes into a shared space, resulting in epic clashes of values along with unprecedented opportunities. As the world shrinks, the moral lines that divide us become more salient and more puzzling. We fight over everything from tax codes to gay marriage to global warming, and we wonder where, if at all, we (...)
  • Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  • A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles through a Dialogue with Ethicists. Susan Leigh Anderson & Michael Anderson - 2011 - In Michael Anderson & Susan Leigh Anderson (eds.), Machine Ethics. Cambridge University Press.
  • Implementing moral decision making faculties in computers and robots. Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  • The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Jonathan Haidt - 2001 - Psychological Review 108 (4):814-834.
    Research on moral judgment has been dominated by rationalist models, in which moral judgment is thought to be caused by moral reasoning. The author gives 4 reasons for considering the hypothesis that moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached. The social intuitionist model is presented as an alternative to rationalist models. The model is a social model in that it deemphasizes the private reasoning done (...)
  • Designing Robots for Care: Care Centered Value-Sensitive Design. Aimee van Wynsberghe - 2013 - Science and Engineering Ethics 19 (2):407-433.
    The prospective robots in healthcare intended to be included within the conclave of the nurse-patient relationship—what I refer to as care robots—require rigorous ethical reflection to ensure their design and introduction do not impede the promotion of values and the dignity of patients at such a vulnerable and sensitive time in their lives. The ethical evaluation of care robots requires insight into the values at stake in the healthcare tradition. What’s more, given the stage of their development and lack of (...)
  • Four Kinds of Ethical Robots. James Moor - 2009 - Philosophy Now 72:12-14.
  • The entanglement of trust and knowledge on the web. Judith Simon - 2010 - Ethics and Information Technology 12 (4):343-355.
    In this paper I use philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this fundamental entanglement, we do not trust blindly. Instead we make use of knowledge to rationally place or withdraw trust. We use knowledge about the sources of epistemic content as well as general background knowledge to assess epistemic claims. Hence, although we may (...)
  • Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW] Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories (...)
  • Four Faces of Moral Realism. Stephen Finlay - 2007 - Philosophy Compass 2 (6):820-849.
    This essay explains for a general philosophical audience the central issues and strategies in the contemporary moral realism debate. It critically surveys the contribution of some recent scholarship, representing expressivist and pragmatist nondescriptivism, subjectivist and nonsubjectivist naturalism, nonnaturalism and error theory. Four different faces of ‘moral realism’ are distinguished: semantic, ontological, metaphysical, and normative. The debate is presented as taking shape under dialectical pressure from the demands of capturing the moral appearances and reconciling morality with our understanding of (...)
  • A Darwinian dilemma for realist theories of value. Sharon Street - 2006 - Philosophical Studies 127 (1):109-166.
    Contemporary realist theories of value claim to be compatible with natural science. In this paper, I call this claim into question by arguing that Darwinian considerations pose a dilemma for these theories. The main thrust of my argument is this. Evolutionary forces have played a tremendous role in shaping the content of human evaluative attitudes. The challenge for realist theories of value is to explain the relation between these evolutionary influences on our evaluative attitudes, on the one hand, and the (...)
  • Ethical disagreement, ethical objectivism and moral indeterminacy. Russ Shafer-Landau - 1994 - Philosophy and Phenomenological Research 54 (2):331-344.
  • Nothing more than feelings? The role of emotions in moral judgment. David Pizarro - 2000 - Journal for the Theory of Social Behaviour 30 (4):355–375.
    In this paper, I review the primary arguments for the traditional position that holds emotions as antagonistic to moral judgments. I argue that this position is untenable given the information about emotions and emotional processes that has emerged in the psychological literature of recent years. I then offer a theoretical model of emotive moral judgment that takes a closer look at how emotions, specifically empathy, play an integral role in the process of moral judgment. I argue that emotions should (...)
  • Virtue ethics and situationist personality psychology. Maria Merritt - 2000 - Ethical Theory and Moral Practice 3 (4):365-383.
    In this paper I examine and reply to a deflationary challenge brought against virtue ethics. The challenge comes from critics who are impressed by recent psychological evidence suggesting that much of what we take to be virtuous conduct is in fact elicited by narrowly specific social settings, as opposed to being the manifestation of robust individual character. In answer to the challenge, I suggest a conception of virtue that openly acknowledges the likelihood of its deep, ongoing dependence upon particular social (...)
  • The role of trust in knowledge. John Hardwig - 1991 - Journal of Philosophy 88 (12):693-708.
    Most traditional epistemologists see trust and knowledge as deeply antithetical: we cannot know by trusting in the opinions of others; knowledge must be based on evidence, not mere trust. I argue that this is badly mistaken. Modern knowers cannot be independent and self-reliant. In most disciplines, those who do not trust cannot know. Trust is thus often more epistemically basic than empirical evidence or logical argument, for the evidence and the argument are available only through trust. Finally, since the reliability (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Trust and antitrust. Annette Baier - 1986 - Ethics 96 (2):231-260.
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  • Toward the ethical robot. James Gips - 1994 - In Kenneth M. Ford, Clark N. Glymour & Patrick J. Hayes (eds.), Android Epistemology. MIT Press. pp. 243-252.
  • I, Robot. Isaac Asimov - 1950 - Doubleday.
    A classic collection of interlocking tales chronicles the near-future development of the robot and features models that have the ability to read minds, experience human emotions, and take over the world--and, perhaps, render humankind itself obsolete.
  • The Nature, Importance, and Difficulty of Machine Ethics. James Moor - 2006 - IEEE Intelligent Systems 21:18-21.
  • Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - New York: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue (...)
  • Governing lethal behavior in autonomous robots. Ronald C. Arkin - 2009 - Chapman & Hall/CRC Press.
  • Robot rights? Towards a social-relational justification of moral consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Should we welcome robot teachers? Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
    Current uses of robots in classrooms are reviewed and used to characterise four scenarios: Robot as Classroom Teacher; Robot as Companion and Peer; Robot as Care-eliciting Companion; and Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as (...)
  • Robot Morals and Human Ethics. Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • A Vindication of the Rights of Machines. David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  • Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • Persons, situations, and virtue ethics. John M. Doris - 1998 - Noûs 32 (4):504-530.
  • Artificial moral agents: an intercultural perspective. Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
    In this paper I will argue that artificial moral agents are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information rich and information poor countries. I will give a limiting definition of AMAs first, and discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs being able to adjust to their user’s behavior will lead us (...)
  • Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
  • Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right From Wrong: Oxford University Press, 2009, 273 pp, ISBN: 978-0-19-537404-9. [REVIEW] Vincent Wiegel - 2010 - Ethics and Information Technology 12 (4):359-361.
  • Service robots, care ethics, and design. A. van Wynsberghe - 2016 - Ethics and Information Technology 18 (4):311-321.
    It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our work places: according to the International Federation for Robots, there will be approx 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind I suggest the use of an approach for incorporating ethics into the (...)
  • Ethicist as Designer: A Pragmatic Approach to Ethics in the Lab. Aimee van Wynsberghe & Scott Robbins - 2014 - Science and Engineering Ethics 20 (4):947-961.
    Contemporary literature investigating the significant impact of technology on our lives leads many to conclude that ethics must be a part of the discussion at an earlier stage in the design process i.e., before a commercial product is developed and introduced. The problem, however, is the question regarding how ethics can be incorporated into an earlier stage of technological development and it is this question that we argue has not yet been answered adequately. There is no consensus amongst scholars as (...)
  • Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders’ seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that (...)
  • This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative (...)
  • Machine Metaethics. Susan Leigh Anderson - 2011 - In Michael Anderson & Susan Leigh Anderson (eds.), Machine Ethics. Cambridge University Press.