References in: Moral Mechanisms, Philosophy and Technology 27 (1):47-60 (2014)

  • Wild Justice: The Moral Lives of Animals. Marc Bekoff & Jessica Pierce - 2009 - University of Chicago Press.
    Scientists have long counseled against interpreting animal behavior in terms of human emotions, warning that such anthropomorphizing limits our ability to understand animals as they really are. Yet what are we to make of a female gorilla in a German zoo who spent days mourning the death of her baby? Or a wild female elephant who cared for a younger one after she was injured by a rambunctious teenage male? Or a rat who refused to push a lever for food (...)
  • Braintrust: What Neuroscience Tells Us About Morality. Patricia S. Churchland - 2011 - Princeton University Press.
    What is morality? Where does it come from? And why do most of us heed its call most of the time? In Braintrust, neurophilosophy pioneer Patricia Churchland argues that morality originates in the biology of the brain. She describes the "neurobiological platform of bonding" that, modified by evolutionary pressures and cultural values, has led to human styles of moral behavior. The result is a provocative genealogy of morals that asks us to reevaluate the priority given to religion, absolute rules, and (...)
  • A Vindication of the Rights of Machines. David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  • On the Morality of Artificial Agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Machines and the Moral Community. Erica L. Neely - 2013 - Philosophy and Technology 27 (1):97-111.
    A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as to any (...)
  • Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False. Thomas Nagel - 2012 - New York: Oxford University Press.
    The modern materialist approach to life has conspicuously failed to explain such central mind-related features of our world as consciousness, intentionality, meaning, and value. This failure to account for something so integral to nature as mind, argues philosopher Thomas Nagel, is a major problem, threatening to unravel the entire naturalistic world picture, extending to biology, evolutionary theory, and cosmology. Since minds are features of biological systems that have developed through evolution, the standard materialist version of evolutionary biology is fundamentally incomplete. (...)
  • Computationalism: Still the Only Game in Town: A Reply to Swiatczak’s “Conscious Representations: An Intractable Problem for the Computational Theory of Mind”. [REVIEW] David Davenport - 2012 - Minds and Machines 22 (3):183-190.
    Mental representations, Swiatczak (Minds Mach 21:19–32, 2011) argues, are fundamentally biochemical and their operations depend on consciousness; hence the computational theory of mind, based as it is on multiple realisability and purely syntactic operations, must be wrong. Swiatczak, however, is mistaken. Computation, properly understood, can afford descriptions/explanations of any physical process, and since Swiatczak accepts that consciousness has a physical basis, his argument against computationalism must fail. Of course, we may not have much idea how consciousness (itself a rather (...)
  • The Machine Question: Critical Perspectives on AI, Robots, and Ethics. David J. Gunkel - 2012 - MIT Press.
    One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question" -- consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental (...)
  • Moral Appearances: Emotions, Robots, and Human Morality. [REVIEW] Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that (...)