  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW] Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  • Moral Responsibility, Technology, and Experiences of the Tragic: From Kierkegaard to Offshore Engineering. Mark Coeckelbergh - 2012 - Science and Engineering Ethics 18 (1):35-48.
    The standard response to engineering disasters like the Deepwater Horizon case is to ascribe full moral responsibility to individuals and to collectives treated as individuals. However, this approach is inappropriate, since concrete action and experience in engineering contexts seldom meet the criteria of our traditional moral theories. Technological action is often distributed rather than individual or collective, we lack full control of the technology and its consequences, and we lack knowledge and are uncertain about these consequences. In this paper, I (...)
  • Drones, information technology, and distance: mapping the moral epistemology of remote fighting. [REVIEW] Mark Coeckelbergh - 2013 - Ethics and Information Technology 15 (2):87-98.
    Ethical reflection on drone fighting suggests that this practice creates not only physical distance but also moral distance: far removed from one’s opponent, it becomes easier to kill. This paper discusses this thesis, frames it as a moral-epistemological problem, and explores the role of information technology in bridging and creating distance. Inspired by a broad range of conceptual and empirical resources including ethics of robotics, psychology, phenomenology, and media reports, it is first argued that drone fighting, like other long-range (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. [REVIEW] Bernd Carsten Stahl - 2004 - Minds and Machines 14 (1):67-83.
    In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-facetted notion which is hard to define comprehensively. However, the (...)
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics. Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  • Minds, brains, and programs. John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-457.
    What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI. According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. (...)
  • Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral (...)
  • Moral Responsibility. Matthew Talbert - 2019 - Stanford Encyclopedia of Philosophy.
    This is the Stanford Encyclopedia of Philosophy entry on moral responsibility.
  • Attributing responsibility to computer systems. William Bechtel - 1985 - Metaphilosophy 16 (4):296-306.
  • Technological delegation: Responsibility for the unintended. Katinka Waelbers - 2009 - Science and Engineering Ethics 15 (1):51-68.
    This article defends three interconnected premises that together demand a new way of dealing with moral responsibility in developing and using technological artifacts. The first premise is that humans increasingly make use of dissociated technological delegation. Second, because technologies do not simply fulfill our actions but rather mediate them, the initial aims are altered and outcomes are often different from those intended. Third, since the outcomes are often unforeseen and unintended, we can no longer simply apply the traditional (modernist) models (...)
  • What Things Do: Philosophical Reflections on Technology, Agency, and Design. Peter-Paul Verbeek - 2005 - Pennsylvania State University Press.
    This paper praises and criticizes Peter-Paul Verbeek's What Things Do. The four things that Verbeek does well are: remind us of the importance of technological things; bring Karl Jaspers into the conversation on technology; explain how technology "co-shapes" experience by reading Bruno Latour's actor-network theory in light of Don Ihde's post-phenomenology; develop a material aesthetics of design. The three things that Verbeek does not do well are: analyze the material conditions in which things are produced; criticize the social-political design (...)
  • Materializing Morality: Design Ethics and Technological Mediation. Peter-Paul Verbeek - 2006 - Science, Technology, and Human Values 31 (3):361-380.
    During the past decade, the “script” concept, indicating how technologies prescribe human actions, has acquired a central place in STS. Until now, the concept has mainly functioned in descriptive settings. This article will deploy it in a normative setting. When technologies coshape human actions, they give material answers to the ethical question of how to act. This implies that engineers are doing “ethics by other means”: they materialize morality. The article will explore the implications of this insight for engineering ethics. (...)
  • The debate on the moral responsibilities of online service providers. Mariarosaria Taddeo & Luciano Floridi - 2016 - Science and Engineering Ethics 22 (6):1575-1603.
    Online service providers (OSPs)—such as AOL, Facebook, Google, Microsoft, and Twitter—significantly shape the informational environment and influence users’ experiences and interactions within it. There is a general agreement on the centrality of OSPs in information societies, but little consensus about what principles should shape their moral responsibilities and practices. In this article, we analyse the main contributions to the debate on the moral responsibilities of OSPs. By endorsing the method of the levels of abstraction, we first analyse the moral responsibilities (...)
  • Designing a Good Life: A Matrix for the Technological Mediation of Morality. [REVIEW] Tsjalling Swierstra & Katinka Waelbers - 2012 - Science and Engineering Ethics 18 (1):157-172.
    Technologies fulfill a social role in the sense that they influence the moral actions of people, often in unintended and unforeseen ways. Scientists and engineers already accept much responsibility for the technological, economic, and environmental aspects of their work. This article asks them to take an extra step and also consider the social role of their products. The aim is to enable engineers to take a prospective responsibility for the future social roles of their technologies by providing them (...)
  • A critique of positive responsibility in computing. James A. Stieb - 2008 - Science and Engineering Ethics 14 (2):219-233.
    It has been claimed that (1) computer professionals should be held responsible for an undisclosed list of “undesirable events” associated with their work and (2) most if not all computer disasters can be avoided by truly understanding responsibility. Programmers, software developers, and other computer professionals should be defended against such vague, counterproductive, and impossible ideals because these imply the mandatory satisfaction of social needs and the equation of ethics with a kind of altruism. The concept of social needs is debatable (...)
  • Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.
    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • Online Responsibility: Bad Samaritanism and the Influence of Internet Mediation. Saskia E. Polder-Verkiel - 2012 - Science and Engineering Ethics 18 (1):117-141.
    In 2008 a young man committed suicide while his webcam was running. 1,500 people apparently watched as the young man lay dying: when people finally made an effort to call the police, it was too late. This closely resembles the case of Kitty Genovese in 1964, where 39 neighbours supposedly watched as an attacker assaulted her and did not call until it was too late. This paper examines the role of internet mediation in cases where people may or may not have been (...)
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci. Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
  • Responsibility Practices and Unmanned Military Technologies. Merel Noorman - 2014 - Science and Engineering Ethics 20 (3):809-826.
    The prospect of increasingly autonomous military robots has raised concerns about the obfuscation of human responsibility. This paper argues that whether or not and to what extent human actors are and will be considered to be responsible for the behavior of robotic systems is and will be the outcome of ongoing negotiations between the various human actors involved. These negotiations are about what technologies should do and mean, but they are also about how responsibility should be interpreted and how it (...)
  • Accountability in a computerized society. Helen Nissenbaum - 1996 - Science and Engineering Ethics 2 (1):25-42.
    This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
  • Critiquing a critique. Keith W. Miller - 2008 - Science and Engineering Ethics 14 (2):245-249.
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Ethical Implications and Accountability of Algorithms. Kirsten Martin - 2018 - Journal of Business Ethics 160 (4):835-850.
    Algorithms silently structure our lives. Algorithms can determine whether someone is hired, promoted, offered a loan, or provided housing as well as determine which political ads and news articles consumers see. Yet, the responsibility for algorithms in these important decisions is not clear. This article identifies whether developers have a responsibility for their algorithms later in use, what those firms are responsible for, and the normative grounding for that responsibility. I conceptualize algorithms as value-laden, rather than neutral, in that algorithms (...)
  • Computers in control: Rational transfer of authority or irresponsible abdication of autonomy? [REVIEW] Arthur Kuflik - 1999 - Ethics and Information Technology 1 (3):173-184.
    To what extent should humans transfer, or abdicate, responsibility to computers? In this paper, I distinguish six different senses of 'responsible' and then consider in which of these senses computers can, and in which they cannot, be said to be responsible for deciding various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater control of our lives: (i) as finite and fallible human beings, there is a limit to how far we can achieve (...)
  • Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Christian Katzenbach, Reuben Binns & Robert Gorwa - 2020 - Big Data and Society 7 (1):1-15.
    As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation. Automated hash-matching and predictive machine learning tools – what we define here as algorithmic moderation systems – are increasingly being deployed to conduct content moderation at scale by major platforms for user-generated content such as Facebook, YouTube and Twitter. This article provides an accessible technical primer on how algorithmic moderation works; examines some (...)
  • Computer systems: Moral entities but not moral agents. [REVIEW] Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • Computer systems and responsibility: A normative look at technological complexity. Deborah G. Johnson & Thomas M. Powers - 2005 - Ethics and Information Technology 7 (2):99-107.
    In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart and Hart (...)
  • Computer Ethics. Deborah G. Johnson - 2005 - In R. G. Frey & Christopher Heath Wellman (eds.), A Companion to Applied Ethics. Oxford, UK: Blackwell. pp. 608-619.
    This chapter contains sections titled: Technology, Ethics, and the Instrumentation of Human Action; The Genus-Species Account; Avoiding the Mistake of Unique Technology; Avoiding the Mistake of the Applied Ethics Model; Conclusion; Acknowledgment.
  • The Boeing 737 MAX: Lessons for Engineering Ethics. Joseph Herkert, Jason Borenstein & Keith Miller - 2020 - Science and Engineering Ethics 26 (6):2957-2974.
    The crash of two 737 MAX passenger aircraft in late 2018 and early 2019, and subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing’s practices and culture. Explanations for the crashes include: design flaws within the MAX’s new flight control software system designed to prevent stalls; internal pressure to keep pace with Boeing’s chief competitor, Airbus; Boeing’s lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the (...)
  • Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them of autonomy in a responsibility-undermining way. (...)
  • Informatics and professional responsibility. Donald Gotterbarn - 2001 - Science and Engineering Ethics 7 (2):221-230.
    Many problems in software development can be traced to a narrow understanding of professional responsibility. The author examines ways in which software developers have tried to avoid accepting responsibility for their work. After cataloguing various types of responsibility avoidance, the author introduces an expanded concept of positive responsibility. It is argued that the adoption of this sense of positive responsibility will reduce many problems in software development.
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Distributed morality in an information society. Luciano Floridi - 2013 - Science and Engineering Ethics 19 (3):727-743.
    The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can (...)
  • Recent work on moral responsibility. John Fischer - 1999 - Ethics 110 (1):93-139.
  • Editors' Overview: Moral Responsibility in Technology and Engineering. Neelke Doorn & Ibo van de Poel - 2012 - Science and Engineering Ethics 18 (1):1-11.
    DOI: 10.1007/s11948-011-9285-z.
  • The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (25th Anniversary Edition with new preface). Wiebe E. Bijker, Thomas P. Hughes & Trevor Pinch (eds.) - 1987 - MIT Press.
  • The imperative of responsibility: in search of an ethics for the technological age. Hans Jonas - 1984 - Chicago: University of Chicago Press.
    Discusses the ethical implications of modern technology and examines the responsibility of humanity for the fate of the world.
  • Human Values and the Design of Computer Technology. Batya Friedman (ed.) - 1997 - Center for the Study of Language and Information.
    Perhaps this is due to the belief that technology has a value-neutral nature, and that issues of value are better left to philosophers.
  • Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - New York: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue (...)
  • Freedom and Resentment. Peter Strawson - 1962 - Proceedings of the British Academy 48:187-211.
    The doyen of living English philosophers, by these reflections, took hold of and changed the outlook of a good many other philosophers, if not quite enough. He did so, essentially, by assuming that talk of freedom and responsibility is talk not of facts or truths, in a certain sense, but of our attitudes. His more explicit concern was to look again at the question of whether determinism and freedom are consistent with one another -- by shifting attention to certain personal (...)
  • Moral responsibility. Andrew Eshleman - 2008 - Stanford Encyclopedia of Philosophy.
    When a person performs or fails to perform a morally significant action, we sometimes think that a particular kind of response is warranted. Praise and blame are perhaps the most obvious forms this reaction might take. For example, one who encounters a car accident may be regarded as worthy of praise for having saved a child from inside the burning car, or alternatively, one may be regarded as worthy of blame for not having used one's mobile phone to call for (...)
  • Computers and Moral Responsibility: A Framework for Ethical Analysis. John Ladd - 1989 - In The Information Web: Ethical and Social Implications of Computer Networking. Boulder, CO: Westview Press. pp. 207-227.
    This chapter will deal with an issue that is as much a problem for moral philosophy as it is for the computer world. My basic theme is that high technology, and computer technology in particular, raises ethical problems of a new sort that require considerable restructuring of our traditional ethical categories. It follows that our job as philosophers is not, as it is often thought to be, simply to apply ready-made categories to new situations; rather, it is to find new (...)
  • Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions. Luciano Floridi - 2016 - Philosophical Transactions of the Royal Society A 374:20160112.
    The concept of distributed moral responsibility (DMR) has a long history. When it is understood as being entirely reducible to the sum of (some) human, individual and already morally loaded actions, then the allocation of DMR, and hence of praise and reward or blame and punishment, may be pragmatically difficult, but not conceptually problematic. However, in distributed environments, it is increasingly possible that a network of agents, some human, some artificial (e.g. a program) and some hybrid (e.g. a group of (...)
  • When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
  • Punishment and Responsibility. H. L. A. Hart - 1968 - Philosophy 45 (172):162-162.