References

  • Evolutionary game theory. J. McKenzie Alexander & Edward N. Zalta - forthcoming - Stanford Encyclopedia of Philosophy.
  • Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • The shared circuits model (SCM): How control, mirroring, and simulation can enable imitation, deliberation, and mindreading. Susan Hurley - 2008 - Behavioral and Brain Sciences 31 (1):1-22.
    Imitation, deliberation, and mindreading are characteristically human sociocognitive skills. Research on imitation and its role in social cognition is flourishing across various disciplines. Imitation is surveyed in this target article under headings of behavior, subpersonal mechanisms, and functions of imitation. A model is then advanced within which many of the developments surveyed can be located and explained. The shared circuits model (SCM) explains how imitation, deliberation, and mindreading can be enabled by subpersonal mechanisms of control, mirroring, and simulation. It is (...)
  • Games students play: Incorporating the prisoner's dilemma in teaching business ethics. [REVIEW] Kevin Gibson - 2003 - Journal of Business Ethics 48 (1):53-64.
    The so-called "Prisoner's Dilemma" is often referred to in business ethics, but probably not well understood. This article has three parts: (1) I claim that models derived from game theory are significant in the field for discussions of prudential ethics and the practical decisions managers make; (2) I discuss using them as a practical pedagogical exercise and some of the lessons generated; (3) more speculatively, I suggest that they are useful in discussions of corporate personhood.
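As a companion to Gibson's point that the Prisoner's Dilemma is often invoked but not well understood, the following minimal sketch spells out the standard payoff structure. The payoff values (T=5, R=3, P=1, S=0) are the usual textbook parametrization, not figures taken from Gibson's article; the snippet only shows that defection strictly dominates for each player while mutual cooperation Pareto-dominates the resulting equilibrium.

```python
# Standard one-shot Prisoner's Dilemma with textbook payoffs (an illustrative
# assumption, not drawn from Gibson 2003).
PAYOFFS = {  # (row move, column move) -> (row payoff, column payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(opponent_move: str) -> str:
    """Return the row player's payoff-maximizing move against a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

for opp in "CD":
    print(f"best reply to {opp}: {best_reply(opp)}")        # 'D' in both cases
print("mutual defection pays   ", PAYOFFS[("D", "D")])       # (1, 1) -- the equilibrium
print("mutual cooperation pays ", PAYOFFS[("C", "C")])       # (3, 3) -- Pareto-superior
```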
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a (...)
  • Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg.
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • Thinking with things: An embodied enactive account of mind–technology interaction. Anco Peeters - 2019 - Dissertation, University of Wollongong.
    Technological artefacts have, in recent years, invited increasingly intimate ways of interaction. But surprisingly little attention has been devoted to how such interactions, like with wearable devices or household robots, shape our minds, cognitive capacities, and moral character. In this thesis, I develop an embodied, enactive account of mind–technology interaction that takes the reciprocal influence of artefacts on minds seriously. First, I examine how recent developments in philosophy of technology can inform the phenomenology of mind–technology interaction as seen through an (...)
  • Designing Virtuous Sex Robots. Anco Peeters & Pim Haselager - 2019 - International Journal of Social Robotics:1-12.
    We propose that virtue ethics can be used to address ethical issues central to discussions about sex robots. In particular, we argue virtue ethics is well equipped to focus on the implications of sex robots for human moral character. Our evaluation develops in four steps. First, we present virtue ethics as a suitable framework for the evaluation of human–robot relationships. Second, we show the advantages of our virtue ethical account of sex robots by comparing it to current instrumentalist approaches, showing (...)
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Introduction: Machine Ethics and the Ethics of Building Intelligent Machines. [REVIEW] Marcello Guarini - 2013 - Topoi 32 (2):213-215.
  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational (...)
  • Designing a machine to learn about the ethics of robotics: the N-reasons platform. [REVIEW] Peter Danielson - 2010 - Ethics and Information Technology 12 (3):251-261.
    We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. This N-Reasons platform builds on web based surveys and experiments, to enable participants to make better ethical decisions. Their decisions are better than our existing surveys in three ways. First, they are social decisions supported by reasons. Second, these results are based on weaker premises, as no exogenous expertise (aside from that (...)
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  • Is Rational and Voluntary Constraint Possible? Joe Mintoff - 2000 - Dialogue 39 (2):339-.
    Duncan MacIntosh has argued that David Gauthier's notion of a constrained maximization disposition faces a dilemma. For if such a disposition is revocable, it is no longer rational come the time to act on it, and so acting on it is not (as Gauthier argues) rational; but if it is not revocable, acting on it is not voluntary. This paper is a response to MacIntosh's dilemma. I introduce an account of rational intention of a type which has become increasingly and (...)
  • Prisoner's Dilemma Popularized: Game Theory and Ethical Progress. Peter Danielson - 1995 - Dialogue 34 (2):295-.
    Is game theory good for us? This may seem an odd question. In the strict sense, game theory—the axiomatic account of interaction between rational agents—is as morally neutral as arithmetic. But the popularization of game theory as a way of thinking about social interaction is far from neutral. Consider the contrast between characterizing bargaining over distribution as a “zero-sum society” and focussing on “win-win” cooperative solutions. These reflections bring us to the book under review, Prisoner's Dilemma, a popular introduction to (...)
  • Computing machinery and morality. Blay Whitby - 2008 - AI and Society 22 (4):551-563.
    Artificial Intelligence (AI) is a technology widely used to support human decision-making. Current areas of application include financial services, engineering, and management. A number of attempts to introduce AI decision support systems into areas which more obviously include moral judgement have been made. These include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals. Responding to these developments raises a complex set of moral questions. This paper proposes a clearer replacement question (...)
  • How Braess' paradox solves Newcomb's problem. A. D. Irvine - 1993 - International Studies in the Philosophy of Science 7 (2):141-160.
    Newcomb's problem is regularly described as a problem arising from equally defensible yet contradictory models of rationality. Braess' paradox is regularly described as nothing more than the existence of non-intuitive (but ultimately non-contradictory) equilibrium points within physical networks of various kinds. Yet it can be shown that Newcomb's problem is structurally identical to Braess' paradox. Both are instances of a well-known result in game theory, namely that equilibria of non-cooperative games are generally Pareto-inefficient. Newcomb's problem is simply a limiting (...)
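The abstract's claim that such equilibria are non-intuitive yet Pareto-inefficient can be made concrete with the textbook Braess network (the standard 4000-driver example used in expositions, not a construction taken from Irvine's paper):

```python
# Classic Braess network: adding a zero-cost shortcut produces a new
# equilibrium that is worse for every driver.
N = 4000  # drivers travelling from S to E

def travel_time(route: str, loads: dict) -> float:
    """Trip time in minutes on a route, given the loads on the two congestible edges."""
    times = {
        "S-A-E":   loads["SA"] / 100 + 45,
        "S-B-E":   45 + loads["BE"] / 100,
        "S-A-B-E": loads["SA"] / 100 + 0 + loads["BE"] / 100,  # A->B is the added zero-cost link
    }
    return times[route]

# Without the A->B link, drivers split evenly and each trip takes 65 minutes.
before = travel_time("S-A-E", {"SA": N / 2, "BE": N / 2})

# With the link, everyone routing via S-A-B-E is an equilibrium: a single driver
# who deviates to either two-edge route still faces a fully loaded congestible edge.
loads_after = {"SA": N, "BE": N}
after = travel_time("S-A-B-E", loads_after)
assert all(travel_time(r, loads_after) >= after for r in ("S-A-E", "S-B-E"))

print(f"equilibrium trip time without the link: {before:.0f} minutes")  # 65
print(f"equilibrium trip time with the link:    {after:.0f} minutes")   # 80
```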
  • From machine ethics to computational ethics. Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • The robustness of altruism as an evolutionary strategy. Scott Woodcock & Joseph Heath - 2002 - Biology and Philosophy 17 (4):567-590.
    Kin selection, reciprocity and group selection are widely regarded as evolutionary mechanisms capable of sustaining altruism among humans and other cooperative species. Our research indicates, however, that these mechanisms are only particular examples of a broader set of evolutionary possibilities. In this paper we present the results of a series of simple replicator simulations, run on variations of the 2-player prisoner's dilemma, designed to illustrate the wide range of scenarios under which altruism proves to be robust under evolutionary pressures. The set of (...)
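A minimal discrete-time replicator dynamic on a repeated two-player Prisoner's Dilemma, in the spirit of the simulations described above; the strategy set, payoff values, number of rounds, and generation count are illustrative assumptions, not Woodcock and Heath's parameters:

```python
# Replicator dynamics over three strategies in a repeated Prisoner's Dilemma
# (illustrative sketch; payoffs and strategies are textbook defaults).
R, S, T, P = 3, 0, 5, 1      # standard stage-game payoffs
ROUNDS = 10                   # repetitions per pairwise interaction

def play(strat_a, strat_b) -> float:
    """Average per-round payoff to strat_a when repeatedly matched with strat_b."""
    total, last_a, last_b = 0, "C", "C"
    for _ in range(ROUNDS):
        a, b = strat_a(last_b), strat_b(last_a)
        total += {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]
        last_a, last_b = a, b
    return total / ROUNDS

strategies = {
    "altruist":    lambda opp_last: "C",       # unconditional cooperator
    "defector":    lambda opp_last: "D",       # unconditional defector
    "tit_for_tat": lambda opp_last: opp_last,  # copy the opponent's last move
}

shares = {name: 1 / 3 for name in strategies}  # equal initial population shares

for _ in range(60):
    # expected payoff of each strategy against the current population mix
    fitness = {a: sum(shares[b] * play(strategies[a], strategies[b]) for b in strategies)
               for a in strategies}
    mean = sum(shares[a] * fitness[a] for a in strategies)
    # replicator update: a strategy's share grows in proportion to relative fitness
    shares = {a: shares[a] * fitness[a] / mean for a in strategies}

print({name: round(share, 3) for name, share in shares.items()})
```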
  • Artificial morality and artificial law. Lothar Philipps - 1993 - Artificial Intelligence and Law 2 (1):51-63.
    The article investigates the interplay of moral rules in computer simulation. The investigation is based on two situations which are well-known to game theory: the prisoner's dilemma and the game of Chicken. The prisoner's dilemma can be taken to represent contractual situations, the game of Chicken represents a competitive situation on the one hand and the provision for a common good on the other. Unlike the rules usually used in game theory, each player knows the other's strategy. In that way, (...)
  • Planning and the stability of intention: A comment. Laura DeHelian & Edward F. McClennen - 1993 - Minds and Machines 3 (3):319-333.
    Michael Bratman's restricted two-tier approach to rationalizing the stability of intentions contrasts with an alternative view of planning, for which all of the following claims are made: (1) it shares with Bratman's restricted two-tier approach the virtue of reducing the magnitude of Smart's problem; (2) it, rather than the unrestricted two-tier approach, is what is argued for in McClennen (1990); (3) there does not appear to be anything in the central analysis that Bratman has provided of plans and intentions (both (...)
  • Playing with ethics: Games, norms and moral freedom. Peter Danielson - 2005 - Topoi 24 (2):221-227.
    Morality is serious yet it needs to be reconciled with the free play of alternatives that characterizes rational and ethical agency. Beginning with a sketch of the seriousness of morality modeled as a constraint, this paper introduces a technical conception of play as degrees of freedom. We consider two ways to apply game theory to ethics, rationalist and evolutionary game theory, contrasting the way they model moral constraint. Freedom in the rationalist account is problematic, subverting willful commitment. In the evolutionary (...)
  • Implementing moral decision making faculties in computers and robots. Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  • Skepticism and Information. Eric T. Kerr & Duncan Pritchard - 2012 - In Hilmi Demir (ed.), Philosophy of Engineering and Technology Volume 8. Springer.
    Philosophers of information, according to Luciano Floridi (The philosophy of information. Oxford University Press, Oxford, 2010, p 32), study how information should be “adequately created, processed, managed, and used.” A small number of epistemologists have employed the concept of information as a cornerstone of their theoretical framework. How this concept can be used to make sense of seemingly intractable epistemological problems, however, has not been widely explored. This paper examines Fred Dretske’s information-based epistemology, in particular his response to radical epistemological (...)
  • Privacy, deontic epistemic action logic and software agents. V. Wiegel, M. J. Van den Hoven & G. J. C. Lokhorst - 2005 - Ethics and Information Technology 7 (4):251-264.
    In this paper we present an executable approach to model interactions between agents that involve sensitive, privacy-related information. The approach is formal and based on deontic, epistemic and action logic. It is conceptually related to the Belief-Desire-Intention model of Bratman. Our approach uses the concept of sphere as developed by Waltzer to capture the notion that information is provided mostly with restrictions regarding its application. We use software agent technology to create an executable approach. Our agents hold beliefs about the (...)
  • Must Constrained Maximizers Be Uncharitable? Jordan Howard Sobel - 1996 - Dialogue 35 (2):241-254.
    By his definition of them, David Gauthier's co-operative constrained maximizers are not necessarily unsharing and disposed to exclude straight maximizers from benefits of their co-operation. Here is Gauthier's full and exact account, his official account, of constrained maximization.