Results for 'Artificial Moral Agents'

1000+ found
  1. Artificial Morality: Making of the Artificial Moral Agents.Petar Nurkić & Marija Kušić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Abstract: Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and (...)
  2. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two (...)
  3. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering (...)
  4. Artificial Moral Agents Are Infeasible with Foreseeable Technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    5 citations
  5. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could (...)
  6. The Rise of Artificial Intelligence and the Crisis of Moral Passivity.Berman Chan - forthcoming - AI and Society:1-3.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue firstly that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all (...)
  7. Autonomous Reboot: The Challenges of Artificial Moral Agency and the Ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical (...)
  8. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of (...)
    1 citation
  9. Moral Encounters of the Artificial Kind: Towards a Non-Anthropocentric Account of Machine Moral Agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient (...)
  10. Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability.Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the (...)
  11. Are Some Animals Also Moral Agents?Kyle Johannsen - 2019 - Animal Sentience 3 (23/27).
    Animal rights philosophers have traditionally accepted the claim that human beings are unique, but rejected the claim that our uniqueness justifies denying animals moral rights. Humans were thought to be unique specifically because we possess moral agency. In this commentary, I explore the claim that some nonhuman animals are also moral agents, and I take note of its counter-intuitive implications.
  12. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible (...) agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises and address objections.
    8 citations
  13. Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such (...)
    3 citations
  14. When is a Robot a Moral Agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of (...)
    34 citations
  15. Collective Responsibility and Collective Obligations Without Collective Moral Agents.Gunnar Björnsson - forthcoming - In Saba Bazargan-Forward & Deborah Tollefsen (eds.), Handbook of Collective Responsibility. Routledge.
    It is commonplace to attribute obligations to φ or blameworthiness for φ-ing to groups even when no member has an obligation to φ or is individually blameworthy for not φ-ing. Such non-distributive attributions can seem problematic in cases where the group is not a moral agent in its own right. In response, it has been argued both that non-agential groups can have the capabilities requisite to have obligations of their own, and that group obligations can be understood in terms (...)
  16. Are 'Coalitions of the Willing' Moral Agents?Stephanie Collins - 2014 - Ethics and International Affairs 28 (1):online only.
    In this reply to an article of Toni Erskine's, I argue that coalitions of the willing are moral agents. They can therefore bear responsibility in their own right.
  17. Ethical and Moral Concerns Regarding Artificial Intelligence in Law and Medicine.Soaad Hossain - 2018 - Journal of Undergraduate Life Sciences 12 (1):10.
    This paper summarizes the seminar AI in Medicine in Context: Hopes? Nightmares? that was held at the Centre for Ethics at the University of Toronto on October 17, 2017, with special guest assistant professor and neurosurgeon Dr. Sunit Das. The paper discusses the key points from Dr. Das' talk. Specifically, it discusses Dr. Das' perspective on the ethical and moral issues that were experienced from applying artificial intelligence (AI) in law and how such issues can also arise (...)
  18. Alfred Mele, Manipulated Agents: A Window Into Moral Responsibility. [REVIEW]Robert J. Hartman - forthcoming - Journal of Moral Philosophy.
    Review of Manipulated Agents: A Window into Moral Responsibility. By Alfred R. Mele.
  19. Manufacturing Morality: A General Theory of Moral Agency Grounding Computational Implementations: The ACTWith Model.Jeffrey White - 2013 - In Floares (ed.), Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing (...) agents comes into focus. This chapter explores these issues, and from its results details a novel approach to meeting the given conditions in a simple architecture of information processing.
  20. Trusting the (Ro)Botic Other: By Assumption?Paul B. de Laat - 2015 - SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines in (...)
  21. Responsibility, Authority, and the Community of Moral Agents in Domestic and International Criminal Law.Ryan Long - 2014 - International Criminal Law Review 14 (4-5):836-854.
    Antony Duff argues that the criminal law’s characteristic function is to hold people responsible. It only has the authority to do this when the person who is called to account, and those who call her to account, share some prior relationship. In systems of domestic criminal law, this relationship is co-citizenship. The polity is the relevant community. In international criminal law, the relevant community is simply the moral community of humanity. I am sympathetic to his community-based analysis, but argue (...)
  22. Manipulated Agents: A Window to Moral Responsibility. [REVIEW]Taylor W. Cyr - 2020 - Philosophical Quarterly 70 (278):207-209.
    Manipulated Agents: A Window to Moral Responsibility. By Alfred R. Mele.
  23. Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
  24. Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents (APA 2013).Christophe Menant - 2013 - American Philosophical Association Newsletter on Philosophy and Computers 13 (1):30-34.
    The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question “can machines think?” We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into “can AAs generate meanings like humans do?” We correspondingly present the TT, the CRA (...)
    4 citations
  25. Artificial Free Will: The Responsibility Strategy and Artificial Agents.Sven Delarivière - 2016 - Apeiron Student Journal of Philosophy (Portugal) 7:175-203.
    Both a traditional notion of free will, present in human beings, and artificial intelligence are often argued to be inherently incompatible with determinism. Contrary to these criticisms, this paper defends that an account of free will compatible with determinism, the responsibility strategy (coined here) specifically, is a variety of free will worth wanting as well as a variety that is possible to (in principle) artificially construct. First, freedom will be defined and related to ethics. With that in mind, the (...)
  26. Meaning Generation for Animals, Humans and Artificial Agents. An Evolutionary Perspective on the Philosophy of Information. (IS4SI 2017).Christophe Menant - manuscript
    Meanings are present everywhere in our environment and within ourselves. But these meanings do not exist by themselves. They are associated to information and have to be created, to be generated by agents. The Meaning Generator System (MGS) has been developed on a system approach to model meaning generation in agents following an evolutionary perspective. The agents can be natural or artificial. The MGS generates meaningful information (a meaning) when it receives information that has a connection (...)
  27. Intelligence Via Ultrafilters: Structural Properties of Some Intelligence Comparators of Deterministic Legg-Hutter Agents.Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose (...)
    1 citation
  28. A Value-Sensitive Design Approach to Intelligent Agents.Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. New York, NY, USA: CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology, Value-Sensitive Design, and explores its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design (...)
    6 citations
  29. Joint Duties and Global Moral Obligations.Anne Schwenkenbecher - 2013 - Ratio 26 (3):310-328.
    In recent decades, concepts of group agency and the morality of groups have increasingly been discussed by philosophers. Notions of collective or joint duties have been invoked especially in the debates on global justice, world poverty and climate change. This paper enquires into the possibility and potential nature of moral duties individuals in unstructured groups may hold together. It distinguishes between group agents and groups of people which – while not constituting a collective agent – are nonetheless capable (...)
    16 citations
  30. A Metacognitive Approach to Trust and a Case Study: Artificial Agency.Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in line with a naturalized virtue epistemology. (Sosa, Carter) We advance a Bayesian model with two components: (i) confidence in the decision and (ii) model uncertainty. To trust (...)
  31. Justice and the Tendency Towards Good: The Role of Custom in Hume’s Theory of Moral Motivation.James Chamberlain - forthcoming - Hume Studies.
    Given the importance of sympathetic pleasures within Hume’s account of approval and moral motivation, why does Hume think we feel obliged to act justly on those occasions when we know that doing so will benefit nobody? I argue that Hume uses the case of justice as evidence for a key claim regarding all virtues. Hume does not think we approve of token virtuous actions, whether natural or artificial, because they cause or aim to cause happiness in others. It (...)
  32. Can Humanoid Robots Be Moral?Sanjit Chakraborty - 2018 - Ethics in Science, Environment and Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and (...)
  33. Współzależność analizy etycznej i etyki [The Interdependence of Ethical Analysis and Ethics].John Ladd - 1973 - Etyka 11:139-158.
    AI designers endeavour to improve ‘autonomy’ in artificial intelligent devices, as recent developments show. This chapter firstly argues against attributing metaphysical attitudes to AI and, simultaneously, in favor of improving autonomous AI which has been enabled to respect autonomy in human agents. This seems to be the only responsible way of making further advances in the field of autonomous social AI. Let us examine what is meant by claims such as designing our artificial alter egos and sharing (...)
  34. One Decade of Universal Artificial Intelligence.Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67--88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound, and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (...)
    3 citations
  35. Understanding and Augmenting Human Morality: The Actwith Model of Conscience.Jeffrey White - 2009 - In L. Magnani (ed.), computational intelligence.
    Abstract. Recent developments, both in the cognitive sciences and in world events, bring special emphasis to the study of morality. The cognitive sciences, spanning neurology, psychology, and computational intelligence, offer substantial advances in understanding the origins and purposes of morality. Meanwhile, world events urge the timely synthesis of these insights with traditional accounts that can be easily assimilated and practically employed to augment moral judgment, both to solve current problems and to direct future action. The object of (...)
  36. Social Machinery and Intelligence.Nello Cristianini, James Ladyman & Teresa Scantamburlo - manuscript
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of (...)
  37. Bridging The Emissions Gap: A Plea For Taking Up The Slack.Anne Schwenkenbecher - 2013 - Philosophy and Public Issues - Filosofia E Questioni Pubbliche 3 (1):273-301.
    With the existing commitments to climate change mitigation, global warming is likely to exceed 2°C and to trigger irreversible and harmful threshold effects. The difference between the reductions necessary to keep the 2°C limit and those reductions countries have currently committed to is called the ‘emissions gap’. I argue that capable states not only have a moral duty to make voluntary contributions to bridge that gap, but that complying states ought to make up for the failures of some other (...)
    3 citations
  38. Building Machines That Learn and Think About Morality.Christopher Burr & Geoff Keeling - 2018 - In Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine (...)
  39. Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition.Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots (...)
    2 citations
  40. Reflections on Emotions, Imagination, and Moral Reasoning Toward an Integrated, Multidisciplinary Approach to Moral Cognition.Wayne Christensen & John Sutton - 2012 - In Robyn Langdon & Catriona Mackenzie (eds.), Emotions, Imagination, and Moral Reasoning. Psychology Press. pp. 327-347.
    Beginning with the problem of integrating diverse disciplinary perspectives on moral cognition, we argue that the various disciplines have an interest in developing a common conceptual framework for moral cognition research. We discuss issues arising in the other chapters in this volume that might serve as focal points for future investigation and as the basis for the eventual development of such a framework. These include the role of theory in binding together diverse phenomena and the role of (...)
    2 citations
  41. Ethics, Prosperity, and Society: Moral Evaluation Using Virtue Ethics and Utilitarianism.Aditya Hegde, Vibhav Agarwal & Shrisha Rao - 2020 - 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI 2020).
    Modelling ethics is critical to understanding and analysing social phenomena. However, prior literature either incorporates ethics into agent strategies or uses it for evaluation of agent behaviour. This work proposes a framework that models both ethical decision making and evaluation, using virtue ethics and utilitarianism. In an iteration, agents can use either the classical Continuous Prisoner's Dilemma or a new type of interaction called moral interaction, where agents donate to or steal from other agents. We (...)
  42. Corporate Crocodile Tears? On the Reactive Attitudes of Corporate Agents.Gunnar Björnsson & Kendy Hess - 2017 - Philosophy and Phenomenological Research 94 (2):273–298.
    Recently, a number of people have argued that certain entities embodied by groups of agents themselves qualify as agents, with their own beliefs, desires, and intentions; even, some claim, as moral agents. However, others have independently argued that fully-fledged moral agency involves a capacity for reactive attitudes such as guilt and indignation, and these capacities might seem beyond the ken of “collective” or “ corporate ” agents. Individuals embodying such agents can of course (...)
    16 citations
  43. Karma, Moral Responsibility and Buddhist Ethics.Bronwyn Finnigan - forthcoming - In Manuel Vargas & John Doris (eds.), Oxford Handbook of Moral Psychology.
    The Buddha taught that there is no self. He also accepted a version of the doctrine of karmic rebirth, according to which good and bad actions accrue merit and demerit respectively and where this determines the nature of the agent’s next life and explains some of the beneficial or harmful occurrences in that life. But how is karmic rebirth possible if there are no selves? If there are no selves, it would seem there are no agents that could be (...)
  44. AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe (...)
  45. Strawson, Moral Responsibility, and the "Order of Explanation": An Intervention.Patrick Todd - 2016 - Ethics 127 (1):208-240.
    P.F. Strawson’s (1962) “Freedom and Resentment” has provoked a wide range of responses, both positive and negative, and an equally wide range of interpretations. In particular, beginning with Gary Watson, some have seen Strawson as suggesting a point about the “order of explanation” concerning moral responsibility: it is not that it is appropriate to hold agents responsible because they are morally responsible, rather, it is ... well, something else. Such claims are often developed in different ways, but one (...)
    12 citations
  46. Does False Consciousness Necessarily Preclude Moral Blameworthiness?: The Refusal of the Women Anti-Suffragists.Lee Wilson - forthcoming - Hypatia.
    Social philosophers often invoke the concept of false consciousness in their analyses, referring to a set of evidence-resistant, ignorant attitudes held by otherwise sound epistemic agents, systematically occurring in virtue of, and motivating them to perpetuate, structural oppression. But there is a worry that appealing to the notion in questions of responsibility for the harm suffered by members of oppressed groups is victim-blaming. Individuals under false consciousness allegedly systematically fail the relevant rationality and epistemic conditions due to structural distortions (...)
  47. Making Metaethics Work for AI: Realism and Anti-Realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. Amsterdam, Netherlands: IOS Press. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
  48. Kant Does Not Deny Resultant Moral Luck.Robert J. Hartman - 2019 - Midwest Studies in Philosophy 43 (1):136-150.
    It is almost unanimously accepted that Kant denies resultant moral luck—that is, he denies that the lucky consequence of a person’s action can affect how much praise or blame she deserves. Philosophers often point to the famous good will passage at the beginning of the Groundwork to justify this claim. I argue, however, that this passage does not support Kant’s denial of resultant moral luck. Subsequently, I argue that Kant allows agents to be morally responsible for certain (...)
    1 citation
  49. The Moral Grounds of Reasonably Mistaken Self‐Defense.Renée Jorgensen Bolinger - forthcoming - Philosophy and Phenomenological Research.
    Some, but not all, of the mistakes a person makes when acting in apparently necessary self-defense are reasonable: we take them not to violate the rights of the apparent aggressor. I argue that this is explained by duties grounded in agents' entitlements to a fair distribution of the risk of suffering unjust harm. I suggest that the content of these duties is filled in by a social signaling norm, and offer some moral constraints on the form such a (...)
  50. Artificial Consciousness: From Impossibility to Multiplicity.Chuanfei Chin - 2017 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017. Berlin: Springer. pp. 3-18.
    How has multiplicity superseded impossibility in philosophical challenges to artificial consciousness? I assess a trajectory in recent debates on artificial consciousness, in which metaphysical and explanatory challenges to the possibility of building conscious machines lead to epistemological concerns about the multiplicity underlying ‘what it is like’ to be a conscious creature or be in a conscious state. First, I analyse earlier challenges which claim that phenomenal consciousness cannot arise, or cannot be built, in machines. These are based on (...)