Results for 'Moral agents'

998 found
  1. Emergent Agent Causation.Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
    1 citation
  2. Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    15 citations
  3. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
    3 citations
  4. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature (...)
    1 citation
  5. When is a robot a moral agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of (...)
    72 citations
  6. Are Some Animals Also Moral Agents?Kyle Johannsen - 2019 - Animal Sentience 3 (23/27).
    Animal rights philosophers have traditionally accepted the claim that human beings are unique, but rejected the claim that our uniqueness justifies denying animals moral rights. Humans were thought to be unique specifically because we possess moral agency. In this commentary, I explore the claim that some nonhuman animals are also moral agents, and I take note of its counter-intuitive implications.
  7. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we (...)
    11 citations
  8. From Crooked Wood to Moral Agent: Connecting Anthropology and Ethics in Kant.Jennifer Mensch - 2014 - Estudos Kantianos 2 (1):185-204.
    In this essay I lay out the textual materials surrounding the birth of physical anthropology as a racial science in the eighteenth century with a special focus on the development of Kant's own contributions to the new field. Kant’s contributions to natural history demonstrated his commitment to a physical, mental, and moral hierarchy among the races and I spend some time describing both the advantages he drew from this hierarchy for making sense of the social and political history of (...)
    2 citations
  9. Collective responsibility and collective obligations without collective moral agents.Gunnar Björnsson - 2020 - In Saba Bazargan-Forward & Deborah Tollefsen (eds.), The Routledge Handbook of Collective Responsibility. Routledge.
    It is commonplace to attribute obligations to φ or blameworthiness for φ-ing to groups even when no member has an obligation to φ or is individually blameworthy for not φ-ing. Such non-distributive attributions can seem problematic in cases where the group is not a moral agent in its own right. In response, it has been argued both that non-agential groups can have the capabilities requisite to have obligations of their own, and that group obligations can be understood in terms (...)
    9 citations
  10. Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
    33 citations
  11. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to (...)
    1 citation
  12. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents (...)
    4 citations
  13. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up (...)
  14. Are 'Coalitions of the Willing' Moral Agents?Stephanie Collins - 2014 - Ethics and International Affairs 28 (1):online only.
    In this reply to an article of Toni Erskine's, I argue that coalitions of the willing are moral agents. They can therefore bear responsibility in their own right.
  15. Responsibility, Authority, and the Community of Moral Agents in Domestic and International Criminal Law.Ryan Long - 2014 - International Criminal Law Review 14 (4-5):836–854.
    Antony Duff argues that the criminal law’s characteristic function is to hold people responsible. It only has the authority to do this when the person who is called to account, and those who call her to account, share some prior relationship. In systems of domestic criminal law, this relationship is co-citizenship. The polity is the relevant community. In international criminal law, the relevant community is simply the moral community of humanity. I am sympathetic to his community-based analysis, but argue (...)
  16. Moral Uncertainty, Pure Justifiers, and Agent-Centred Options.Patrick Kaczmarek & Harry R. Lloyd - forthcoming - Australasian Journal of Philosophy.
    Moral latitude is only ever a matter of coincidence on the most popular decision procedure in the literature on moral uncertainty. In all possible choice situations other than those in which two or more options happen to be tied for maximal expected choiceworthiness, Maximize Expected Choiceworthiness implies that only one possible option is uniquely appropriate. A better theory of appropriateness would be more sensitive to the decision maker’s credence in theories that endorse agent-centred prerogatives. In this paper, we (...)
  17. Collective moral obligations: ‘we-reasoning’ and the perspective of the deliberating agent.Anne Schwenkenbecher - 2019 - The Monist 102 (2):151-171.
    Together we can achieve things that we could never do on our own. In fact, there are sheer endless opportunities for producing morally desirable outcomes together with others. Unsurprisingly, scholars have been finding the idea of collective moral obligations intriguing. Yet, there is little agreement among scholars on the nature of such obligations and on the extent to which their existence might force us to adjust existing theories of moral obligation. What interests me in this paper is the (...)
    20 citations
  18. Agent-Regret and the Social Practice of Moral Luck.Jordan MacKenzie - 2017 - Res Philosophica 94 (1):95-117.
    Agent-regret seems to give rise to a philosophical puzzle. If we grant that we are not morally responsible for consequences outside our control (the ‘Standard View’), then agent-regret—which involves self-reproach and a desire to make amends for consequences outside one’s control—appears rationally indefensible. But despite its apparent indefensibility, agent-regret still seems like a reasonable response to bad moral luck. I argue here that the puzzle can be resolved if we appreciate the role that agent-regret plays in a larger social (...)
    13 citations
  19. Collective Agents as Moral Actors.Säde Hormio - forthcoming - In Säde Hormio & Bill Wringe (eds.), Collective Responsibility: Perspectives on Political Philosophy from Social Ontology. Springer.
    How should we make sense of praise and blame and other such reactions towards collective agents like governments, universities, or corporations? Collective agents can be appropriate targets for our moral feelings and judgements because they can maintain and express moral positions of their own. Moral agency requires being capable of recognising moral considerations and reasons. It also necessitates the ability to react reflexively to moral matters, i.e. to take into account new moral (...)
  20. Group agents and moral status: what can we owe to organizations?Adam Lovett & Stefan Riedener - 2021 - Canadian Journal of Philosophy 51 (3):221–238.
    Organizations have neither a right to the vote nor a weighty right to life. We need not enfranchise Goldman Sachs. We should feel few scruples in dissolving Standard Oil. But they are not without rights altogether. We can owe it to them to keep our promises. We can owe them debts of gratitude. Thus, we can owe some things to organizations. But we cannot owe them everything we can owe to people. They seem to have a peculiar, fragmented moral (...)
    5 citations
  21. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality (...)
    292 citations
  22. Can morally ignorant agents care enough?Daniel J. Miller - 2021 - Philosophical Explorations 24 (2):155-173.
    Theorists attending to the epistemic condition on responsibility are divided over whether moral ignorance is ever exculpatory. While those who argue that reasonable expectation is required for blameworthiness often maintain that moral ignorance can excuse, theorists who embrace a quality of will approach to blameworthiness are not sanguine about the prospect of excuses among morally ignorant wrongdoers. Indeed, it is sometimes argued that moral ignorance always reflects insufficient care for what matters morally, and therefore that moral (...)
    2 citations
  23. Moral Status and Agent-Centred Options.Seth Lazar - 2019 - Utilitas 31 (1):83-105.
    If we were required to sacrifice our own interests whenever doing so was best overall, or prohibited from doing so unless it was optimal, then we would be mere sites for the realisation of value. Our interests, not ourselves, would wholly determine what we ought to do. We are not mere sites for the realisation of value — instead we, ourselves, matter unconditionally. So we have options to act suboptimally. These options have limits, grounded in the very same considerations. Though (...)
    4 citations
  24. Does Moral Virtue Constitute a Benefit to the Agent?Brad Hooker - 1996 - In Roger Crisp (ed.), How Should One Live?: Essays on the Virtues. Oxford: Oxford University Press.
    Theories of individual well‐being fall into three main categories: hedonism, the desire‐fulfilment theory, and the list theory (which maintains that there are some things that can benefit a person without increasing the person's pleasure or desire‐fulfilment). The paper briefly explains the answers that hedonism and the desire‐fulfilment theory give to the question of whether being virtuous constitutes a benefit to the agent. Most of the paper is about the list theory's answer.
    43 citations
  25. Group Agents, Moral Competence, and Duty-bearers: The Update Argument.Niels de Haan - 2023 - Philosophical Studies 180 (5-6):1691-1715.
    According to some collectivists, purposive groups that lack decision-making procedures such as riot mobs, friends walking together, or the pro-life lobby can be morally responsible and have moral duties. I focus on plural subject- and we-mode-collectivism. I argue that purposive groups do not qualify as duty-bearers even if they qualify as agents on either view. To qualify as a duty-bearer, an agent must be morally competent. I develop the Update Argument. An agent is morally competent only if the (...)
  26. Agent-Relativity and the Foundations of Moral Theory.Matthew Hammerton - 2017 - Dissertation, Australian National University
  27. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  28. Understanding Moral Judgments: The Role of the Agent’s Characteristics in Moral Evaluations.Emilia Alexandra Antonese - 2015 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 2 (2): 203-213.
    Traditional studies have shown that the moral judgments are influenced by many biasing factors, like the consequences of a behavior, certain characteristics of the agent who commits the act, or the words chosen to describe the behavior. In the present study we investigated a new factor that could bias the evaluation of morally relevant human behavior: the perceived similarity between the participants and the agent described in the moral scenario. The participants read a story about a driver who (...)
  29. Epistemic Authorities and Skilled Agents: A Pluralist Account of Moral Expertise.Federico Bina, Sofia Bonicalzi & Michel Croce - forthcoming - Topoi:1-13.
    This paper explores the concept of moral expertise in the contemporary philosophical debate, with a focus on three accounts discussed across moral epistemology, bioethics, and virtue ethics: an epistemic authority account, a skilled agent account, and a hybrid model sharing key features of the two. It is argued that there are no convincing reasons to defend a monistic approach that reduces moral expertise to only one of these models. A pluralist view is outlined in the attempt to (...)
  30. Manipulated Agents: A Window to Moral Responsibility. [REVIEW]Taylor W. Cyr - 2020 - Philosophical Quarterly 70 (278):207-209.
    Manipulated Agents: A Window to Moral Responsibility. By Alfred R. Mele.
    1 citation
  31. Teleology, agent‐relative value, and 'good'.Mark Schroeder - 2007 - Ethics 117 (2):265-000.
    It is now generally understood that constraints play an important role in commonsense moral thinking and generally accepted that they cannot be accommodated by ordinary, traditional consequentialism. Some have seen this as the most conclusive evidence that consequentialism is hopelessly wrong,1 while others have seen it as the most conclusive evidence that moral common sense is hopelessly paradoxical.2 Fortunately, or so it is widely thought, in the last twenty-five years a new research program, that of Agent-Relative Teleology, has (...)
    82 citations
  32. Mark Schroeder’s Hypotheticalism: agent-neutrality, moral epistemology, and methodology. [REVIEW]Tristram McPherson - 2012 - Philosophical Studies 157 (3):445-453.
    Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.
    8 citations
  33. Taking Seriously the Challenges of Agent-Centered Morality.Hye-Ryoung Kang - 2011 - JOURNAL OF INTERNATIONAL WONKWANG CULTURE 2 (1):43-56.
    Agent-centered morality has been a serious challenge to ethical theories based on agent-neutral morality in defining what is the moral point of view. In this paper, my concern is to examine whether arguments for agent-centered morality, in particular, arguments for agent-centered option, can be justified. -/- After critically examining three main arguments for agent-centered morality, I will contend that although there is a ring of truth in the demands of agent-centered morality, agent-centered morality is more problematic than agent-neutral morality. (...)
  34. Moral Advice and Joint Agency.Eric Wiland - 2018 - In Mark C. Timmons (ed.), Oxford Studies in Normative Ethics Volume 8. Oxford University Press. pp. 102-123.
    There are many alleged problems with trusting another person’s moral testimony, perhaps the most prominent of which is that it fails to deliver moral understanding. Without moral understanding, one cannot do the right thing for the right reason, and so acting on trusted moral testimony lacks moral worth. This chapter, however, argues that moral advice differs from moral testimony, differs from it in a way that enables a defender of moral advice to (...)
  35. Agents, Actions, and Mere Means: A Reply to Critics.Pauline Kleingeld - 2024 - Journal for Ethics and Moral Philosophy / Zeitschrift Für Ethik Und Moralphilosophie 7 (1):165-181.
    The prohibition against using others ‘merely as means’ is one of Kant’s most famous ideas, but it has proven difficult to spell out with precision what it requires of us in practice. In ‘How to Use Someone “Merely as a Means”’ (2020), I proposed a new interpretation of the necessary and sufficient conditions for using someone ‘merely as a means’. I argued that my agent-focused actual consent interpretation has strong textual support and significant advantages over other readings of the (...)
  36. Distinguishing agent-relativity from agent-neutrality.Matthew Hammerton - 2019 - Australasian Journal of Philosophy 97 (2):239-250.
    The agent-relative/agent-neutral distinction is one of the most important in contemporary moral theory. Yet, providing an adequate formal account of it has proven difficult. In this article I defend a new formal account of the distinction, one that avoids various problems faced by other accounts. My account is based on an influential account of the distinction developed by McNaughton and Rawling. I argue that their approach is on the right track but that it succumbs to two serious objections. I (...)
    11 citations
  37. Weeding Out Flawed Versions of Shareholder Primacy: A Reflection on the Moral Obligations That Carry Over from Principals to Agents.Santiago Mejia - 2019 - Business Ethics Quarterly 29 (4):519-544.
    ABSTRACT:The distinction between what I call nonelective obligations and discretionary obligations, a distinction that focuses on one particular thread of the distinction between perfect and imperfect duties, helps us to identify the obligations that carry over from principals to agents. Clarity on this issue is necessary to identify the moral obligations within “shareholder primacy”, which conceives of managers as agents of shareholders. My main claim is that the principal-agent relation requires agents to fulfill nonelective obligations, but (...)
    5 citations
  38. Alfred Mele, Manipulated Agents: A Window into Moral Responsibility. [REVIEW]Robert J. Hartman - 2020 - Journal of Moral Philosophy 17 (5):563-566.
    Review of Manipulated Agents: A Window into Moral Responsibility. By Alfred R. Mele.
  39. Beyond Agent-Regret: Another Attitude for Non-Culpable Failure.Luke Maring - 2021 - Journal of Value Inquiry 10:1-13.
    Imagine a moral agent with the native capacity to act rightly in every kind of circumstance. She will never, that is, find herself thrust into conditions she isn’t equipped to handle. Relationships turned tricky, evolving challenges of parenthood, or living in the midst of a global pandemic—she is never mistaken about what must be done, nor does she lack the skills to do it. When we are thrust into a new kind of circumstance, by contrast, we often need time (...)
  40. Should we Consult Kant when Assessing Agent’s Moral Responsibility for Harm?Friderik Klampfer - 2009 - Balkan Journal of Philosophy 1 (2):131-156.
    The paper focuses on the conditions under which an agent can be justifiably held responsible or liable for the harmful consequences of his or her actions. Kant has famously argued that as long as the agent fulfills his or her moral duty, he or she cannot be blamed for any potential harm that might result from his or her action, no matter how foreseeable these may (have) be(en). I call this the Duty-Absolves-Thesis or DA. I begin by stating the (...)
    2 citations
  41. Moral Patiency Partially Grounds Moral Agency.Dorna Behdadi - manuscript
    This paper argues that, although moral agency and moral patiency are distinct concepts, we have pro tanto normative reasons to ascribe some moral agency to all moral patients. Assuming a practice-focused approach, moral agents are beings that participate in moral responsibility practices. When someone is a participant, we are warranted to take a participant stance toward them. Beings who lack moral agency are instead accounted for by an objective stance. As such, they (...)
  42. Is Agent-Neutral Deontology Possible?Matthew Hammerton - 2017 - Journal of Ethics and Social Philosophy 12 (3):319-324.
    It is commonly held that all deontological moral theories are agent-relative in the sense that they give each agent a special concern that she does not perform acts of a certain type rather than a general concern with the actions of all agents. Recently, Tom Dougherty has challenged this orthodoxy by arguing that agent-neutral deontology is possible. In this article I counter Dougherty's arguments and show that agent-neutral deontology is not possible.
    8 citations
  43. Moral Responsibility and the Strike Back Emotion: Comments on Bruce Waller’s The Stubborn System of Moral Responsibility.Gregg Caruso - forthcoming - Syndicate Philosophy 1 (1).
    In The Stubborn System of Moral Responsibility (2015), Bruce Waller sets out to explain why the belief in individual moral responsibility is so strong. He begins by pointing out that there is a strange disconnect between the strength of philosophical arguments in support of moral responsibility and the strength of philosophical belief in moral responsibility. While the many arguments in favor of moral responsibility are inventive, subtle, and fascinating, Waller points out that even the most (...)
    1 citation
  44. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, (...)
    2 citations
  45. Consequentializing agent‐centered restrictions: A Kantsequentialist approach.Douglas W. Portmore - 2023 - Analytic Philosophy 64 (4):443-467.
    There is, on a given moral view, an agent-centered restriction against performing acts of a certain type if that view prohibits agents from performing an instance of that act-type even to prevent two or more others from each performing a morally comparable instance of that act-type. The fact that commonsense morality includes many such agent-centered restrictions has been seen by several philosophers as a decisive objection against consequentialism. Despite this, I argue that agent-centered restrictions are more plausibly accommodated (...)
    2 citations
  46. Ought, Agents, and Actions.Mark Schroeder - 2011 - Philosophical Review 120 (1):1-41.
    According to a naïve view sometimes apparent in the writings of moral philosophers, ‘ought’ often expresses a relation between agents and actions – the relation that obtains between an agent and an action when that action is what that agent ought to do. It is not part of this naïve view that ‘ought’ always expresses this relation – on the contrary, adherents of the naïve view are happy to allow that ‘ought’ also has an epistemic sense, on which (...)
    110 citations
  47. How to Punish Collective Agents.Anne Schwenkenbecher - 2011 - Ethics and International Affairs.
    Assuming that states can hold moral duties, it can easily be seen that states—just like any other moral agent—can sometimes fail to discharge those moral duties. In the context of climate change examples of states that do not meet their emission reduction targets abound. If individual moral agents do wrong they usually deserve and are liable to some kind of punishment. But how can states be punished for failing to comply with moral duties without (...)
    5 citations
  48. Special agents: Children's autonomy and parental authority.Robert Noggle - 2002 - In David Archard & Colin M. Macleod (eds.), The Moral and Political Status of Children. Oxford University Press. pp. 97--117.
    Cognitive incompetence cannot adequately explain the special character of children's moral status. It is, in fact, because children lack preference structures that are sufficiently stable over time that they are not ’temporally extended agents’. They are best viewed as 'special agents’, and parents have the responsibility of fostering the development of temporally extended agency and other necessary related moral capacities. Parental authority should be exercised with the view to assisting children to acquire the capacities that facilitate (...)
    28 citations
  49. Shifting the Moral Burden: Expanding Moral Status and Moral Agency.L. Syd M. Johnson - 2021 - Health and Human Rights Journal 2 (23):63-73.
    Two problems are considered here. One relates to who has moral status, and the other relates to who has moral responsibility. The criteria for mattering morally have long been disputed, and many humans and nonhuman animals have been considered “marginal cases,” on the contested edges of moral considerability and concern. The marginalization of humans and other species is frequently the pretext for denying their rights, including the rights to health care, to reproductive freedom, and to bodily autonomy. (...)
    1 citation
  50. Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To (...)
    4 citations
Showing 1–50 of 998