Results for 'moral agent'

975 results found
  1. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    16 citations
  2. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
    3 citations
  3. Are current AIs moral agents? Xin Guan - manuscript
    In the following essay, I will argue that the current AIs are not moral agents. I will first criticize the influential argument from sentience advanced by Véliz. According to Véliz, AIs are not moral agents because AIs cannot feel pleasure and pain. However, I will show that moral agency does not necessarily require the ability to be sentient and refute Véliz’s argument. Instead, I will propose an argument from responsibility. First, I will establish the truth that (...)
  4. Emergent Agent Causation. Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist (...)
    1 citation
  5. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of (...)
    74 citations
  6. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop (...)
    12 citations
  7. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral (...)
    1 citation
  8. Are Some Animals Also Moral Agents? Kyle Johannsen - 2019 - Animal Sentience 3 (23/27).
    Animal rights philosophers have traditionally accepted the claim that human beings are unique, but rejected the claim that our uniqueness justifies denying animals moral rights. Humans were thought to be unique specifically because we possess moral agency. In this commentary, I explore the claim that some nonhuman animals are also moral agents, and I take note of its counter-intuitive implications.
  9. Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
    36 citations
  10. Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to banking and (...)
    1 citation
  11. Collective responsibility and collective obligations without collective moral agents. Gunnar Björnsson - 2020 - In Saba Bazargan-Forward & Deborah Tollefsen (eds.), The Routledge Handbook of Collective Responsibility. Routledge.
    It is commonplace to attribute obligations to φ or blameworthiness for φ-ing to groups even when no member has an obligation to φ or is individually blameworthy for not φ-ing. Such non-distributive attributions can seem problematic in cases where the group is not a moral agent in its own right. In response, it has been argued both that non-agential groups can have the capabilities requisite to have obligations of their own, and that group obligations can be understood in (...)
    12 citations
  12. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents. Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, (...)
  13. From Crooked Wood to Moral Agent: Connecting Anthropology and Ethics in Kant. Jennifer Mensch - 2014 - Estudos Kantianos 2 (1):185-204.
    In this essay I lay out the textual materials surrounding the birth of physical anthropology as a racial science in the eighteenth century with a special focus on the development of Kant's own contributions to the new field. Kant’s contributions to natural history demonstrated his commitment to a physical, mental, and moral hierarchy among the races and I spend some time describing both the advantages he drew from this hierarchy for making sense of the social and political history of (...)
    2 citations
  14. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules (...)
    4 citations
  15. Are 'Coalitions of the Willing' Moral Agents? Stephanie Collins - 2014 - Ethics and International Affairs 28 (1):online only.
    In this reply to an article of Toni Erskine's, I argue that coalitions of the willing are moral agents. They can therefore bear responsibility in their own right.
    1 citation
  16. Does Mill Demand Too Much Morality From a Moral Agent? Madhumita Mitra - 2020 - Philosophical Papers 16:141-150.
    In this paper, an attempt has been made to examine Mill’s standpoint against a frequently raised objection to utilitarianism, i.e. that utilitarian morality demands too much morality from a moral agent. Critics claim that utilitarian moral philosophers in maximizing utility ignore the separateness of persons. The utilitarian moral philosophy is claimed to ignore the individuality of a moral agent as well as his special commitments and relationships. I have argued chiefly based on Mill’s "Utilitarianism" and (...)
  17. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and (...)
    294 citations
  18. Agent-Regret and the Social Practice of Moral Luck. Jordan MacKenzie - 2017 - Res Philosophica 94 (1):95-117.
    Agent-regret seems to give rise to a philosophical puzzle. If we grant that we are not morally responsible for consequences outside our control (the ‘Standard View’), then agent-regret—which involves self-reproach and a desire to make amends for consequences outside one’s control—appears rationally indefensible. But despite its apparent indefensibility, agent-regret still seems like a reasonable response to bad moral luck. I argue here that the puzzle can be resolved if we appreciate the role that agent-regret plays (...)
    14 citations
  19. Courage, Evidence, And Epistemic Virtue. Osvil Acosta-Morales - 2006 - Florida Philosophical Review 6 (1):8-16.
    I present here a case against the evidentialist approach that claims that in so far as our interests are epistemic what should guide our belief formation and revision is always a strict adherence to the available evidence. I go on to make the stronger claim that some beliefs based on admittedly “insufficient” evidence may exhibit epistemic virtue. I propose that we consider a form of courage to be an intellectual or epistemic virtue. It is through this notion of courage that (...)
  20. Responsibility, Authority, and the Community of Moral Agents in Domestic and International Criminal Law. Ryan Long - 2014 - International Criminal Law Review 14 (4-5):836-854.
    Antony Duff argues that the criminal law’s characteristic function is to hold people responsible. It only has the authority to do this when the person who is called to account, and those who call her to account, share some prior relationship. In systems of domestic criminal law, this relationship is co-citizenship. The polity is the relevant community. In international criminal law, the relevant community is simply the moral community of humanity. I am sympathetic to his community-based analysis, but argue (...)
  21. Manipulated Agents: A Window to Moral Responsibility. [REVIEW] Taylor W. Cyr - 2020 - Philosophical Quarterly 70 (278):207-209.
    Manipulated Agents: A Window to Moral Responsibility. By Alfred R. Mele.
    1 citation
  22. Collective Agents as Moral Actors. Säde Hormio - 2024 - In Säde Hormio & Bill Wringe (eds.), Collective Responsibility: Perspectives on Political Philosophy from Social Ontology. Springer.
    How should we make sense of praise and blame and other such reactions towards collective agents like governments, universities, or corporations? My argument is that collective agents do not have to qualify as moral agents for us to make sense of their responsibility. Collective agents can be appropriate targets for our moral feelings and judgements because they can maintain and express moral positions of their own. Moral agency requires being capable of recognizing moral considerations and (...)
  23. Moral Uncertainty, Pure Justifiers, and Agent-Centred Options. Patrick Kaczmarek & Harry R. Lloyd - forthcoming - Australasian Journal of Philosophy.
    Moral latitude is only ever a matter of coincidence on the most popular decision procedure in the literature on moral uncertainty. In all possible choice situations other than those in which two or more options happen to be tied for maximal expected choiceworthiness, Maximize Expected Choiceworthiness implies that only one possible option is uniquely appropriate. A better theory of appropriateness would be more sensitive to the decision maker’s credence in theories that endorse agent-centred prerogatives. In this paper, (...)
    1 citation
  24. Group agents and moral status: what can we owe to organizations? Adam Lovett & Stefan Riedener - 2021 - Canadian Journal of Philosophy 51 (3):221–238.
    Organizations have neither a right to the vote nor a weighty right to life. We need not enfranchise Goldman Sachs. We should feel few scruples in dissolving Standard Oil. But they are not without rights altogether. We can owe it to them to keep our promises. We can owe them debts of gratitude. Thus, we can owe some things to organizations. But we cannot owe them everything we can owe to people. They seem to have a peculiar, fragmented moral (...)
    6 citations
  25. Collective moral obligations: ‘we-reasoning’ and the perspective of the deliberating agent. Anne Schwenkenbecher - 2019 - The Monist 102 (2):151-171.
    Together we can achieve things that we could never do on our own. In fact, there are sheer endless opportunities for producing morally desirable outcomes together with others. Unsurprisingly, scholars have been finding the idea of collective moral obligations intriguing. Yet, there is little agreement among scholars on the nature of such obligations and on the extent to which their existence might force us to adjust existing theories of moral obligation. What interests me in this paper is the (...)
    23 citations
  26. Group Agents, Moral Competence, and Duty-bearers: The Update Argument. Niels de Haan - 2023 - Philosophical Studies 180 (5-6):1691-1715.
    According to some collectivists, purposive groups that lack decision-making procedures such as riot mobs, friends walking together, or the pro-life lobby can be morally responsible and have moral duties. I focus on plural subject- and we-mode-collectivism. I argue that purposive groups do not qualify as duty-bearers even if they qualify as agents on either view. To qualify as a duty-bearer, an agent must be morally competent. I develop the Update Argument. An agent is morally competent only if (...)
    1 citation
  27. Moral Status and Agent-Centred Options. Seth Lazar - 2019 - Utilitas 31 (1):83-105.
    If we were required to sacrifice our own interests whenever doing so was best overall, or prohibited from doing so unless it was optimal, then we would be mere sites for the realisation of value. Our interests, not ourselves, would wholly determine what we ought to do. We are not mere sites for the realisation of value — instead we, ourselves, matter unconditionally. So we have options to act suboptimally. These options have limits, grounded in the very same considerations. Though (...)
    4 citations
  28. Can morally ignorant agents care enough? Daniel J. Miller - 2021 - Philosophical Explorations 24 (2):155-173.
    Theorists attending to the epistemic condition on responsibility are divided over whether moral ignorance is ever exculpatory. While those who argue that reasonable expectation is required for blameworthiness often maintain that moral ignorance can excuse, theorists who embrace a quality of will approach to blameworthiness are not sanguine about the prospect of excuses among morally ignorant wrongdoers. Indeed, it is sometimes argued that moral ignorance always reflects insufficient care for what matters morally, and therefore that moral (...)
    3 citations
  29. Does Moral Virtue Constitute a Benefit to the Agent? Brad Hooker - 1998 - In Roger Crisp (ed.), How Should One Live?: Essays on the Virtues. Oxford: Oxford University Press.
    Theories of individual well‐being fall into three main categories: hedonism, the desire‐fulfilment theory, and the list theory (which maintains that there are some things that can benefit a person without increasing the person's pleasure or desire‐fulfilment). The paper briefly explains the answers that hedonism and the desire‐fulfilment theory give to the question of whether being virtuous constitutes a benefit to the agent. Most of the paper is about the list theory's answer.
    42 citations
  30. Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level (...)
    1 citation
  31. Understanding Moral Judgments: The Role of the Agent’s Characteristics in Moral Evaluations. Emilia Alexandra Antonese - 2015 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 2 (2):203-213.
    Traditional studies have shown that the moral judgments are influenced by many biasing factors, like the consequences of a behavior, certain characteristics of the agent who commits the act, or the words chosen to describe the behavior. In the present study we investigated a new factor that could bias the evaluation of morally relevant human behavior: the perceived similarity between the participants and the agent described in the moral scenario. The participants read a story about a (...)
  32. Epistemic Authorities and Skilled Agents: A Pluralist Account of Moral Expertise. Federico Bina, Sofia Bonicalzi & Michel Croce - 2024 - Topoi 43:1053-1065.
    This paper explores the concept of moral expertise in the contemporary philosophical debate, with a focus on three accounts discussed across moral epistemology, bioethics, and virtue ethics: an epistemic authority account, a skilled agent account, and a hybrid model sharing key features of the two. It is argued that there are no convincing reasons to defend a monistic approach that reduces moral expertise to only one of these models. A pluralist view is outlined in the attempt (...)
  33. Agent-Relativity and the Foundations of Moral Theory. Matthew Hammerton - 2017 - Dissertation, Australian National University
  34. Taking Seriously the Challenges of Agent-Centered Morality. Hye-Ryoung Kang - 2011 - Journal of International Wonkwang Culture 2 (1):43-56.
    Agent-centered morality has been a serious challenge to ethical theories based on agent-neutral morality in defining what is the moral point of view. In this paper, my concern is to examine whether arguments for agent-centered morality, in particular, arguments for agent-centered option, can be justified. After critically examining three main arguments for agent-centered morality, I will contend that although there is a ring of truth in the demands of agent-centered morality, agent-centered (...)
  35. Moral Testimony: Another Defense. Xuanpu Zhuang - 2024 - Filosofia Unisinos 25 (2):1-12.
    According to some pessimists, trusting moral testimony is an action in which the agent does not think about moral questions by herself, and thus it is unacceptable. I argue for optimism by giving some reasons to show that moral agents still depend on their own thinking in many cases of moral testimony. Specifically, I argue that testimony is a form of social cooperation: the division of epistemic labor. My strategy is as follows: First, I give a (...)
  36. Distinguishing agent-relativity from agent-neutrality. Matthew Hammerton - 2018 - Australasian Journal of Philosophy 97 (2):239-250.
    The agent-relative/agent-neutral distinction is one of the most important in contemporary moral theory. Yet, providing an adequate formal account of it has proven difficult. In this article I defend a new formal account of the distinction, one that avoids various problems faced by other accounts. My account is based on an influential account of the distinction developed by McNaughton and Rawling. I argue that their approach is on the right track but that it succumbs to two serious (...)
    11 citations
  37. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, (...)
    2 citations
  38. Mark Schroeder’s Hypotheticalism: agent-neutrality, moral epistemology, and methodology. [REVIEW] Tristram McPherson - 2012 - Philosophical Studies 157 (3):445-453.
    Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.
    8 citations
  39. Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical (...)
  40. Is Agent-Neutral Deontology Possible? Matthew Hammerton - 2017 - Journal of Ethics and Social Philosophy 12 (3):319-324.
    It is commonly held that all deontological moral theories are agent-relative in the sense that they give each agent a special concern that she does not perform acts of a certain type rather than a general concern with the actions of all agents. Recently, Tom Dougherty has challenged this orthodoxy by arguing that agent-neutral deontology is possible. In this article I counter Dougherty's arguments and show that agent-neutral deontology is not possible.
    8 citations
  41. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary (...)
    1 citation
  42. Should we Consult Kant when Assessing Agent’s Moral Responsibility for Harm? Friderik Klampfer - 2009 - Balkan Journal of Philosophy 1 (2):131-156.
    The paper focuses on the conditions under which an agent can be justifiably held responsible or liable for the harmful consequences of his or her actions. Kant has famously argued that as long as the agent fulfills his or her moral duty, he or she cannot be blamed for any potential harm that might result from his or her action, no matter how foreseeable these may (have) be(en). I call this the Duty-Absolves-Thesis or DA. I begin by (...)
    2 citations
  43. Beyond Agent-Regret: Another Attitude for Non-Culpable Failure. Luke Maring - 2021 - Journal of Value Inquiry 57 (3):463-475.
    Imagine a moral agent with the native capacity to act rightly in every kind of circumstance. She will never, that is, find herself thrust into conditions she isn’t equipped to handle. Relationships turned tricky, evolving challenges of parenthood, or living in the midst of a global pandemic—she is never mistaken about what must be done, nor does she lack the skills to do it. When we are thrust into a new kind of circumstance, by contrast, we often need (...)
  44. Weeding Out Flawed Versions of Shareholder Primacy: A Reflection on the Moral Obligations That Carry Over from Principals to Agents. Santiago Mejia - 2019 - Business Ethics Quarterly 29 (4):519-544.
    The distinction between what I call nonelective obligations and discretionary obligations, a distinction that focuses on one particular thread of the distinction between perfect and imperfect duties, helps us to identify the obligations that carry over from principals to agents. Clarity on this issue is necessary to identify the moral obligations within “shareholder primacy”, which conceives of managers as agents of shareholders. My main claim is that the principal-agent relation requires agents to fulfill nonelective obligations, but it does (...)
    6 citations
  45. Attention, Moral Skill, and Algorithmic Recommendation. Nick Schuster & Seth Lazar - 2024 - Philosophical Studies.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through (...)
    1 citation
  46. Alfred Mele, Manipulated Agents: A Window into Moral Responsibility. [REVIEW] Robert J. Hartman - 2020 - Journal of Moral Philosophy 17 (5):563-566.
    Review of Manipulated Agents: A Window into Moral Responsibility. By Alfred R. Mele.
  47. Interpersonal Moral Luck and Normative Entanglement. Daniel Story - 2019 - Ergo: An Open Access Journal of Philosophy 6:601-616.
    I introduce an underdiscussed type of moral luck, which I call interpersonal moral luck. Interpersonal moral luck characteristically occurs when the actions of other moral agents, qua morally evaluable actions, affect an agent’s moral status in a way that is outside of that agent’s capacity to control. I suggest that interpersonal moral luck is common in collective contexts involving shared responsibility and has interesting distinctive features. I also suggest that many philosophers are (...)
    5 citations
  48. Moral Luck and The Unfairness of Morality. Robert Hartman - 2019 - Philosophical Studies 176 (12):3179-3197.
    Moral luck occurs when factors beyond an agent’s control positively affect how much praise or blame she deserves. Kinds of moral luck are differentiated by the source of lack of control such as the results of her actions, the circumstances in which she finds herself, and the way in which she is constituted. Many philosophers accept the existence of some of these kinds of moral luck but not others, because, in their view, the existence of only (...)
    32 citations
  49. Doubts about Moral Perception. Pekka Väyrynen - 2018 - In Anna Bergqvist & Robert Cowan (eds.), Evaluative Perception. Oxford, United Kingdom: Oxford University Press. pp. 109-28.
    This paper defends doubts about the existence of genuine moral perception, understood as the claim that at least some moral properties figure in the contents of perceptual experience. Standard examples of moral perception are better explained as transitions in thought whose degree of psychological immediacy varies with how readily non-moral perceptual inputs, jointly with the subject's background moral beliefs, training, and habituation, trigger the kinds of phenomenological responses that moral agents are normally disposed to (...)
    22 citations
  50. Moral Responsibility and Existential Attitudes. Paul Russell - 2022 - In Dana Kay Nelkin & Derk Pereboom (eds.), The Oxford Handbook of Moral Responsibility. New York: Oxford University Press. pp. 519-543.
    We might describe the philosophical issue of human freedom and moral responsibility as an existential metaphysical problem. Problems of this kind are not just a matter of theoretical interest and curiosity: They address issues that we care about and that affect us. They are, more specifically, relevant to the significance and value that we attach to our lives and the way that we lead them. According to the orthodox view, there is a tidy connection between skepticism and pessimism. Skepticism (...)
    1 citation
Showing results 1-50 of 975