Results for 'Moral agents'

991 found
  1. Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour and humans will retain full responsibility.
    15 citations
  2. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
    3 citations
  3. Emergent Agent Causation.Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
    1 citation
  4. When is a robot a moral agent.John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper, Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of (...)
    73 citations
  5. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature (...)
    1 citation
  6. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we (...)
    11 citations
  7. Are Some Animals Also Moral Agents?Kyle Johannsen - 2019 - Animal Sentience 3 (23/27).
    Animal rights philosophers have traditionally accepted the claim that human beings are unique, but rejected the claim that our uniqueness justifies denying animals moral rights. Humans were thought to be unique specifically because we possess moral agency. In this commentary, I explore the claim that some nonhuman animals are also moral agents, and I take note of its counter-intuitive implications.
  8. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to (...)
    1 citation
  9. Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
    33 citations
  10. Collective responsibility and collective obligations without collective moral agents.Gunnar Björnsson - 2020 - In Saba Bazargan-Forward & Deborah Tollefsen (eds.), The Routledge Handbook of Collective Responsibility. Routledge.
    It is commonplace to attribute obligations to φ or blameworthiness for φ-ing to groups even when no member has an obligation to φ or is individually blameworthy for not φ-ing. Such non-distributive attributions can seem problematic in cases where the group is not a moral agent in its own right. In response, it has been argued both that non-agential groups can have the capabilities requisite to have obligations of their own, and that group obligations can be understood in terms (...)
    11 citations
  11. Does Mill Demand Too Much Morality From a Moral Agent?Madhumita Mitra - 2020 - Philosophical Papers 16:141-150.
    In this paper, an attempt has been made to examine Mill’s standpoint against a frequently raised objection to utilitarianism, namely that utilitarian morality demands too much morality from a moral agent. Critics claim that, in maximizing utility, utilitarian moral philosophers ignore the separateness of persons. Utilitarian moral philosophy is claimed to ignore the individuality of a moral agent as well as his special commitments and relationships. I have argued chiefly based on Mill’s "Utilitarianism" and "On Liberty" (...)
  12. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up (...)
  13. From Crooked Wood to Moral Agent: Connecting Anthropology and Ethics in Kant.Jennifer Mensch - 2014 - Estudos Kantianos 2 (1):185-204.
    In this essay I lay out the textual materials surrounding the birth of physical anthropology as a racial science in the eighteenth century with a special focus on the development of Kant's own contributions to the new field. Kant’s contributions to natural history demonstrated his commitment to a physical, mental, and moral hierarchy among the races and I spend some time describing both the advantages he drew from this hierarchy for making sense of the social and political history of (...)
    2 citations
  14. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents (...)
    4 citations
  15. Moral Uncertainty, Pure Justifiers, and Agent-Centred Options.Patrick Kaczmarek & Harry R. Lloyd - forthcoming - Australasian Journal of Philosophy.
    Moral latitude is only ever a matter of coincidence on the most popular decision procedure in the literature on moral uncertainty. In all possible choice situations other than those in which two or more options happen to be tied for maximal expected choiceworthiness, Maximize Expected Choiceworthiness implies that only one possible option is uniquely appropriate. A better theory of appropriateness would be more sensitive to the decision maker’s credence in theories that endorse agent-centred prerogatives. In this paper, we (...)
  16. Are 'Coalitions of the Willing' Moral Agents?Stephanie Collins - 2014 - Ethics and International Affairs 28 (1):online only.
    In this reply to an article of Toni Erskine's, I argue that coalitions of the willing are moral agents. They can therefore bear responsibility in their own right.
  17. Collective moral obligations: ‘we-reasoning’ and the perspective of the deliberating agent.Anne Schwenkenbecher - 2019 - The Monist 102 (2):151-171.
    Together we can achieve things that we could never do on our own. In fact, there are seemingly endless opportunities for producing morally desirable outcomes together with others. Unsurprisingly, scholars have been finding the idea of collective moral obligations intriguing. Yet, there is little agreement among scholars on the nature of such obligations and on the extent to which their existence might force us to adjust existing theories of moral obligation. What interests me in this paper is the (...)
    20 citations
  18. Agent-Regret and the Social Practice of Moral Luck.Jordan MacKenzie - 2017 - Res Philosophica 94 (1):95-117.
    Agent-regret seems to give rise to a philosophical puzzle. If we grant that we are not morally responsible for consequences outside our control (the ‘Standard View’), then agent-regret—which involves self-reproach and a desire to make amends for consequences outside one’s control—appears rationally indefensible. But despite its apparent indefensibility, agent-regret still seems like a reasonable response to bad moral luck. I argue here that the puzzle can be resolved if we appreciate the role that agent-regret plays in a larger social (...)
    13 citations
  19. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality (...)
    293 citations
  20. Responsibility, Authority, and the Community of Moral Agents in Domestic and International Criminal Law.Ryan Long - 2014 - International Criminal Law Review 14 (4-5):836-854.
    Antony Duff argues that the criminal law’s characteristic function is to hold people responsible. It only has the authority to do this when the person who is called to account, and those who call her to account, share some prior relationship. In systems of domestic criminal law, this relationship is co-citizenship. The polity is the relevant community. In international criminal law, the relevant community is simply the moral community of humanity. I am sympathetic to his community-based analysis, but argue (...)
  21. Collective Agents as Moral Actors.Säde Hormio - forthcoming - In Säde Hormio & Bill Wringe (eds.), Collective Responsibility: Perspectives on Political Philosophy from Social Ontology. Springer.
    How should we make sense of praise and blame and other such reactions towards collective agents like governments, universities, or corporations? Collective agents can be appropriate targets for our moral feelings and judgements because they can maintain and express moral positions of their own. Moral agency requires being capable of recognising moral considerations and reasons. It also necessitates the ability to react reflexively to moral matters, i.e. to take into account new moral (...)
  22. Group agents and moral status: what can we owe to organizations?Adam Lovett & Stefan Riedener - 2021 - Canadian Journal of Philosophy 51 (3):221-238.
    Organizations have neither a right to the vote nor a weighty right to life. We need not enfranchise Goldman Sachs. We should feel few scruples in dissolving Standard Oil. But they are not without rights altogether. We can owe it to them to keep our promises. We can owe them debts of gratitude. Thus, we can owe some things to organizations. But we cannot owe them everything we can owe to people. They seem to have a peculiar, fragmented moral (...)
    5 citations
  23. Moral Status and Agent-Centred Options.Seth Lazar - 2019 - Utilitas 31 (1):83-105.
    If we were required to sacrifice our own interests whenever doing so was best overall, or prohibited from doing so unless it was optimal, then we would be mere sites for the realisation of value. Our interests, not ourselves, would wholly determine what we ought to do. We are not mere sites for the realisation of value — instead we, ourselves, matter unconditionally. So we have options to act suboptimally. These options have limits, grounded in the very same considerations. Though (...)
    4 citations
  24. Group Agents, Moral Competence, and Duty-bearers: The Update Argument.Niels de Haan - 2023 - Philosophical Studies 180 (5-6):1691-1715.
    According to some collectivists, purposive groups that lack decision-making procedures such as riot mobs, friends walking together, or the pro-life lobby can be morally responsible and have moral duties. I focus on plural subject- and we-mode-collectivism. I argue that purposive groups do not qualify as duty-bearers even if they qualify as agents on either view. To qualify as a duty-bearer, an agent must be morally competent. I develop the Update Argument. An agent is morally competent only if the (...)
    1 citation
  25. Can morally ignorant agents care enough?Daniel J. Miller - 2021 - Philosophical Explorations 24 (2):155-173.
    Theorists attending to the epistemic condition on responsibility are divided over whether moral ignorance is ever exculpatory. While those who argue that reasonable expectation is required for blameworthiness often maintain that moral ignorance can excuse, theorists who embrace a quality of will approach to blameworthiness are not sanguine about the prospect of excuses among morally ignorant wrongdoers. Indeed, it is sometimes argued that moral ignorance always reflects insufficient care for what matters morally, and therefore that moral (...)
    2 citations
  26. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  27. Does Moral Virtue Constitute a Benefit to the Agent?Brad Hooker - 1996 - In Roger Crisp (ed.), How Should One Live?: Essays on the Virtues. Oxford: Oxford University Press.
    Theories of individual well‐being fall into three main categories: hedonism, the desire‐fulfilment theory, and the list theory (which maintains that there are some things that can benefit a person without increasing the person's pleasure or desire‐fulfilment). The paper briefly explains the answers that hedonism and the desire‐fulfilment theory give to the question of whether being virtuous constitutes a benefit to the agent. Most of the paper is about the list theory's answer.
    42 citations
  28. Manipulated Agents: A Window to Moral Responsibility. [REVIEW]Taylor W. Cyr - 2020 - Philosophical Quarterly 70 (278):207-209.
    Manipulated Agents: A Window to Moral Responsibility. By Alfred R. Mele.
    1 citation
  29. Understanding Moral Judgments: The Role of the Agent’s Characteristics in Moral Evaluations.Emilia Alexandra Antonese - 2015 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 2 (2): 203-213.
    Traditional studies have shown that moral judgments are influenced by many biasing factors, like the consequences of a behavior, certain characteristics of the agent who commits the act, or the words chosen to describe the behavior. In the present study we investigated a new factor that could bias the evaluation of morally relevant human behavior: the perceived similarity between the participants and the agent described in the moral scenario. The participants read a story about a driver who (...)
  30. Epistemic Authorities and Skilled Agents: A Pluralist Account of Moral Expertise.Federico Bina, Sofia Bonicalzi & Michel Croce - forthcoming - Topoi:1-13.
    This paper explores the concept of moral expertise in the contemporary philosophical debate, with a focus on three accounts discussed across moral epistemology, bioethics, and virtue ethics: an epistemic authority account, a skilled agent account, and a hybrid model sharing key features of the two. It is argued that there are no convincing reasons to defend a monistic approach that reduces moral expertise to only one of these models. A pluralist view is outlined in the attempt to (...)
  31. Agent-Relativity and the Foundations of Moral Theory.Matthew Hammerton - 2017 - Dissertation, Australian National University
  32. Taking Seriously the Challenges of Agent-Centered Morality.Hye-Ryoung Kang - 2011 - Journal of International Wonkwang Culture 2 (1):43-56.
    Agent-centered morality has been a serious challenge to ethical theories based on agent-neutral morality in defining what is the moral point of view. In this paper, my concern is to examine whether arguments for agent-centered morality, in particular, arguments for agent-centered options, can be justified. After critically examining three main arguments for agent-centered morality, I will contend that although there is a ring of truth in the demands of agent-centered morality, agent-centered morality is more problematic than agent-neutral morality. (...)
  33. Mark Schroeder’s Hypotheticalism: agent-neutrality, moral epistemology, and methodology. [REVIEW]Tristram McPherson - 2012 - Philosophical Studies 157 (3):445-453.
    Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.
    8 citations
  34. Distinguishing agent-relativity from agent-neutrality.Matthew Hammerton - 2018 - Australasian Journal of Philosophy 97 (2):239-250.
    The agent-relative/agent-neutral distinction is one of the most important in contemporary moral theory. Yet, providing an adequate formal account of it has proven difficult. In this article I defend a new formal account of the distinction, one that avoids various problems faced by other accounts. My account is based on an influential account of the distinction developed by McNaughton and Rawling. I argue that their approach is on the right track but that it succumbs to two serious objections. I (...)
    11 citations
  35. Is Agent-Neutral Deontology Possible?Matthew Hammerton - 2017 - Journal of Ethics and Social Philosophy 12 (3):319-324.
    It is commonly held that all deontological moral theories are agent-relative in the sense that they give each agent a special concern that she does not perform acts of a certain type rather than a general concern with the actions of all agents. Recently, Tom Dougherty has challenged this orthodoxy by arguing that agent-neutral deontology is possible. In this article I counter Dougherty's arguments and show that agent-neutral deontology is not possible.
    8 citations
  36. Weeding Out Flawed Versions of Shareholder Primacy: A Reflection on the Moral Obligations That Carry Over from Principals to Agents.Santiago Mejia - 2019 - Business Ethics Quarterly 29 (4):519-544.
    The distinction between what I call nonelective obligations and discretionary obligations, a distinction that focuses on one particular thread of the distinction between perfect and imperfect duties, helps us to identify the obligations that carry over from principals to agents. Clarity on this issue is necessary to identify the moral obligations within “shareholder primacy”, which conceives of managers as agents of shareholders. My main claim is that the principal-agent relation requires agents to fulfill nonelective obligations, but (...)
    5 citations
  37. How to Punish Collective Agents.Anne Schwenkenbecher - 2011 - Ethics and International Affairs.
    Assuming that states can hold moral duties, it can easily be seen that states—just like any other moral agent—can sometimes fail to discharge those moral duties. In the context of climate change, examples of states that do not meet their emission reduction targets abound. If individual moral agents do wrong, they usually deserve and are liable to some kind of punishment. But how can states be punished for failing to comply with moral duties without (...)
    5 citations
  38. Beyond Agent-Regret: Another Attitude for Non-Culpable Failure.Luke Maring - 2021 - Journal of Value Inquiry 10:1-13.
    Imagine a moral agent with the native capacity to act rightly in every kind of circumstance. She will never, that is, find herself thrust into conditions she isn’t equipped to handle. Relationships turned tricky, evolving challenges of parenthood, or living in the midst of a global pandemic—she is never mistaken about what must be done, nor does she lack the skills to do it. When we are thrust into a new kind of circumstance, by contrast, we often need time (...)
  39. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, (...)
    2 citations
  40. Moral Facts do not Supervene on Non-Moral Qualitative Facts.Frank Hong - 2024 - Erkenntnis:1-11.
    It is very natural to think that if two people, x and y, are qualitatively identical and have committed qualitatively identical actions, then it cannot be the case that one has committed something wrong whereas the other did not. That is to say, if x and y differ in their moral status, then it must be because x and y are qualitatively different, and not simply because x is identical to x and not identical to y. In this fictional (...)
  41. Interpersonal Moral Luck and Normative Entanglement.Daniel Story - 2019 - Ergo: An Open Access Journal of Philosophy 6:601-616.
    I introduce an underdiscussed type of moral luck, which I call interpersonal moral luck. Interpersonal moral luck characteristically occurs when the actions of other moral agents, qua morally evaluable actions, affect an agent’s moral status in a way that is outside of that agent’s capacity to control. I suggest that interpersonal moral luck is common in collective contexts involving shared responsibility and has interesting distinctive features. I also suggest that many philosophers are already (...)
    5 citations
  42. Agents, Actions, and Mere Means: A Reply to Critics.Pauline Kleingeld - 2024 - Journal for Ethics and Moral Philosophy / Zeitschrift für Ethik und Moralphilosophie 7 (1):165-181.
    The prohibition against using others ‘merely as means’ is one of Kant’s most famous ideas, but it has proven difficult to spell out with precision what it requires of us in practice. In ‘How to Use Someone “Merely as a Means”’ (2020), I proposed a new interpretation of the necessary and sufficient conditions for using someone ‘merely as a means’. I argued that my agent-focused actual consent interpretation has strong textual support and significant advantages over other readings of the (...)
  43. Artificial virtuous agents in a multi-agent tragedy of the commons.Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide (...)
    1 citation
  44. The Unity of the Moral Domain.Jeremy David Fix - forthcoming - European Journal of Philosophy.
    What is the function of morality—what is it all about? What is the basis of morality—what explains our moral agency and patiency? This essay defends a unique Kantian answer to these questions. Morality is about securing our independence from each other by giving each other equal discretion over whether and how we interact. The basis of our moral agency and patiency is practical reason. The first half addresses objections that this account cannot explain the moral patiency of (...)
  45. Alfred Mele, Manipulated Agents: A Window into Moral Responsibility. [REVIEW]Robert J. Hartman - 2020 - Journal of Moral Philosophy 17 (5):563-566.
    Review of Manipulated Agents: A Window into Moral Responsibility. By Alfred R. Mele.
  46. Should we Consult Kant when Assessing Agent’s Moral Responsibility for Harm?Friderik Klampfer - 2009 - Balkan Journal of Philosophy 1 (2):131-156.
    The paper focuses on the conditions under which an agent can be justifiably held responsible or liable for the harmful consequences of his or her actions. Kant has famously argued that as long as the agent fulfills his or her moral duty, he or she cannot be blamed for any potential harm that might result from his or her action, no matter how foreseeable these may (have) be(en). I call this the Duty-Absolves-Thesis or DA. I begin by stating the (...)
    2 citations
  47. Moral Luck and The Unfairness of Morality.Robert J. Hartman - 2019 - Philosophical Studies 176 (12):3179-3197.
    Moral luck occurs when factors beyond an agent’s control positively affect how much praise or blame she deserves. Kinds of moral luck are differentiated by the source of lack of control such as the results of her actions, the circumstances in which she finds herself, and the way in which she is constituted. Many philosophers accept the existence of some of these kinds of moral luck but not others, because, in their view, the existence of only some (...)
    29 citations
  48. Moral Dependence: Reliance on Moral Testimony.Philip J. Nickel - 2002 - Dissertation, UCLA
    Moral dependence is taking another person's assertion or "testimony" that C as a reason to believe C (where C is some moral claim), such that whatever justificatory force is associated with the person's testimony endures or remains as one's reason for believing C. People are justified in relying on one another's testimony in non-moral matters. The dissertation takes up the question whether the same is true for moral beliefs. My method is to divide the topic into (...)
  49. Geoengineering, Agent-Regret, and the Lesser of Two Evils Argument.Toby Svoboda - 2015 - Environmental Ethics 37 (2):207-220.
    According to the “Lesser of Two Evils Argument,” deployment of solar radiation management (SRM) geoengineering in a climate emergency would be morally justified because it likely would be the best option available. A prominent objection to this argument is that a climate emergency might constitute a genuine moral dilemma in which SRM would be impermissible even if it was the best option. However, while conceiving of a climate emergency as a moral dilemma accounts for some ethical concerns about (...)
  50. Moral theory and its role in everyday moral thought and action.Brad Hooker - 2018 - In Aaron Zimmerman, Karen Jones & Mark Timmons (eds.), Routledge Handbook on Moral Epistemology. New York: Routledge. pp. 387-400.
    This paper starts by characterising moral requirements and everyday thought. Then ways in which moral requirements shape everyday thought are identified, including the way internalised moral requirements prevent some possible actions from even being considered. The paper then explains that everyday moral thought might be structured by dispositions to which there are corresponding principles even if these principles do not usually appear in the conscious thoughts of agents while they are engaged in everyday moral (...)
1-50 of 991