Results for 'Moral agents'

998 found
  1. Emergent Agent Causation. Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
    1 citation
  2. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    15 citations
  3. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
    3 citations
  4. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature (...)
  5. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we (...)
    10 citations
  6. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper I argue that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of (...)
    70 citations
  7. Are Some Animals Also Moral Agents? Kyle Johannsen - 2019 - Animal Sentience 3 (23/27).
    Animal rights philosophers have traditionally accepted the claim that human beings are unique, but rejected the claim that our uniqueness justifies denying animals moral rights. Humans were thought to be unique specifically because we possess moral agency. In this commentary, I explore the claim that some nonhuman animals are also moral agents, and I take note of its counter-intuitive implications.
  8. Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
    32 citations
  9. From Crooked Wood to Moral Agent: Connecting Anthropology and Ethics in Kant. Jennifer Mensch - 2014 - Estudos Kantianos 2 (1):185-204.
    In this essay I lay out the textual materials surrounding the birth of physical anthropology as a racial science in the eighteenth century with a special focus on the development of Kant's own contributions to the new field. Kant’s contributions to natural history demonstrated his commitment to a physical, mental, and moral hierarchy among the races and I spend some time describing both the advantages he drew from this hierarchy for making sense of the social and political history of (...)
    1 citation
  10. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents (...)
    4 citations
  11. Collective responsibility and collective obligations without collective moral agents. Gunnar Björnsson - forthcoming - In Saba Bazargan-Forward & Deborah Tollefsen (eds.), Handbook of Collective Responsibility. Routledge.
    It is commonplace to attribute obligations to φ or blameworthiness for φ-ing to groups even when no member has an obligation to φ or is individually blameworthy for not φ-ing. Such non-distributive attributions can seem problematic in cases where the group is not a moral agent in its own right. In response, it has been argued both that non-agential groups can have the capabilities requisite to have obligations of their own, and that group obligations can be understood in terms (...)
    7 citations
  12. Collective Agents as Moral Actors. Säde Hormio - forthcoming - In Säde Hormio & Bill Wringe (eds.), Collective Responsibility: Perspectives on Political Philosophy from Social Ontology. Springer.
    How should we make sense of praise and blame and other such reactions towards collective agents like governments, universities, or corporations? Collective agents can be appropriate targets for our moral feelings and judgements because they can maintain and express moral positions of their own. Moral agency requires being capable of recognising moral considerations and reasons. It also necessitates the ability to react reflexively to moral matters, i.e. to take into account new moral (...)
  13. Moral Uncertainty, Pure Justifiers, and Agent-Centred Options. Patrick Kaczmarek & Harry R. Lloyd - forthcoming - Australasian Journal of Philosophy.
    Moral latitude is only ever a matter of coincidence on the most popular decision procedure in the literature on moral uncertainty. In all possible choice situations other than those in which two or more options happen to be tied for maximal expected choiceworthiness, Maximize Expected Choiceworthiness implies that only one possible option is uniquely appropriate. A better theory of appropriateness would be more sensitive to the decision maker’s credence in theories that endorse agent-centred prerogatives. In this paper, we (...)
  14. Does Moral Virtue Constitute a Benefit to the Agent? Brad Hooker - 1996 - In Roger Crisp (ed.), How Should One Live?: Essays on the Virtues. Oxford: Oxford University Press.
    Theories of individual well‐being fall into three main categories: hedonism, the desire‐fulfilment theory, and the list theory (which maintains that there are some things that can benefit a person without increasing the person's pleasure or desire‐fulfilment). The paper briefly explains the answers that hedonism and the desire‐fulfilment theory give to the question of whether being virtuous constitutes a benefit to the agent. Most of the paper is about the list theory's answer.
    40 citations
  15. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents. Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up (...)
  16. Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to (...)
  17. Are 'Coalitions of the Willing' Moral Agents? Stephanie Collins - 2014 - Ethics and International Affairs 28 (1):online only.
    In this reply to an article of Toni Erskine's, I argue that coalitions of the willing are moral agents. They can therefore bear responsibility in their own right.
  18. Responsibility, Authority, and the Community of Moral Agents in Domestic and International Criminal Law. Ryan Long - 2014 - International Criminal Law Review 14 (4-5):836-854.
    Antony Duff argues that the criminal law’s characteristic function is to hold people responsible. It only has the authority to do this when the person who is called to account, and those who call her to account, share some prior relationship. In systems of domestic criminal law, this relationship is co-citizenship. The polity is the relevant community. In international criminal law, the relevant community is simply the moral community of humanity. I am sympathetic to his community-based analysis, but argue (...)
  19. Agent-Regret and the Social Practice of Moral Luck. Jordan MacKenzie - 2017 - Res Philosophica 94 (1):95-117.
    Agent-regret seems to give rise to a philosophical puzzle. If we grant that we are not morally responsible for consequences outside our control (the ‘Standard View’), then agent-regret—which involves self-reproach and a desire to make amends for consequences outside one’s control—appears rationally indefensible. But despite its apparent indefensibility, agent-regret still seems like a reasonable response to bad moral luck. I argue here that the puzzle can be resolved if we appreciate the role that agent-regret plays in a larger social (...)
    13 citations
  20. Epistemic Authorities and Skilled Agents: A Pluralist Account of Moral Expertise. Federico Bina, Sofia Bonicalzi & Michel Croce - forthcoming - Topoi:1-13.
    This paper explores the concept of moral expertise in the contemporary philosophical debate, with a focus on three accounts discussed across moral epistemology, bioethics, and virtue ethics: an epistemic authority account, a skilled agent account, and a hybrid model sharing key features of the two. It is argued that there are no convincing reasons to defend a monistic approach that reduces moral expertise to only one of these models. A pluralist view is outlined in the attempt to (...)
  21. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality (...)
    287 citations
  22. Collective moral obligations: ‘we-reasoning’ and the perspective of the deliberating agent. Anne Schwenkenbecher - 2019 - The Monist 102 (2):151-171.
    Together we can achieve things that we could never do on our own. In fact, there are sheer endless opportunities for producing morally desirable outcomes together with others. Unsurprisingly, scholars have been finding the idea of collective moral obligations intriguing. Yet, there is little agreement among scholars on the nature of such obligations and on the extent to which their existence might force us to adjust existing theories of moral obligation. What interests me in this paper is the (...)
    19 citations
  23. Group agents and moral status: what can we owe to organizations? Adam Lovett & Stefan Riedener - 2021 - Canadian Journal of Philosophy 51 (3):221–238.
    Organizations have neither a right to the vote nor a weighty right to life. We need not enfranchise Goldman Sachs. We should feel few scruples in dissolving Standard Oil. But they are not without rights altogether. We can owe it to them to keep our promises. We can owe them debts of gratitude. Thus, we can owe some things to organizations. But we cannot owe them everything we can owe to people. They seem to have a peculiar, fragmented moral (...)
    5 citations
  24. Moral Status and Agent-Centred Options. Seth Lazar - 2019 - Utilitas 31 (1):83-105.
    If we were required to sacrifice our own interests whenever doing so was best overall, or prohibited from doing so unless it was optimal, then we would be mere sites for the realisation of value. Our interests, not ourselves, would wholly determine what we ought to do. We are not mere sites for the realisation of value — instead we, ourselves, matter unconditionally. So we have options to act suboptimally. These options have limits, grounded in the very same considerations. Though (...)
    4 citations
  25. Can morally ignorant agents care enough? Daniel J. Miller - 2021 - Philosophical Explorations 24 (2):155-173.
    Theorists attending to the epistemic condition on responsibility are divided over whether moral ignorance is ever exculpatory. While those who argue that reasonable expectation is required for blameworthiness often maintain that moral ignorance can excuse, theorists who embrace a quality of will approach to blameworthiness are not sanguine about the prospect of excuses among morally ignorant wrongdoers. Indeed, it is sometimes argued that moral ignorance always reflects insufficient care for what matters morally, and therefore that moral (...)
    1 citation
  26. Group Agents, Moral Competence, and Duty-bearers: The Update Argument. Niels de Haan - 2023 - Philosophical Studies 180 (5-6):1691-1715.
    According to some collectivists, purposive groups that lack decision-making procedures such as riot mobs, friends walking together, or the pro-life lobby can be morally responsible and have moral duties. I focus on plural subject- and we-mode-collectivism. I argue that purposive groups do not qualify as duty-bearers even if they qualify as agents on either view. To qualify as a duty-bearer, an agent must be morally competent. I develop the Update Argument. An agent is morally competent only if the (...)
  27. Understanding Moral Judgments: The Role of the Agent’s Characteristics in Moral Evaluations. Emilia Alexandra Antonese - 2015 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 2 (2):203-213.
    Traditional studies have shown that the moral judgments are influenced by many biasing factors, like the consequences of a behavior, certain characteristics of the agent who commits the act, or the words chosen to describe the behavior. In the present study we investigated a new factor that could bias the evaluation of morally relevant human behavior: the perceived similarity between the participants and the agent described in the moral scenario. The participants read a story about a driver who (...)
  28. Agent-Relativity and the Foundations of Moral Theory. Matthew Hammerton - 2017 - Dissertation, Australian National University.
  29. Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  30. Manipulated Agents: A Window to Moral Responsibility. [REVIEW] Taylor W. Cyr - 2020 - Philosophical Quarterly 70 (278):207-209.
    Manipulated Agents: A Window to Moral Responsibility. By Alfred R. Mele.
    1 citation
  31. Mark Schroeder’s Hypotheticalism: agent-neutrality, moral epistemology, and methodology. [REVIEW] Tristram McPherson - 2012 - Philosophical Studies 157 (3):445-453.
    Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.
    8 citations
  32. Taking Seriously the Challenges of Agent-Centered Morality. Hye-Ryoung Kang - 2011 - Journal of International Wonkwang Culture 2 (1):43-56.
    Agent-centered morality has been a serious challenge to ethical theories based on agent-neutral morality in defining what is the moral point of view. In this paper, my concern is to examine whether arguments for agent-centered morality, in particular, arguments for the agent-centered option, can be justified. After critically examining three main arguments for agent-centered morality, I will contend that although there is a ring of truth in the demands of agent-centered morality, agent-centered morality is more problematic than agent-neutral morality. (...)
  33. Weeding Out Flawed Versions of Shareholder Primacy: A Reflection on the Moral Obligations That Carry Over from Principals to Agents. Santiago Mejia - 2019 - Business Ethics Quarterly 29 (4):519-544.
    The distinction between what I call nonelective obligations and discretionary obligations, a distinction that focuses on one particular thread of the distinction between perfect and imperfect duties, helps us to identify the obligations that carry over from principals to agents. Clarity on this issue is necessary to identify the moral obligations within “shareholder primacy”, which conceives of managers as agents of shareholders. My main claim is that the principal-agent relation requires agents to fulfill nonelective obligations, but (...)
    5 citations
  34. Distinguishing agent-relativity from agent-neutrality. Matthew Hammerton - 2019 - Australasian Journal of Philosophy 97 (2):239-250.
    The agent-relative/agent-neutral distinction is one of the most important in contemporary moral theory. Yet, providing an adequate formal account of it has proven difficult. In this article I defend a new formal account of the distinction, one that avoids various problems faced by other accounts. My account is based on an influential account of the distinction developed by McNaughton and Rawling. I argue that their approach is on the right track but that it succumbs to two serious objections. I (...)
    11 citations
  35. Moral Luck and Deviant Causation. Sara Bernstein - 2019 - Midwest Studies in Philosophy 43 (1):151-161.
    This paper discusses a puzzling tension in attributions of moral responsibility in cases of resultant moral luck: we seem to hold agents fully morally responsible for unlucky outcomes, but less-than-fully-responsible for unlucky outcomes brought about differently than intended. This tension cannot be easily discharged or explained, but it does shed light on a famous puzzle about causation and responsibility, the Thirsty Traveler.
    4 citations
  36. Karma, Moral Responsibility and Buddhist Ethics. Bronwyn Finnigan - 2022 - In Manuel Vargas & John Doris (eds.), The Oxford Handbook of Moral Psychology. Oxford, U.K.: Oxford University Press. pp. 7-23.
    The Buddha taught that there is no self. He also accepted a version of the doctrine of karmic rebirth, according to which good and bad actions accrue merit and demerit respectively and where this determines the nature of the agent’s next life and explains some of the beneficial or harmful occurrences in that life. But how is karmic rebirth possible if there are no selves? If there are no selves, it would seem there are no agents that could be (...)
    3 citations
  37. Alfred Mele, Manipulated Agents: A Window into Moral Responsibility. [REVIEW] Robert J. Hartman - 2020 - Journal of Moral Philosophy 17 (5):563-566.
    Review of Manipulated Agents: A Window into Moral Responsibility. By Alfred R. Mele.
  38. Is Agent-Neutral Deontology Possible? Matthew Hammerton - 2017 - Journal of Ethics and Social Philosophy 12 (3):319-324.
    It is commonly held that all deontological moral theories are agent-relative in the sense that they give each agent a special concern that she does not perform acts of a certain type rather than a general concern with the actions of all agents. Recently, Tom Dougherty has challenged this orthodoxy by arguing that agent-neutral deontology is possible. In this article I counter Dougherty's arguments and show that agent-neutral deontology is not possible.
    8 citations
  39. Beyond Agent-Regret: Another Attitude for Non-Culpable Failure. Luke Maring - 2021 - Journal of Value Inquiry 10:1-13.
    Imagine a moral agent with the native capacity to act rightly in every kind of circumstance. She will never, that is, find herself thrust into conditions she isn’t equipped to handle. Relationships turned tricky, evolving challenges of parenthood, or living in the midst of a global pandemic—she is never mistaken about what must be done, nor does she lack the skills to do it. When we are thrust into a new kind of circumstance, by contrast, we often need time (...)
  40. Should we Consult Kant when Assessing Agent’s Moral Responsibility for Harm? Friderik Klampfer - 2009 - Balkan Journal of Philosophy 1 (2):131-156.
    The paper focuses on the conditions under which an agent can be justifiably held responsible or liable for the harmful consequences of his or her actions. Kant has famously argued that as long as the agent fulfills his or her moral duty, he or she cannot be blamed for any potential harm that might result from his or her action, no matter how foreseeable these may (have) be(en). I call this the Duty-Absolves-Thesis or DA. I begin by stating the (...)
    2 citations
  41. The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central (...)
    1 citation
  42. Ought, Agents, and Actions. Mark Schroeder - 2011 - Philosophical Review 120 (1):1-41.
    According to a naïve view sometimes apparent in the writings of moral philosophers, ‘ought’ often expresses a relation between agents and actions – the relation that obtains between an agent and an action when that action is what that agent ought to do. It is not part of this naïve view that ‘ought’ always expresses this relation – on the contrary, adherents of the naïve view are happy to allow that ‘ought’ also has an epistemic sense, on which (...)
    107 citations
  43. Freedom, Harmony & Moral Beauty. Ryan P. Doran - forthcoming - Philosophers' Imprint.
    Why are moral actions beautiful, when indeed they are? This paper assesses the view, found most notably in Schiller, that moral actions are beautiful just when they present the appearance of freedom by appearing to be the result of internal harmony (the Schillerian Internal Harmony Thesis). I argue that while this thesis can accommodate some of the beauty involved in contrasts of the ‘continent’ and the ‘fully’ virtuous, it cannot account for all of the beauty in such contrasts, (...)
    2 citations
  44. Consequentializing agent-centered restrictions: A Kantsequentialist approach. Douglas W. Portmore - 2023 - Analytic Philosophy 64 (4):443-467.
    There is, on a given moral view, an agent-centered restriction against performing acts of a certain type if that view prohibits agents from performing an instance of that act-type even to prevent two or more others from each performing a morally comparable instance of that act-type. The fact that commonsense morality includes many such agent-centered restrictions has been seen by several philosophers as a decisive objection against consequentialism. Despite this, I argue that agent-centered restrictions are more plausibly accommodated (...)
    2 citations
  45. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, (...)
    2 citations
  46. Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model. Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents (...)
    1 citation
  47. The possibility of collective moral obligations. Anne Schwenkenbecher - 2020 - In Saba Bazargan-Forward & Deborah Perron Tollefsen (eds.), Routledge Handbook of Collective Responsibility. Routledge. pp. 258-273.
    Our moral obligations can sometimes be collective in nature: They can jointly attach to two or more agents in that neither agent has that obligation on their own, but they – in some sense – share it or have it in common. In order for two or more agents to jointly hold an obligation to address some joint necessity problem they must have joint ability to address that problem. Joint ability is highly context-dependent and particularly sensitive to (...)
    2 citations
  48. Special agents: Children's autonomy and parental authority. Robert Noggle - 2002 - In David Archard & Colin M. Macleod (eds.), The Moral and Political Status of Children. Oxford University Press. pp. 97-117.
    Cognitive incompetence cannot adequately explain the special character of children's moral status. It is, in fact, because children lack preference structures that are sufficiently stable over time that they are not ’temporally extended agents’. They are best viewed as 'special agents’, and parents have the responsibility of fostering the development of temporally extended agency and other necessary related moral capacities. Parental authority should be exercised with the view to assisting children to acquire the capacities that facilitate (...)
    28 citations
  49. Humean agent-neutral reasons? Daan Evers - 2009 - Philosophical Explorations 12 (1):55-67.
    In his recent book Slaves of the Passions, Mark Schroeder defends a Humean account of practical reasons (hypotheticalism). He argues that it is compatible with 'genuinely agent-neutral reasons'. These are reasons that any agent whatsoever has. According to Schroeder, they may well include moral reasons. Furthermore, he proposes a novel account of a reason's weight, which is supposed to vindicate the claim that agent-neutral reasons (if they exist) would be weighty irrespective of anyone's desires. If (...)
    16 citations
  50. Moral Rationalism on the Brain. Joshua May - 2023 - Mind and Language 38 (1):237-255.
    I draw on neurobiological evidence to defend the rationalist thesis that moral judgments are essentially dependent on reasoning, not emotions (conceived as distinct from inference). The neuroscience reveals that moral cognition arises from domain-general capacities in the brain for inferring, in particular, the consequences of an agent’s action, the agent’s intent, and the rules or norms relevant to the context. Although these capacities entangle inference and affect, blurring the reason/emotion dichotomy doesn’t preferentially support sentimentalism. The argument requires careful (...)
    4 citations
Showing results 1–50 of 998.