Results for 'Moral Reasoning in AI'

976 found
  1. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  2. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition.Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated (...)
  3. Reasons to Respond to AI Emotional Expressions.Rodrigo Díaz & Jonas Blatter - forthcoming - American Philosophical Quarterly.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do we have reasons (...)
  4. Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we (...)
    22 citations
  5. Beyond Competence: Why AI Needs Purpose, Not Just Programming.Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based (...)
  6. Conversational AI for Psychotherapy and Its Role in the Space of Reason.Jana Sedlakova - 2024 - Cosmos+Taxis 12 (5+6):80-87.
    The recent book by Landgrebe and Smith (2022) offers compelling arguments against the possibility of Artificial General Intelligence (AGI) as well as against the idea that machines have the abilities to master human language, human social interaction and morality. Their arguments leave open, however, a problem on the human side: our imaginative tendency to perceive more than there is and to treat AIs as humans and social actors regardless of their actual properties and abilities, or lack thereof. The mathematical (...)
  7. Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans.Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.
    This paper concerns the double standard debate in the ethics of AI literature. This debate essentially revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around the desideratum of transparency. That is, the debate has focused on whether AI systems must be more transparent than humans in their decision-making processes in order for it to be morally permissible to use such systems. Some have argued that the (...)
  8. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction?Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  9. Moral difference between humans and robots: paternalism and human-relative reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, (...)
  10. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions (...)
  11. Building machines that learn and think about morality.Christopher Burr & Geoff Keeling - 2018 - In Christopher Burr & Geoff Keeling (eds.), Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of (...)
    2 citations
  12. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral (...)
    4 citations
  13. Augmenting Morality through Ethics Education: the ACTWith model.Jeffrey White - 2024 - AI and Society:1-20.
    Recently in this journal, Jessica Morley and colleagues (AI & SOC 2023 38:411–423) review AI ethics and education, suggesting that a cultural shift is necessary in order to prepare students for their responsibilities in developing technology infrastructure that should shape ways of life for many generations. Current AI ethics guidelines are abstract and difficult to implement as practical moral concerns proliferate. They call for improvements in ethics course design, focusing on real-world cases and perspective-taking tools to immerse students in (...)
  14. Artificial Moral Patients: Mentality, Intentionality, and Systematicity.Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional (...)
    1 citation
  15. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes (...)
    1 citation
  16. Artificial moral experts: asking for ethical advice to artificial intelligent assistants.Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this (...)
  17. Moral Reasoning in Mencius.James A. Ryan - 2003 - In Keli Fang (ed.), Chinese Philosophy and the Trends of the 21st Century Civilization. Commercial Press. pp. 151-167.
  18. Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
    35 citations
  19. Moral Uncertainty and Our Relationships with Unknown Minds.John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral (...)
  20. Making metaethics work for AI: realism and anti-realism.Michal Klincewicz & Lily E. Frank - 2018 - In Mark Coeckelbergh, M. Loh, J. Funk, M. Seibt & J. Nørskov (eds.), Envisioning Robots in Society – Power, Politics, and Public Space. pp. 311-318.
    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, but, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.
    1 citation
  21. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop (...)
    11 citations
  22. Sinful AI?Michael Wilby - 2023 - In Critical Muslim, 47. London: Hurst Publishers. pp. 91-108.
    Could the concept of 'evil' apply to AI? Drawing on PF Strawson's framework of reactive attitudes, this paper argues that we can understand evil as involving agents who are neither fully inside nor fully outside our moral practices. It involves agents whose abilities and capacities are enough to make them morally responsible for their actions, but whose behaviour is far enough outside of the norms of our moral practices to be labelled 'evil'. Understood as such, the paper argues (...)
  23. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic utilitarian (...)
  24. Foundations of an Ethical Framework for AI Entities: the Ethics of Systems.Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, (...)
  25. Meta-Reasoning in Making Moral Decisions Under Normative Uncertainty.Tomasz Żuradzki - 2016 - In Dima Mohammed & Marcin Lewiński (eds.), Argumentation and Reasoned Action. College Publications. pp. 1093-1104.
    I analyze recent discussions about making moral decisions under normative uncertainty. I discuss whether this kind of uncertainty should have practical consequences for decisions and whether there are reliable methods of reasoning that deal with the possibility that we are wrong about some moral issues. I defend a limited use of the decision theory model of reasoning in cases of normative uncertainty.
    2 citations
  26. Regard for Reason in the Moral Mind.Joshua May - 2018 - New York: Oxford University Press.
    The burgeoning science of ethics has produced a trend toward pessimism. Ordinary moral thought and action, we’re told, are profoundly influenced by arbitrary factors and ultimately driven by unreasoned feelings. This book counters the current orthodoxy on its own terms by carefully engaging with the empirical literature. The resulting view, optimistic rationalism, shows the pervasive role played by reason, and ultimately defuses sweeping debunking arguments in ethics. The science does suggest that moral knowledge and virtue don’t come easily. (...)
    43 citations
  27. Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force (...)
    11 citations
  28. Précis of Regard for Reason in the Moral Mind.Joshua May - 2019 - Behavioral and Brain Sciences 42 (e146):1-60.
    Regard for Reason in the Moral Mind argues that a careful examination of the scientific literature reveals a foundational role for reasoning in moral thought and action. Grounding moral psychology in reason then paves the way for a defense of moral knowledge and virtue against a variety of empirical challenges, such as debunking arguments and situationist critiques. The book attempts to provide a corrective to current trends in moral psychology, which celebrate emotion over reason (...)
    2 citations
  29. Marino, Patricia. Moral Reasoning in a Pluralistic World. Montreal: McGill-Queen's University Press, 2015. Pp. 216. $27.95. [REVIEW]Uri D. Leibowitz - 2017 - Ethics 127 (3):792-797.
  30. Why Moral Reasoning Is Insufficient for Moral Progress.Agnes Tam - 2020 - Journal of Political Philosophy 28 (1):73-96.
    A lively debate in the literature on moral progress concerns the role of practical reasoning: Does it enable or subvert moral progress? Rationalists believe that moral reasoning enables moral progress, because it helps enhance objectivity in thinking, overcome unruly sentiments, and open our minds to new possibilities. By contrast, skeptics argue that moral reasoning subverts moral progress. Citing growing empirical research on bias, they show that objectivity is an illusion and that (...)
    9 citations
  31. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
    2 citations
  32. The Relation Between Moral Reasons and Moral Requirement.Brendan de Kenessey - 2023 - Erkenntnis.
    What is the relation between moral reasons and moral requirement? Specifically: what relation does an action have to bear to one’s moral reasons in order to count as morally required? This paper defends the following answer to this question: an action is morally required just in case the moral reasons in favor of that action are enough on their own to outweigh all of the reasons, moral and nonmoral, to perform any alternative. I argue that (...)
    1 citation
  33. Measuring Moral Reasoning using Moral Dilemmas: Evaluating Reliability, Validity, and Differential Item Functioning of the Behavioral Defining Issues Test (bDIT).Youn-Jeng Choi, Hyemin Han, Kelsie J. Dawson, Stephen J. Thoma & Andrea L. Glenn - 2019 - European Journal of Developmental Psychology 16 (5):622-631.
    We evaluated the reliability, validity, and differential item functioning (DIF) of a shorter version of the Defining Issues Test-1 (DIT-1), the behavioral DIT (bDIT), measuring the development of moral reasoning. 353 college students (81 males, 271 females, 1 not reported; age M = 18.64 years, SD = 1.20 years) who were taking introductory psychology classes at a public university in a suburban area in the Southern United States participated in the present study. First, we examined the reliability of (...)
    1 citation
  34. Will intelligent machines become moral patients?Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue (...)
    2 citations
  35. “Psychopathy, Moral Reasons, and Responsibility”.Erick Ramirez - 2013 - In Christopher D. Herrera & Alexandra Perry (eds.), Ethics and Neurodiversity. Cambridge Scholars Publishing.
    In popular culture psychopaths are inaccurately portrayed as serial killers or homicidal maniacs. Most real-world psychopaths are neither killers nor maniacs. Psychologists currently understand psychopathy as an affective disorder that leads to repeated criminal and antisocial behavior. Counter to this prevailing view, I claim that psychopathy is not necessarily linked with criminal behavior. Successful psychopaths, an intriguing new category of psychopathic agent, support this conception of psychopathy. I then consider reactive attitude theories of moral responsibility. Within this tradition, psychopaths (...)
    6 citations
  36. Playing the Blame Game with Robots.Markus Kneer & Michael T. Stuart - 2021 - In Markus Kneer & Michael T. Stuart (eds.), Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21 Companion). New York, NY, USA:
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral- psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which (...)
    7 citations
  37. Moral Reasoning and Moral Progress.Victor Kumar & Joshua May - forthcoming - In David Copp & Connie Rosati (eds.), The Oxford Handbook of Metaethics. Oxford University Press.
    Can reasoning improve moral judgments and lead to moral progress? Pessimistic answers to this question are often based on caricatures of reasoning, weak scientific evidence, and flawed interpretations of solid evidence. In support of optimism, we discuss three forms of moral reasoning (principle reasoning, consistency reasoning, and social proof) that can spur progressive changes in attitudes and behavior on a variety of issues, such as charitable giving, gay rights, and meat consumption. We (...)
    3 citations
  38. Anthropomorphism in AI: Hype and Fallacy.Adriana Placani - 2024 - AI and Ethics.
    This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust. By focusing on these two dimensions of anthropomorphism in AI, the essay (...)
    1 citation
  39. (1 other version)Moral Reasoning and Emotion.Joshua May & Victor Kumar - 2018 - In Aaron Zimmerman, Karen Jones & Mark Timmons (eds.), Routledge Handbook on Moral Epistemology. New York: Routledge. pp. 139-156.
    This chapter discusses contemporary scientific research on the role of reason and emotion in moral judgment. The literature suggests that moral judgment is influenced by both reasoning and emotion separately, but there is also emerging evidence of the interaction between the two. While there are clear implications for the rationalism-sentimentalism debate, we conclude that important questions remain open about how central emotion is to moral judgment. We also suggest ways in which moral philosophy is not (...)
    6 citations
  40. Bowling alone in the autonomous vehicle: the ethics of well-being in the driverless car.Avigail Ferdman - 2022 - AI and Society:1-13.
    There is a growing body of scholarship on the ethics of autonomous vehicles. Yet the ethical discourse has mostly been focusing on the behavior of the vehicle in accident scenarios. This paper offers a different ethical prism: the implications of the autonomous vehicle for human well-being. As such, it contributes to the growing discourse on the wider societal and moral implications of the autonomous vehicle. The paper is premised on the neo-Aristotelian approach which holds that as human beings, our (...)
    1 citation
  41. Moral Reasons for Moral Beliefs: A Puzzle for Moral Testimony Pessimism.Andrew Reisner & Joseph Van Weelden - 2015 - Logos and Episteme 6 (4):429-448.
    According to moral testimony pessimists, the testimony of moral experts does not provide non-experts with normative reasons for belief. Moral testimony optimists hold that it does. We first aim to show that moral testimony optimism is, to the extent such things may be shown, the more natural view about moral testimony. Speaking roughly, the supposed discontinuity between the norms of moral beliefs and the norms of non-moral beliefs, on careful reflection, lacks the intuitive (...)
    1 citation
  42. Exciting Reasons and Moral Rationalism in Hutcheson's Illustrations upon the Moral Sense.John J. Tilley - 2012 - Journal of the History of Philosophy 50 (1):53-83.
    One of the most oft-cited parts of Francis Hutcheson’s Illustrations upon the Moral Sense (1728) is his discussion of “exciting reasons.” In this paper I address the question: What is the function of that discussion? In particular, what is its relation to Hutcheson’s attempt to show that the rationalists’ normative thesis ultimately implies, contrary to their moral epistemology, that moral ideas spring from a sense? Despite first appearances, Hutcheson’s discussion of exciting reasons is not part of that (...)
  43. Apparent Paradoxes in Moral Reasoning; Or how you forced him to do it, even though he wasn’t forced to do it.Jonathan Phillips & Liane Young - 2011 - Proceedings of the Thirty-Third Annual Conference of the Cognitive Science Society:138-143.
    The importance of situational constraint for moral evaluations is widely accepted in philosophy, psychology, and the law. However, recent work suggests that this relationship is actually bidirectional: moral evaluations can also influence our judgments of situational constraint. For example, if an agent is thought to have acted immorally rather than morally, that agent is often judged to have acted with greater freedom and under less situational constraint. Moreover, when considering interpersonal situations, we judge that an agent who forces (...)
  44. Moral uncertainty in bioethical argumentation: a new understanding of the pro-life view on early human embryos.Tomasz Żuradzki - 2014 - Theoretical Medicine and Bioethics 35 (6):441-457.
    In this article, I present a new interpretation of the pro-life view on the status of early human embryos. In my understanding, this position is based not on presumptions about the ontological status of embryos and their developmental capabilities but on the specific criteria of rational decisions under uncertainty and on a cautious response to the ambiguous status of embryos. This view, which uses the decision theory model of moral reasoning, promises to reconcile the uncertainty about the ontological (...)
    12 citations
  45. Participatory Moral Reasons: Their Scope and Strength.Garrett Cullity - forthcoming - Journal of Practical Ethics.
    A familiar part of ordinary moral thought is this idea: when other people are doing something worthwhile together, there is a reason for you to join in on the same terms as them. Morality does not tell you that you must always do this; but it exerts some pressure on you to join in. Suppose we take this idea seriously: just how should it be developed and applied? More particularly, just which groups and which actions are the ones with (...)
  46. There’s Some Fetish in Your Ethics: A limited defense of purity reasoning in moral discourse.Dan Demetriou - 2013 - Journal of Philosophical Research 38:377-404.
    Call the ethos understanding rightness in terms of spiritual purity and piety, and wrongness in terms of corruption and sacrilege, the “fetish ethic.” Jonathan Haidt and his colleagues suggest that this ethos is particularly salient to political conservatives and non-liberal cultures around the globe. In this essay, I point to numerous examples of moral fetishism in mainstream academic ethics. Once we see how deeply “infected” our ethical reasoning is by fetishistic intuitions, we can respond by 1) repudiating the (...)
    5 citations
  47. Moral Reasoning, Moral Motivation and the Rational Foundation of Morals.Luz Marina Barreto - manuscript
    In the following paper I will examine the possibility for a rational foundation of morals, rational in the sense that to ground a moral statement on reason amounts to being able to convince an unmotivated agent to conform to a moral rule - that is to say, to “rationally motivate” him (as Habermas would have said) to act in ways for which he or she had no previous reason to act. We will scrutinize the “internalist’s” objection (in Williams’ (...)
  48. Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an (...)
    2 citations
  49. The Role of Reasoning in Pragmatic Morality.Toby Svoboda - 2021 - Contemporary Pragmatism 18 (1):1-17.
    Charles Sanders Peirce offers a number of arguments against the rational application of theory to morality, suggesting instead that morality should be grounded in instinct. Peirce maintains that we currently lack the scientific knowledge that would justify a rational structuring of morality. This being the case, philosophically generated moralities cannot be otherwise than dogmatic and dangerous. In this paper, I contend that Peirce’s critique of what I call “dogmatic-philosophical morality” should be taken very seriously, but I also claim that the (...)
  50. A moral reason to be a mere theist: improving the practical argument.Xiaofei Liu - 2016 - International Journal for Philosophy of Religion 79 (2):113-132.
    This paper is an attempt to improve the practical argument for beliefs in God. Some theists, most famously Kant and William James, called our attention to a particular set of beliefs, the Jamesian-type beliefs, which are justified by virtue of their practical significance, and these theists tried to justify theistic beliefs on the exact same ground. I argue, contra the Jamesian tradition, that theistic beliefs are different from the Jamesian-type beliefs and thus cannot be justified on the same ground. I (...)
    3 citations
1 — 50 / 976