Results for 'moral intelligence'

998 results found
  1. Emergent Agent Causation. Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
    1 citation
  2. Artificial Intelligence as a Means to Moral Enhancement. Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, (...)
    14 citations
  3. Kantian Moral Agency and the Ethics of Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the (...)
    1 citation
  4. Will intelligent machines become moral patients? Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then (...)
    2 citations
  5. Artificial moral experts: asking for ethical advice to artificial intelligent assistants. Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this (...)
  6. The intelligibility of moral intransigence: A dilemma for cognitivism about moral judgment. Richard Rowland - 2018 - Analysis 78 (2):266-275.
    Many have argued that various features of moral disagreements create problems for cognitivism about moral judgment, but these arguments have been shown to fail. In this paper, I articulate a new problem for cognitivism that derives from features of our responses to moral disagreement. I argue that cognitivism entails that one of the following two claims is false: (1) a mental state is a belief only if it tracks changes in perceived evidence; (2) it is intelligible to (...)
    4 citations
  7. Moral realism, normative reasons, and rational intelligibility. Hallvard Lillehammer - 2002 - Erkenntnis 57 (1):47-69.
    This paper concerns a prima facie tension between the claims that (a) agents have normative reasons obtaining in virtue of the nature of the options that confront them, and (b) there is a non-trivial connection between the grounds of normative reasons and the upshots of sound practical reasoning. Joint commitment to these claims is shown to give rise to a dilemma. I argue that the dilemma is avoidable on a response dependent account of normative reasons accommodating both (a) and (b) (...)
    11 citations
  8. The rise of artificial intelligence and the crisis of moral passivity. Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue firstly that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all (...)
    4 citations
  9. Moral Projection and the Intelligibility of Collective Forgiveness. Harry Bunting - 2009 - Yearbook of the Irish Philosophical Society 7:107-120.
    The paper explores the philosophical intelligibility of contemporary defences of collective political forgiveness against a background of sceptical doubt, both general and particular. Three general sceptical arguments are examined: one challenges the idea that political collectives exist; another challenges the idea that moral agency can be projected upon political collectives; a final argument challenges the attribution of emotions, especially anger, to collectives. Each of these sceptical arguments is rebutted. At a more particular level, the contrasts between individual forgiveness (...)
    1 citation
  10. Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but (...)
  11. Artificial Intelligence and Moral Theology: A Conversation. Brian Patrick Green, Matthew J. Gaudet, Levi Checketts, Brian Cutter, Noreen Herzfeld, Cory Andrew Labrecque, Anselm Ramelow, Paul Scherz, Marga Vega, Andrea Vicini & Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):13-40.
  12. Moral Agency in Artificial Intelligence (Robots). Saleh Gorbanian - 2020 - The Journal of Ethical Reflections 1 (1):11-32.
    Growing technological advances in intelligent artifacts and bitter experiences of the past have emphasized the need to use and operate ethics in this field. Accordingly, it is vital to discuss the ethical integrity of having intelligent artifacts. Concerning the method of gathering materials, the current study uses library and documentary research followed by attribution style. Moreover, descriptive analysis is employed in order to analyze data. Explaining and criticizing the opposing views in this field and reviewing the related literature, it is (...)
  13. Moral Perspective from a Holistic Point of View for Weighted Decision-Making and its Implications for the Processes of Artificial Intelligence. Mina Singh, Devi Ram, Sunita Kumar & Suresh Das - 2023 - International Journal of Research Publication and Reviews 4 (1):2223-2227.
    In the case of AI, automated systems are making increasingly complex decisions with significant ethical implications, raising questions about who is responsible for decisions made by AI and how to ensure that these decisions align with society's ethical and moral values, both in India and the West. Jonathan Haidt has conducted research on moral and ethical decision-making. Today, solving problems like decision-making in autonomous vehicles can draw on the literature of the trolley dilemma in that it illustrates the (...)
  14. Theological Foundations for Moral Artificial Intelligence. Mark Graves - 2022 - Journal of Moral Theology 11 (Special Issue 1):182-211.
    The expanding social role and continued development of artificial intelligence (AI) needs theological investigation of its anthropological and moral potential. A pragmatic theological anthropology adapted for AI can characterize moral AI as experiencing its natural, social, and moral world through interpretations of its external reality as well as its self-reckoning. Systems theory can further structure insights into an AI social self that conceptualizes itself within Ignacio Ellacuria’s historical reality and its moral norms through Thomistic ideogenesis. (...)
  15. Reframing the Moral Status Question: An Investigation into the Moral Patiency of Intelligent Technologies. Cassandra Beyer - 2024 - Dissertation, Frankfurt School of Finance and Management
  16. Ethical and Moral Concerns Regarding Artificial Intelligence in Law and Medicine. Soaad Hossain - 2018 - Journal of Undergraduate Life Sciences 12 (1):10.
    This paper summarizes the seminar AI in Medicine in Context: Hopes? Nightmares? that was held at the Centre for Ethics at the University of Toronto on October 17, 2017, with special guest assistant professor and neurosurgeon Dr. Sunit Das. The paper discusses the key points from Dr. Das' talk. Specifically, it discusses Dr. Das' perspective on the ethical and moral issues that were experienced from applying artificial intelligence (AI) in law and how such issues can also arise (...)
  17. A Framework for Grounding the Moral Status of Intelligent Machines. Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" (...)
    2 citations
  18. Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model. Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents (...)
    1 citation
  19. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop (...)
    10 citations
  20. The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to (...)
    1 citation
  21. Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even (...)
    21 citations
  22. Artificial Intelligence, Robots and the Ethics of the Future. Constantin Vica & Cristina Voinea - 2019 - Revue Roumaine de Philosophie 63 (2):223–234.
    The future rests under the sign of technology. Given the prevalence of technological neutrality and inevitabilism, most conceptualizations of the future tend to ignore moral problems. In this paper we argue that every choice about future technologies is a moral choice and even the most technology-dominated scenarios of the future are, in fact, moral provocations we have to imagine solutions to. We begin by explaining the intricate connection between morality and the future. After a short excursion into (...)
  23. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness. Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany, pp. 15-25.
    This paper investigates the claim that artificial Intelligence Systems cannot be held morally responsible because they do not have an ability for agential self-awareness e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a (...)
    1 citation
  24. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2012 - In Peter Adamson (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and (...)
    30 citations
  25. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
  26. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
    2 citations
  27. Intelligence ethics and non-coercive interrogation. Michael Skerker - 2007 - Defense Intelligence Journal 16 (1):61-76.
    This paper will address the moral implications of non-coercive interrogations in intelligence contexts. U.S. Army and CIA interrogation manuals define non-coercive interrogation as interrogation which avoids the use of physical pressure, relying instead on oral gambits. These methods, including some that involve deceit and emotional manipulation, would be mostly familiar to viewers of TV police dramas. As I see it, there are two questions that need be answered relevant to this subject. First, under what circumstances, if any, may (...)
  28. Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
    32 citations
  29. Artificial Intelligence and the Secret Ballot. Jakob Mainz, Jorn Sonderholm & Rasmus Uhrenfeldt - forthcoming - AI and Society.
    In this paper, we argue that because of the advent of Artificial Intelligence, the secret ballot is now much less effective at protecting voters from voting related instances of social ostracism and social punishment. If one has access to vast amounts of data about specific electors, then it is possible, at least with respect to a significant subset of electors, to infer with high levels of accuracy how they voted in a past election. Since the accuracy levels of Artificial (...)
  30. The Objectivity of Truth, Morality, and Beauty. Steven James Bartlett - 2017 - Willamette University Faculty Research Website.
    Whether truth, morality, and beauty have an objective basis has been a perennial question for philosophy, ethics, and aesthetics, while for a great many relativists and skeptics it poses a problem without a solution. In this essay, the author proposes an innovative approach that shows how cognitive intelligence, moral intelligence, and aesthetic intelligence provide the basis needed for objective judgments about truth, morality, and beauty.
  31. The moral status of conscious subjects. Joshua Shepherd - forthcoming - In Stephen Clarke, Hazem Zohny & Julian Savulescu (eds.), Rethinking Moral Status.
    The chief themes of this discussion are as follows. First, we need a theory of the grounds of moral status that could guide practical considerations regarding how to treat the wide range of potentially conscious entities with which we are acquainted – injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. I offer an account of phenomenal value that focuses on the structure and sophistication of phenomenally conscious states at a time and over time in the mental (...)
    3 citations
  32. The Rights of Foreign Intelligence Targets. Michael Skerker - 2021 - In Seumas Miller, Mitt Regan & Patrick Walsh (eds.), National Security Intelligence and Ethics. Routledge. pp. 89-106.
    I develop a contractualist theory of just intelligence collection based on the collective moral responsibility to deliver security to a community and use the theory to justify certain kinds of signals interception. I also consider the rights of various intelligence targets like intelligence officers, service personnel, government employees, militants, and family members of all of these groups in order to consider how targets' waivers or forfeitures might create the moral space for just surveillance. Even people (...)
    1 citation
  33. Minding the Future: Artificial Intelligence, Philosophical Visions and Science Fiction. Barry Francis Dainton, Will Slocombe & Attila Tanyi (eds.) - 2021 - Springer.
    Bringing together literary scholars, computer scientists, ethicists, philosophers of mind, and scholars from affiliated disciplines, this collection of essays offers important and timely insights into the pasts, presents, and, above all, possible futures of Artificial Intelligence. This book covers topics such as ethics and morality, identity and selfhood, and broader issues about AI, addressing questions about the individual, social, and existential impacts of such technologies. Through the works of science fiction authors such as Isaac Asimov, Stanislaw Lem, Ann Leckie, (...)
  34. Morality as Art: Dewey, Metaphor, and Moral Imagination. Steven Fesmire - 1999 - Transactions of the Charles S. Peirce Society 35 (3):527-550.
    It is a familiar thesis that art affects moral imagination. But as a metaphor or model for moral experience, artistic production and enjoyment have been overlooked. This is no small oversight, not because artists are more saintly than the rest of us, but because seeing imagination so blatantly manifested gives us new eyes with which to see what can be made of imagination in everyday life. Artistic creation offers a rich model for understanding the sort of social imagination (...)
    3 citations
  35. Artificial Moral Patients: Mentality, Intentionality, and Systematicity. Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional (...)
    1 citation
  36. Sustainability of Artificial Intelligence: Reconciling human rights with legal rights of robots. Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics and an ongoing debate between human rights and rule of law, moral philosophers, legal and political scientists are facing difficulties to answer the questions like, “Do humanoid robots have same rights as of humans and if these rights are superior to human rights or not and why?” This paper argues that the sustainability of human rights will be under question because, in near future the scientists (considerably the most rational (...)
  37. Incorporating Ethics into Artificial Intelligence. Amitai Etzioni & Oren Etzioni - 2017 - The Journal of Ethics 21 (4):403-418.
    This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped (...)
    27 citations
  38. Moral Realism and Philosophical Angst. Joshua Blanchard - 2020 - In Russ Shafer-Landau (ed.), Oxford Studies in Metaethics Volume 15.
    This paper defends pro-realism, the view that it is better if moral realism is true rather than any of its rivals. After offering an account of philosophical angst, I make three general arguments. The first targets nihilism: in securing the possibility of moral justification and vindication in objecting to certain harms, moral realism secures something that is non-morally valuable and even essential to the meaning and intelligibility of our lives. The second argument targets antirealism: moral realism (...)
    5 citations
  39. What Matters for Moral Status: Behavioral or Cognitive Equivalence? John Danaher - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):472-478.
    Henry Shevlin’s paper, “How could we know when a robot was a moral patient?”, argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is (...)
    6 citations
  40. Non-Human Moral Status: Problems with Phenomenal Consciousness. Joshua Shepherd - 2023 - American Journal of Bioethics Neuroscience 14 (2):148-157.
    Consciousness-based approaches to non-human moral status maintain that consciousness is necessary for (some degree or level of) moral status. While these approaches are intuitive to many, in this paper I argue that the judgment that consciousness is necessary for moral status is not secure enough to guide policy regarding non-humans, that policies responsive to the moral status of non-humans should take seriously the possibility that psychological features independent of consciousness are sufficient for moral status. Further, (...)
    15 citations
  41. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a (...)
    1 citation
  42. The Heart of an AI: Agency, Moral Sense, and Friendship. Evandro Barbosa & Thaís Alves Costa - 2024 - Unisinos Journal of Philosophy 25 (1):01-16.
    The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. Firstly, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI’s capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and ethically respond to specific situations. This raises fundamental questions about machines’ ability to act as (...) agents in the same manner as human beings. Additionally, the article sheds light on the practical implications within human-machine relations and their effect on human friendships. The lack of friendliness or its equivalent in interactions with machines directly impacts the quality and depth of human relations. This concerningly suggests the potential replacement or compromise of genuine interpersonal connections due to limitations in human-machine interactions.
  43. Anscombe's Moral Epistemology and the Relevance of Wittgenstein's Anti-Scepticism. Michael Wee - 2020 - Enrahonar: Quaderns de Filosofía 64:81.
    Elizabeth Anscombe is well-known for her insistence that there are absolutely prohibited actions, though she is somewhat obscure about why this is so. Nonetheless, I contend in this paper that Anscombe is more concerned with the epistemology of absolute prohibitions, and that her thought on connatural moral knowledge – which resembles moral intuition – is key to understanding her thought on moral prohibitions. I shall identify key features of Anscombe’s moral epistemology before turning to investigate its (...)
  44. Artificial Consciousness Is Morally Irrelevant. Bruce P. Blackshaw - 2023 - American Journal of Bioethics Neuroscience 14 (2):72-74.
    It is widely agreed that possession of consciousness contributes to an entity’s moral status, even if it is not necessary for moral status (Levy and Savulescu 2009). An entity is considered to have...
  45. Reasonable Moral Doubt.Emad Atiq - 2022 - New York University Law Review 97:1373-1425.
    Sentencing outcomes turn on moral and evaluative determinations. For example, a finding of “irreparable corruption” is generally a precondition for juvenile life without parole. A finding that the “aggravating factors outweigh the mitigating factors” determines whether a defendant receives the death penalty. Should such moral determinations that expose defendants to extraordinary penalties be subject to a standard of proof? A broad range of federal and state courts have purported to decide this issue “in the abstract and without reference (...)
  46. Moral error theory.Hallvard Lillehammer - 2004 - Proceedings of the Aristotelian Society 104 (2):93–109.
    The paper explores the consequences of adopting a moral error theory targeted at the notion of reasonable convergence. I examine the prospects of two ways of combining acceptance of such a theory with continued acceptance of moral judgements in some form. On the first model, moral judgements are accepted as a pragmatically intelligible fiction. On the second model, moral judgements are made relative to a framework of assumptions with no claim to reasonable convergence on their behalf. (...)
  47. Science is not always “self-correcting” : fact–value conflation and the study of intelligence.Nathan Cofnas - 2016 - Foundations of Science 21 (3):477-492.
    Some prominent scientists and philosophers have stated openly that moral and political considerations should influence whether we accept or promulgate scientific theories. This widespread view has significantly influenced the development, and public perception, of intelligence research. Theories related to group differences in intelligence are often rejected a priori on explicitly moral grounds. Thus the idea, frequently expressed by commentators on science, that science is “self-correcting”—that hypotheses are simply abandoned when they are undermined by empirical evidence—may not (...)
  48. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral (...)
  49. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
  49. Critical Analysis of the “No Relevant Difference” Argument in Defense of the Rights of Artificial Intelligence.Alireza Mazarian - 2019 - Journal of Philosophical Theological Research 21 (1):165-190.
    There are many new philosophical queries about the moral status and rights of artificial intelligences; questions such as whether such entities can be considered morally responsible and as having special rights. Recently, the contemporary philosopher of mind Eric Schwitzgebel has tried to defend the possibility of equal rights for AIs and human beings (in an imaginary future) by designing a new argument (2015). In this paper, after an introduction, the author reviews and analyzes the main argument (...)