Results for 'moral intelligence'

959 found
  1. Emergent Agent Causation.Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
    1 citation
  2. Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, (...)
    18 citations
  3. Pragmatist Representationalism and the Aesthetics of Moral Intelligence [REVIEW].David Seiple - 2004 - Contemporary Pragmatism 1 (2):171-178.
    Important work on the relation of pragmatic ethics and aesthetics, such as Steven Fesmire's _John Dewey and Moral Imagination: Pragmatism in Ethics,_ misses an important feature of the entire issue unless non-mimetic representation is invoked to explain the relation between what Dewey would call the "problem" and the "solution" presented in experience. This cannot be elaborated within a Rortyan neo-pragmatism, nor can it be addressed without attending to the "spiritual" aspect of moral agency.
  4. Kantian Moral Agency and the Ethics of Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the (...)
    1 citation
  5. The rise of artificial intelligence and the crisis of moral passivity.Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue firstly that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all (...)
    5 citations
  6. Artificial moral experts: asking for ethical advice to artificial intelligent assistants.Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this (...)
  7. Moral realism, normative reasons, and rational intelligibility.Hallvard Lillehammer - 2002 - Erkenntnis 57 (1):47-69.
    This paper concerns a prima facie tension between the claims that (a) agents have normative reasons obtaining in virtue of the nature of the options that confront them, and (b) there is a non-trivial connection between the grounds of normative reasons and the upshots of sound practical reasoning. Joint commitment to these claims is shown to give rise to a dilemma. I argue that the dilemma is avoidable on a response dependent account of normative reasons accommodating both (a) and (b) (...)
    11 citations
  8. The intelligibility of moral intransigence: A dilemma for cognitivism about moral judgment.Rach Cosker-Rowland - 2018 - Analysis 78 (2):266-275.
    Many have argued that various features of moral disagreements create problems for cognitivism about moral judgment, but these arguments have been shown to fail. In this paper, I articulate a new problem for cognitivism that derives from features of our responses to moral disagreement. I argue that cognitivism entails that one of the following two claims is false: (1) a mental state is a belief only if it tracks changes in perceived evidence; (2) it is intelligible to (...)
    4 citations
  9. Will intelligent machines become moral patients?Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then (...)
    2 citations
  10. Ethical and Moral Concerns Regarding Artificial Intelligence in Law and Medicine.Soaad Hossain - 2018 - Journal of Undergraduate Life Sciences 12 (1):10.
    This paper summarizes the seminar AI in Medicine in Context: Hopes? Nightmares? that was held at the Centre for Ethics at the University of Toronto on October 17, 2017, with special guest assistant professor and neurosurgeon Dr. Sunit Das. The paper discusses the key points from Dr. Das' talk. Specifically, it discusses Dr. Das' perspective on the ethical and moral issues that were experienced in applying artificial intelligence (AI) in law and how such issues can also arise (...)
  11. Moral Projection and the Intelligibility of Collective Forgiveness.Harry Bunting - 2009 - Yearbook of the Irish Philosophical Society 7:107 - 120.
    ABSTRACT. The paper explores the philosophical intelligibility of contemporary defences of collective political forgiveness against a background of sceptical doubt, both general and particular. Three general sceptical arguments are examined: one challenges the idea that political collectives exist; another challenges the idea that moral agency can be projected upon political collectives; a final argument challenges the attribution of emotions, especially anger, to collectives. Each of these sceptical arguments is rebutted. At a more particular level, the contrasts between individual forgiveness (...)
    1 citation
  12. Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but (...)
  13. Moral Agency in Artificial Intelligence (Robots).Saleh Gorbanian - 2020 - Ethical Reflections 1 (1):11-32.
    Growing technological advances in intelligent artifacts and bitter experiences of the past have emphasized the need to use and operate ethics in this field. Accordingly, it is vital to discuss the ethical integrity of having intelligent artifacts. Concerning the method of gathering materials, the current study uses library and documentary research followed by attribution style. Moreover, descriptive analysis is employed in order to analyze data. Explaining and criticizing the opposing views in this field and reviewing the related literature, it is (...)
  14. Theological Foundations for Moral Artificial Intelligence.Mark Graves - 2022 - Journal of Moral Theology 11 (Special Issue 1):182-211.
    The expanding social role and continued development of artificial intelligence (AI) needs theological investigation of its anthropological and moral potential. A pragmatic theological anthropology adapted for AI can characterize moral AI as experiencing its natural, social, and moral world through interpretations of its external reality as well as its self-reckoning. Systems theory can further structure insights into an AI social self that conceptualizes itself within Ignacio Ellacuria’s historical reality and its moral norms through Thomistic ideogenesis. (...)
  15. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop (...)
    12 citations
  16. Moral Perspective from a Holistic Point of View for Weighted Decision-Making and its Implications for the Processes of Artificial Intelligence.Mina Singh, Devi Ram, Sunita Kumar & Suresh Das - 2023 - International Journal of Research Publication and Reviews 4 (1):2223-2227.
    In the case of AI, automated systems are making increasingly complex decisions with significant ethical implications, raising questions about who is responsible for decisions made by AI and how to ensure that these decisions align with society's ethical and moral values, both in India and the West. Jonathan Haidt has conducted research on moral and ethical decision-making. Today, solving problems like decision-making in autonomous vehicles can draw on the literature of the trolley dilemma in that it illustrates the (...)
  17. Artificial Intelligence and Moral Theology: A Conversation.Brian Patrick Green, Matthew J. Gaudet, Levi Checketts, Brian Cutter, Noreen Herzfeld, Cory Andrew Labrecque, Anselm Ramelow, Paul Scherz, Marga Vega, Andrea Vicini & Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):13-40.
  18. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence.Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar (...)
  19. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of Ai. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
    1 citation
  20. Attention, Moral Skill, and Algorithmic Recommendation.Nick Schuster & Seth Lazar - 2024 - Philosophical Studies.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut (...)
    1 citation
  21. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even (...)
    33 citations
  22. Artificial Intelligence and Universal Values.Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to (...)
  23. Artificial Intelligence, Robots and the Ethics of the Future.Constantin Vica & Cristina Voinea - 2019 - Revue Roumaine de Philosophie 63 (2):223–234.
    The future rests under the sign of technology. Given the prevalence of technological neutrality and inevitabilism, most conceptualizations of the future tend to ignore moral problems. In this paper we argue that every choice about future technologies is a moral choice and even the most technology-dominated scenarios of the future are, in fact, moral provocations we have to imagine solutions to. We begin by explaining the intricate connection between morality and the future. After a short excursion into (...)
  24. A Framework for Grounding the Moral Status of Intelligent Machines.Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" (...)
    2 citations
  25. The Objectivity of Truth, Morality, and Beauty.Steven James Bartlett - 2017 - Willamette University Faculty Research Website.
    Whether truth, morality, and beauty have an objective basis has been a perennial question for philosophy, ethics, and aesthetics, while for a great many relativists and skeptics it poses a problem without a solution. In this essay, the author proposes an innovative approach that shows how cognitive intelligence, moral intelligence, and aesthetic intelligence provide the basis needed for objective judgments about truth, morality, and beauty.
  26. Moral Argument for AI Ethics.Michael Haimes - manuscript
    The Moral Argument for AI Ethics emphasizes the need for an adaptive, globally equitable, and philosophically grounded framework for the ethical development and deployment of artificial intelligence. It highlights key principles, including dynamic adaptation to societal values, inclusivity, and the mitigation of global disparities. Drawing from historical AI ethical failures, the argument underscores the urgency of proactive and enforceable frameworks addressing bias, surveillance, and existential threats. The conclusion advocates for international coalitions that integrate diverse philosophical traditions and practical (...)
  27. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encylopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and (...)
    34 citations
  28. African Reasons Why Artificial Intelligence Should Not Maximize Utility.Thaddeus Metz - 2021 - In Beatrice Dedaa Okyere-Manu (ed.), African Values, Ethics, and Technology: Questions, Issues, and Approaches. Palgrave-Macmillan. pp. 55-72.
    Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a characteristically western conception of moral reason, machines should be programmed to do whatever they could in a given circumstance to produce in the long run the highest net balance of what is good for human beings minus what is (...)
    2 citations
  29. Intelligence ethics and non-coercive interrogation.Michael Skerker - 2007 - Defense Intelligence Journal 16 (1):61-76.
    This paper will address the moral implications of non-coercive interrogations in intelligence contexts. U.S. Army and CIA interrogation manuals define non-coercive interrogation as interrogation which avoids the use of physical pressure, relying instead on oral gambits. These methods, including some that involve deceit and emotional manipulation, would be mostly familiar to viewers of TV police dramas. As I see it, there are two questions that need be answered relevant to this subject. First, under what circumstances, if any, may (...)
  30. Science is not always “self-correcting” : fact–value conflation and the study of intelligence.Nathan Cofnas - 2016 - Foundations of Science 21 (3):477-492.
    Some prominent scientists and philosophers have stated openly that moral and political considerations should influence whether we accept or promulgate scientific theories. This widespread view has significantly influenced the development, and public perception, of intelligence research. Theories related to group differences in intelligence are often rejected a priori on explicitly moral grounds. Thus the idea, frequently expressed by commentators on science, that science is “self-correcting”—that hypotheses are simply abandoned when they are undermined by empirical evidence—may not (...)
    11 citations
  31. Reframing the Moral Status Question: An Investigation into the Moral Patiency of Intelligent Technologies.Cassandra Beyer - 2024 - Dissertation, Frankfurt School of Finance and Management
  32. The impact of intelligent decision-support systems on humans’ ethical decision-making: A systematic literature review and an integrated framework.Franziska Poszler & Benjamin Lange - forthcoming - Technological Forecasting and Social Change.
    With the rise and public accessibility of AI-enabled decision-support systems, individuals outsource increasingly more of their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and consequently called for investigations of the impact of pertinent technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. (...)
  33. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a (...)
    1 citation
  34. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):472-478.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is (...)
    6 citations
  35. Praise as Moral Address.Daniel Telech - 2021 - Oxford Studies in Agency and Responsibility 7.
    While Strawsonians have focused on the way in which our “reactive attitudes”—the emotions through which we hold one another responsible for manifestations of morally significant quality of regard—express moral demands, serious doubt has been cast on the idea that non-blaming reactive attitudes direct moral demands to their targets. Building on Gary Watson’s proposal that the reactive attitudes are ‘forms of moral address’, this paper advances a communicative view of praise according to which the form of moral (...)
    12 citations
  36. Ethical Permissibility of Using Artificial Intelligence through the Lens of Al-Farabi's Theory on Natural Rights and Prosperity.Mohamad Mahdi Davar - 2024 - Legal Civilization 6 (18):195-200.
    The discussion of artificial intelligence (AI) as a newly emerging phenomenon in the present era has always been faced with various ethical challenges. The expansion of artificial intelligence is inevitable, and since this phenomenon is related to the human and social world, anything related to humans and society falls within the realm of morality and rights. In doing so, it must be understood whether the use of artificial intelligence is an ethical matter or not. Furthermore, do humans (...)
  37. The moral status of conscious subjects.Joshua Shepherd - forthcoming - In Stephen Clarke, Hazem Zohny & Julian Savulescu (eds.), Rethinking Moral Status.
    The chief themes of this discussion are as follows. First, we need a theory of the grounds of moral status that could guide practical considerations regarding how to treat the wide range of potentially conscious entities with which we are acquainted – injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. I offer an account of phenomenal value that focuses on the structure and sophistication of phenomenally conscious states at a time and over time in the mental (...)
    3 citations
  38. Minding the Future: Artificial Intelligence, Philosophical Visions and Science Fiction.Barry Francis Dainton, Will Slocombe & Attila Tanyi (eds.) - 2021 - Springer.
    Bringing together literary scholars, computer scientists, ethicists, philosophers of mind, and scholars from affiliated disciplines, this collection of essays offers important and timely insights into the pasts, presents, and, above all, possible futures of Artificial Intelligence. This book covers topics such as ethics and morality, identity and selfhood, and broader issues about AI, addressing questions about the individual, social, and existential impacts of such technologies. Through the works of science fiction authors such as Isaac Asimov, Stanislaw Lem, Ann Leckie, (...)
  39. Moral Implications of Data-Mining, Key-word Searches, and Targeted Electronic Surveillance.Michael Skerker - 2015 - In Bradley J. Strawser, Fritz Allhoff & Adam Henschke (eds.), Binary Bullets.
    This chapter addresses the morality of two types of national security electronic surveillance (SIGINT) programs: the analysis of communication “metadata” and dragnet searches for keywords in electronic communication. The chapter develops a standard for assessing coercive government action based on respect for the autonomy of inhabitants of liberal states and argues that both types of SIGINT can potentially meet this standard. That said, the collection of metadata creates opportunities for abuse of power, and so judgments about the trustworthiness and competence (...)
  40. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. (...)
    1 citation
  41. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them.Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  42. Machine Intentionality, the Moral Status of Machines, and the Composition Problem.David Leech Anderson - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 312-333.
    According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape (...)
    3 citations
  43. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness.Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 15-25.
    This paper investigates the claim that artificial Intelligence Systems cannot be held morally responsible because they do not have an ability for agential self-awareness e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a (...)
    1 citation
  44. Limits of Intelligibility: Issues from Kant and Wittgenstein.Jens Pier (ed.) - 2023 - London: Routledge.
    The essays in this volume investigate the question of where, and in what sense, the bounds of intelligible thought, knowledge, and speech are to be drawn. Is there a way in which we are limited in what we think, know, and say? And if so, does this mean that we are constrained – that there is something beyond the ken of human intelligibility of which we fall short? Or is there another way to think about these limits of intelligibility – (...)
  45. Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
  46. Moral Realism and Philosophical Angst.Joshua Blanchard - 2020 - In Russ Shafer-Landau (ed.), Oxford Studies in Metaethics Volume 15. Oxford University Press.
    This paper defends pro-realism, the view that it is better if moral realism is true rather than any of its rivals. After offering an account of philosophical angst, I make three general arguments. The first targets nihilism: in securing the possibility of moral justification and vindication in objecting to certain harms, moral realism secures something that is non-morally valuable and even essential to the meaning and intelligibility of our lives. The second argument targets antirealism: moral realism (...)
  47. Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model.Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents (...)
  48. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral (...)
  49. The Rights of Foreign Intelligence Targets.Michael Skerker - 2021 - In Seumas Miller, Mitt Regan & Patrick Walsh (eds.), National Security Intelligence and Ethics. Routledge. pp. 89-106.
    I develop a contractualist theory of just intelligence collection based on the collective moral responsibility to deliver security to a community and use the theory to justify certain kinds of signals interception. I also consider the rights of various intelligence targets like intelligence officers, service personnel, government employees, militants, and family members of all of these groups in order to consider how targets' waivers or forfeitures might create the moral space for just surveillance. Even people (...)
  50. Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force (...)