Results for 'moral AIs, hybrid system, moral disagreement problem, opacity problem'

969 found
  1. A pluralist hybrid model for moral AIs. Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degree to which A.I.s and machines are applied across different social contexts, the need for implementing ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations can be partly alleviated by it. The (...)
    1 citation
  2. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
    34 citations
  3. Moral Relativism and Moral Disagreement. Jussi Suikkanen - 2024 - In Maria Baghramian, J. Adam Carter & Rach Cosker-Rowland (eds.), Routledge Handbook of Philosophy of Disagreement. New York, NY: Routledge.
    This chapter focuses on the connection between moral disagreement and moral relativism. Moral relativists, generally speaking, think both (i) that there is no unique objectively correct moral standard and (ii) that the rightness and wrongness of an action depends in some way on a moral standard accepted by some group or an individual. This chapter will first consider the metaphysical and epistemic arguments for moral relativism that begin from the premise that there is (...)
  4. Why Does AI Lie So Much? The Problem Is More Deep Rooted Than You Think. Mir H. S. Quadri - 2024 - Arkinfo Notes.
    The rapid advancements in artificial intelligence, particularly in natural language processing, have brought to light a critical challenge, i.e., the semantic grounding problem. This article explores the root causes of this issue, focusing on the limitations of connectionist models that dominate current AI research. By examining Noam Chomsky's theory of Universal Grammar and his critiques of connectionism, I highlight the fundamental differences between human language understanding and AI language generation. Introducing the concept of semantic grounding, I emphasise the need (...)
  5. Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  6. AI and the expert; a blueprint for the ethical use of opaque AI. Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    3 citations
  7. Stretching the notion of moral responsibility in nanoelectronics by applying AI. Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
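    The chapter's working definition of machine learning (a system that acquires its behaviour from data rather than from explicit programming) can be made concrete with a small example. The sketch below is purely illustrative and not drawn from the chapter; it assumes Python as notation and shows a classic single-unit perceptron learning the logical AND function from labelled examples instead of having the rule hand-coded.

        # Illustrative sketch only: learning from data without being
        # programmed explicitly. No AND rule is written anywhere below;
        # the parameters are adjusted from labelled examples.
        examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

        w1, w2, b = 0.0, 0.0, 0.0  # parameters learned from data
        lr = 0.1                   # learning rate

        for _ in range(20):        # a few passes over the examples
            for (x1, x2), target in examples:
                prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
                error = target - prediction
                # Perceptron update: nudge parameters toward the target.
                w1 += lr * error * x1
                w2 += lr * error * x2
                b += lr * error

        # The learned parameters now reproduce AND: prints [0, 0, 0, 1]
        print([1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
               for (x1, x2), _ in examples])

    Deep learning, as the abstract notes, builds on the same principle with many stacked layers of such units.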
  8. AI Alignment vs. AI Ethical Treatment: Ten Challenges. Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has (...)
  9. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this (...)
    4 citations
  10. Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement. Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, (...)
    4 citations
  11. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
    1 citation
  12. The Problem of Musical Creativity and its Relevance for Ethical and Legal Decisions towards Musical AI. Ivano Zanzarella - manuscript
    Because of its non-representational nature, music has always had familiarity with computational and algorithmic methodologies for automatic composition and performance. Today, AI and computer technology are transforming systems of automatic music production from passive means within musical creative processes into ever more autonomous active collaborators of human musicians. This raises a large number of interrelated questions both about the theoretical problems of artificial musical creativity and about its ethical consequences. Considering two of the most urgent ethical problems of Musical AI (...)
  13. Problems of Using Autonomous Military AI Against the Background of Russia's Military Aggression Against Ukraine. Oleksii Kostenko, Tyler Jaynes, Dmytro Zhuravlov, Oleksii Dniprov & Yana Usenko - 2022 - Baltic Journal of Legal and Social Sciences 2022 (4):131-145.
    The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially alongside concern for its controllability. The lack of public, state, and international control over AI technologies creates large-scale risks of using such software and hardware that (un)intentionally harm humanity. The events of recent months and years, specifically regarding the Russian Federation’s war against its democratic neighbour Ukraine and other international conflicts of note, support the thesis that the uncontrolled use of AI, especially (...)
  14. (1 other version) Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems. Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
    2 citations
  15. Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to banking and (...)
    1 citation
  16. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas. Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (in cases of both successful and unsuccessful output). This is due to the fact that the classical problem of opacity in artificial neural networks (ANNs) explodes with the adoption of current Deep Learning techniques [LeCun, Bengio, Hinton, 2015]. In this paper we argue that the explanatory deficit of such techniques represents an important problem that limits their adoption in the cognitive modelling and computational cognitive science arena. In particular, we will show how the current attempts at providing explanations of deep nets' behaviour (see e.g. [Ritter et al. 2017]) are not satisfactory. As a possible way out of this problem, we present two different research strategies. The first strategy aims at dealing with the opacity problem by providing a more abstract interpretation of neural mechanisms and representations. This approach is adopted, for example, by the biologically inspired SPAUN architecture [Eliasmith et al., 2012] and by other proposals suggesting, for example, the interpretation of neural networks in terms of the Conceptual Spaces framework [Gärdenfors 2000; Lieto, Chella and Frixione, 2017]. All such proposals presuppose that the neural level of representation can be considered somehow irrelevant for attacking the problem of explanation [Lieto, Lebiere and Oltramari, 2017]. In our opinion, pursuing this research direction can still preserve the use of deep learning techniques in artificial cognitive models, provided that novel and additional results in terms of “transparency” are obtained. The second strategy is somewhat at odds with the previous one and tries to address the explanatory issue by avoiding directly solving the “opacity” problem. In this case, the idea is that of resorting to pre-compiled, plausible explanatory models of the world used in combination with deep nets (see e.g. [Augello et al. 2017]). We argue that this research agenda, even if it does not directly fit the explanatory needs of Computational Cognitive Science, can still be useful to provide results in the area of applied AI, shedding light on models of interaction between low-level and high-level tasks (e.g. between perceptual categorization and explanation) in artificial systems.
  17. AI, alignment, and the categorical imperative. Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a (...)
  18. AI Can Help Us Live More Deliberately. Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation (...)
    2 citations
  19. Beyond Competence: Why AI Needs Purpose, Not Just Programming. Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based (...)
  20. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of (...)
    1 citation
  21. Morality First? Nathaniel Sharadin - forthcoming - AI and Society:1-13.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first”, introduces a new set of challenges and highlights an important but otherwise obscured (...)
    1 citation
  22. Moral Perspective from a Holistic Point of View for Weighted Decision-Making and its Implications for the Processes of Artificial Intelligence. Mina Singh, Devi Ram, Sunita Kumar & Suresh Das - 2023 - International Journal of Research Publication and Reviews 4 (1):2223-2227.
    In the case of AI, automated systems are making increasingly complex decisions with significant ethical implications, raising questions about who is responsible for decisions made by AI and how to ensure that these decisions align with society's ethical and moral values, both in India and the West. Jonathan Haidt has conducted research on moral and ethical decision-making. Today, solving problems like decision-making in autonomous vehicles can draw on the literature of the trolley dilemma in that it illustrates the (...)
  23. On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do (...)
  24. Moral disagreement and non-moral ignorance. Nicholas Smyth - 2019 - Synthese 198 (2):1089-1108.
    The existence of deep and persistent moral disagreement poses a problem for a defender of moral knowledge. It seems particularly clear that a philosopher who thinks that we know a great many moral truths should explain how human populations have failed to converge on those truths. In this paper, I do two things. First, I show that the problem is more difficult than it is often taken to be, and second, I criticize a popular (...)
    3 citations
  25. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems (...)
    2 citations
  26. Moral Attitudes for Non-Cognitivists: Solving the Specification Problem. Gunnar Björnsson & Tristram McPherson - 2014 - Mind 123 (489):1-38.
    Moral non-cognitivists hope to explain the nature of moral agreement and disagreement as agreement and disagreement in non-cognitive attitudes. In doing so, they take on the task of identifying the relevant attitudes, distinguishing the non-cognitive attitudes corresponding to judgements of moral wrongness, for example, from attitudes involved in aesthetic disapproval or the sports fan’s disapproval of her team’s performance. We begin this paper by showing that there is a simple recipe for generating apparent counterexamples to (...)
    31 citations
  27. The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
    4 citations
  28. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary (...)
    1 citation
  29. Deontology and Safe Artificial Intelligence. William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. (...)
    1 citation
  30. Metalinguistic negotiations in moral disagreement. Renée Jorgensen Bolinger - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy 65 (3):352-380.
    The problem of moral disagreement has been presented as an objection to contextualist semantics for ‘ought’, since it is not clear that contextualism can accommodate or give a convincing gloss of such disagreement. I argue that independently of our semantics, disagreements over ‘ought’ in non-cooperative contexts are best understood as indirect metalinguistic disputes, which is easily accommodated by contextualism. If this is correct, then rather than posing a problem for contextualism, the data from moral (...)
    6 citations
  31. Mathematical and Moral Disagreement. Silvia Jonas - 2020 - Philosophical Quarterly 70 (279):302-327.
    The existence of fundamental moral disagreements is a central problem for moral realism and has often been contrasted with an alleged absence of disagreement in mathematics. However, mathematicians do in fact disagree on fundamental questions, for example on which set-theoretic axioms are true, and some philosophers have argued that this increases the plausibility of moral vis-à-vis mathematical realism. I argue that the analogy between mathematical and moral disagreement is not as straightforward as those (...)
    5 citations
  32. Does Deep Moral Disagreement Exist in Real Life? Serhiy Kiš - 2023 - Organon F: Medzinárodný Časopis Pre Analytickú Filozofiu 30 (3):255-277.
    The existence of deep moral disagreement is used in support of views ranging from moral relativism to the impossibility of moral expertise. This is done despite the fact that it is not at all clear whether deep moral disagreements actually occur, as the examples usually given are never of real-life situations but of generalized debates on controversial issues. This paper tries to remedy this, as the strength of any argument appealing to deep moral disagreement partly depends on the fact that the disagreement exists. It does so by showing that some real-life conflicts that are intractable, i.e. notoriously difficult to resolve, share some important features with deep moral disagreement. The article also deals with the objection that mere conceptual possibility renders illustrations of actually occurring deep moral disagreements unnecessary. The problem with such an objection is that it depends on theoretical assumptions (i.e. the denial of moral realism) that are not uncontroversial. Instead, the article claims we need not merely suppose that deep moral disagreements exist, because they actually occur when some intractable conflicts occur. Thus, as far as the existence of deep moral disagreement is concerned, the arguments appealing to it are safe. But since intractable conflicts can be resolved, by seeing deep moral disagreements as a constitutive part of them, we might have to consider whether deep moral disagreements are resolvable too. A brief suggestion of how that might look is given at the end of the paper.
  33. Digital suffering: why it's a problem and how to prevent it. Bradford Saad & Adam Bradley - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy (...)
    4 citations
  34. A way forward for responsibility in the age of AI. Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral (...)
  35. A Good Friend Will Help You Move a Body: Friendship and the Problem of Moral Disagreement. Daniel Koltonski - 2016 - Philosophical Review 125 (4):473-507.
    On the shared-ends account of close friendship, proper care for a friend as an agent requires seeing yourself as having important reasons to accommodate and promote the friend’s valuable ends for her own sake. However, that friends share ends doesn't inoculate them against disagreements about how to pursue those ends. This paper defends the claim that, in certain circumstances of reasonable disagreement, proper care for a friend as a practical and moral agent sometimes requires allowing her judgment to (...)
    10 citations
  36. Dubito Ergo Sum: Exploring AI Ethics. Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt (...)
  37. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
    1 citation
  38. Fundamental Issues of Artificial Intelligence. Vincent C. Müller (ed.) - 2016 - Cham: Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
    8 citations
  39. Moral Disagreement and the" Fact/Value Entanglement".Ángel Manuel Faerna - 2008 - Poznan Studies in the Philosophy of the Sciences and the Humanities 95 (1):245-264.
    In his recent work, "The Collapse of the Fact-Value Dichotomy," Hilary Putnam traces the history of the fact-value dichotomy from Hume to Stevenson and Logical Positivism. The aim of this historical reconstruction is to undermine the foundations of the dichotomy, showing that it is of a piece with the dichotomy - untenable, as we know now - of "analytic" and "synthetic" judgments. Putnam's own thesis is that facts and values are "entangled" in a way that precludes any attempt to draw (...)
  40. Models, Algorithms, and the Subjects of Transparency. Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or (...)
  41. Emergent Models for Moral AI Spirituality. Mark Graves - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):7-15.
    Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. Using (...)
  42. Moral Realism and the Problem of Moral Aliens. Thomas Grundmann - 2020 - Logos and Episteme 11 (3):305-321.
    In this paper, I discuss a new problem for moral realism, the problem of moral aliens. In the first section, I introduce this problem. Moral aliens are people who radically disagree with us concerning moral matters. Moral aliens are neither obviously incoherent nor do they seem to lack rational support from their own perspective. On the one hand, moral realists claim that we should stick to our guns when we encounter moral aliens. On the other hand, moral realists, in contrast to anti-realists, seem to be committed to an epistemic symmetry between us and our moral aliens that forces us into rational suspension of our moral beliefs. Unless one disputes the very possibility of moral aliens, this poses a severe challenge to the moral realist. In the second section, I will address this problem. It will turn out that, on closer scrutiny, we cannot make any sense of the idea that moral aliens should be taken as our epistemic peers. Consequently, there is no way to argue that encountering moral aliens gives us any reason to revise our moral beliefs. If my argument is correct, the possibility of encountering moral aliens poses no real threat to moral realism.
  43. Moral intuitionism and disagreement. Brian Besong - 2014 - Synthese 191 (12):2767-2789.
    According to moral intuitionism, at least some moral seeming states are justification-conferring. The primary defense of this view currently comes from advocates of the standard account, who take the justification-conferring power of a moral seeming to be determined by its phenomenological credentials alone. However, the standard account is vulnerable to a problem. In brief, the standard account implies that moral knowledge is seriously undermined by those commonplace moral disagreements in which both agents have equally (...)
    27 citations
  44. "Jewish Law, Techno-Ethics, and Autonomous Weapon Systems: Ethical-Halakhic Perspectives".Nadav S. Berman - 2020 - Jewish Law Association Studies 29:91-124.
    Techno-ethics is the area in the philosophy of technology which deals with emerging robotic and digital AI technologies. In the last decade, a new techno-ethical challenge has emerged: Autonomous Weapon Systems (AWS), defensive and offensive (the article deals only with the latter). Such AI-operated lethal machines of various forms (aerial, marine, continental) raise substantial ethical concerns. Interestingly, the topic of AWS has received almost no treatment in Jewish law and its scholarship. This article thus proposes an introductory ethical-halakhic perspective on AWS, (...)
  45. The Problem with Disagreement on Social Media: Moral not Epistemic. Elizabeth Edenberg - 2021 - In Elizabeth Edenberg & Michael Hannon (eds.), Political Epistemology. Oxford: Oxford University Press.
    Intractable political disagreements threaten to fracture the common ground upon which we can build a political community. The deepening divisions in society are partly fueled by the ways social media has shaped political engagement. Social media allows us to sort ourselves into increasingly likeminded groups, consume information from different sources, and end up in polarized and insular echo chambers. To solve this, many argue for various ways of cultivating more responsible epistemic agency. This chapter argues that this epistemic lens does (...)
    3 citations
  46. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral (...)
  47. Perspectival shapes are viewpoint-dependent relational properties. Tony Cheng, Yi Lin & Chen-Wei Wu - 2022 - Psychological Review (1):307-310.
    Recently, there is a renewed debate concerning the role of perspective in vision. Morales et al. (2020) present evidence that, in the case of viewing a rotated coin, the visual system is sensitive to what has often been called “perspectival shapes.” It has generated vigorous discussions, including an online symposium by Morales and Cohen, an exchange between Linton (2021) and Morales et al. (2021), and most recently, a fierce critique by Burge and Burge (2022), in which they launch various conceptual (...)
    1 citation
  48. Angelito Enriquez Malicse Solution to Freewill Problem - Comparison in Existing Theory. Angelito Malicse - manuscript - Translated by Angelito Malicse.
    Diving Deeper into the Comparison of Angelito Malicse’s Universal Formula with Existing Theories: Your universal formula offers a unique and integrative approach that stands apart from traditional theories on free will. Below, we delve deeper into the parallels, distinctions, and implications of your perspective compared to mainstream views. 1. Cause-and-Effect: Your Karma-Based System vs. Determinism. Determinists argue that every decision is the inevitable result of prior causes, leaving no room for genuine freedom. From this view, (...)
  49. AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to (...)
    6 citations
  50. AI systems must not confuse users about their sentience or moral status. Eric Schwitzgebel - 2023 - Patterns 4.
    One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some (...)
    2 citations
Results 1-50 of 969