Results for 'moral AIs, hybrid system, moral disagreement problem, opacity problem'

1000+ found
  1. A pluralist hybrid model for moral AIs.Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    As A.I.s and machines are increasingly applied across different social contexts, the need to implement ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral A.I.s can be partly alleviated by it. The (...)
  2. Moral Relativism and Moral Disagreement.Jussi Suikkanen - forthcoming - In Maria Baghramian, J. Adam Carter & Rach Cosker-Rowland (eds.), Routledge Handbook of Disagreement. Routledge.
    This chapter focuses on the connection between moral disagreement and moral relativism. Moral relativists, generally speaking, think both (i) that there is no unique objectively correct moral standard and (ii) that the rightness and wrongness of an action depends in some way on a moral standard accepted by some group or an individual. This chapter will first consider the metaphysical and epistemic arguments for moral relativism that begin from the premise that there is (...)
  3. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  4. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  5. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
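    The chapter's gloss on machine learning, "the capacity to learn from data without being programmed explicitly", can be made concrete with a minimal sketch of the kind of mathematical, probabilistic model it describes. The toy two-layer network below, with its random parameters and input, is an illustrative sketch, not material from the chapter:

      import numpy as np

      # Minimal two-layer neural network: a purely mathematical, probabilistic
      # model whose parameters would be fitted to data rather than hand-coded.
      rng = np.random.default_rng(0)

      def forward(x, w1, b1, w2, b2):
          h = np.tanh(x @ w1 + b1)           # hidden layer: nonlinear feature map
          logits = h @ w2 + b2
          e = np.exp(logits - logits.max())  # softmax turns scores into
          return e / e.sum()                 # class probabilities

      # Toy input and randomly initialized parameters; training (e.g. gradient
      # descent on observed data) would adjust w1, b1, w2, b2.
      x = rng.normal(size=3)
      w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
      w2, b2 = rng.normal(size=(4, 2)), np.zeros(2)
      print(forward(x, w1, b1, w2, b2))      # two probabilities summing to 1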
  6. Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems.Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
  7. The Problem of Musical Creativity and its Relevance for Ethical and Legal Decisions towards Musical AI.Ivano Zanzarella - manuscript
    Because of its non-representational nature, music has always been amenable to computational and algorithmic methodologies for automatic composition and performance. Today, AI and computer technology are transforming systems of automatic music production from passive means within musical creative processes into ever more autonomous, active collaborators of human musicians. This raises a large number of interrelated questions both about the theoretical problems of artificial musical creativity and about its ethical consequences. Considering two of the most urgent ethical problems of Musical AI (...)
  8. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a (...)
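    Kim, Hooker, and Donaldson's proposal is cast in deontic modal logic; the toy check below is only a rough illustration of the universalization principle the abstract mentions, asking whether a maxim's goal would survive everyone adopting it. The example maxims and the goal_achieved world model are hypothetical stand-ins, not the authors' formalism:

      # Toy universalization test: a maxim passes only if its goal is still
      # achievable when every agent acts on it (adoption_rate = 1.0).

      def goal_achieved(maxim: str, adoption_rate: float) -> bool:
          """Hypothetical world model: does the maxim's goal still succeed
          when the given fraction of agents acts on it?"""
          if maxim == "promise falsely to obtain a loan":
              # Lending presupposes trust; universal false promising destroys it.
              return adoption_rate < 0.5
          if maxim == "repay debts as promised":
              return True  # effective no matter how many adopt it
          raise ValueError(f"unknown maxim: {maxim}")

      def universalizable(maxim: str) -> bool:
          return goal_achieved(maxim, adoption_rate=1.0)

      for m in ("promise falsely to obtain a loan", "repay debts as promised"):
          verdict = "passes" if universalizable(m) else "fails"
          print(f"{m!r} {verdict} the universalization test")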
  9. Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement.Trystan S. Goetze - 2022 - Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
    When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, (...)
  10. Problems of Using Autonomous Military AI Against the Background of Russia's Military Aggression Against Ukraine.Oleksii Kostenko, Tyler Jaynes, Dmytro Zhuravlov, Oleksii Dniprov & Yana Usenko - 2022 - Baltic Journal of Legal and Social Sciences 2022 (4):131-145.
    The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially alongside concern for its controllability. The lack of public, state, and international control over AI technologies creates large-scale risks of using such software and hardware that (un)intentionally harm humanity. The events of recent months and years, specifically regarding the Russian Federation’s war against its democratic neighbour Ukraine and other international conflicts of note, support the thesis that the uncontrolled use of AI, especially (...)
  11. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this (...)
  12. Moral disagreement and non-moral ignorance.Nicholas Smyth - 2019 - Synthese 198 (2):1089-1108.
    The existence of deep and persistent moral disagreement poses a problem for a defender of moral knowledge. It seems particularly clear that a philosopher who thinks that we know a great many moral truths should explain how human populations have failed to converge on those truths. In this paper, I do two things. First, I show that the problem is more difficult than it is often taken to be, and second, I criticize a popular (...)
  13. Moral Attitudes for Non-Cognitivists: Solving the Specification Problem.Gunnar Björnsson & Tristram McPherson - 2014 - Mind 123 (489):1-38.
    Moral non-cognitivists hope to explain the nature of moral agreement and disagreement as agreement and disagreement in non-cognitive attitudes. In doing so, they take on the task of identifying the relevant attitudes, distinguishing the non-cognitive attitudes corresponding to judgements of moral wrongness, for example, from attitudes involved in aesthetic disapproval or the sports fan’s disapproval of her team’s performance. We begin this paper by showing that there is a simple recipe for generating apparent counterexamples to (...)
  14. Metalinguistic negotiations in moral disagreement.Renée Jorgensen Bolinger - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy 65 (3):352-380.
    The problem of moral disagreement has been presented as an objection to contextualist semantics for ‘ought’, since it is not clear that contextualism can accommodate or give a convincing gloss of such disagreement. I argue that independently of our semantics, disagreements over ‘ought’ in non-cooperative contexts are best understood as indirect metalinguistic disputes, which is easily accommodated by contextualism. If this is correct, then rather than posing a problem for contextualism, the data from moral (...)
  15. AI Can Help Us Live More Deliberately.Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation (...)
  16. A Good Friend Will Help You Move a Body: Friendship and the Problem of Moral Disagreement.Daniel Koltonski - 2016 - Philosophical Review 125 (4):473-507.
    On the shared-ends account of close friendship, proper care for a friend as an agent requires seeing yourself as having important reasons to accommodate and promote the friend’s valuable ends for her own sake. However, that friends share ends doesn't inoculate them against disagreements about how to pursue those ends. This paper defends the claim that, in certain circumstances of reasonable disagreement, proper care for a friend as a practical and moral agent sometimes requires allowing her judgment to (...)
  17. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems (...)
  18. Mathematical and Moral Disagreement.Silvia Jonas - 2020 - Philosophical Quarterly 70 (279):302-327.
    The existence of fundamental moral disagreements is a central problem for moral realism and has often been contrasted with an alleged absence of disagreement in mathematics. However, mathematicians do in fact disagree on fundamental questions, for example on which set-theoretic axioms are true, and some philosophers have argued that this increases the plausibility of moral vis-à-vis mathematical realism. I argue that the analogy between mathematical and moral disagreement is not as straightforward as those (...)
  19. Moral Disagreement and the "Fact/Value Entanglement".Ángel Manuel Faerna - 2008 - Poznan Studies in the Philosophy of the Sciences and the Humanities 95 (1):245-264.
    In his recent work, "The Collapse of the Fact/Value Dichotomy," Hilary Putnam traces the history of the fact-value dichotomy from Hume to Stevenson and Logical Positivism. The aim of this historical reconstruction is to undermine the foundations of the dichotomy, showing that it is of a piece with the dichotomy - untenable, as we know now - of "analytic" and "synthetic" judgments. Putnam's own thesis is that facts and values are "entangled" in a way that precludes any attempt to draw (...)
  20. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to banking and (...)
  21. Does Deep Moral Disagreement Exist in Real Life?Serhiy Kiš - 2023 - Organon F: Medzinárodný Časopis Pre Analytickú Filozofiu 30 (3):255-277.
    The existence of deep moral disagreement is used in support of views ranging from moral relativism to the impossibility of moral expertise. This is done despite the fact that it is not at all clear whether deep moral disagreements actually occur, as the usually given examples are never of real life situations, but of some generalized debates on controversial issues. The paper will try to remedy this, as the strength of any argument appealing to deep moral disagreement partly depends on the fact that the disagreement exists. This will be done by showing that some real life conflicts that are intractable, i.e. notoriously difficult to resolve, share some important features with deep moral disagreement. The article also deals with the objection that mere conceptual possibility renders illustrations of actually occurring deep moral disagreements unnecessary. The problem with such an objection is that it depends on theoretical assumptions (i.e. the denial of moral realism) that are not uncontroversial. Instead, the article claims, we need not merely suppose that deep moral disagreements exist, because they actually occur when some intractable conflicts occur. Thus, as far as the existence of deep moral disagreement is concerned, the arguments appealing to it are safe. But since intractable conflicts can be resolved, by seeing deep moral disagreements as a constitutive part of them we may have to consider whether deep moral disagreements are resolvable too. A brief suggestion of how that might look is given at the end of the paper.
  22. Moral Perspective from a Holistic Point of View for Weighted Decision-Making and its Implications for the Processes of Artificial Intelligence.Mina Singh, Devi Ram, Sunita Kumar & Suresh Das - 2023 - International Journal of Research Publication and Reviews 4 (1):2223-2227.
    In the case of AI, automated systems are making increasingly complex decisions with significant ethical implications, raising questions about who is responsible for decisions made by AI and how to ensure that these decisions align with society's ethical and moral values, both in India and the West. Jonathan Haidt has conducted research on moral and ethical decision-making. Today, solving problems like decision-making in autonomous vehicles can draw on the literature of the trolley dilemma in that it illustrates the (...)
  23. Dubito Ergo Sum: Exploring AI Ethics.Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt (...)
  24. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas.Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (in cases of both successful and unsuccessful output). This is due to the fact that the classical problem of opacity in artificial neural networks (ANNs) explodes with the adoption of current Deep Learning techniques [LeCun, Bengio, Hinton, 2015]. In this paper we argue that the explanatory deficit of such techniques represents an important problem that limits their adoption in the cognitive modelling and computational cognitive science arena. In particular, we will show how the current attempts at providing explanations of deep nets' behaviour (see e.g. [Ritter et al. 2017]) are not satisfactory. As a possible way out of this problem, we present two different research strategies. The first strategy aims at dealing with the opacity problem by providing a more abstract interpretation of neural mechanisms and representations. This approach is adopted, for example, by the biologically inspired SPAUN architecture [Eliasmith et al., 2012] and by other proposals suggesting, for example, the interpretation of neural networks in terms of the Conceptual Spaces framework [Gärdenfors 2000; Lieto, Chella and Frixione, 2017]. All such proposals presuppose that the neural level of representation can be considered somehow irrelevant for attacking the problem of explanation [Lieto, Lebiere and Oltramari, 2017]. In our opinion, pursuing this research direction can still preserve the use of deep learning techniques in artificial cognitive models, provided that novel and additional results in terms of "transparency" are obtained. The second strategy is somewhat at odds with the previous one and tries to address the explanatory issue without directly solving the "opacity" problem. In this case, the idea is that of resorting to pre-compiled plausible explanatory models of the world used in combination with deep nets (see e.g. [Augello et al. 2017]). We argue that this research agenda, even if it does not directly fit the explanatory needs of Computational Cognitive Science, can still be useful to provide results in the area of applied AI aiming at shedding light on the models of interaction between low-level and high-level tasks (e.g. between perceptual categorization and explanation) in artificial systems.
  25. Emergent Models for Moral AI Spirituality.Mark Graves - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):7-15.
    Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. Using (...)
  26. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral (...)
  27. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary (...)
  28. On the computational complexity of ethics: moral tractability for minds and machines.Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do (...)
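    To see why computational limits bear on which moral machines are possible, as Stenseke's title suggests, consider a minimal sketch of exhaustive consequentialist plan evaluation: the number of candidate plans grows exponentially with the planning horizon. The action set and utility function below are illustrative assumptions, not the paper's analysis:

      from itertools import product

      ACTIONS = ("help", "wait", "warn")  # hypothetical action repertoire

      def utility(plan):
          # Hypothetical stand-in for a genuine model of consequences.
          return sum({"help": 2, "warn": 1, "wait": 0}[a] for a in plan)

      for horizon in (5, 10, 20):
          n_plans = len(ACTIONS) ** horizon
          print(f"horizon={horizon}: {n_plans:,} candidate plans")

      # Exhaustive search is feasible only at the smallest horizons:
      best = max(product(ACTIONS, repeat=5), key=utility)
      print("best 5-step plan:", best)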
  29. Moral Realism and the Problem of Moral Aliens.Thomas Grundmann - 2020 - Logos and Episteme 11 (3):305-321.
    In this paper, I discuss a new problem for moral realism, the problem of moral aliens. In the first section, I introduce this problem. Moral aliens are people who radically disagree with us concerning moral matters. Moral aliens are neither obviously incoherent nor do they seem to lack rational support from their own perspective. On the one hand, moral realists claim that we should stick to our guns when we encounter moral aliens. On the other hand, moral realists, in contrast to anti-realists, seem to be committed to an epistemic symmetry between us and our moral aliens that forces us into rational suspension of our moral beliefs. Unless one disputes the very possibility of moral aliens, this poses a severe challenge to the moral realist. In the second section, I will address this problem. It will turn out that, on closer scrutiny, we cannot make any sense of the idea that moral aliens should be taken as our epistemic peers. Consequently, there is no way to argue that encountering moral aliens gives us any reason to revise our moral beliefs. If my argument is correct, the possibility of encountering moral aliens poses no real threat to moral realism.
  30. Digital suffering: why it's a problem and how to prevent it.Bradford Saad & Adam Bradley - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy (...)
  31. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to (...)
  32. Moral intuitionism and disagreement.Brian Besong - 2014 - Synthese 191 (12):2767-2789.
    According to moral intuitionism, at least some moral seeming states are justification-conferring. The primary defense of this view currently comes from advocates of the standard account, who take the justification-conferring power of a moral seeming to be determined by its phenomenological credentials alone. However, the standard account is vulnerable to a problem. In brief, the standard account implies that moral knowledge is seriously undermined by those commonplace moral disagreements in which both agents have equally (...)
  33. The Problem with Disagreement on Social Media: Moral not Epistemic.Elizabeth Edenberg - 2021 - In Elizabeth Edenberg & Michael Hannon (eds.), Political Epistemology. Oxford, UK: Oxford University Press.
    Intractable political disagreements threaten to fracture the common ground upon which we can build a political community. The deepening divisions in society are partly fueled by the ways social media has shaped political engagement. Social media allows us to sort ourselves into increasingly likeminded groups, consume information from different sources, and end up in polarized and insular echo chambers. To solve this, many argue for various ways of cultivating more responsible epistemic agency. This chapter argues that this epistemic lens does (...)
  34. Autonomous Weapons Systems, the Frame Problem and Computer Security.Michał Klincewicz - 2015 - Journal of Military Ethics 14 (2):162-176.
    Unlike human soldiers, autonomous weapons systems are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security risks and will (...)
  35. Problems of Religious Luck: Assessing the Limits of Reasonable Religious Disagreement.Guy Axtell - 2019 - Lanham, MD, USA & London, UK: Lexington Books/Rowman & Littlefield.
    To speak of being religiously lucky certainly sounds odd. But then, so does “My faith holds value in God’s plan, while yours does not.” This book argues that these two concerns — with the concept of religious luck and with asymmetric or sharply differential ascriptions of religious value — are inextricably connected. It argues that religious luck attributions can profitably be studied from a number of directions, not just theological, but also social scientific and philosophical. There is a strong tendency (...)
  36. Moral Relativism, Metalinguistic Negotiation, and the Epistemic Significance of Disagreement.Katharina Anna Sodoma - 2021 - Erkenntnis 88 (4):1621-1641.
    Although moral relativists often appeal to cases of apparent moral disagreement between members of different communities to motivate their view, accounting for these exchanges as evincing genuine disagreements constitutes a challenge to the coherence of moral relativism. While many moral relativists acknowledge this problem, attempts to solve it so far have been wanting. In response, moral relativists either give up the claim that there can be moral disagreement between members of different (...)
  37. Moral Realism and Expert Disagreement.Prabhpal Singh - 2020 - Trames: A Journal of the Humanities and Social Sciences 24 (3):441-457.
    SPECIAL ISSUE ON DISAGREEMENTS: The fact of moral disagreement is often raised as a problem for moral realism. The idea is that disagreement amongst people or communities on moral issues is to be taken as evidence that there are no objective moral facts. While the fact of ‘folk’ moral disagreement has been of interest, the fact of expert moral disagreement, that is, widespread and longstanding disagreement amongst expert moral philosophers, is even more compelling. In this paper, I present three arguments against the anti-realist explanation for widespread and longstanding disagreement amongst expert moral philosophers. Each argument shows the argument from expert disagreement for moral anti-realism, that is, the denial of morality’s objectivity, to be in one way or another self-undermining. I conclude that widespread and longstanding disagreement amongst expert moral philosophers is not a problem for moral realism.
  38. "Jewish Law, Techno-Ethics, and Autonomous Weapon Systems: Ethical-Halakhic Perspectives".Nadav S. Berman - 2020 - Jewish Law Association Studies 29:91-124.
    Techno-ethics is the area in the philosophy of technology which deals with emerging robotic and digital AI technologies. In the last decade, a new techno-ethical challenge has emerged: Autonomous Weapon Systems (AWS), defensive and offensive (the article deals only with the latter). Such AI-operated lethal machines of various forms (aerial, marine, continental) raise substantial ethical concerns. Interestingly, the topic of AWS has scarcely been treated in Jewish law and its scholarship. This article thus proposes an introductory ethical-halakhic perspective on AWS, (...)
  39. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level (...)
  40. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences (...)
  41. Hybridizing Moral Expressivism and Moral Error Theory.Toby Svoboda - 2011 - Journal of Value Inquiry 45 (1):37-48.
    Philosophers should consider a hybrid meta-ethical theory that includes elements of both moral expressivism and moral error theory. Proponents of such an expressivist-error theory hold that all moral utterances are either expressions of attitudes or expressions of false beliefs. Such a hybrid theory has two advantages over pure expressivism, because hybrid theorists can offer a more plausible account of the moral utterances that seem to be used to express beliefs, and hybrid theorists (...)
  42. Moral Experts, Deference & Disagreement.Jonathan Matheson, Nathan Nobis & Scott McElreath - 2018 - In Nathan Nobis, Scott McElreath & Jonathan Matheson (eds.), Moral Expertise. Springer Verlag.
    We sometimes seek expert guidance when we don’t know what to think or do about a problem. In challenging cases concerning medical ethics, we may seek a clinical ethics consultation for guidance. The assumption is that the bioethicist, as an expert on ethical issues, has knowledge and skills that can help us better think about the problem and improve our understanding of what to do regarding the issue. The widespread practice of ethics consultations raises these questions and more: (...)
  43. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85--87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
  44. The Neural Correlates of Consciousness.Jorge Morales & Hakwan Lau - 2020 - In Uriah Kriegel (ed.), The Oxford Handbook of the Philosophy of Consciousness. Oxford: Oxford University Press. pp. 233-260.
    In this chapter, we discuss a selection of current views of the neural correlates of consciousness (NCC). We focus on the different predictions they make, in particular with respect to the role of prefrontal cortex (PFC) during visual experiences, which is an area of critical interest and some source of contention. Our discussion of these views focuses on the level of functional anatomy, rather than at the neuronal circuitry level. We take this approach because we currently understand more about experimental (...)
  45. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes (...)
  46. Moral Responsibility and the Strike Back Emotion: Comments on Bruce Waller’s The Stubborn System of Moral Responsibility.Gregg Caruso - forthcoming - Syndicate Philosophy 1 (1).
    In The Stubborn System of Moral Responsibility (2015), Bruce Waller sets out to explain why the belief in individual moral responsibility is so strong. He begins by pointing out that there is a strange disconnect between the strength of philosophical arguments in support of moral responsibility and the strength of philosophical belief in moral responsibility. While the many arguments in favor of moral responsibility are inventive, subtle, and fascinating, Waller points out that even the most (...)
  47. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  48. The problem of AI identity.Soenke Ziesche & Roman V. Yampolskiy - manuscript
    The problem of personal identity is a longstanding philosophical topic albeit without final consensus. In this article the somewhat similar problem of AI identity is discussed, which has not gained much traction yet, although this investigation is increasingly relevant for different fields, such as ownership issues, personhood of AI, AI welfare, brain–machine interfaces, the distinction between singletons and multi-agent systems as well as to potentially support finding a solution to the problem of personal identity. The AI identity (...)
  49. Models, Algorithms, and the Subjects of Transparency.Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or (...)
  50. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
1 — 50 / 1000