Results for 'responsible AI judgment'

1000+ found
  1. Philosophizing with AI: happiness.Daniel Durante - manuscript
    A few years ago, I wrote a short text illustrating a problematic situation regarding the judgment of whether a particular fictional person, Bento, led a happy life or not. I frequently use this text in my introductory classes as a didactic resource to explain the nature of philosophy, its role in our understanding of the world, and to demonstrate its main challenge: the aporetic nature of philosophical questions. These questions do not yield unanimous or incontrovertible solutions; they always demand (...)
  2. RESPONSIBLE AI: INTRODUCTION OF “NOMADIC AI PRINCIPLES” FOR CENTRAL ASIA.Ammar Younas - 2020 - Conference Proceeding of International Conference Organized by Jizzakh Polytechnical Institute Uzbekistan.
    We think that Central Asia should come up with its own AI Ethics Principles which we propose to name as “Nomadic AI Principles”.
    1 citation
  3. Proposing Central Asian AI Ethics Principles: A Multilevel Approach for Responsible AI.Ammar Younas & Yi Zeng - manuscript
    This paper puts forth Central Asian AI Ethics Principles and proposes a layered strategy tailored for the development of ethical principles in the field of artificial intelligence (AI) in Central Asian countries. This approach includes the customization of AI ethics principles to resonate with local nuances, the formulation of national and regional-level AI Ethics principles, and the implementation of subject-specific principles. While countering the narrative of ineffectiveness of the AI Ethics principles, this paper underscores the importance of stakeholder collaboration, provides (...)
  4. Machine Learning and Irresponsible Inference: Morally Assessing the Training Data for Image Recognition Systems.Owen C. King - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Springer Verlag. pp. 265-282.
    Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they (...)
    2 citations
  5. Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants.Marianna Capasso & Steven Umbrello - 2022 - Medicine, Health Care and Philosophy 25 (1):11-22.
    Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing (...)
    3 citations
  6. Responsibility in Descartes’s Theory of Judgment.Marie Jayasekera - 2016 - Ergo: An Open Access Journal of Philosophy 3:321-347.
    In this paper I develop a new account of the philosophical motivations for Descartes’s theory of judgment. The theory needs explanation because the idea that judgment, or belief, is an operation of the will seems problematic at best, and Descartes does not make clear why he adopted what, at the time, was a novel view. I argue that attending to Descartes’s conception of the will as the active, free faculty of mind reveals that a general concern with responsibility (...)
    5 citations
  7. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility.Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural injustice. (...)
  8. Responsibility Internalism and Responsibility for AI.Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I (...)
  9. Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice.Marie Oldfield - 2021 - AI and Ethics 1 (1):1.
    AI systems that demonstrate significant bias or lower than claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and (...)
    1 citation
  10. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction?Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  11. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks.Trystan S. Goetze - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24).
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
  12. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses.Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir fiction, (...)
  13. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  14. More than Skin Deep: a Response to “The Whiteness of AI”.Shelley Park - 2021 - Philosophy and Technology 34 (4):1961-1966.
    This commentary responds to Stephen Cave and Kanta Dihal’s call for further investigations of the whiteness of AI. My response focuses on three overlapping projects needed to more fully understand racial bias in the construction of AI and its representations in pop culture: unpacking the intersections of gender and other variables with whiteness in AI’s construction, marketing, and intended functions; observing the many different ways in which whiteness is scripted, and noting how white racial framing exceeds white casting and thus (...)
    1 citation
  15. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests.Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential for (...)
  16. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems (...)
  17. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  18. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
    2 citations
  19. Playing the Blame Game with Robots.Markus Kneer & Michael T. Stuart - 2021 - In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21 Companion). New York, NY, USA:
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)
    5 citations
  20. Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT.Jo Ann Oravec - 2023 - Journal of Interactive Learning Research 34 (2).
    Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race” that involves cheating-detection system developers versus technology savvy students is attracting increased attention to cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty and (...)
  21. Generative AI and the value changes and conflicts in its integration in Japanese educational system.Ngoc-Thang B. Le, Phuong-Thao Luu & Manh-Tung Ho - manuscript
    This paper critically examines Japan's approach toward the adoption of Generative AI such as ChatGPT in education via studying media discourse and guidelines at both the national as well as local levels. It highlights the lack of consideration for socio-cultural characteristics inherent in the Japanese educational systems, such as the notion of self, teachers’ work ethics, community-centric activities for the successful adoption of the technology. We reveal ChatGPT’s infusion is likely to further accelerate the shift away from traditional notion of (...)
  22. Moral Judgment and Volitional Incapacity.Antti Kauppinen - 2010 - In J. Campbell, M. O'Rourke & H. Silverstein (eds.), Action, Ethics and Responsibility: Topics in Contemporary Philosophy, Vol. 7. MIT Press.
    The central question of the branch of metaethics we may call philosophical moral psychology concerns the nature or essence of moral judgment: what is it to think that something is right or wrong, good or bad, obligatory or forbidden? One datum in this inquiry is that sincerely held moral views appear to influence conduct: on the whole, people do not engage in behaviours they genuinely consider base or evil, sometimes even when they would stand to benefit from it personally. (...)
    3 citations
  23. How AI can AID bioethics.Walter Sinnott-Armstrong & Joshua August Skorburg - forthcoming - Journal of Practical Ethics.
    This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: Top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about (...)
    1 citation
  24. Intention, Judgement-Dependence and Self-Deception.Ali Hossein Khani - 2023 - Res Philosophica 100 (2):203-226.
    Wright’s judgement-dependent account of intention is an attempt to show that truths about a subject’s intentions can be viewed as constituted by the subject’s own best judgements about those intentions. The judgements are considered to be best if they are formed under certain cognitively optimal conditions, which mainly include the subject’s conceptual competence, attentiveness to the questions about what the intentions are, and lack of any material self-deception. Offering a substantive, non-trivial specification of the no-self-deception condition is one of the (...)
    1 citation
  25. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying (...)
    2 citations
  26. AI-aesthetics and the artificial author.Emanuele Arielli - forthcoming - Proceedings of the European Society for Aesthetics.
    ABSTRACT. Consider this scenario: you discover that an artwork you greatly admire, or a captivating novel that deeply moved you, is in fact the product of artificial intelligence, not a human’s work. Would your aesthetic judgment shift? Would you perceive the work differently? If so, why? The advent of artificial intelligence (AI) in the realm of art has sparked numerous philosophical questions related to the authorship and artistic intent behind AI-generated works. This paper explores the debate between viewing AI (...)
  27. Embracing ChatGPT and other generative AI tools in higher education: The importance of fostering trust and responsible use in teaching and learning.Jonathan Y. H. Sim - 2023 - Higher Education in Southeast Asia and Beyond.
    Trust is the foundation for learning, and we must not allow ignorance of these new technologies, like Generative AI, to disrupt the relationship between students and educators. As a first step, we need to actively engage with AI tools to better understand how they can help us in our work.
  28. The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
    3 citations
  29. AI Language Models Cannot Replace Human Research Participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society:1-3.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  30. The AI gambit — leveraging artificial intelligence to combat climate change: opportunities, challenges, and recommendations.Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - In Vodafone Institute for Society and Communications.
    In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the (...)
    4 citations
  31. Generative AI and photographic transparency.P. D. Magnus - forthcoming - AI and Society:1-6.
    There is a history of thinking that photographs provide a special kind of access to the objects depicted in them, beyond the access that would be provided by a painting or drawing. What is included in the photograph does not depend on the photographer’s beliefs about what is in front of the camera. This feature leads Kendall Walton to argue that photographs literally allow us to see the objects which appear in them. Current generative algorithms produce images in response to (...)
  32. Sinful AI?Michael Wilby - 2023 - In Critical Muslim, 47. London: Hurst Publishers. pp. 91-108.
    Could the concept of 'evil' apply to AI? Drawing on PF Strawson's framework of reactive attitudes, this paper argues that we can understand evil as involving agents who are neither fully inside nor fully outside our moral practices. It involves agents whose abilities and capacities are enough to make them morally responsible for their actions, but whose behaviour is far enough outside of the norms of our moral practices to be labelled 'evil'. Understood as such, the paper argues that, (...)
  33. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences (...)
    9 citations
  34. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  35. Moral judgment in adults with autism spectrum disorders.Tiziana Zalla, Luca Barlassina, Marine Buon & Marion Leboyer - 2011 - Cognition 121 (1):115-126.
    The ability of a group of adults with high functioning autism (HFA) or Asperger Syndrome (AS) to distinguish moral, conventional and disgust transgressions was investigated using a set of six transgression scenarios, each of which was followed by questions about permissibility, seriousness, authority contingency and justification. The results showed that although individuals with HFA or AS (HFA/AS) were able to distinguish affect-backed norms from conventional affect-neutral norms along the dimensions of permissibility, seriousness and authority-dependence, they failed to distinguish moral and (...)
    14 citations
  36. AI & democracy, and the importance of asking the right questions.Ognjen Arandjelović - 2021 - AI Ethics Journal 2 (1):2.
    Democracy is widely praised as a great achievement of humanity. However, in recent years there has been an increasing amount of concern that its functioning across the world may be eroding. In response, efforts to combat such change are emerging. Considering the pervasiveness of technology and its increasing capabilities, it is no surprise that there has been much focus on the use of artificial intelligence (AI) to this end. Questions as to how AI can be best utilized to extend the (...)
    1 citation
  37. Arrow's theorem in judgment aggregation.Franz Dietrich & Christian List - 2007 - Social Choice and Welfare 29 (1):19-33.
    In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. (...)
    83 citations
  38. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - forthcoming - Journal of Ethics and Social Philosophy.
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  39. Will AI take away your job? [REVIEW]Marie Oldfield - 2020 - Tech Magazine.
    Will AI take away your job? The answer is probably not. AI systems can be good predictive systems and be very good at pattern recognition. AI systems have a very repetitive approach to sets of data, which can be useful in certain circumstances. However, AI does make obvious mistakes. This is because AI does not have a sense of context. As Humans we have years of experience in the real world. We have vast amounts of contextual data stored in our (...)
  40. How to Use AI Ethically for Ethical Decision-Making.Joanna Demaree-Cotton, Brian D. Earp & Julian Savulescu - 2022 - American Journal of Bioethics 22 (7):1-3.
    5 citations
  41. The disunity of moral judgment: Implications for the study of psychopathy.David Sackris - 2022 - Philosophical Psychology 1.
    Since the 18th century, one of the key features of diagnosed psychopaths has been “moral colorblindness” or an inability to form moral judgments. However, attempts at experimentally verifying this moral incapacity have been largely unsuccessful. After reviewing the centrality of “moral colorblindness” to the study and diagnosis of psychopathy, I argue that the reason that researchers have been unable to verify that diagnosed psychopaths have an inability to make moral judgments is because their research is premised on the assumption that (...)
    3 citations
  42. Responsibility for implicit bias.Jules Holroyd - 2017 - Philosophy Compass 12 (3).
    Research programs in empirical psychology from the past two decades have revealed implicit biases. Although implicit processes are pervasive, unavoidable, and often useful aspects of our cognitions, they may also lead us into error. The most problematic forms of implicit cognition are those which target social groups, encoding stereotypes or reflecting prejudicial evaluative hierarchies. Despite intentions to the contrary, implicit biases can influence our behaviours and judgements, contributing to patterns of discriminatory behaviour. These patterns of discrimination are obviously wrong and (...)
    72 citations
  43. Dynamically rational judgment aggregation.Franz Dietrich & Christian List - forthcoming - Social Choice and Welfare.
    Judgment-aggregation theory has always focused on the attainment of rational collective judgments. But so far, rationality has been understood in static terms: as coherence of judgments at a given time, defined as consistency, completeness, and/or deductive closure. This paper asks whether collective judgments can be dynamically rational, so that they change rationally in response to new information. Formally, a judgment aggregation rule is dynamically rational with respect to a given revision operator if, whenever all individuals revise their judgments (...)
  44. Anthropomorphism in AI: Hype and Fallacy.Adriana Placani - 2024 - AI and Ethics.
    This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust. By focusing on these two dimensions of anthropomorphism in AI, the essay highlights negative (...)
  45. Big Tech corporations and AI: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age.Marianna Capasso & Steven Umbrello - 2023 - In Francesca Mazzi & Luciano Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer Verlag. pp. 231–249.
    The pervasiveness of AI-empowered technologies across multiple sectors has led to drastic changes concerning traditional social practices and how we relate to one another. Moreover, market-driven Big Tech corporations are now entering public domains, and concerns have been raised that they may even influence public agenda and research. Therefore, this chapter focuses on assessing and evaluating what kind of business model is desirable to incentivise the AI for Social Good (AI4SG) factors. In particular, the chapter explores the implications of this (...)
  46. Responsibility for Implicit Bias.Jules Holroyd - 2012 - Journal of Social Philosophy 43 (3):274-306.
    Philosophers who have written about implicit bias have claimed or implied that individuals are not responsible, and therefore not blameworthy, for their implicit biases, and that this is a function of the nature of implicit bias as implicit: below the radar of conscious reflection, out of the control of the deliberating agent, and not rationally revisable in the way many of our reflective beliefs are. I argue that close attention to the findings of empirical psychology, and to the conditions (...)
    Bookmark   100 citations  
  47. An Unconventional Look at AI: Why Today’s Machine Learning Systems are not Intelligent.Nancy Salay - 2020 - In LINKs: The Art of Linking, an Annual Transdisciplinary Review, Special Edition 1, Unconventional Computing. pp. 62-67.
    Machine learning systems (MLS) that model low-level processes are the cornerstones of current AI systems. These ‘indirect’ learners are good at classifying kinds that are distinguished solely by their manifest physical properties. But the more a kind is a function of spatio-temporally extended properties — words, situation-types, social norms — the less likely an MLS will be able to track it. Systems that can interact with objects at the individual level, on the other hand, and that can sustain this interaction, (...)
  48. The Paradox of Phenomenal Judgement and the Case Against Illusionism.Hane Htut Maung - 2023 - Dialogues in Philosophy, Mental and Neuro Sciences 16 (1):1-13.
    Illusionism is the view that conscious experience is some sort of introspective illusion. According to illusionism, there is no conscious experience, but it merely seems like there is conscious experience. This would suggest that much phenomenological enquiry, including work on phenomenological psychopathology, rests on a mistake. Some philosophers have argued that illusionism is obviously false, because seeming is itself an experiential state, and so necessarily presupposes the reality of conscious experience. In response, the illusionist could suggest that the relevant sort (...)
  49. Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can justly be held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable (...)
  50. Historical Judgement: The Limits of Historiographical Choice.Jonathan L. Gorman - 2007 - Mcgill-Queen's University Press.
    The historical profession is not noted for examining its own methodologies. Indeed, most historians are averse to historical theory. In "Historical Judgement" Jonathan Gorman's response to this state of affairs is to argue that if we want to characterize a discipline, we need to look to persons who successfully occupy the role of being practitioners of that discipline. So to model historiography we must do so from the views of historians. Gorman begins by showing what it is to model a (...)
    Bookmark   1 citation  
1 — 50 / 1000