  • The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition. Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
  • Why AI Ethics Is a Critical Theory. Rosalie Waelen - 2022 - Philosophy and Technology 35 (1):1-16.
    The ethics of artificial intelligence is an upcoming field of research that deals with the ethical assessment of emerging AI applications and addresses the new kinds of moral questions that the advent of AI raises. The argument presented in this article is that, even though there exist different approaches and subfields within the ethics of AI, the field resembles a critical theory. Just like a critical theory, the ethics of AI aims to diagnose as well as change society and is (...)
  • “Your friendly AI assistant”: the anthropomorphic self-representations of ChatGPT and its implications for imagining AI. Karin van Es & Dennis Nguyen - forthcoming - AI and Society:1-13.
    This study analyzes how ChatGPT portrays and describes itself, revealing misleading myths about AI technologies, specifically conversational agents based on large language models. This analysis allows for critical reflection on the potential harm these misconceptions may pose for public understanding of AI and related technologies. While previous research has explored AI discourses and representations more generally, few studies focus specifically on AI chatbots. To narrow this research gap, an experimental-qualitative investigation into auto-generated AI representations based on prompting was conducted. Over (...)
  • AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism (...)
  • A place where “You can be who you've always wanted to be…” Examining the ethics of intelligent virtual environments. Danielle Shanley & Darian Meacham - 2024 - Journal of Responsible Technology 18 (C):100085.
  • Algorithms and dehumanization: a definition and avoidance model. Mario D. Schultz, Melanie Clegg, Reto Hofstetter & Peter Seele - forthcoming - AI and Society:1-21.
    Dehumanization by algorithms raises important issues for business and society. Yet, these issues remain poorly understood due to the fragmented nature of the evolving dehumanization literature across disciplines, originating from colonialism, industrialization, post-colonialism studies, contemporary ethics, and technology studies. This article systematically reviews the literature on algorithms and dehumanization (n = 180 articles) and maps existing knowledge across several clusters that reveal its underlying characteristics. Based on the review, we find that algorithmic dehumanization is particularly problematic for human resource management (...)
  • Images of Artificial Intelligence: a Blind Spot in AI Ethics. Alberto Romele - 2022 - Philosophy and Technology 35 (1):1-19.
    This paper argues that AI ethics has generally neglected issues related to the science communication of AI. In particular, the article focuses on visual communication about AI and, more specifically, on the use of certain stock images in science communication about AI, especially those characterized by an excessive use of blue color and recurrent subjects, such as androgyne faces, half-flesh and half-circuit brains, and variations on Michelangelo’s The Creation of Adam. In the first section, the author (...)
  • ¿Es la seguridad moralmente relevante para la inteligencia artificial confiable? El valor de la dignidad humana en la sociedad tecnologizada. [REVIEW] Antonio Luis Terrones Rodríguez - 2022 - Revista de Filosofía (Madrid):1-13.
    The discourse on safety that has developed in the field of Artificial Intelligence (AI) is characterized by a predominance of technical and normative perspectives. The ambivalent effects of this technology, and its status as a sociotechnical system, call for complementing this discourse by integrating a moral perspective. The main aim of this paper is therefore to argue that moral safety is an indispensable element for achieving trustworthy AI, in view of the various situations (...)
  • More than Skin Deep: a Response to “The Whiteness of AI”. Shelley Park - 2021 - Philosophy and Technology 34 (4):1961-1966.
    This commentary responds to Stephen Cave and Kanta Dihal’s call for further investigations of the whiteness of AI. My response focuses on three overlapping projects needed to more fully understand racial bias in the construction of AI and its representations in pop culture: unpacking the intersections of gender and other variables with whiteness in AI’s construction, marketing, and intended functions; observing the many different ways in which whiteness is scripted, and noting how white racial framing exceeds white casting and thus (...)
  • Artificial Intelligence in the Colonial Matrix of Power. James Muldoon & Boxi A. Wu - 2023 - Philosophy and Technology 36 (4):1-24.
    Drawing on the analytic of the “colonial matrix of power” developed by Aníbal Quijano within the Latin American modernity/coloniality research program, this article theorises how a system of coloniality underpins the structuring logic of artificial intelligence (AI) systems. We develop a framework for critiquing the regimes of global labour exploitation and knowledge extraction that are rendered invisible through discourses of the purported universality and objectivity of AI. Through bringing the political economy literature on AI production into conversation with scholarly work (...)
  • Toward an Ethics of AI Belief. Winnie Ma & Vincent Valton - 2024 - Philosophy and Technology 37 (3):1-28.
    In this paper we, an epistemologist and a machine learning scientist, argue that we need to pursue a novel area of philosophical research in AI – the ethics of belief for AI. Here we take the ethics of belief to refer to a field at the intersection of epistemology and ethics concerned with possible moral, practical, and other non-truth-related dimensions of belief. In this paper we will primarily be concerned with the normative question within the ethics of belief regarding what (...)
  • Decolonization of AI: a Crucial Blind Spot. Carlos Largacha-Martínez & John W. Murphy - 2022 - Philosophy and Technology 35 (4):1-13.
    Critics are calling for the decolonization of AI (artificial intelligence). The problem is that this technology is marginalizing other modes of knowledge with dehumanizing applications. What is needed to remedy this situation is the development of human-centric AI. However, there is a serious blind spot in this strategy that is addressed in this paper. The corrective that is usually proposed—participatory design—lacks the philosophical rigor to undercut the autonomy of AI, and thus the colonization spawned by this technology. A more radical (...)
  • Meta-narratives on machinic otherness: beyond anthropocentrism and exoticism. Min-Sun Kim - 2023 - AI and Society 38 (4):1763-1770.
    Intelligent machines are no longer distant fantasies of the future or solely used for industrial purposes; they are real “living” things that operate similarly to humans with verbal and nonverbal communication capabilities. Humans see in such technology the horrifying dangers and the bliss enabled by the saving power. Entrenched in the emotions of hope and fear concerning intelligent machines, humans’ attitudes toward intelligent machines are not free of expectations, judgments, strategies, and selfish agendas. As the discovery of the New Worlds (...)
  • Mind extended: relational, spatial, and performative ontologies. Maurice Jones - forthcoming - AI and Society:1-8.
    The original extended mind theory propagated by Clark and Chalmers (Analysis 58:7–19, 1998) refers to the idea that our minds do not simply live within our brains or bodies but extend into the material world. In other words, the extended mind refers to the externalization of cognitive processes into technology. Through the case study of the artistic performance of the android Alter inspired by the Japanese Shintoist ritual of Kagura this paper reconceptualizes the extended mind from a technological act of (...)
  • The Moral Standing of Social Robots: Untapped Insights from Africa. Nancy S. Jecker, Caesar A. Atiure & Martin Odei Ajei - 2022 - Philosophy and Technology 35 (2):1-22.
    This paper presents an African relational view of social robots’ moral standing which draws on the philosophy of ubuntu. The introduction places the question of moral standing in historical and cultural contexts. Section 2 demonstrates an ubuntu framework by applying it to the fictional case of a social robot named Klara, taken from Ishiguro’s novel, Klara and the Sun. We argue that an ubuntu ethic assigns moral standing to Klara, based on her relational qualities and pro-social virtues. Section 3 introduces (...)
  • Artificial intelligence in fiction: between narratives and metaphors. Isabella Hermann - 2023 - AI and Society 38 (1):319-329.
    Science-fiction (SF) has become a reference point in the discourse on the ethics and risks surrounding artificial intelligence (AI). Thus, AI in SF—science-fictional AI—is considered part of a larger corpus of ‘AI narratives’ that are analysed as shaping the fears and hopes of the technology. SF, however, is not a foresight or technology assessment, but tells dramas for a human audience. To make the drama work, AI is often portrayed as human-like or autonomous, regardless of the actual technological limitations. Taking (...)
  • Situating questions of data, power, and racial formation. Kathryn Henne & Renee Shelby - 2022 - Big Data and Society 9 (1).
    This special theme of Big Data & Society explores connections, relationships, and tensions that coalesce around data, power, and racial formation. This collection of articles and commentaries builds upon scholarly observations of data substantiating and transforming racial hierarchies. Contributors consider how racial projects intersect with interlocking systems of oppression across concerns of class, coloniality, dis/ability, gendered difference, and sexuality across contexts and jurisdictions. In doing so, this special issue illuminates how data can both reinforce and challenge colorblind ideologies as well (...)
  • The Datafication of #MeToo: Whiteness, Racial Capitalism, and Anti-Violence Technologies. Jenna Harb, Renee Shelby & Kathryn Henne - 2021 - Big Data and Society 8 (2).
    This article illustrates how racial capitalism can enhance understandings of data, capital, and inequality through an in-depth study of digital platforms used for intervening in gender-based violence. Specifically, we examine an emergent sociotechnical strategy that uses software platforms and artificial intelligence chatbots to offer users emergency assistance, education, and a means to report and build evidence against perpetrators. Our analysis details how two reporting apps construct data to support institutionally legible narratives of violence, highlighting overlooked racialised dimensions of the data (...)
  • Biased Face Recognition Technology Used by Government: A Problem for Liberal Democracy. Michael Gentzel - 2021 - Philosophy and Technology 34 (4):1639-1663.
    This paper presents a novel philosophical analysis of the problem of law enforcement’s use of biased face recognition technology in liberal democracies. FRT programs used by law enforcement in identifying crime suspects are substantially more error-prone on facial images depicting darker skin tones and females as compared to facial images depicting Caucasian males. This bias can lead to citizens being wrongfully investigated by police along racial and gender lines. The author develops and defends “A Liberal Argument Against Biased FRT,” which (...)
  • Negotiating becoming: a Nietzschean critique of large language models. Simon W. S. Fischer & Bas de Boer - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) structure the linguistic landscape by reflecting certain beliefs and assumptions. In this paper, we address the risk of people unthinkingly adopting and being determined by the values or worldviews embedded in LLMs. We provide a Nietzschean critique of LLMs and, based on the concept of will to power, consider LLMs as will-to-power organisations. This allows us to conceptualise the interaction between self and LLMs as power struggles, which we understand as negotiation. Currently, the invisibility and incomprehensibility (...)
  • Est-ce que Vous Compute? Arianna Falbo & Travis LaCroix - 2022 - Feminist Philosophy Quarterly 8 (3).
    Cultural code-switching concerns how we adjust our overall behaviours, manners of speaking, and appearance in response to a perceived change in our social environment. We defend the need to investigate cultural code-switching capacities in artificial intelligence systems. We explore a series of ethical and epistemic issues that arise when bringing cultural code-switching to bear on artificial intelligence. Building upon Dotson’s (2014) analysis of testimonial smothering, we discuss how emerging technologies in AI can give rise to epistemic oppression, and specifically, a (...)
  • Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference”. Eleanor Drage & Kerry Mackereth - 2022 - Philosophy and Technology 35 (4):1-25.
    In this paper, we analyze two key claims offered by recruitment AI companies in relation to the development and deployment of AI-powered HR tools: (1) recruitment AI can objectively assess candidates by removing gender and race from their systems, and (2) this removal of gender and race will make recruitment fairer, help customers attain their DEI goals, and lay the foundations for a truly meritocratic culture to thrive within an organization. We argue that these claims are misleading for four reasons: (...)
  • The algorithm audit: Scoring the algorithms that score us. Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
  • Speeding up to keep up: exploring the use of AI in the research process. Jennifer Chubb, Peter Cowling & Darren Reed - 2022 - AI and Society 37 (4):1439-1457.
    The science of intelligent machines has a long history, and its potential to provide scientific insights has been debated since the dawn of AI. In particular, there is renewed interest in the role of AI in research and research policy as an enabler of new methods, processes, management and evaluation, a role that is still relatively under-explored. This empirical paper explores interviews with leading scholars on the potential impact of AI on research practice and culture through deductive, thematic analysis to (...)
  • Expert views about missing AI narratives: is there an AI story crisis? Jennifer Chubb, Darren Reed & Peter Cowling - 2024 - AI and Society 39 (3):1107-1126.
    Stories are an important indicator of our vision of the future. In the case of artificial intelligence (AI), dominant stories are polarized between notions of threat and myopic solutionism. The central storytellers—big tech, popular media, and authors of science fiction—represent particular demographics and motivations. Many stories, and storytellers, are missing. This paper details the accounts of missing AI narratives by leading scholars from a range of disciplines interested in AI Futures. Participants focused on the gaps between dominant narratives and the (...)
  • Race and AI: the Diversity Dilemma. Stephen Cave & Kanta Dihal - 2021 - Philosophy and Technology 34 (4):1775-1779.
    This commentary is a response to ‘More than Skin Deep’ by Shelley M. Park, and a development of our own 2020 paper ‘The Whiteness of AI’. We aim to explain how representations of AI can be varied in one sense, whilst not being diverse. We argue that Whiteness’s claim to universal humanity permits a broad range of roles to White humans and White-presenting machines, whilst assigning a much narrower range of stereotypical roles to people of colour. Because the attributes of (...)
  • Artificial intelligence ethics has a black box problem. Jean-Christophe Bélisle-Pipon, Erica Monteferrante, Marie-Christine Roy & Vincent Couture - 2023 - AI and Society 38 (4):1507-1522.
    It has become a truism that the ethics of artificial intelligence (AI) is necessary and must help guide technological developments. Numerous ethical guidelines have emerged from academia, industry, government and civil society in recent years. While they provide a basis for discussion on appropriate regulation of AI, it is not always clear how these ethical guidelines were developed, and by whom. Using content analysis, we surveyed a sample of the major documents (n = 47) and analyzed the accessible information regarding (...)
  • Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Hannah Bleher & Matthias Braun - 2023 - Science and Engineering Ethics 29 (3):1-21.
    Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory–practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. Therefore, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each (...)
  • Commentary: “Whiteness and Colourblindness”. Gerd Bayer - 2022 - Philosophy and Technology 35 (1):1-5.
    This commentary argues that, in discussing the racial and cultural identities of cinematic representations of humanoid AI robots, nuances and differentiations are beneficial. It suggests that the essay on which the present text comments does not sufficiently acknowledge the range of identities found in AI films, in particular in Alex Garland's Ex Machina.
  • Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression. Martin Miragoli - 2024 - Episteme: A Journal of Social Epistemology:1-19.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  • Uses and Abuses of AI Ethics. Lily E. Frank & Michal Klincewicz - 2024 - In David J. Gunkel (ed.), Handbook on the Ethics of Artificial Intelligence. Edward Elgar Publishing.
    In this chapter we take stock of some of the complexities of the sprawling field of AI ethics. We consider questions like "what is the proper scope of AI ethics?" And "who counts as an AI ethicist?" At the same time, we flag several potential uses and abuses of AI ethics. These include challenges for the AI ethicist, including what qualifications they should have; the proper place and extent of futuring and speculation in the field; and the dilemmas concerning how (...)