  • How will the state think with ChatGPT? The challenges of generative artificial intelligence for public administrations. Thomas Cantens - forthcoming - AI and Society:1-12.
    This article explores the challenges surrounding generative artificial intelligence (GenAI) in public administrations and its impact on human–machine interactions within the public sector. First, it aims to deconstruct the reasons for distrust in GenAI in public administrations. The risks currently linked to GenAI in the public sector are often similar to those of conventional AI. However, while some risks remain pertinent, others are less so because GenAI has limited explainability, which, in turn, limits its uses in public administrations. Confidentiality, marking (...)
  • ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical of current scholarship, which often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  • Friend or foe? Exploring the implications of large language models on the science system. Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle & Fabian Sofsky - forthcoming - AI and Society:1-13.
    The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on the science (...)
  • Not “what”, but “where is creativity?”: towards a relational-materialist approach to generative AI. Claudio Celis Bueno, Pei-Sze Chow & Ada Popowicz - forthcoming - AI and Society:1-13.
    The recent emergence of generative AI software as viable tools for use in the cultural and creative industries has sparked debates about the potential for “creativity” to be automated and “augmented” by algorithmic machines. Such discussions, however, begin from an ontological position, attempting to define creativity by either falling prey to universalism (i.e. “creativity is X”) or reductionism (i.e. “only humans can be truly creative” or “human creativity will be fully replaced by creative machines”). Furthermore, such an approach evades addressing (...)
  • Authorship and ChatGPT: a Conservative View. René van Woudenberg, Chris Ranalli & Daniel Bracker - 2024 - Philosophy and Technology 37 (1):1-26.
    Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and cannot take responsibility for the texts it produces. Three perspectives (...)
  • GPT-4-Trinis: assessing GPT-4’s communicative competence in the English-speaking majority world. Samantha Jackson, Barend Beekhuizen, Zhao Zhao & Rhonda McEwen - forthcoming - AI and Society:1-17.
    Biases and misunderstandings stemming from pre-training in Generative Pre-Trained Transformers are more likely for users of underrepresented English varieties, since the training dataset favors dominant Englishes (e.g., American English). We investigate (potential) bias in GPT-4 when it interacts with Trinidadian English Creole (TEC), a non-hegemonic English variety that partially overlaps with standardized English (SE) but still contains distinctive characteristics. (1) Comparable responses: we asked GPT-4 18 questions in TEC and SE and compared the content and detail of the responses. (2) (...)
  • Algorithms Don’t Have A Past: Beyond Gadamer’s Alterity of the Text and Stader’s Reflected Prejudiced Use. Matthew S. Lindia - 2024 - Philosophy and Technology 37 (1):1-6.
    This commentary on Daniel Stader’s recent article, “Algorithms Don’t Have a Future: On the Relation of Judgement and Calculation,” develops and complicates his argument by suggesting that algorithms ossify multiple kinds of prejudices, namely, the structural prejudices of the programmer and the exemplary prejudices of the dataset. This typology suggests that the goal of transparency may be impossible, but also that this impossibility enriches the possibilities for developing Stader’s concept of reflected prejudiced use.