References
  • A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is a large language model (LLM), a generative artificial intelligence (AI) system trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  • Balancing AI and academic integrity: what are the positions of academic publishers and universities? Bashar Haruna Gulumbe, Shuaibu Muhammad Audu & Abubakar Muhammad Hashim - forthcoming - AI and Society:1-10.
    This paper navigates the relationship between the growing influence of Artificial Intelligence (AI) and the foundational principles of academic integrity. It offers an in-depth analysis of how key academic stakeholders—publishers and universities—are crafting strategies and guidelines to integrate AI into the sphere of scholarly work. These efforts are not merely reactionary but are part of a broader initiative to harness AI’s potential while maintaining ethical standards. The exploration reveals a diverse array of stances, reflecting the varied applications of AI in (...)
  • The problem of alignment. Tsvetelina Hristova, Liam Magee & Karen Soldatic - forthcoming - AI and Society:1-15.
    Large language models (LLMs) produce sequences learned as statistical patterns from large corpora. Their emergent status as representatives of the advances in artificial intelligence (AI) has led to increased attention to the possibility of regulating the automated production of linguistic utterances and interactions with human users, in a process that computer scientists refer to as ‘alignment’—a series of technological and political mechanisms to impose a normative model of morality on the algorithms and networks behind the model. Alignment, which can be (...)