  • On trusting chatbots. P. D. Magnus - forthcoming - Episteme.
    This paper focuses on the epistemic situation one faces when using a Large Language Model-based chatbot such as ChatGPT: when reading the chatbot's output, how should one decide whether or not to believe it? By surveying the strategies we use with other, more familiar sources of information, I argue that chatbots present a novel challenge. This makes the question of how one could trust a chatbot especially vexing.
  • “A Place of very Arduous interfaces”. Social Media Platforms as Epistemic Environments with Faulty Interfaces. Lavinia Marin - forthcoming - Topoi.
    I argue that the concept of an epistemic interface is a useful one to add to the epistemic ecology toolkit in order to enrich our investigations concerning the complex epistemic phenomena arising on social media. An epistemic interface is defined as any informational interface (be it technical, human, or institutional) that facilitates the transfer of epistemic goods from one epistemic environment to its outside, be that another epistemic environment or a person. When assessing the kinds of epistemic environments emerging on (...)
  • Chatting with Bots: AI, Speech-Acts, and the Edge of Assertion. Iwan Williams & Tim Bayne - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds of things that can assert, and at least some of the output produced by current-generation chatbots qualifies as assertion. We provide some motivation for TCA, arguing that it ought to be taken seriously and not simply dismissed. We also review recent objections to TCA, arguing that these objections are weighty. We (...)
  • Artificial Epistemic Authorities. Rico Hauswald - forthcoming - Social Epistemology.
    While AI systems are increasingly assuming roles traditionally occupied by human epistemic authorities (EAs), their epistemological status remains unclear. This paper aims to address this lacuna by assessing the potential for AI systems to be recognized as artificial epistemic authorities. In a first step, I examine the arguments against considering AI systems as EAs, in particular the established model of EAs as engaging in intentional belief transfer via testimony to laypeople – a process seemingly inapplicable to intentionless and beliefless AI. (...)