In Patrick Connolly, Sandy Goldberg & Jennifer Saul (eds.), Conversations Online. Oxford University Press (forthcoming).
Abstract
The problem considered in this chapter emerges from the tension between the design and architecture of chatbots on the one hand and their conversational aptitude on the other. Given how LLM chatbots are designed and built, we have good reason to suppose that they do not possess second-order capacities such as intention, belief, or knowledge. Yet theories of conversation make extensive use of the second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. As we can now all bear witness, however, chatbots in use appear capable of performing language tasks at a level close to that of humans. This creates a tension when we consider, for example, the classic Gricean theory of implicature. On a broad summary of this type of account, to utter p and implicate q requires the reflexive occurrence of an audience supposing that the speaker believes that q, and the speaker believing that the audience can determine this belief from the utterance of p. Taken at face value, then, if a chatbot lacks the capacity for belief, it would seem incapable, whether in the role of speaker or of audience, of either generating or comprehending implicatures. Yet on the surface chatbots do seem capable of dealing with (some) implicatures, and this raises questions about how we should reconcile that apparent aptitude with what we think occurs in cases of implicature with chatbots.