Sociocommunicative functions of a generative text: the case of GPT-3

Lexia. Rivista di Semiotica 39:177-192 (2022)

Abstract

Recently, there have been significant advances in the development of transformer-based language models that enable statistical analysis of co-occurring words (word prediction) and text generation. One example is the Generative Pre-trained Transformer 3 (GPT-3) by OpenAI, which was used to generate an opinion article (op-ed) published in “The Guardian” in September 2020. The publication and reception of the op-ed highlight the difficulty human readers have in recognizing a machine-produced text; they also call attention to the challenge of perceiving such a text as synthetic even when its origins are made explicit. This article offers a critical examination of the process behind the generation and interpretation of a synthetic text, framing it as an example of generative literature. Lotman’s concept of the text and its sociocommunicative functions offers a framework for understanding how and why the output of a natural language generator may be interpreted as a (human-written) text. This article also inquires whether the generative output can be called a text in a Lotmanian sense and how the output is textualized (attributed meaning) in the process of interpretation.

Author's Profile

Auli Viidalepp
University of Tartu
