Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3)

Ethics in Science and Environmental Politics 21:17-23 (2021)

Abstract

As if 2020 were not a peculiar enough year, its fifth month saw the relatively quiet publication of a preprint describing the most powerful Natural Language Processing (NLP) system to date: GPT-3 (Generative Pre-trained Transformer-3), by the Silicon Valley research firm OpenAI. Although the software implementation of GPT-3 is still in its initial beta release phase and its full capabilities remain unknown as of this writing, it has been shown that this artificial intelligence can comprehend prompts in natural language, on virtually any topic, and generate relevant, original text content that is indistinguishable from human writing. Moreover, access to these capabilities, to a limited yet worrisome extent, is already available to the general public. This paper presents select examples of original content generated by the author using GPT-3. These examples illustrate some of GPT-3's capabilities in comprehending natural-language prompts and generating convincing content in response. We use these examples to raise specific, fundamental questions about the intellectual property of such content and the potential use of GPT-3 to facilitate plagiarism. Our goal is to instigate not merely a sense of urgency, but a recognition that the academic community is already late in addressing these questions.
