The Telegram Chronicles of Online Harm

Abstract

Harmful and dangerous language is frequent in social media, particularly in spaces that are considered anonymous and/or allow free participation. In this paper, we analyse the language in a Telegram channel populated by followers of Donald Trump, in order to identify the ways in which harmful language is used to create a specific narrative in a group of mostly like-minded discussants. Our research has several aims. First, we create an extended taxonomy of potentially harmful language that includes not only hate speech and direct insults, but also more indirect ways of poisoning online discourse, such as divisive speech and the glorification of violence. We apply this taxonomy to a large portion of the corpus. Our data provides empirical evidence for forms of harmful speech, such as in/out-group divisive language and the use of community-internal codes, that have rarely been investigated before. Second, we compare our manual annotations to several automatic methods of classifying hate speech and offensive language, namely list-based and machine-learning-based approaches. We find that the Telegram data set still poses particular challenges for these automatic methods. Finally, we argue for the value of studying such naturally occurring, coherent data sets for research on online harm and how to address it in linguistics and philosophy.

Author's Profile

Mihaela Popa-Wyatt
University of Manchester

Analytics

Added to PP
2021-04-08
