Cyber Security and Dehumanisation

5th Digital Geographies Research Group Annual Symposium (2021)

Abstract

Artificial Intelligence is becoming widespread, and as we continue to ask ‘can we implement this?’ we neglect to ask ‘should we implement this?’. There are various frameworks and conceptual journeys one should follow to ensure a robust AI product; context is one of the vital parts of this. AI is now expected to make decisions, from credit card approval to cancer diagnosis. These decisions affect most, if not all, of society. If, as developers, we do not understand or apply fundamental modelling principles, we can cause real harm to society. Recently, more serious effects of AI have been observed. Dehumanisation is the human reaction to overused anthropomorphism and to the lack of social contact caused by excessive interaction with, or addiction to, technology. It can cause humans to devalue technology and to devalue other humans. This contradicts the stated purpose of ‘social robots’ and ‘chatbots’, indicating that the negative effects of this technology may outweigh any perceived positive effects. Furthermore, within cyberspace, anthropomorphism and similar techniques grounded in deep philosophical principles can be, and are being, used to alter human behaviour; these techniques manipulate human behaviour at a basic level in the human mind. As such techniques become more widespread, it is clear that we are entering uncharted territory that holds a vast array of consequences for society.

Author's Profile

Dr Marie Oldfield
London School of Economics

Analytics

Added to PP
2021-09-19
