Cyber Security and Dehumanisation

5th Digital Geographies Research Group Annual Symposium (2021)
Abstract
Artificial Intelligence is becoming widespread, and as we continue to ask ‘can we implement this?’ we neglect to ask ‘should we implement this?’. There are various frameworks and conceptual journeys one should follow to ensure a robust AI product, and context is one of the vital parts of this. AI is now expected to make decisions ranging from deciding who receives a credit card to diagnosing cancer. These decisions affect most, if not all, of society. As developers, if we do not understand or apply fundamental modelling principles, we can cause real harm to society. Recently, more serious effects of AI have been observed. Dehumanisation is the human reaction to overused anthropomorphism and to the lack of social contact caused by excessive interaction with, or addiction to, technology. It can lead humans to devalue technology and to devalue other humans. This contradicts the stated purpose of ‘social robots’ and ‘chatbots’, indicating that the negative effects of this technology may outweigh any perceived positive effects. Furthermore, within cyberspace, anthropomorphism and similar techniques grounded in deep philosophical principles can be, and are being, used to alter human behaviour. These techniques manipulate human behaviours at a basic level in the human mind. As such techniques become more widespread, it is clear that we are entering uncharted territory that holds a vast array of consequences for society.
PhilPapers/Archive ID
OLDCSA-2
Upload history
Archival date: 2021-09-19
Added to PP index
2021-09-19
