Trusting artificial intelligence in cybersecurity is a double-edged sword

Philosophy and Technology 32:1-15 (2019)
Abstract
Applications of artificial intelligence (AI) to cybersecurity tasks are attracting growing attention from both the private and public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
PhilPapers/Archive ID: TADTAI-2
Archival date: 2021-06-10