Taking AI Risks Seriously: a New Assessment Model for the AI Act

AI and Society 38 (3):1-5 (2023)

Abstract

The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, because these categories depend statically on broad fields of AI application, risk magnitudes may be misestimated and the AIA may not be enforced effectively. The problem is particularly acute for general-purpose AI (GPAI), whose applications are versatile and often unpredictable. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach of the Intergovernmental Panel on Climate Change (IPCC) and related literature. The model estimates the magnitude of AI risk from the interaction between (a) risk determinants, (b) the individual drivers of those determinants, and (c) multiple risk types. We illustrate the model with large language models (LLMs).
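
To make the three ingredients concrete, here is a minimal Python sketch of how such a scenario-based assessment might be structured. It assumes the IPCC-style determinants (hazard, exposure, vulnerability), drivers scored on [0, 1] and averaged into determinant levels, a multiplicative interaction across determinants, and placeholder thresholds for the four AIA categories; all names, scores, and thresholds are hypothetical illustrations, not values from the paper.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Driver:
    """An individual driver feeding a determinant, scored in [0, 1]."""
    name: str
    score: float  # 0 = negligible contribution, 1 = maximal


@dataclass
class Determinant:
    """A risk determinant (hazard, exposure, vulnerability, in the IPCC framing)."""
    name: str
    drivers: list[Driver] = field(default_factory=list)

    def level(self) -> float:
        # Aggregate drivers by simple averaging; the paper may weight
        # drivers differently, so this is one plausible aggregation rule.
        return mean(d.score for d in self.drivers) if self.drivers else 0.0


def risk_magnitude(determinants: list[Determinant]) -> float:
    """Combine determinant levels multiplicatively, reflecting the idea that
    risk arises from the *interaction* of its determinants: if any determinant
    is near zero (e.g. nobody is exposed), the overall magnitude collapses."""
    magnitude = 1.0
    for det in determinants:
        magnitude *= det.level()
    return magnitude


def aia_category(magnitude: float) -> str:
    """Map a magnitude in [0, 1] onto the AIA's four categories.
    These thresholds are placeholders, not values from the paper."""
    if magnitude >= 0.75:
        return "unacceptable"
    if magnitude >= 0.40:
        return "high"
    if magnitude >= 0.15:
        return "limited"
    return "minimal"


# Hypothetical scenario: an LLM deployed as a medical-advice chatbot,
# assessed separately for two risk types (all scores are invented).
scenario = {
    "safety": [
        Determinant("hazard", [Driver("hallucinated advice", 0.8),
                               Driver("model opacity", 0.6)]),
        Determinant("exposure", [Driver("large user base", 0.9)]),
        Determinant("vulnerability", [Driver("low health literacy", 0.7)]),
    ],
    "fundamental_rights": [
        Determinant("hazard", [Driver("biased triage suggestions", 0.5)]),
        Determinant("exposure", [Driver("large user base", 0.9)]),
        Determinant("vulnerability", [Driver("protected groups affected", 0.6)]),
    ],
}

for risk_type, determinants in scenario.items():
    m = risk_magnitude(determinants)
    print(f"{risk_type}: magnitude={m:.2f} -> {aia_category(m)}")
# safety: magnitude=0.44 -> high
# fundamental_rights: magnitude=0.27 -> limited
```

The point of the sketch is structural rather than numerical: the same AI system lands in different AIA categories depending on the scenario and the risk type assessed, which is exactly what a static, field-of-application classification cannot capture.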
