Abstract
The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for assessing these risks in concrete situations; risks are broadly categorized according to the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing the magnitude of AI risks, focusing on the construction of real-world risk scenarios. To this end, we propose integrating the AIA with a framework developed in the reports of the Intergovernmental Panel on Climate Change (IPCC) and related literature. This approach enables a nuanced analysis of AI risk by exploring the interplay between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We further refine the proposed methodology by applying a proportionality test to balance the competing values involved in AI risk assessment. Finally, we present three uses of this approach under the AIA: to implement the Regulation, to assess the significance of risks, and to develop internal risk management systems for AI deployers.