4 results found
  1. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with a framework developed by (...)
    4 citations
  2. (1 other version) Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    8 citations
  3. Automating Business Process Compliance for the EU AI Act.Claudio Novelli, Guido Governatori & Antonino Rotolo - 2023 - In Giovanni Sileno, Jerry Spanakis & Gijs van Dijck, Legal Knowledge and Information Systems. Proceedings of JURIX 2023. IOS Press. pp. 125-130.
    The EU AI Act is the first step toward a comprehensive legal framework for AI. It introduces provisions for AI systems based on their risk levels in relation to fundamental rights. Providers of AI systems must conduct Conformity Assessments before market placement. Recent amendments added Fundamental Rights Impact Assessments for high-risk AI system users, focusing on compliance with EU and national laws, fundamental rights, and potential impacts on EU values. The paper suggests that automating business process compliance can help standardize (...)
  4. A Replica for our Democracies? On Using Digital Twins to Enhance Deliberative Democracy.Claudio Novelli, Javier Argota Sánchez-Vaquerizo, Dirk Helbing, Antonino Rotolo & Luciano Floridi - manuscript
    Deliberative democracy depends on carefully designed institutional frameworks (such as participant selection, facilitation methods, and decision-making mechanisms) that shape how deliberation occurs. However, determining which institutional design best suits a given context often proves difficult when relying solely on real-world observations or laboratory experiments, which can be resource-intensive and hard to replicate. To address these challenges, this paper explores Digital Twin (DT) technology as a regulatory sandbox for deliberative democracy. DTs enable researchers and policymakers to run "what-if" scenarios on varied deliberative designs (...)