Claudio Novelli
University of Bologna
  1. Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  2. How to Evaluate the Risks of Artificial Intelligence: a Proportionality-Based Risk Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - manuscript.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI systems, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. Our suggestion is to apply the four categories to the risk scenarios of each AI system, rather than solely to its field of application. We address this model flaw by integrating the AIA with (...)
  3. Accountability in Artificial Intelligence: What It Is and How It Works. Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society: Knowledge, Culture and Communication:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
  4. Cancel Culture: an Essentially Contested Concept? Claudio Novelli - 2023 - Athena - Critical Inquiries in Law, Philosophy and Globalization 1 (2):I-X.
    Cancel culture is a form of societal self-defense that becomes prominent particularly during periods of substantial moral upheaval. If indiscriminately demonized, it can lead to the polarization of incompatible viewpoints. In this brief editorial letter, I consider framing cancel culture as an essentially contested concept (ECC), following the theory of Walter B. Gallie, with the aim of establishing a groundwork for a more productive discourse on it. In particular, I propose that intermediate agreements and principles of reasonableness (...)