17 found
  1. Accountability in Artificial Intelligence: What It Is and How It Works. Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12. (12 citations)
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
  2. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 55. (4 citations)
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and (...)
  3. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29. (4 citations)
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with a framework developed by (...)
  4. (1 other version) Taking AI Risks Seriously: A New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5. (8 citations)
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  5. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities. Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25. (2 citations)
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the (...)
  6. Artificial Intelligence for the Internal Democracy of Political Parties. Claudio Novelli, Giuliano Formisano, Prathm Juneja, Giulia Sandri & Luciano Floridi - 2024 - Minds and Machines 34 (36):1-26. (1 citation)
    The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to partial data collection, rare updates, and significant resource demands. To address these issues, the article suggests that specific data management and Machine Learning techniques, such as natural language processing and sentiment analysis, can (...)
  7. AI as Legal Persons: Past, Patterns, and Prospects. Claudio Novelli, Luciano Floridi, Giovanni Sartor & Gunther Teubner - manuscript
    This paper examines the debate on AI legal personhood, emphasizing the role of path dependencies in shaping current trajectories and prospects. Three primary path dependencies emerge: prevailing legal theories on personhood (singularist vs. clustered), the actual participation of AI in socio-digital institutions (instrumental vs. non-instrumental), and the impact of technological advancements. We argue that these factors dynamically interact, with technological optimism fostering broader attribution of legal entitlements to AI entities and periods of scepticism narrowing such entitlements. Additional influences include (...)
  8. Digital Democracy in the Age of Artificial Intelligence. Claudio Novelli & Giulia Sandri - manuscript
    This chapter explores the influence of Artificial Intelligence (AI) on digital democracy, focusing on four main areas: citizenship, participation, representation, and the public sphere. It traces the evolution from electronic to virtual and network democracy, underscoring how each stage has broadened democratic engagement through technology. Focusing on digital citizenship, the chapter examines how AI can improve online engagement while posing privacy risks and fostering identity stereotyping. Regarding political participation, it highlights AI's dual role in mobilising civic actions and spreading misinformation. (...)
  9. L’Artificial Intelligence Act Europeo: alcune questioni di implementazione [The European Artificial Intelligence Act: Some Implementation Issues]. Claudio Novelli - 2024 - Federalismi 2:95-113.
    The article examines the European proposal for a regulation on artificial intelligence, the AI Act (AIA). In particular, it examines the model for analysing and assessing the risk of AI systems. The article identifies three potential problems for the implementation of the regulation: (1) the predetermination of risk levels, (2) the vagueness of the judgement on the significance of risk, and (3) the indeterminacy of the fundamental rights impact assessment. The essay suggests some solutions to address these three problems.
  10. The Many Meanings of Vulnerability in the AI Act and the One Missing. Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  11. Recommender Systems as Commercial Speech: A Framing for US Legislation. Andrew West, Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Recommender Systems (RS) on digital platforms increasingly influence user behavior, raising ethical concerns, including privacy risks, harmful content promotion, and diminished user autonomy. This article examines RS within the framework of regulations and lawsuits in the United States and advocates for legislation that can withstand constitutional scrutiny under First Amendment protections. We propose (re)framing RS-curated content as commercial speech, which is subject to lessened free speech protections. This approach provides a practical path for future legislation that would allow for effective oversight (...)
  12. Regulation by Design: Features, Practices, Limitations, and Governance Implications. Kostina Prifti, Jessica Morley, Claudio Novelli & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-23.
    Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, (...)
  13. A Risk-Based Regulatory Approach to Autonomous Weapon Systems. Alexander Blanchard, Claudio Novelli, Luciano Floridi & Mariarosaria Taddeo - manuscript
    International regulation of autonomous weapon systems (AWS) is increasingly conceived as an exercise in risk management. This requires a shared approach for assessing the risks of AWS. This paper presents a structured approach to risk assessment and regulation for AWS, adapting a qualitative framework inspired by the Intergovernmental Panel on Climate Change (IPCC). It examines the interactions among key risk factors—determinants, drivers, and types—to evaluate the risk magnitude of AWS and establish risk tolerance thresholds through a risk matrix informed by (...)
  14. A conceptual framework for legal personality and its application to AI. Claudio Novelli, Giorgio Bongiovanni & Giovanni Sartor - 2022 - Jurisprudence 13 (2):194-219. (1 citation)
    In this paper, we provide an analysis of the concept of legal personality and discuss whether personality may be conferred on artificial intelligence systems (AIs). Legal personality will be presented as a doctrinal category that holds together bundles of rights and obligations; as a result, we first frame it as a node of inferential links between factual preconditions and legal effects. However, this inferentialist reading does not account for the ‘background reasons’ of legal personality, i.e., it does not explain why (...)
  15. Automating Business Process Compliance for the EU AI Act. Claudio Novelli, Guido Governatori & Antonino Rotolo - 2023 - In Giovanni Sileno, Jerry Spanakis & Gijs van Dijck (eds.), Legal Knowledge and Information Systems. Proceedings of JURIX 2023. IOS Press. pp. 125-130.
    The EU AI Act is the first step toward a comprehensive legal framework for AI. It introduces provisions for AI systems based on their risk levels in relation to fundamental rights. Providers of AI systems must conduct Conformity Assessments before market placement. Recent amendments added Fundamental Rights Impact Assessments for high-risk AI system users, focusing on compliance with EU and national laws, fundamental rights, and potential impacts on EU values. The paper suggests that automating business process compliance can help standardize (...)
  16. Cancel Culture: an Essentially Contested Concept? Claudio Novelli - 2023 - Athena - Critical Inquiries in Law, Philosophy and Globalization 1 (2):I-X.
    Cancel culture is a form of societal self-defense that becomes prominent particularly during periods of substantial moral upheaval. It can lead to the polarization of incompatible viewpoints if it is indiscriminately demonized. In this brief editorial letter, I consider framing cancel culture as an essentially contested concept (ECC), according to the theory of Walter B. Gallie, with the aim of establishing a groundwork for a more productive discourse on it. In particular, I propose that intermediate agreements and principles of reasonableness (...)
  17. A Replica for our Democracies? On Using Digital Twins to Enhance Deliberative Democracy. Claudio Novelli, Javier Argota Sánchez-Vaquerizo, Dirk Helbing, Antonino Rotolo & Luciano Floridi - manuscript
    Deliberative democracy depends on carefully designed institutional frameworks, such as participant selection, facilitation methods, and decision-making mechanisms, that shape how deliberation occurs. However, determining which institutional design best suits a given context often proves difficult when relying solely on real-world observations or laboratory experiments, which can be resource-intensive and hard to replicate. To address these challenges, this paper explores Digital Twin (DT) technology as a regulatory sandbox for deliberative democracy. DTs enable researchers and policymakers to run "what-if" scenarios on varied deliberative designs (...)