  • Risk and artificial general intelligence. Federico L. G. Faroldi - forthcoming - AI and Society: 1-9.
    Artificial General Intelligence (AGI) is said to pose many risks, be they catastrophic, existential, or otherwise. This paper discusses whether the notion of risk can apply to AGI, both descriptively and in the current regulatory framework. The paper argues that current definitions of risk are ill-suited to capture supposed AGI existential risks, and that the risk-based framework of the EU AI Act is inadequate to deal with truly general, agential systems.
  • From AI Ethics Principles to Practices: A Teleological Methodology to Apply AI Ethics Principles in The Defence Domain. Christopher Thomas, Alexander Blanchard & Mariarosaria Taddeo - 2024 - Philosophy and Technology 37 (1): 1-21.
    This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains. The methodology consists of a three-step process deployed by an independent, multi-stakeholder ethics board to: (1) identify the appropriate level of abstraction for modelling the AI lifecycle; (2) interpret prescribed principles to extract specific requirements to be met at each step of the AI lifecycle; and (3) define the criteria to inform purpose- and (...)
  • A Teleological Approach to Information Systems Design. Mattia Fumagalli, Roberta Ferrario & Giancarlo Guizzardi - 2024 - Minds and Machines 34 (3): 1-35.
    In recent years, the design and production of information systems have seen significant growth. However, these information artefacts often exhibit characteristics that compromise their reliability. This issue appears to stem from the neglect or underestimation of certain crucial aspects in the application of Information Systems Design (ISD). For example, it is frequently difficult to prove when one of these products does not work properly or works incorrectly (falsifiability), their usage is often left to subjective experience and somewhat arbitrary choices (anecdotes), (...)
  • Regulation by Design: Features, Practices, Limitations, and Governance Implications. Kostina Prifti, Jessica Morley, Claudio Novelli & Luciano Floridi - 2024 - Minds and Machines 34 (2): 1-23.
    Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, (...)
  • On the Brussels-Washington Consensus About the Legal Definition of Artificial Intelligence. Luciano Floridi - 2023 - Philosophy and Technology 36 (4): 1-9.
  • Government regulation or industry self-regulation of AI? Investigating the relationships between uncertainty avoidance, people’s AI risk perceptions, and their regulatory preferences in Europe. Bartosz Wilczek, Sina Thäsler-Kordonouri & Maximilian Eder - forthcoming - AI and Society: 1-15.
    Artificial Intelligence (AI) has the potential to influence people’s lives in various ways as it is increasingly integrated into important decision-making processes in key areas of society. While AI offers opportunities, it is also associated with risks. These risks have sparked debates about how AI should be regulated, whether through government regulation or industry self-regulation. AI-related risk perceptions can be shaped by national cultures, especially the cultural dimension of uncertainty avoidance. This raises the question of whether people in countries with (...)
  • Automating Business Process Compliance for the EU AI Act. Claudio Novelli, Guido Governatori & Antonino Rotolo - 2023 - In Giovanni Sileno, Jerry Spanakis & Gijs van Dijck (eds.), Legal Knowledge and Information Systems: Proceedings of JURIX 2023. IOS Press. pp. 125-130.
    The EU AI Act is the first step toward a comprehensive legal framework for AI. It introduces provisions for AI systems based on their risk levels in relation to fundamental rights. Providers of AI systems must conduct Conformity Assessments before market placement. Recent amendments added Fundamental Rights Impact Assessments for high-risk AI system users, focusing on compliance with EU and national laws, fundamental rights, and potential impacts on EU values. The paper suggests that automating business process compliance can help standardize (...)
  • AI-Related Risk: An Epistemological Approach. Giacomo Zanotti, Daniele Chiffi & Viola Schiaffonati - 2024 - Philosophy and Technology 37 (2): 1-18.
    Risks connected with AI systems have become a recurrent topic in public and academic debates, and the European proposal for the AI Act explicitly adopts a risk-based tiered approach that associates different levels of regulation with different levels of risk. However, a comprehensive and general framework to think about AI-related risk is still lacking. In this work, we aim to provide an epistemological analysis of such risk building upon the existing literature on disaster risk analysis and reduction. We show how (...)