  • Government regulation or industry self-regulation of AI? Investigating the relationships between uncertainty avoidance, people’s AI risk perceptions, and their regulatory preferences in Europe. Bartosz Wilczek, Sina Thäsler-Kordonouri & Maximilian Eder - forthcoming - AI and Society:1-15.
    Artificial Intelligence (AI) has the potential to influence people’s lives in various ways as it is increasingly integrated into important decision-making processes in key areas of society. While AI offers opportunities, it is also associated with risks. These risks have sparked debates about how AI should be regulated, whether through government regulation or industry self-regulation. AI-related risk perceptions can be shaped by national cultures, especially the cultural dimension of uncertainty avoidance. This raises the question of whether people in countries with (...)
  • Automating public policy: a comparative study of conversational artificial intelligence models and human expertise in crafting briefing notes. Stany Nzobonimpa, Jean-François Savard, Isabelle Caron & Justin Lawarée - forthcoming - AI and Society:1-13.
    This paper investigates the application of artificial intelligence (AI) language models in writing policy briefing notes within the context of public administration by juxtaposing the technologies’ performance against the traditional reliance on human expertise. Briefing notes are pivotal in informing decision-making processes in government contexts, which generally require high accuracy, clarity, and issue-relevance. Given the increasing integration of AI across various sectors, this study aims to evaluate the effectiveness and acceptability of AI-generated policy briefing notes. Using a structured evaluation-by-experts methodology, (...)
  • Hypersuasion – On AI’s Persuasive Power and How to Deal with It. Luciano Floridi - 2024 - Philosophy and Technology 37 (2):1-10.
  • Opening the black boxes of the black carpet in the era of risk society: a sociological analysis of AI, algorithms and big data at work through the case study of the Greek postal services. Christos Kouroutzas & Venetia Palamari - forthcoming - AI and Society:1-14.
    This article draws on contributions from the Sociology of Science and Technology and Science and Technology Studies, the Sociology of Risk and Uncertainty, and the Sociology of Work, focusing on the transformations of employment regarding expanded automation, robotization and informatization. The new work patterns emerging due to the introduction of software and hardware technologies, which are based on artificial intelligence, algorithms, big data gathering and robotic systems are examined closely. This article attempts to “open the black boxes” of the “black (...)
  • Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 55.
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and (...)
  • Ethical governance of artificial intelligence for defence: normative tradeoffs for principle to practice guidance. Alexander Blanchard, Christopher Thomas & Mariarosaria Taddeo - forthcoming - AI and Society:1-14.
    The rapid diffusion of artificial intelligence (AI) technologies in the defence domain raises challenges for the ethical governance of these systems. A recent shift from the what to the how of AI ethics sees a nascent body of literature published by defence organisations focussed on guidance to implement AI ethics principles. These efforts have neglected a crucial intermediate step between principles and guidance concerning the elicitation of ethical requirements for specifying the guidance. In this article, we outline the key normative (...)
  • Supporting Trustworthy AI Through Machine Unlearning. Emmie Hine, Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2024 - Science and Engineering Ethics 30 (5):1-13.
    Machine unlearning (MU) is often analyzed in terms of how it can facilitate the “right to be forgotten.” In this commentary, we show that MU can support the OECD’s five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy (...)
  • Assessing dual use risks in AI research: necessity, challenges and mitigation strategies. Andreas Brenneis - forthcoming - Research Ethics.
    This article argues that due to the difficulty in governing AI, it is essential to develop measures implemented early in the AI research process. The goal of dual use considerations is to create robust strategies that uphold AI’s integrity while protecting societal interests. The challenges of applying dual use frameworks to AI research are examined and dual use and dual use research of concern (DURC) are defined while highlighting the difficulties in balancing the technology’s benefits and risks. AI’s dual use (...)
  • Health professions students’ perceptions of artificial intelligence and its integration to health professions education and healthcare: a thematic analysis. Ejercito Mangawa Balay-Odao, Dinara Omirzakova, Srinivasa Rao Bolla, Joseph U. Almazan & Jonas Preposi Cruz - forthcoming - AI and Society:1-11.
    Artificial intelligence (AI) is being tightly integrated into healthcare today. Even though AI is being utilized in healthcare, its application in clinical settings and health professions education is still controversial. The study described the perceptions of AI and its integration into health professions education and healthcare among health professions students. This descriptive phenomenological study analyzed the data from a purposive sample of 33 health professions students at a university in Kazakhstan using the thematic approach. Data collection was conducted from March (...)
  • No recognised ethical standards, no broad consent: navigating the quandary in computational social science research. Seliem El-Sayed & Filip Paspalj - 2024 - Research Ethics 20 (3):433-452.
    Recital 33 GDPR has often been interpreted as referring to ‘broad consent’. This version of informed consent was intended to allow data subjects to provide their consent for certain areas of research, or parts of research projects, conditional to the research being in line with ‘recognised ethical standards’. In this article, we argue that broad consent is applicable in the emerging field of Computational Social Science (CSS), which lies at the intersection of data science and social science. However, the lack (...)
  • Owning Decisions: AI Decision-Support and the Attributability-Gap. Jannik Zeiser - 2024 - Science and Engineering Ethics 30 (4):1-19.
    Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine’s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today’s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make (...)
  • A proposal for formal fairness requirements in triage emergency departments: publicity, accessibility, relevance, standardisability and accountability. Davide Battisti & Silvia Camporesi - forthcoming - Journal of Medical Ethics.
    This paper puts forward a wish list of requirements for formal fairness in the specific context of triage in emergency departments (EDs) and maps the empirical and conceptual research questions that need to be addressed in this context in the near future. The pandemic has brought to the fore the necessity for public debate about how to allocate resources fairly in a situation of great shortage. However, issues of fairness arise also outside of pandemics: decisions about how to allocate resources (...)