  • Galactica’s dis-assemblage: Meta’s beta and the omega of post-human science. Nicolas Chartier-Edwards, Etienne Grenier & Valentin Goujon - forthcoming - AI and Society:1-13.
    Released mid-November 2022, Galactica is a set of six large language models (LLMs) of different sizes (from 125M to 120B parameters) designed by Meta AI to achieve the ultimate ambition of “a single neural network for powering scientific tasks”, according to its accompanying whitepaper. It aims to carry out knowledge-intensive tasks, such as publication summarization, information ordering and protein annotation. However, just a few days after the release, Meta had to pull back the demo due to the strong hallucinatory (...)
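    As a concrete anchor for the scale range mentioned above, the sketch below loads the smallest of the six checkpoints. This is a minimal illustration assuming the models remain published on the Hugging Face Hub under facebook/galactica-*; it uses the standard transformers API and is not code from the paper.

```python
# Minimal sketch (assumption: facebook/galactica-125m is still hosted on the
# Hugging Face Hub). Requires `pip install transformers torch`.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")  # Galactica is OPT-based

# Galactica was trained on scientific text, so prompt it with a paper-style opening.
inputs = tokenizer("The Transformer architecture", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

    Even brief prompting of the small checkpoint tends to surface the fluent-but-unreliable completions that led Meta to withdraw the public demo.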
  • Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? Joshua Hatherley - forthcoming - Journal of Medical Ethics.
    It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue (...)
  • Transparency for AI systems: a value-based approach. Stefan Buijsman - 2024 - Ethics and Information Technology 26 (2):1-11.
    With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy and the EU AI Act mandates forms of transparency for all high-risk and limited-risk systems. Yet, what should the standards for transparency be? What information is needed to show to a wide public that a certain system can be used legitimately and responsibly? I argue that process-based approaches fail (...)
  • Policy advice and best practices on bias and fairness in AI. Jose M. Alvarez, Alejandra Bringas Colmenarejo, Alaa Elobaid, Simone Fabbrizzi, Miriam Fahimi, Antonio Ferrara, Siamak Ghodsi, Carlos Mougan, Ioanna Papageorgiou, Paula Reyero, Mayra Russo, Kristen M. Scott, Laura State, Xuan Zhao & Salvatore Ruggieri - 2024 - Ethics and Information Technology 26 (2):1-26.
    The literature addressing bias and fairness in AI models (fair-AI) is growing at a fast pace, making it difficult for new researchers and practitioners to get a bird’s-eye view of the field. In particular, many policy initiatives, standards, and best practices in fair-AI have been proposed for setting principles, procedures, and knowledge bases to guide and operationalize the management of bias and fairness. The first objective of this paper is to concisely survey the state-of-the-art of fair-AI methods and resources, (...)
  • Use case cards: a use case reporting framework inspired by the European AI Act. Emilia Gómez, Sandra Baldassarri, David Fernández-Llorca & Isabelle Hupont - 2024 - Ethics and Information Technology 26 (2):1-23.
    Despite recent efforts by the Artificial Intelligence (AI) community to move towards standardised procedures for documenting models, methods, systems or datasets, there is currently no methodology focused on use cases aligned with the risk-based approach of the European AI Act (AI Act). In this paper, we propose a new framework for the documentation of use cases that we call use case cards, based on the use case modelling included in the Unified Modeling Language (UML) standard. Unlike other documentation methodologies, we (...)
  • AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up. Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  • Three lines of defense against risks from AI. Jonas Schuett - forthcoming - AI and Society:1-15.
    Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for AI risk management. The three lines of defense (3LoD) model, which is considered best practice in many industries, might offer a solution. It is a risk management framework that helps organizations to assign and coordinate risk management roles and responsibilities. In this article, I suggest ways in which AI companies could (...)
  • The Ethics of Online Controlled Experiments (A/B Testing). Andrea Polonioli, Riccardo Ghioni, Ciro Greco, Prathm Juneja, Jacopo Tagliabue, David Watson & Luciano Floridi - 2023 - Minds and Machines 33 (4):667-693.
    Online controlled experiments, also known as A/B tests, have become ubiquitous. While many practical challenges in running experiments at scale have been thoroughly discussed, the ethical dimension of A/B testing has been neglected. This article fills this gap in the literature by introducing a new, soft ethics and governance framework that explicitly recognizes how the rise of an experimentation culture in industry settings brings not only unprecedented opportunities to businesses but also significant responsibilities. More precisely, the article (a) introduces a (...)
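    For readers unfamiliar with the mechanics the article presupposes, the sketch below shows the two-proportion z-test that underlies a typical A/B comparison; the conversion counts are invented for illustration and do not come from the article.

```python
import math

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical experiment: 5.0% vs 5.5% conversion over 24,000 users per arm.
z, p = ab_test_z(1200, 24000, 1320, 24000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.46, p ≈ 0.014
```

    The ethical questions the article raises begin exactly here: each of those 48,000 users was, usually unknowingly, a research subject.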
  • The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  • AI Documentation: A path to accountability. Florian Königstorfer & Stefan Thalmann - 2022 - Journal of Responsible Technology 11 (C):100043.
  • Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of (...)
  • Companies Committed to Responsible AI: From Principles towards Implementation and Regulation? Paul B. de Laat - 2021 - Philosophy and Technology 34 (4):1135-1193.
    The term ‘responsible AI’ has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the ‘Partnership on AI’. By means of a comprehensive web search, two questions are addressed by this study: (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Shakir Mohamed, Marie-Therese Png & William Isaac - 2020 - Philosophy and Technology 33 (4):659-684.
    This paper explores the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. Artificial intelligence is viewed as amongst the technological advances that will reshape modern societies and their relations. While the design and deployment of systems that continually adapt holds the promise of far-reaching positive change, they simultaneously pose significant risks, especially to already vulnerable peoples. Values and power are central to this discussion. Decolonial theories (...)
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • Manifestations of xenophobia in AI systems. Nenad Tomasev, Jonathan Leader Maynard & Iason Gabriel - forthcoming - AI and Society:1-23.
    Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the (...)
  • Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms. Kristof Meding & Thilo Hagendorff - 2024 - Philosophy and Technology 37 (1):1-22.
    Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested in fair AI (...)
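    The selective-reporting move described above is easy to reproduce. The numpy sketch below (toy data invented here, not taken from the paper) shows one set of predictions that satisfies demographic parity while failing equal opportunity; reporting only the first metric would shroud the disparity.

```python
import numpy as np

def selection_rate(y_pred):
    """Share of individuals who receive a positive decision."""
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    """Share of truly qualified individuals who receive a positive decision."""
    return y_pred[y_true == 1].mean()

# Hypothetical outcomes for two demographic groups of 50 people each.
y_true_a = np.array([1] * 25 + [0] * 25)
y_pred_a = np.array([1] * 20 + [0] * 5 + [1] * 5 + [0] * 20)
y_true_b = np.array([1] * 40 + [0] * 10)
y_pred_b = np.array([1] * 22 + [0] * 18 + [1] * 3 + [0] * 7)

# Demographic parity: selection rates are identical (0.50 vs 0.50), so it "passes".
print(selection_rate(y_pred_a), selection_rate(y_pred_b))

# Equal opportunity: true positive rates differ by 0.25 (0.80 vs 0.55), so it fails.
print(true_positive_rate(y_true_a, y_pred_a), true_positive_rate(y_true_b, y_pred_b))
```

    An auditor shown only the selection rates would see no disparity; the 25-point gap in true positive rates stays hidden.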
  • Detecting racial inequalities in criminal justice: towards an equitable deep learning approach for generating and interpreting racial categories using mugshots. Rahul Kumar Dass, Nick Petersen, Marisa Omori, Tamara Rice Lave & Ubbo Visser - 2023 - AI and Society 38 (2):897-918.
    Recent events have highlighted large-scale systemic racial disparities in U.S. criminal justice based on race and other demographic characteristics. Although criminological datasets are used to study and document the extent of such disparities, they often lack key information, including arrestees’ racial identification. As AI technologies are increasingly used by criminal justice agencies to make predictions about outcomes in bail, policing, and other decision-making, a growing literature suggests that the current implementation of these systems may perpetuate racial inequalities. In this paper, (...)
  • Hard choices in artificial intelligence. Roel Dobbe, Thomas Krendl Gilbert & Yonatan Mintz - 2021 - Artificial Intelligence 300 (C):103555.
  • The landscape of data and AI documentation approaches in the European policy context. Josep Soler-Garrido, Blagoj Delipetrev, Isabelle Hupont & Marina Micheli - 2023 - Ethics and Information Technology 25 (4):1-21.
    Nowadays, Artificial Intelligence (AI) is present in all sectors of the economy. Consequently, both data (the raw material used to build AI systems) and AI have an unprecedented impact on society, and there is a need to ensure that they work for its benefit. For this reason, the European Union has put data and trustworthy AI at the center of recent legislative initiatives. An important element in these regulations is transparency, understood as the provision of information to relevant stakeholders to support (...)
  • Contestable AI by Design: Towards a Framework. Kars Alfrink, Ianus Keller, Gerd Kortuem & Neelke Doorn - 2023 - Minds and Machines 33 (4):613-639.
    As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is (...)
  • Beyond explainability: justifiability and contestability of algorithmic decision systems. Clément Henin & Daniel Le Métayer - 2022 - AI and Society 37 (4):1397-1410.
    In this paper, we point out that explainability is useful but not sufficient to ensure the legitimacy of algorithmic decision systems. We argue that the key requirements for high-stakes decision systems should be justifiability and contestability. We highlight the conceptual differences between explanations and justifications, provide dual definitions of justifications and contestations, and suggest different ways to operationalize justifiability and contestability.
  • In the Frame: the Language of AI. Helen Bones, Susan Ford, Rachel Hendery, Kate Richards & Teresa Swist - 2020 - Philosophy and Technology 34 (1):23-44.
    In this article, drawing upon a feminist epistemology, we examine the critical roles that philosophical standpoint, historical usage, gender, and language play in a knowledge arena which is increasingly opaque to the general public. Focussing on the language dimension in particular, in its historical and social dimensions, we explicate how some keywords in use across artificial intelligence (AI) discourses inform and misinform non-expert understandings of this area. The insights gained could help to imagine how AI technologies could be better conceptualised, (...)
  • Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust. Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithmic decision-making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust algorithmic agents to act (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Ethics-based auditing of automated decision-making systems: intervention points and policy implications. Jakob Mökander & Maria Axente - 2023 - AI and Society 38 (1):153-171.
    Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA)—that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms—can (a) help organisations (...)
  • The ethnographer and the algorithm: beyond the black box. Angèle Christin - 2020 - Theory and Society 49 (5-6):897-918.
    A common theme in social science studies of algorithms is that they are profoundly opaque and function as “black boxes.” Scholars have developed several methodological approaches in order to address algorithmic opacity. Here I argue that we can explicitly enroll algorithms in ethnographic research, which can shed light on unexpected aspects of algorithmic systems—including their opacity. I delineate three meso-level strategies for algorithmic ethnography. The first, algorithmic refraction, examines the reconfigurations that take place when computational software, people, and institutions interact. (...)
  • Negotiating becoming: a Nietzschean critique of large language models. Simon W. S. Fischer & Bas de Boer - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) structure the linguistic landscape by reflecting certain beliefs and assumptions. In this paper, we address the risk of people unthinkingly adopting and being determined by the values or worldviews embedded in LLMs. We provide a Nietzschean critique of LLMs and, based on the concept of will to power, consider LLMs as will-to-power organisations. This allows us to conceptualise the interaction between self and LLMs as power struggles, which we understand as negotiation. Currently, the invisibility and incomprehensibility (...)
  • Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions. David Casacuberta, Ariel Guersenzvaig & Cristian Moyano-Fernández - 2024 - AI and Society 39 (1):279-293.
    Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and on what grounds that result is reached. There are consistent technical efforts to make systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, (...)
  • Missed opportunities for AI governance: lessons from ELS programs in genomics, nanotechnology, and RRI. Maximilian Braun & Ruth Müller - forthcoming - AI and Society:1-14.
    Since the beginning of the current hype around Artificial Intelligence (AI), governments, research institutions, and industry have invited ethical, legal, and social sciences (ELS) scholars to research AI’s societal challenges from various disciplinary viewpoints and perspectives. This approach builds upon the tradition of supporting research on the societal aspects of emerging sciences and technologies, which started with the Ethical, Legal, and Social Implications (ELSI) Program in the Human Genome Project (HGP) in the early 1990s. However, although a diverse ELS research (...)
  • Training philosopher engineers for better AI. Brian Ball & Alexandros Koliousis - 2023 - AI and Society 38 (2):861-868.
    There is a deluge of AI-assisted decision-making systems, where our data serve as a proxy for the actions suggested by AI. The more closely we investigate our data (the raw input, its learned representations, or the suggested actions), the more “bugs” we begin to discover. Outside of their controlled test environments, AI systems may encounter situations investigated primarily by those in other disciplines, but experts in those fields are typically excluded from the design process and are only invited to attest to the ethical features (...)
  • An explanation space to align user studies with the technical development of Explainable AI. Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental models, (...)
  • The role of empathy for artificial intelligence accountability. Ramya Srinivasan & Beatriz San Miguel González - 2022 - Journal of Responsible Technology 9 (C):100021.
  • Quantifying and alleviating political bias in language models. Ruibo Liu, Chenyan Jia, Jason Wei, Guangxuan Xu & Soroush Vosoughi - 2022 - Artificial Intelligence 304 (C):103654.