References
  • Why we should talk about institutional (dis)trustworthiness and medical machine learning.Michiel De Proost & Giorgia Pozzi - forthcoming - Medicine, Health Care and Philosophy:1-10.
    The principle of trust has been placed at the centre as an attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural level ex negativo as we aim to analyse the concept of “institutional distrustworthiness” to achieve a proper diagnosis of how we should not engage with medical machine learning. First, we begin with several examples that hint at (...)
  • Effective Human Oversight of AI-Based Systems: A Signal Detection Perspective on the Detection of Inaccurate and Unfair Outputs.Markus Langer, Kevin Baum & Nadine Schlicker - 2024 - Minds and Machines 35 (1):1-30.
    Legislation and ethical guidelines around the globe call for effective human oversight of AI-based systems in high-risk contexts – that is oversight that reliably reduces the risks otherwise associated with the use of AI-based systems. Such risks may relate to the imperfect accuracy of systems (e.g., inaccurate classifications) or to ethical concerns (e.g., unfairness of outputs). Given the significant role that human oversight is expected to play in the operation of AI-based systems, it is crucial to better understand the conditions (...)
  • Explaining AI through mechanistic interpretability.Lena Kästner & Barnaby Crook - 2024 - European Journal for Philosophy of Science 14 (4):1-25.
    Recent work in explainable artificial intelligence (XAI) attempts to render opaque AI systems understandable through a divide-and-conquer strategy. However, this fails to illuminate how trained AI systems work as a whole. Precisely this kind of functional understanding is needed, though, to satisfy important societal desiderata such as safety. To remedy this situation, we argue, AI researchers should seek mechanistic interpretability, viz. apply coordinated discovery strategies familiar from the life sciences to uncover the functional organisation of complex AI systems. Additionally, theorists (...)
  • Galactica’s dis-assemblage: Meta’s beta and the omega of post-human science.Nicolas Chartier-Edwards, Etienne Grenier & Valentin Goujon - forthcoming - AI and Society:1-13.
Released mid-November 2022, Galactica is a set of six large language models (LLMs) of different sizes (from 125M to 120B parameters) designed by Meta AI to achieve the ultimate ambition of “a single neural network for powering scientific tasks”, according to its accompanying whitepaper. It aims to carry out knowledge-intensive tasks, such as publication summarization, information ordering and protein annotation. However, just a few days after the release, Meta had to pull back the demo due to the strong hallucinatory (...)
  • Stochastic contingency machines feeding on meaning: on the computational determination of social reality in machine learning.Richard Groß - forthcoming - AI and Society:1-14.
    In this paper, I reflect on the puzzle that machine learning presents to social theory to develop an account of its distinct impact on social reality. I start by presenting how machine learning has presented a challenge to social theory as a research subject comprising both familiar and alien characteristics (1.). Taking this as an occasion for theoretical inquiry, I then propose a conceptual framework to investigate how algorithmic models of social phenomena relate to social reality and what their stochastic (...)
  • A Knower Without a Voice: Co-Reasoning with Machine Learning.Eleanor Gilmore-Szott & Ryan Dougherty - 2024 - American Journal of Bioethics 24 (9):103-105.
    Bioethical consensus promotes a shared decision making model, which requires healthcare professionals to partner their knowledge with that of their patients—who, at a minimum, contribute their valu...
  • Conceptualizing understanding in explainable artificial intelligence (XAI): an abilities-based approach.Timo Speith, Barnaby Crook, Sara Mann, Astrid Schomäcker & Markus Langer - 2024 - Ethics and Information Technology 26 (2):1-15.
    A central goal of research in explainable artificial intelligence (XAI) is to facilitate human understanding. However, understanding is an elusive concept that is difficult to target. In this paper, we argue that a useful way to conceptualize understanding within the realm of XAI is via certain human abilities. We present four criteria for a useful conceptualization of understanding in XAI and show that these are fulfilled by an abilities-based approach: First, thinking about understanding in terms of specific abilities is motivated (...)
  • Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them.Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  • A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  • Using artificial intelligence to enhance patient autonomy in healthcare decision-making.Jose Luis Guerrero Quiñones - forthcoming - AI and Society:1-10.
The use of artificial intelligence in healthcare contexts is highly controversial for the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede the patient’s autonomous choice regarding their medical decisions. If the patient is unable to clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is being undermined. However, there are alternatives to prevent the negative impact of (...)
  • Between academic standards and wild innovation: assessing big data and artificial intelligence projects in research ethics committees.Andreas Brenneis, Petra Gehring & Annegret Lamadé - 2024 - Ethik in der Medizin 36 (4):473-491.
    Definition of the problem In medicine, as well as in other disciplines, computer science expertise is becoming increasingly important. This requires a culture of interdisciplinary assessment, for which medical ethics committees are not well prepared. The use of big data and artificial intelligence (AI) methods (whether developed in-house or in the form of “tools”) pose further challenges for research ethics reviews. Arguments This paper describes the problems and suggests solving them through procedural changes. Conclusion An assessment that is interdisciplinary from (...)
  • Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry.Tomasz Hollanek & Katarzyna Nowaczyk-Basińska - 2024 - Philosophy and Technology 37 (2):1-22.
    To analyze potential negative consequences of adopting generative AI solutions in the digital afterlife industry (DAI), in this paper we present three speculative design scenarios for AI-enabled simulation of the deceased. We highlight the perspectives of the data donor, data recipient, and service interactant – terms we employ to denote those whose data is used to create ‘deadbots,’ those in possession of the donor’s data after their death, and those who are meant to interact with the end product. We draw (...)
  • Should the use of adaptive machine learning systems in medicine be classified as research?Robert Sparrow, Joshua Hatherley, Justin Oakley & Chris Bain - 2024 - American Journal of Bioethics 24 (10):58-69.
    A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory (...)
  • The Four Fundamental Components for Intelligibility and Interpretability in AI Ethics.Moto Kamiura - forthcoming - American Philosophical Quarterly.
    Intelligibility and interpretability related to artificial intelligence (AI) are crucial for enabling explicability, which is vital for establishing constructive communication and agreement among various stakeholders, including users and designers of AI. It is essential to overcome the challenges of sharing an understanding of the details of the various structures of diverse AI systems, to facilitate effective communication and collaboration. In this paper, we propose four fundamental terms: “I/O,” “Constraints,” “Objectives,” and “Architecture.” These terms help mitigate the challenges associated with intelligibility (...)
  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  • Algorithms Don’t Have A Future: On the Relation of Judgement and Calculation.Daniel Stader - 2024 - Philosophy and Technology 37 (1):1-29.
    This paper is about the opposite of judgement and calculation. This opposition has been a traditional anchor of critiques concerned with the rise of AI decision making over human judgement. Contrary to these approaches, it is argued that human judgement is not and cannot be replaced by calculation, but that it is human judgement that contextualises computational structures and gives them meaning and purpose. The article focuses on the epistemic structure of algorithms and artificial neural networks to find that they (...)
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars.Sara Mann - forthcoming - AI and Society:1-16.
Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Attention, Moral Skill, and Algorithmic Recommendation.Nick Schuster & Seth Lazar - forthcoming - Philosophical Studies.
    Recommender systems are artificial intelligence technologies, deployed by online platforms, that model our individual preferences and direct our attention to content we’re likely to engage with. As the digital world has become increasingly saturated with information, we’ve become ever more reliant on these tools to efficiently allocate our attention. And our reliance on algorithmic recommendation may, in turn, reshape us as moral agents. While recommender systems could in principle enhance our moral agency by enabling us to cut through the information (...)
  • AI and bureaucratic discretion.Kate Vredenburgh - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
  • Will intelligent machines become moral patients?Parisa Moosavi - 2023 - Philosophy and Phenomenological Research 109 (1):95-116.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine.Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • AI as an Epistemic Technology.Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (_including_ other (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • The black box problem revisited. Real and imaginary challenges for automated legal decision making.Bartosz Brożek, Michał Furman, Marek Jakubiec & Bartłomiej Kucharzyk - 2024 - Artificial Intelligence and Law 32 (2):427-440.
    This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Moral distance, AI, and the ethics of care.Carolina Villegas-Galaviz & Kirsten Martin - forthcoming - AI and Society:1-12.
    This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted (...)
  • Connecting ethics and epistemology of AI.Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • The paradoxical transparency of opaque machine learning.Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the black-box abstraction (...)
  • Reasons for Meaningful Human Control.Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
“Meaningful human control” is a term invented in the political and legal debate on autonomous weapons systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this (...)
  • Understanding, Idealization, and Explainable AI.Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  • Defining the undefinable: the black box problem in healthcare artificial intelligence.Jordan Joseph Wadden - 2022 - Journal of Medical Ethics 48 (10):764-768.
    The ‘black box problem’ is a long-standing talking point in debates about artificial intelligence. This is a significant point of tension between ethicists, programmers, clinicians and anyone else working on developing AI for healthcare applications. However, the precise definition of these systems are often left undefined, vague, unclear or are assumed to be standardised within AI circles. This leads to situations where individuals working on AI talk over each other and has been invoked in numerous debates between opaque and explainable (...)
  • AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • The person of the category: the pricing of risk and the politics of classification in insurance and credit.Greta R. Krippner & Daniel Hirschman - 2022 - Theory and Society 51 (5):685-727.
    In recent years, scholars in the social sciences and humanities have turned their attention to how the rise of digital technologies is reshaping political life in contemporary society. Here, we analyze this issue by distinguishing between two classification technologies typical of pre-digital and digital eras that differently constitute the relationship between individuals and groups. In class-based systems, characteristic of the pre-digital era, one’s status as an individual is gained through membership in a group in which salient social identities are shared (...)
  • Elephant motorbikes and too many neckties: epistemic spatialization as a framework for investigating patterns of bias in convolutional neural networks.Raymond Drainville & Farida Vis - 2024 - AI and Society 39 (3):1079-1093.
    This article presents Epistemic Spatialization as a new framework for investigating the interconnected patterns of biases when identifying objects with convolutional neural networks (convnets). It draws upon Foucault’s notion of spatialized knowledge to guide its method of enquiry. We argue that decisions involved in the creation of algorithms, alongside the labeling, ordering, presentation, and commercial prioritization of objects, together create a distorted “nomination of the visible”: they harden the visibility of some objects, make other objects excessively visible, and consign yet (...)
  • Artificial intelligence and democratic legitimacy. The problem of publicity in public authority.Ludvig Beckman, Jonas Hultin Rosenberg & Karim Jebari - forthcoming - AI and Society.
    Machine learning algorithms are increasingly used to support decision-making in the exercise of public authority. Here, we argue that an important consideration has been overlooked in previous discussions: whether the use of ML undermines the democratic legitimacy of public institutions. From the perspective of democratic legitimacy, it is not enough that ML contributes to efficiency and accuracy in the exercise of public authority, which has so far been the focus in the scholarly literature engaging with these developments. According to one (...)
  • Framing the effects of machine learning on science.Victo J. Silva, Maria Beatriz M. Bonacelli & Carlos A. Pacheco - forthcoming - AI and Society:1-17.
    Studies investigating the relationship between artificial intelligence and science tend to adopt a partial view. There is no broad and holistic view that synthesizes the channels through which this interaction occurs. Our goal is to systematically map the influence of the latest AI techniques on science. We draw on the work of Nathan Rosenberg to develop a taxonomy of the effects of technology on science. The proposed framework comprises four categories of technology effects on science: intellectual, economic, experimental and instrumental. (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine.Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • An Eye for Artificial Intelligence: Insights Into the Governance of Artificial Intelligence and Vision for Future Research.Ruth V. Aguilera & Deepika Chhillar - 2022 - Business and Society 61 (5):1197-1241.
    In this 60th anniversary of Business & Society essay, we seek to make three main contributions at the intersection of governance and artificial intelligence. First, we aim to illuminate some of the deeper social, legal, organizational, and democratic challenges of rising AI adoption and resulting algorithmic power by reviewing AI research through a governance lens. Second, we propose an AI governance framework that aims to better assess AI challenges as well as how different governance modalities can support AI. At the (...)
  • Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
  • Inductive Risk, Understanding, and Opaque Machine Learning Models.Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  • Tensions in transparent urban AI: designing a smart electric vehicle charge point.Kars Alfrink, Ianus Keller, Neelke Doorn & Gerd Kortuem - 2023 - AI and Society 38 (3):1049-1065.
    The increasing use of artificial intelligence (AI) by public actors has led to a push for more transparency. Previous research has conceptualized AI transparency as knowledge that empowers citizens and experts to make informed choices about the use and governance of AI. Conversely, in this paper, we critically examine if transparency-as-knowledge is an appropriate concept for a public realm where private interests intersect with democratic concerns. We conduct a practice-based design research study in which we prototype and evaluate a transparent (...)
  • Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
  • Epistemic injustice and data science technologies.John Symons & Ramón Alvarado - 2022 - Synthese 200 (2):1-26.
    Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust but when they are we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI.Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • The Right to Explanation.Kate Vredenburgh - 2021 - Journal of Political Philosophy 30 (2):209-229.
    Journal of Political Philosophy, Volume 30, Issue 2, Page 209-229, June 2022.
  • (1 other version)The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2022 - AI and Society 37 (1):215-230.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • A sociotechnical perspective for the future of AI: narratives, inequalities, and human control.Andreas Theodorou & Laura Sartori - 2022 - Ethics and Information Technology 24 (1):1-11.
    Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature (...)
  • (1 other version)The epistemological foundations of data science: a critical analysis.Jules Desai, David Watson, Vincent Wang, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)