  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Understanding, Idealization, and Explainable AI.Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament.Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Algorithmic and human decision making: for a double standard of transparency.Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts.Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss the issue of how artificial intelligence could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make the clarification of their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
  • Explaining Machine Learning Decisions.John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
  • The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  • Defining the undefinable: the black box problem in healthcare artificial intelligence.Jordan Joseph Wadden - 2022 - Journal of Medical Ethics 48 (10):764-768.
    The ‘black box problem’ is a long-standing talking point in debates about artificial intelligence. This is a significant point of tension between ethicists, programmers, clinicians and anyone else working on developing AI for healthcare applications. However, the precise definition of these systems is often left undefined, vague, or unclear, or is assumed to be standardised within AI circles. This leads to situations where individuals working on AI talk over each other, and has been invoked in numerous debates between opaque and explainable (...)
  • Scientific Exploration and Explainable Artificial Intelligence.Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • What is Interpretability?Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • The Importance of Understanding Deep Learning.Tim Räz & Claus Beisbart - 2024 - Erkenntnis 89 (5).
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with (...)
  • Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit.Mariarosaria Taddeo & Alexander Blanchard - 2022 - Philosophy and Technology 35 (3):1-24.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way and which is accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning.Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • Conceptual challenges for interpretable machine learning.David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • Criminal Justice and Artificial Intelligence: How Should we Assess the Performance of Sentencing Algorithms?Jesper Ryberg - 2024 - Philosophy and Technology 37 (1):1-15.
    Artificial intelligence is increasingly permeating many types of high-stake societal decision-making such as the work at the criminal courts. Various types of algorithmic tools have already been introduced into sentencing. This article concerns the use of algorithms designed to deliver sentence recommendations. More precisely, it is considered how one should determine whether one type of sentencing algorithm (e.g., a model based on machine learning) would be ethically preferable to another type of sentencing algorithm (e.g., a model based on old-fashioned programming). (...)
  • Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them.Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  • On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility.Sune Holm - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-7.
    When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to (...)
  • Harm to Nonhuman Animals from AI: a Systematic Account and Framework.Simon Coghlan & Christine Parker - 2023 - Philosophy and Technology 36 (2):1-34.
    This paper provides a systematic account of how artificial intelligence (AI) technologies could harm nonhuman animals and explains why animal harms, often neglected in AI ethics, should be better recognised. After giving reasons for caring about animals and outlining the nature of animal harm, interests, and wellbeing, the paper develops a comprehensive ‘harms framework’ which draws on scientist David Fraser’s influential mapping of human activities that impact on sentient animals. The harms framework is fleshed out with examples inspired by both (...)
  • AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - 2022 - AI and Society (2022):Online.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  • Evaluating XAI: A comparison of rule-based and example-based explanations.Jasper van der Waa, Elisabeth Nieuwburg, Anita Cremers & Mark Neerincx - 2021 - Artificial Intelligence 291 (C):103404.
  • Using artificial intelligence to enhance patient autonomy in healthcare decision-making.Jose Luis Guerrero Quiñones - forthcoming - AI and Society.
    The use of artificial intelligence in healthcare contexts is highly controversial for the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede the patient’s autonomous choice regarding their medical decisions. If the patient is unable to clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is undermined. However, there are alternatives to prevent the negative impact of (...)
  • Legal requirements on explainability in machine learning.Adrien Bibal, Michael Lognoul, Alexandre de Streel & Benoît Frénay - 2020 - Artificial Intelligence and Law 29 (2):149-169.
    Deep learning and other black-box models are becoming more and more popular today. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the increasing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented into machine-learning models and concludes with a call for more inter-disciplinary research on explainability.
  • Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned?Chang Ho Yoon, Robert Torrance & Naomi Scheinerman - 2022 - Journal of Medical Ethics 48 (9):581-585.
    We argue why interpretability should have primacy alongside empiricism for several reasons: first, if machine learning models are beginning to render some of the high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be on a par with human instruction in specific (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence.Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • A top-level model of case-based argumentation for explanation: Formalisation and experiments.Henry Prakken & Rosa Ratsma - 2022 - Argument and Computation 13 (2):159-194.
    This paper proposes a formal top-level model of explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the (...)
  • On the Ethical and Epistemological Utility of Explicable AI in Medicine.Christian Herzog - 2022 - Philosophy and Technology 35 (2):1-31.
    In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence (AI)-based medical technology. I will build on the notion of “explicability” due to Floridi, which considers both the intelligibility and accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in terms of direct patient–physician contact, as well (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective.Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications provide in marketing are ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Automated decision-making and the problem of evil.Andrea Berber - 2023 - AI and Society:1-10.
    The intention of this paper is to point to the dilemma humanity may face in light of AI advancements. The dilemma is whether to create a world with less evil or maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation.Sanja Srećković, Andrea Berber & Nenad Filipović - 2021 - Minds and Machines 32 (1):159-183.
    Certain characteristics make machine learning a powerful tool for processing large amounts of data, and also particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for the future directions in scientific research: epistemic opacity and the ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with the traditional statistical methods, in order to demonstrate what (...)
  • Putting explainable AI in context: institutional explanations for medical AI.Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors?Saskia K. Nagel, Jan-Christoph Heilinger & Hendrik Kempt - 2022 - Ethics and Information Technology 24 (2):20.
    The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods explainability is usually considered to contribute to the use of AI in general, and medical AI specifically. Second, we propose to understand the value of (...)
  • Allure of Simplicity.Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and interpretability by design. Comparing the three strategies, I argue that (...)
  • Healthy Mistrust: Medical Black Box Algorithms, Epistemic Authority, and Preemptionism.Andreas Wolkenstein - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):370-379.
    In the ethics of algorithms, a specifically epistemological analysis is rarely undertaken in order to gain a critique (or a defense) of the handling of or trust in medical black box algorithms (BBAs). This article aims to begin to fill this research gap. Specifically, the thesis is examined according to which such algorithms are regarded as epistemic authorities (EAs) and that the results of a medical algorithm must completely replace other convictions that patients have (preemptionism). If this were true, it (...)
  • ML interpretability: Simple isn't easy.Tim Räz - 2024 - Studies in History and Philosophy of Science Part A 103 (C):159-167.
  • Meaning by Courtesy: LLM-Generated Texts and the Illusion of Content.Gary Ostertag - 2023 - American Journal of Bioethics 23 (10):91-93.
    Contrary to how it may seem when we observe its output, an [LLM] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to...
  • On the Opacity of Deep Neural Networks.Anders Søgaard - 2023 - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to (...)
  • AI and the need for justification (to the patient).Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Explaining AI through mechanistic interpretability.Lena Kästner & Barnaby Crook - 2024 - European Journal for Philosophy of Science 14 (4):1-25.
    Recent work in explainable artificial intelligence (XAI) attempts to render opaque AI systems understandable through a divide-and-conquer strategy. However, this fails to illuminate how trained AI systems work as a whole. Precisely this kind of functional understanding is needed, though, to satisfy important societal desiderata such as safety. To remedy this situation, we argue, AI researchers should seek mechanistic interpretability, viz. apply coordinated discovery strategies familiar from the life sciences to uncover the functional organisation of complex AI systems. Additionally, theorists (...)
  • Logic Explained Networks.Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Liò, Marco Maggini & Stefano Melacci - 2023 - Artificial Intelligence 314 (C):103822.
  • Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence.Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating the knowledge of (...)
  • Machines Learn Better with Better Data Ontology: Lessons from Philosophy of Induction and Machine Learning Practice.Dan Li - 2023 - Minds and Machines 33 (3):429-450.
    As scientists start to adopt machine learning (ML) as a research tool, the security of ML and the knowledge generated become a concern. In this paper, I explain how supervised ML can be improved with better data ontology, or the way we make categories and turn information into data. More specifically, we should design data ontology in such a way that is consistent with the knowledge that we have about the target phenomenon so that such ontology can help us make (...)
  • Value Alignment for Advanced Artificial Judicial Intelligence.Christoph Winter, Nicholas Hollman & David Manheim - 2023 - American Philosophical Quarterly 60 (2):187-203.
    This paper considers challenges resulting from the use of advanced artificial judicial intelligence (AAJI). We argue that these challenges should be considered through the lens of value alignment. Instead of discussing why specific goals and values, such as fairness and nondiscrimination, ought to be implemented, we consider the question of how AAJI can be aligned with goals and values more generally, in order to be reliably integrated into legal and judicial systems. This value alignment framing draws on AI safety and (...)
  • The Epistemic Role of AI Decision Support Systems: Neither Superiors, Nor Inferiors, Nor Peers.Rand Hirmiz - 2024 - Philosophy and Technology 37 (127):1-20.
    Despite the importance of discussions over the epistemic role that artificially intelligent decision support systems ought to play, there is currently a lack of these discussions in both the AI literature and the epistemology literature. My goal in this paper is to rectify this by proposing an account of the epistemic role of AI decision support systems in medicine and discussing what this epistemic role means with regard to how these systems ought to be utilized. In particular, I argue that (...)
  • Commentary on David Watson, “On the Philosophy of Unsupervised Learning”.Tom F. Sterkenburg - 2023 - Philosophy and Technology 36 (4):1-5.
  • Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology.Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
  • The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples.Timo Freiesleben - 2021 - Minds and Machines 32 (1):1-33.
    The same method that creates adversarial examples (AEs) to fool image classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current (...)
  • Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)