  • Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems. Andrea Ferrario, Alessandro Facchini & Alberto Termine - 2024 - Minds and Machines 34 (3):1-27.
    The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts (...)
  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
  • AI and the need for justification (to the patient). Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Theorem proving in artificial neural networks: new frontiers in mathematical AI. Markus Pantsar - 2024 - European Journal for Philosophy of Science 14 (1):1-22.
    Computer-assisted theorem proving is an increasingly important part of mathematical methodology, as well as a long-standing topic in artificial intelligence (AI) research. However, the current generation of theorem proving software has limited functioning in terms of providing new proofs. Importantly, these programs are not able to discriminate interesting theorems and proofs from trivial ones. In order for computers to develop further in theorem proving, there would need to be a radical change in how the software functions. Recently, machine learning results (...)
  • We Have No Satisfactory Social Epistemology of AI-Based Science. Inkeri Koskinen - 2024 - Social Epistemology 38 (4):458-475.
    In the social epistemology of scientific knowledge, it is largely accepted that relationships of trust, not just reliance, are necessary in contemporary collaborative science characterised by relationships of opaque epistemic dependence. Such relationships of trust are taken to be possible only between agents who can be held accountable for their actions. But today, knowledge production in many fields makes use of AI applications that are epistemically opaque in an essential manner. This creates a problem for the social epistemology of scientific (...)
  • Keep trusting! A plea for the notion of Trustworthy AI. Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi & Viola Schiaffonati - 2024 - AI and Society 39 (6):2691-2702.
    A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the (...)
  • AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other (...)
  • Expert judgment in climate science: How it is used and how it can be justified. Mason Majszak & Julie Jebeile - 2023 - Studies in History and Philosophy of Science 100 (C):32-38.
    Like any science marked by high uncertainty, climate science is characterized by a widespread use of expert judgment. In this paper, we first show that, in climate science, expert judgment is used to overcome uncertainty, thus playing a crucial role in the domain and even at times supplanting models. One is left to wonder to what extent it is legitimate to assign expert judgment such a status of epistemic superiority in the climate context, especially as the production of expert (...)
  • Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  • Connecting ethics and epistemology of AI. Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Charles Rathkopf & Bert Heinrichs - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):333-345.
    Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, by framing (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • Epistemic injustice and data science technologies. John Symons & Ramón Alvarado - 2022 - Synthese 200 (2):1-26.
    Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust but when they are we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...)
  • Why Trust a Simulation? Models, Parameters, and Robustness in Simulation-Infected Experiments. Florian J. Boge - forthcoming - British Journal for the Philosophy of Science.
    Computer simulations are nowadays often directly involved in the generation of experimental results. Given this dependency of experiments on computer simulations, that of simulations on models, and that of the models on free parameters, how do researchers establish trust in their experimental results? Using high-energy physics (HEP) as a case study, I will identify three different types of robustness that I call conceptual, methodological, and parametric robustness, and show how they can sanction this trust. However, as I will also show, (...)
  • Explaining Epistemic Opacity. Ramón Alvarado - unknown
    Conventional accounts of epistemic opacity, particularly those that stem from the definitive work of Paul Humphreys, typically point to limitations on the part of epistemic agents to account for the distinct ways in which systems, such as computational methods and devices, are opaque. They point, for example, to the lack of technical skill on the part of an agent, the failure to meet standards of best practice, or even the nature of an agent as reasons why epistemically relevant elements of (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Models, Parameterization, and Software: Epistemic Opacity in Computational Chemistry. Frédéric Wieber & Alexandre Hocquet - 2020 - Perspectives on Science 28 (5):610-629.
    Computational chemistry grew in a new era of “desktop modeling,” which coincided with a growing demand for modeling software, especially from the pharmaceutical industry. Parameterization of models in computational chemistry is an arduous enterprise, and we argue that this activity leads, in this specific context, to tensions among scientists regarding the epistemic opacity and transparency of parameterized methods and the software implementing them. We relate one flame war from the Computational Chemistry mailing list in order to assess in detail the (...)
  • Degrees of Epistemic Opacity. Iñaki San Pedro - manuscript
    The paper analyses in some depth the distinction by Paul Humphreys between "epistemic opacity" —which I refer to as "weak epistemic opacity" here— and "essential epistemic opacity", and defends the idea that epistemic opacity in general can be made sense of as coming in degrees. The idea of degrees of epistemic opacity is then exploited to show, in the context of computer simulations, the tight relation between the concept of epistemic opacity and actual scientific (modelling and simulation) practices. As a consequence, (...)
  • Calculating surprises: a review for a philosophy of computer simulations: Johannes Lenhard: Calculated Surprises: A Philosophy of Computer Simulations. New York: Oxford University Press, 2019, 256 pp, €64.12. Juan M. Durán - 2020 - Metascience 29 (2):337-340.
  • Ciencia de la computación y filosofía: unidades de análisis del software. Juan Manuel Durán - 2018 - Principia 22 (2):203-227.
    A widespread image for understanding computer software is the one that represents it as a “black box”: what really matters is not knowing which parts compose it internally, but which results are obtained from it given certain input values. In doing so, many philosophical problems are hidden, denied, or simply misunderstood. This article discusses three units of analysis of computer software, namely, specifications, algorithms, and computational processes. The central aim is (...)
  • Computer Simulations in Science and Engineering: Concepts, Practices, Perspectives. Juan Manuel Durán - 2018 - Springer.
    This book addresses key conceptual issues relating to the modern scientific and engineering use of computer simulations. It analyses a broad set of questions, from the nature of computer simulations to their epistemological power, including the many scientific, social and ethical implications of using computer simulations. The book is written in an easily accessible narrative, one that weaves together philosophical questions and scientific technicalities. It will thus appeal equally to all academic scientists, engineers, and researchers in industry interested in questions (...)
  • Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Andrea Ferrario - 2024 - Science and Engineering Ethics 30 (6):1-21.
    We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately rely on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of (...)
  • From Coding To Curing. Functions, Implementations, and Correctness in Deep Learning. Nicola Angius & Alessio Plebe - 2023 - Philosophy and Technology 36 (3):1-27.
    This paper sheds light on the shift that is taking place from the practice of ‘coding’, namely developing programs as conventional in the software community, to the practice of ‘curing’, an activity that has emerged in the last few years in Deep Learning (DL) and that amounts to curing the data regime to which a DL model is exposed during training. Initially, the curing paradigm is illustrated by means of a case study on autonomous vehicles. Subsequently, the shift from coding to (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • Opacity thought through: on the intransparency of computer simulations. Claus Beisbart - 2021 - Synthese 199 (3-4):11643-11666.
    Computer simulations are often claimed to be opaque and thus to lack transparency. But what exactly is the opacity of simulations? This paper aims to answer that question by proposing an explication of opacity. Such an explication is needed, I argue, because the pioneering definition of opacity by P. Humphreys and a recent elaboration by Durán and Formanek are too narrow. While it is true that simulations are opaque in that they include too many computations and thus cannot be checked (...)
  • Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Juan M. Durán - 2021 - Artificial Intelligence 297 (C):103498.
  • Black box algorithms in mental health apps: An ethical reflection. Tania Manríquez Roa & Nikola Biller-Andorno - 2023 - Bioethics 37 (8):790-797.
    Mental health apps bring unprecedented benefits and risks to individual and public health. A thorough evaluation of these apps involves considering two aspects that are often neglected: the algorithms they deploy and the functions they perform. We focus on mental health apps based on black box algorithms, explore their forms of opacity, discuss the implications derived from their opacity, and propose how to use their outcomes in mental healthcare, self‐care practices, and research. We argue that there is a relevant distinction (...)
  • The Non-theory-driven Character of Computer Simulations and Their Role as Exploratory Strategies. Juan M. Durán - 2023 - Minds and Machines 33 (3):487-505.
    In this article, I focus on the role of computer simulations as exploratory strategies. I begin by establishing the non-theory-driven nature of simulations. This refers to their ability to characterize phenomena without relying on a predefined conceptual framework that is provided by an implemented mathematical model. Drawing on Steinle’s notion of exploratory experimentation and Gelfert’s work on exploratory models, I present three exploratory strategies for computer simulations: (1) starting points and continuation of scientific inquiry, (2) varying the parameters, and (3) (...)
  • Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it goes largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  • Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • Capturing the representational and the experimental in the modelling of artificial societies. David Anzola - 2021 - European Journal for Philosophy of Science 11 (3):1-29.
    Even though the philosophy of simulation is intended as a comprehensive reflection about the practice of computer simulation in contemporary science, its output has been disproportionately shaped by research on equation-based simulation in the physical and climate sciences. Hence, the particularities of alternative practices of computer simulation in other scientific domains are not sufficiently accounted for in the current philosophy of simulation literature. This article centres on agent-based social simulation, a relatively established type of simulation in the social sciences, to (...)
  • Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on the (...)
  • Explorations about the Family’s Role in the German Transplantation System: Epistemic Opacity and Discursive Exclusion. Iris Hilbrich & Solveig Lena Hansen - 2022 - Social Epistemology 36 (1):43-62.
    With regard to organ donation, Germany is an ‘opt-in’ country, which requires explicit consent from donors. The relatives are either asked to decide on behalf of the donors’ preferences, if these are unknown or if the potential donor has explicitly transferred the decision to them. At the core of this policy lies the sociocultural and moral premise of a rational, autonomous individual, whose rights require legal protection in order to guarantee a voluntary decision. In concrete transplantation practices, the family plays (...)
  • A Formal Framework for Computer Simulations: Surveying the Historical Record and Finding Their Philosophical Roots. Juan M. Durán - 2019 - Philosophy and Technology 34 (1):105-127.
    A chronicled approach to the notion of computer simulations shows that there are two predominant interpretations in the specialized literature. According to the first interpretation, computer simulations are techniques for finding the set of solutions to a mathematical model. I call this first interpretation the problem-solving technique viewpoint. In its second interpretation, computer simulations are considered to describe patterns of behavior of a target system. I call this second interpretation the description of patterns of behavior viewpoint of computer simulations. This (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Towards trustworthy medical AI ecosystems – a proposal for supporting responsible innovation practices in AI-based medical innovation. Christian Herzog, Sabrina Blank & Bernd Carsten Stahl - forthcoming - AI and Society:1-21.
    In this article, we explore questions about the culture of trustworthy artificial intelligence (AI) through the lens of ecosystems. We draw on the European Commission’s Guidelines for Trustworthy AI and its philosophical underpinnings. Based on the latter, the trustworthiness of an AI ecosystem can be conceived of as being grounded by both the so-called rational-choice and motivation-attributing accounts—i.e., trusting is rational because solution providers deliver expected services reliably, while trust also involves resigning control by attributing one’s motivation, and hence, goals, (...)
  • What is a Simulation Model? Juan M. Durán - 2020 - Minds and Machines 30 (3):301-323.
    Many philosophical accounts of scientific models fail to distinguish between a simulation model and other forms of models. This failure is unfortunate because there are important differences pertaining to their methodology and epistemology that favor their philosophical understanding. The core claim presented here is that simulation models are rich and complex units of analysis in their own right, that they depart from known forms of scientific models in significant ways, and that a proper understanding of the type of model simulations (...)
  • A Puzzle concerning Compositionality in Machines. Ryan M. Nefdt - 2020 - Minds and Machines 30 (1):47-75.
    This paper attempts to describe and address a specific puzzle related to compositionality in artificial networks such as Deep Neural Networks and machine learning in general. The puzzle identified here touches on a larger debate in Artificial Intelligence related to epistemic opacity but specifically focuses on computational applications of human-level linguistic abilities or properties and a special difficulty in relation to these. Thus, the resulting issue is both general and unique. A partial solution is suggested.
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):360-369.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • Dark Data as the New Challenge for Big Data Science and the Introduction of the Scientific Data Officer. Björn Schembera & Juan M. Durán - 2020 - Philosophy and Technology 33 (1):93-115.
    Many studies in big data focus on the uses of data available to researchers, leaving without treatment data that is on the servers but of which researchers are unaware. We call this dark data, and in this article, we present and discuss it in the context of high-performance computing facilities. To this end, we provide statistics of a major HPC facility in Europe, the High-Performance Computing Center Stuttgart. We also propose a new position tailor-made for coping with dark data and (...)
  • Global justice and the use of AI in education: ethical and epistemic aspects. Aleksandra Vučković & Vlasta Sikimić - forthcoming - AI and Society:1-18.
    One of the biggest contemporary challenges in education is the appropriate application of advanced digital solutions. If properly implemented, AI could benefit students, opening the door for personalized study programs. However, we need to ensure that AI in classrooms is used responsibly and that it does not pose a threat to students in any way. More specifically, we need to preserve the moral and epistemic values we wish to pass on to future generations and ensure the inclusion of underprivileged students. (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)