References
  • Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?Joshua Hatherley - forthcoming - Journal of Medical Ethics.
    It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue (...)
  • The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.Nils Freyer, Dominik Groß & Myriam Lipprandt - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background Despite continuous performance improvements, especially in clinical contexts, a major challenge of Artificial Intelligence based Decision Support Systems (AI-DSS) remains their degree of epistemic opacity. The conditions of and the solutions for the justified use of the occasionally unexplainable technology in healthcare are an active field of research. In March 2024, the European Union agreed upon the Artificial Intelligence Act (AIA), requiring medical AI-DSS to be ad-hoc explainable or to use post-hoc explainability methods. The ethical debate does not seem (...)
  • Google, ChatGPT, questions of omniscience and wisdom.Frank J. Hoffman & Klairung Iso - forthcoming - Asian Philosophy:1-15.
    The article explores how platforms like Google and ChatGPT, which claim omniscience and wisdom-like attributes, prompt philosophical questions. It revisits religious perspectives on omniscience and their influence on the pursuit of wisdom. The article suggests that while Google may offer compartmentalized omniscience based on user preferences, ChatGPT’s factual accuracy challenges its characterization as omniscient. Nonetheless, ChatGPT can still help humans progress toward wisdom, by integrating the co-creation of knowledge between humans and the unfolding of divine knowledge from Process Thought and (...)
  • A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots consists of large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  • Using artificial intelligence to enhance patient autonomy in healthcare decision-making.Jose Luis Guerrero Quiñones - forthcoming - AI and Society:1-10.
    The use of artificial intelligence in healthcare contexts is highly controversial for the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede the patient’s autonomous choice regarding their medical decisions. If the patient is unable to clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is being hindered. However, there are alternatives to prevent the negative impact of (...)
  • Reliability in Machine Learning.Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
  • Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  • Against the Double Standard Argument in AI Ethics.Scott Hill - 2024 - Philosophy and Technology 37 (1):1-5.
    In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers and therefore the concern that opaque AI is not sufficiently transparent is mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in a way that humans are transparent. Rather, the concern is that the way in which opaque AI (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics.Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • We Have No Satisfactory Social Epistemology of AI-Based Science.Inkeri Koskinen - 2024 - Social Epistemology 38 (4):458-475.
    In the social epistemology of scientific knowledge, it is largely accepted that relationships of trust, not just reliance, are necessary in contemporary collaborative science characterised by relationships of opaque epistemic dependence. Such relationships of trust are taken to be possible only between agents who can be held accountable for their actions. But today, knowledge production in many fields makes use of AI applications that are epistemically opaque in an essential manner. This creates a problem for the social epistemology of scientific (...)
  • The Allure of Simplicity.Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and interpretability by design. Comparing the three strategies, I argue that (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine.Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • A Means-End Account of Explainable Artificial Intelligence.Oliver Buchholz - 2023 - Synthese 202 (33):1-23.
    Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the field. According to means-end epistemology, different means ought to be (...)
  • Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  • Transparency and its roles in realizing greener AI.Omoregie Charles Osifo - 2023 - Journal of Information, Communication and Ethics in Society 21 (2):202-218.
    Purpose: The purpose of this paper is to identify the key roles of transparency in making artificial intelligence (AI) greener (i.e. causing lesser carbon dioxide emissions) during the design, development and manufacturing stages or processes of AI technologies (e.g. apps, systems, agents, tools, artifacts) and use the “explicability requirement” as an essential value within the framework of transparency in supporting arguments for realizing greener AI. Design/methodology/approach: The approach of this paper is argumentative, which is supported by ideas from existing literature (...)
  • Three Lessons For and From Algorithmic Discrimination.Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, and that it is (...)
  • The epistemological foundations of data science: a critical review.Luciano Floridi, Mariarosaria Taddeo, Vincent Wang, David Watson & Jules Desai - 2022 - Synthese 200 (6):1-27.
    The modern abundance and prominence of data have led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)
  • The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  • Why algorithmic speed can be more important than algorithmic accuracy.Jakob Mainz, Lauritz Munch, Jens Christian Bjerring & Sissel Godtfredsen - 2023 - Clinical Ethics 18 (2):161-164.
    Artificial Intelligence (AI) often outperforms human doctors in terms of decisional speed. For some diseases, the expected benefit of a fast but less accurate decision exceeds the benefit of a slow but more accurate one. In such cases, we argue, it is often justified to rely on a medical AI to maximise decision speed – even if the AI is less accurate than human doctors.
  • How Much Should You Care About Algorithmic Transparency as Manipulation?Ulrik Franke - 2022 - Philosophy and Technology 35 (4):1-7.
    Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of (...)
  • AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  • Legitimacy and automated decisions: the moral limits of algocracy.Bartek Chomanski - 2022 - Ethics and Information Technology 24 (3):1-9.
    With the advent of automated decision-making, governments have increasingly begun to rely on artificially intelligent algorithms to inform policy decisions across a range of domains of government interest and influence. The practice has not gone unnoticed among philosophers, worried about “algocracy”, and its ethical and political impacts. One of the chief issues of ethical and political significance raised by algocratic governance, so the argument goes, is the lack of transparency of algorithms. One of the best-known examples of philosophical analyses of (...)
  • Sources of Understanding in Supervised Machine Learning Models.Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In recent decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a range of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • Scientific Exploration and Explainable Artificial Intelligence.Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • The impact of artificial intelligence on jobs and work in New Zealand.James Maclaurin, Colin Gavaghan & Alistair Knott - 2021 - Wellington, New Zealand: New Zealand Law Foundation.
    Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses: Chapter 1 Defining the Technology of Interest; Chapter 2 The changing nature and value of work; Chapter 3 AI and (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Toy story or children story? Putting children and their rights at the forefront of the artificial intelligence revolution.E. Fosch-Villaronga, S. van der Hof, C. Lutz & A. Tamò-Larrieux - 2021 - AI and Society:1-20.
    Policymakers need to start considering the impact smart connected toys (SCTs) have on children. Equipped with sensors, data processing capacities, and connectivity, SCTs targeting children increasingly and pervasively penetrate personal environments. The network of SCTs forms the Internet of Toys (IoToys) and often increases children's engagement and playtime experience. Unfortunately, this young part of the population and, most of the time, their parents are often unaware of SCTs’ far-reaching capacities and limitations. The capabilities and constraints of SCTs create severe side effects (...)
  • The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • Transparency and the Black Box Problem: Why We Do Not Trust AI.Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  • AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
  • Sentencing Disparity and Artificial Intelligence.Jesper Ryberg - 2023 - Journal of Value Inquiry 57 (3):447-462.
    The idea of using artificial intelligence as a support system in the sentencing process has attracted increasing attention. For instance, it has been suggested that machine learning algorithms may help in curbing problems concerning inter-judge sentencing disparity. The purpose of the present article is to examine the merits of this possibility. It is argued that, insofar as the unfairness of sentencing disparity is held to reflect a retributivist view of proportionality, it is not necessarily the case that increasing inter-judge uniformity (...)
  • Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour.Cameron Buckner - 2023 - British Journal for the Philosophy of Science 74 (3):681-712.
    The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the (...)
  • Algorithmic and human decision making: for a double standard of transparency.Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • What is Interpretability?Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Artificial intelligence and the value of transparency.Joel Walmsley - 2021 - AI and Society 36 (2):585-595.
    Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency may be straightforwardly achieved, what I call “functional” transparency about the inner (...)
  • AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
  • Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  • Artificial intelligence, transparency, and public decision-making.Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • Mapping the Ethicality of Algorithmic Pricing: A Review of Dynamic and Personalized Pricing.Peter Seele, Claus Dierksmeier, Reto Hofstetter & Mario D. Schultz - 2019 - Journal of Business Ethics 170 (4):697-719.
    Firms increasingly deploy algorithmic pricing approaches to determine what to charge for their goods and services. Algorithmic pricing can discriminate prices both dynamically over time and personally depending on individual consumer information. Although legal, the ethicality of such approaches needs to be examined as often they trigger moral concerns and sometimes outrage. In this research paper, we provide an overview and discussion of the ethical challenges germane to algorithmic pricing. As a basis for our discussion, we perform a systematic interpretative (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Feminist AI: Can We Expect Our AI Systems to Become Feminist?Galit Wellner & Tiran Rothman - 2020 - Philosophy and Technology 33 (2):191-205.
    The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It becomes evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted—in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the gender bias (...)
  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)