  • Explainable AI under contract and tort law: legal incentives and technical challenges.Philipp Hacker, Ralf Krestel, Stefan Grundmann & Felix Naumann - 2020 - Artificial Intelligence and Law 28 (4):415-439.
    This paper shows that the law, in subtle ways, may set hitherto unrecognized incentives for the adoption of explainable machine learning applications. In doing so, we make two novel contributions. First, on the legal side, we show that to avoid liability, professional actors, such as doctors and managers, may soon be legally compelled to use explainable ML models. We argue that the importance of explainability reaches far beyond data protection law, and crucially influences questions of contractual and tort liability for (...)
  • Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice.Thomas Grote & Philipp Berens - 2023 - Journal of Medicine and Philosophy 48 (1):84-97.
    In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. Thereby, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) (...)
  • The Explanatory Role of Machine Learning in Molecular Biology.Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)
  • Allure of Simplicity.Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and the interpretability by design. Comparing the three strategies, I argue that (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence.Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
  • Open source intelligence and AI: a systematic review of the GELSI literature.Riccardo Ghioni, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-16.
    Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing computing power available for commercial use, have enabled OSINT practitioners to speed up, and sometimes even automate, intelligence collection and analysis, obtaining more accurate results more quickly. As the infosphere expands to (...)
  • Children at Play: Thoughts about the impact of networked toys in the game of life and the role of law.Ulrich Gaspar - 2018 - International Review of Information Ethics 27.
    Information communication technology is spreading fast and wide. Driven by convenience, it enables people to undertake personal tasks and make decisions more easily and efficiently. Convenience enjoys an air of liberation as well as self-expression affecting all areas of life. The industry for children's toys is a major economic market becoming ever more tech-related and drawn into the battle for convenience. Like any other tech-related industry, this battle is about industry dominance and, currently, that involves networked toys. Networked toys aim (...)
  • A Friendly Critique of Levinasian Machine Ethics.Patrick Gamez - 2022 - Southern Journal of Philosophy 60 (1):118-149.
    The Southern Journal of Philosophy, Volume 60, Issue 1, Page 118-149, March 2022.
  • The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples.Timo Freiesleben - 2021 - Minds and Machines 32 (1):1-33.
    The same method that creates adversarial examples to fool image-classifiers can be used to generate counterfactual explanations that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current (...)
  • Toy story or children story? Putting children and their rights at the forefront of the artificial intelligence revolution.E. Fosch-Villaronga, S. van der Hof, C. Lutz & A. Tamò-Larrieux - 2021 - AI and Society:1-20.
    Policymakers need to start considering the impact smart connected toys (SCTs) have on children. Equipped with sensors, data processing capacities, and connectivity, SCTs targeting children increasingly penetrate pervasively personal environments. The network of SCTs forms the Internet of Toys (IoToys) and often increases children's engagement and playtime experience. Unfortunately, this young part of the population and, most of the time, their parents are often unaware of SCTs’ far-reaching capacities and limitations. The capabilities and constraints of SCTs create severe side effects (...)
  • Understanding, Idealization, and Explainable AI.Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • Towards Transparency by Design for Artificial Intelligence.Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • Beyond Human: Deep Learning, Explainability and Representation.M. Beatrice Fazi - forthcoming - Theory, Culture and Society:026327642096638.
    This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility (...)
  • Elephant motorbikes and too many neckties: epistemic spatialization as a framework for investigating patterns of bias in convolutional neural networks.Raymond Drainville & Farida Vis - forthcoming - AI and Society:1-15.
    This article presents Epistemic Spatialization as a new framework for investigating the interconnected patterns of biases when identifying objects with convolutional neural networks. It draws upon Foucault’s notion of spatialized knowledge to guide its method of enquiry. We argue that decisions involved in the creation of algorithms, alongside the labeling, ordering, presentation, and commercial prioritization of objects, together create a distorted “nomination of the visible”: they harden the visibility of some objects, make other objects excessively visible, and consign yet others (...)
  • Should we be afraid of medical AI?Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.
    I analyse an argument according to which medical artificial intelligence represents a threat to patient autonomy—recently put forward by Rosalind McDougall in the Journal of Medical Ethics. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: it confuses AI with machine learning; it misses machine learning’s potential for personalised medicine through big data; it fails to distinguish between evidence-based (...)
  • Automated news recommendation in front of adversarial examples and the technical limits of transparency in algorithmic accountability.Antonin Descampe, Clément Massart, Simon Poelman, François-Xavier Standaert & Olivier Standaert - 2022 - AI and Society 37 (1):67-80.
    Algorithmic decision making is used in an increasing number of fields. Letting automated processes take decisions raises the question of their accountability. In the field of computational journalism, the algorithmic accountability framework proposed by Diakopoulos formalizes this challenge by considering algorithms as objects of human creation, with the goal of revealing the intent embedded into their implementation. A consequence of this definition is that ensuring accountability essentially boils down to a transparency question: given the appropriate reverse-engineering tools, it should be (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems.Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • Explaining Explanations in AI.Brent Mittelstadt - 2019 - FAT* 2019 Proceedings.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
  • The epistemological foundations of data science: a critical analysis.Jules Desai, David Watson, Vincent Wang, Mariarosaria Taddeo & Luciano Floridi - manuscript
    The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)
  • Machine Decisions and Human Consequences.Teresa Scantamburlo, Andrew Charlesworth & Nello Cristianini - 2019 - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford: Oxford University Press.
    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but (...)