Results for 'Algorithmic explainability, Explanation game, Interpretable machine learning, Pareto frontier, Relevance'

980 found
  1. (2 other versions) The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1–32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, (...)
    16 citations
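    The three-way trade-off described above lends itself to a short illustration. Below is a minimal sketch of how a Pareto frontier over candidate explanations might be computed once accuracy, simplicity, and relevance scores are in hand; the candidate names and scores are hypothetical, and the scoring functions themselves (the substance of the paper's framework) are assumed given.

      from typing import NamedTuple

      class Explanation(NamedTuple):
          label: str
          accuracy: float    # explanatory fidelity to the model's prediction
          simplicity: float  # e.g. inverse description length
          relevance: float   # fit to the inquirer's question

      DIMS = ("accuracy", "simplicity", "relevance")

      def dominates(a: Explanation, b: Explanation) -> bool:
          # a dominates b if it is at least as good everywhere and strictly better somewhere
          return all(getattr(a, d) >= getattr(b, d) for d in DIMS) and \
                 any(getattr(a, d) > getattr(b, d) for d in DIMS)

      def pareto_frontier(cands: list[Explanation]) -> list[Explanation]:
          # keep exactly those candidates that no other candidate dominates
          return [c for c in cands if not any(dominates(o, c) for o in cands if o != c)]

      cands = [
          Explanation("full decision tree", 0.95, 0.20, 0.60),
          Explanation("two-rule summary",   0.80, 0.90, 0.70),
          Explanation("single feature",     0.60, 0.95, 0.40),
          Explanation("noisy rule list",    0.55, 0.50, 0.30),  # dominated by the summary
      ]
      print([e.label for e in pareto_frontier(cands)])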
  2. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has treated explainability as a scientific problem to be addressed with the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based (...)
  3. Clinical applications of machine learning algorithms: beyond the black box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  4. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus (...)
    48 citations
  5. The Pragmatic Turn in Explainable Artificial Intelligence.Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory (...)
    38 citations
  6. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT '21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and (...)
    3 citations
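    To see the practice the authors caution against, consider a toy counterfactual query on a linear scorer. Everything here is hypothetical (the feature names, weights, and encoding); the point is only that the naive query treats a social category as an independently switchable input, which is precisely what the paper contests.

      import numpy as np

      FEATURES = ["income", "tenure", "gender"]  # 'gender' crudely encoded as 0/1
      weights = np.array([0.5, 0.3, 0.4])        # hypothetical model weights

      def score(x: np.ndarray) -> float:
          return float(weights @ x)

      applicant = np.array([0.6, 0.4, 0.0])
      counterfactual = applicant.copy()
      counterfactual[FEATURES.index("gender")] = 1.0  # "had their gender been different"

      # The flip changes the score, but nothing else about the applicant was
      # allowed to change with it; the social-ontology worry in a nutshell.
      print(score(applicant), score(counterfactual))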
  7. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher.Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does (...)
    2 citations
  8. An Introduction to Artificial Psychology: Application Fuzzy Set Theory and Deep Machine Learning in Psychological Research using R.Hojjatollah Farahani - 2023 - Springer Cham. Edited by Hojjatollah Farahani, Marija Blagojević, Parviz Azadfallah, Peter Watson, Forough Esrafilian & Sara Saljoughi.
    Artificial Psychology (AP) is a highly multidisciplinary field of study in psychology. AP tries to solve problems which occur when psychologists do research and need a robust analysis method. Conventional statistical approaches have deep rooted limitations. These approaches are excellent on paper but often fail to model the real world. Mind researchers have been trying to overcome this by simplifying the models being studied. This stance has not received much practical attention recently. Promoting and improving artificial intelligence helps mind researchers (...)
  9. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences.Lode Lauwaert - 2023 - Artificial Intelligence Review 56:3473–3504.
    Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous (...)
  10. Leveraging Machine Learning Algorithms for Medical Image Classification.Ugochukwu Llodinso - manuscript
    The use of machine learning in medical image classification has seen significant development and implementation in the last several years. Computers can learn to identify patterns, make predictions, and use data to inform their judgements; this capability is known as machine learning, a branch of Artificial Intelligence (AI). Classifying images according to their contents allows us to do things like identify the type of sickness, organ, or tissue depicted. Medical image classification and interpretation using machine learning algorithms (...)
  11. Interpretable and accurate prediction models for metagenomics data.Edi Prifti, Antoine Danchin, Jean-Daniel Zucker & Eugeni Belda - 2020 - Gigascience 9 (3):giaa010.
    Background: Microbiome biomarker discovery for patient diagnosis, prognosis, and risk evaluation is attracting broad interest. Selected groups of microbial features provide signatures that characterize host disease states such as cancer or cardio-metabolic diseases. Yet, the current predictive models stemming from machine learning still behave as black boxes and seldom generalize well. Their interpretation is challenging for physicians and biologists, which makes them difficult to trust and use routinely in the physician-patient decision-making process. Novel methods that provide interpretability and biological (...)
  12. Beyond Human: Deep Learning, Explainability and Representation.M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):55-77.
    This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle (...)
    3 citations
  13. Local explanations via necessity and sufficiency: unifying theory and practice.David Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32:185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors (...)
    1 citation
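    The two quantities at the heart of this paper admit a simple Monte Carlo illustration. The sketch below is not the authors' sound-and-complete algorithm; it merely estimates, for a stand-in black box, how sufficient and how necessary a feature subset is for a given prediction, with the data and model invented for the example.

      import numpy as np

      rng = np.random.default_rng(0)

      def model(X):
          # stand-in black box: predicts 1 when the first two features agree in sign
          return (X[:, 0] * X[:, 1] > 0).astype(int)

      def sufficiency(x, subset, background, n=2000):
          # P(same prediction | features in `subset` held at x, the rest resampled)
          X = background[rng.integers(0, len(background), n)].copy()
          X[:, subset] = x[subset]
          return float(np.mean(model(X) == model(x[None, :])[0]))

      def necessity(x, subset, background, n=2000):
          # P(prediction flips | features in `subset` resampled, the rest held at x)
          X = np.tile(x, (n, 1))
          X[:, subset] = background[rng.integers(0, len(background), n)][:, subset]
          return float(np.mean(model(X) != model(x[None, :])[0]))

      background = rng.normal(size=(5000, 3))
      x = np.array([1.0, 1.0, 0.0])
      print(sufficiency(x, [0, 1], background))  # ~1.0: fixing both features suffices
      print(necessity(x, [0, 1], background))    # ~0.5: resampling them often flips it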
  14. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to (...)
    6 citations
  15. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    3 citations
  16. Interprétabilité et explicabilité pour l’apprentissage machine : entre modèles descriptifs, modèles prédictifs et modèles causaux. Une nécessaire clarification épistémologique.Christophe Denis & Franck Varenne - 2019 - Actes de la Conférence Nationale En Intelligence Artificielle - CNIA 2019.
    The lack of explainability of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, considered as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation established for the mathematical and causal modelling of a phenomenon (...)
    1 citation
  17. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy.Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to (...)
  18. The Boundaries of Meaning: A Case Study in Neural Machine Translation.Yuri Balashov - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy 66.
    The success of deep learning in natural language processing raises intriguing questions about the nature of linguistic meaning and ways in which it can be processed by natural and artificial systems. One such question has to do with subword segmentation algorithms widely employed in language modeling, machine translation, and other tasks since 2016. These algorithms often cut words into semantically opaque pieces, such as ‘period’, ‘on’, ‘t’, and ‘ist’ in ‘period|on|t|ist’. The system then represents the resulting segments in a (...)
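    The segmentation behaviour Balashov describes is easy to reproduce in miniature. A sketch of left-to-right longest-match (WordPiece-style) segmentation follows; real systems such as BPE or SentencePiece learn their vocabularies from corpora, whereas the vocabulary here is hand-picked for the example.

      def greedy_segment(word: str, vocab: set[str]) -> list[str]:
          # left-to-right longest-match segmentation, simplified
          pieces, i = [], 0
          while i < len(word):
              for j in range(len(word), i, -1):  # try the longest remaining prefix first
                  if word[i:j] in vocab:
                      pieces.append(word[i:j])
                      i = j
                      break
              else:
                  pieces.append(word[i])  # fall back to a single character
                  i += 1
          return pieces

      vocab = {"period", "on", "t", "ist"}
      print(greedy_segment("periodontist", vocab))  # ['period', 'on', 't', 'ist']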
  19. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains.Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate a patient is at a high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi argues that a prescriber (...)
  20. Medical Image Classification with Machine Learning Classifier.Destiny Agboro - forthcoming - Journal of Computer Science.
    In contemporary healthcare, medical image categorization is essential for illness prediction, diagnosis, and therapy planning. The emergence of digital imaging technology has led to a significant increase in research into the use of machine learning (ML) techniques for the categorization of images in medical data. We provide a thorough summary of recent developments in this area in this review, using knowledge from the most recent research and cutting-edge methods. We begin by discussing the unique challenges and opportunities associated with medical (...)
    1 citation
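    As a minimal sketch of the kind of pipeline such reviews survey: a scikit-learn classifier over flattened image pixels. The 8x8 digits dataset stands in for real medical scans, which of course demand far more care (preprocessing, validation design, class imbalance).

      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # stand-in data: sklearn's 8x8 digit images in place of medical scans
      X, y = load_digits(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      clf.fit(X_train, y_train)
      print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")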
  21. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report (...)
    28 citations
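    The advisory system's core data structure, a fuzzy cognitive map, can be sketched in a few lines: concepts are nodes, signed weights are causal edges, and advice is read off a converged activation state. The concepts, weights, and update rule below are hypothetical, not the ones the authors trained.

      import numpy as np

      concepts = ["patient_refusal", "expected_benefit", "risk_of_harm", "recommend_treatment"]
      W = np.array([                 # W[i, j]: influence of concept i on concept j
          [0.0, 0.0, 0.0, -0.6],     # refusal pushes against treatment
          [0.0, 0.0, 0.0,  0.8],     # expected benefit pushes for it
          [0.0, 0.0, 0.0, -0.4],     # risk of harm pushes against it
          [0.0, 0.0, 0.0,  0.0],
      ])

      def step(state: np.ndarray) -> np.ndarray:
          # one FCM update: propagate activations (with self-memory), squash into (0, 1)
          return 1.0 / (1.0 + np.exp(-(state @ W + state)))

      state = np.array([0.9, 0.7, 0.3, 0.5])  # the case, encoded as initial activations
      for _ in range(25):                     # iterate toward a fixed point
          state = step(state)
      print(dict(zip(concepts, state.round(2))))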
  22. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about (...)
    2 citations
  23. Developing Artificial Human-Like Arithmetical Intelligence (and Why).Markus Pantsar - 2023 - Minds and Machines 33 (3):379-396.
    Why would we want to develop artificial human-like arithmetical intelligence, when computers already outperform humans in arithmetical calculations? Aside from arithmetic consisting of much more than mere calculations, one suggested reason is that AI research can help us explain the development of human arithmetical cognition. Here I argue that this question needs to be studied already in the context of basic, non-symbolic, numerical cognition. Analyzing recent machine learning research on artificial neural networks, I show how AI studies could potentially (...)
    2 citations
  24. Detecting Experts Using a MiniRocket: Gaze Direction Time Series Classification of Real-Life Experts Playing the Sustainable Port.Gianluca Guglielmo, Michal Klincewicz, Elisabeth Huis in 't Veld & Pieter Spronck - 2025 - GALA 2024, Lecture Notes in Computer Science 15348:177–187.
    This study aimed to identify real-life experts working for a port authority and lay people (students) who played The Sustainable Port, a serious game aiming to simulate the dynamics occurring in a port area. To achieve this goal, we analyzed eye gaze data collected noninvasively using low-grade webcams from 28 participants working for the port authority of the Port of Rotterdam and 66 students. Such data were used for a classification task implemented using a MiniRocket classifier, an algorithm used for (...)
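    For readers unfamiliar with MiniRocket, the pipeline is short: random convolutional kernel features followed by a linear classifier. The sketch below assumes the sktime library; the gaze recordings themselves are not public, so random data of a plausible shape stands in.

      import numpy as np
      from sklearn.linear_model import RidgeClassifierCV
      from sktime.transformations.panel.rocket import MiniRocket

      rng = np.random.default_rng(0)
      X = rng.normal(size=(94, 1, 300))  # 94 participants, 1 gaze channel, 300 time steps
      y = np.array([0] * 28 + [1] * 66)  # port-authority experts vs students

      features = MiniRocket(random_state=0).fit_transform(X)  # random-kernel features
      clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
      clf.fit(features, y)
      print(clf.score(features, y))      # in-sample fit only; cross-validate in practice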
  25. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can (...)
    3 citations
  26. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs.Sina Fazelpour - 2021
    This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this prevalent interpretation by (...)
  27. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted toolkits (...)
    14 citations
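    The conflict among mathematical fairness definitions that motivates this paper is easy to exhibit numerically. In the hedged sketch below, a simulated classifier roughly equalizes true-positive rates across two groups yet fails demographic parity, simply because the groups' base rates differ; all data are synthetic.

      import numpy as np

      def demographic_parity_gap(y_pred, group):
          # difference in positive-prediction rates between the two groups
          return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

      def equal_opportunity_gap(y_true, y_pred, group):
          # difference in true-positive rates between the two groups
          tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
          return abs(tpr(0) - tpr(1))

      rng = np.random.default_rng(1)
      group = rng.integers(0, 2, 100_000)
      y_true = rng.random(100_000) < np.where(group == 0, 0.3, 0.6)  # unequal base rates
      y_pred = ((rng.random(100_000) < 0.8) & y_true) | (rng.random(100_000) < 0.1)

      print(demographic_parity_gap(y_pred, group))         # clearly nonzero
      print(equal_opportunity_gap(y_true, y_pred, group))  # close to zero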
  28. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the (...)
    3 citations
  29. Explanation Hacking: The perils of algorithmic recourse.E. Sullivan & Atoosa Kasirzadeh - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    We argue that the trend toward providing users with feasible and actionable explanations of AI decisions—known as recourse explanations—comes with ethical downsides. Specifically, we argue that recourse explanations face several conceptual pitfalls and can lead to problematic explanation hacking, which undermines their ethical status. As an alternative, we advocate that explanations of AI decisions should aim at understanding.
  30. Two challenges for CI trustworthiness and how to address them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  31. The Role of Imagination in Social Scientific Discovery: Why Machine Discoverers Will Need Imagination Algorithms.Michael Stuart - 2019 - In Mark Addis, Fernand Gobet & Peter Sozou (eds.), Scientific Discovery in the Social Sciences. Springer Verlag.
    When philosophers discuss the possibility of machines making scientific discoveries, they typically focus on discoveries in physics, biology, chemistry and mathematics. Observing the rapid increase of computer-use in science, however, it becomes natural to ask whether there are any scientific domains out of reach for machine discovery. For example, could machines also make discoveries in qualitative social science? Is there something about humans that makes us uniquely suited to studying humans? Is there something about machines that would bar them (...)
    4 citations
  32. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee (...)
    3 citations
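    A toy calculation makes the gap between the two readings vivid. In the hypothetical numbers below, both groups enjoy identical group-level accuracy (performance parity holds), yet individuals' chances of receiving an accurate prediction are far from equal, which is what the Fairness Principle, on Castro and Loi's reading, would require equalizing.

      # per-individual chances of receiving an accurate prediction (hypothetical)
      group_a = [0.95] * 60 + [0.575] * 40
      group_b = [0.80] * 100

      mean = lambda xs: sum(xs) / len(xs)
      print(mean(group_a), mean(group_b))  # 0.8 vs 0.8: performance parity holds
      print(min(group_a), max(group_a))    # 0.575 vs 0.95: individual chances diverge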
  33. Interpreting the Rules of the Game.C. Mantzavinos - 2007 - In Christoph Engel & Fritz Strack (eds.), The Impact of Court Procedure on the Psychology of Judicial Decision-Making. Nomos. pp. 16-30.
    After providing a brief overview of the economic theory of judicial decisions this paper presents an argument for why not only the economic theory of judicial decisions, but also the rational approach in general, most often fails in explaining decision-making. Work done within the research program of New Institutionalism is presented as a possible alternative. Within this research program judicial activity is conceptualized as the activity of "interpreting the rules of the game", i.e. the institutions that frame the economic and (...)
  34. The ethics of algorithms: mapping the debate.Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
    218 citations
  35. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems.Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to (...)
    8 citations
  36. AI as Ideology: A Marxist Reading (Crawford, Marx/Engels, Debord, Althusser).Jeffrey Reid - manuscript
    Kate Crawford presents AI as “both reflecting and producing social relations and understandings of the world”; or again, as “a form of exercising power, and a way of seeing… as a manifestation of highly organized capital backed by vast systems of extraction and logistics, with supply chains that wrap around the entire planet”. I interpret these material insights through a Marxist understanding of ideology, with reference to Marx/Engels, Guy Debord and Louis Althusser. In the German Ideology, Marx and Engels present (...)
  37. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
    13 citations
  38. Cognitive Ontologies, Task Ontologies, and Explanation in Cognitive Neuroscience.Daniel Burnston - forthcoming - In John Bickle, Carl F. Craver & Ann Sophie Barwich (eds.), Neuroscience Experiment: Philosophical and Scientific Perspectives.
    The traditional approach to explanation in cognitive neuroscience is realist about psychological constructs, and treats them as explanatory. On the “standard framework,” cognitive neuroscientists explain behavior as the result of the instantiation of psychological functions in brain activity. This strategy is questioned by results suggesting the distribution of function in the brain, the multifunctionality of individual parts of the brain, and the overlap in neural realization of purportedly distinct psychological constructs. One response to this in the field has been (...)
  39. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of (...)
    4 citations
  40. Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models.Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we (...)
    1 citation
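    The "surgical intervention" at issue can be miniaturized: permute a model's attention weights and measure how far the output moves. The toy single-head attention below is invented for illustration; in the studies discussed, outputs that barely move under such permutations undercut attention's explanatory credentials.

      import numpy as np

      rng = np.random.default_rng(0)

      def attention_output(scores: np.ndarray, values: np.ndarray) -> np.ndarray:
          weights = np.exp(scores) / np.exp(scores).sum()  # softmax over tokens
          return weights @ values

      scores = rng.normal(size=8)          # one attention head over 8 tokens
      values = rng.normal(size=(8, 16))    # per-token value vectors

      original = attention_output(scores, values)
      permuted = attention_output(rng.permutation(scores), values)
      print(np.linalg.norm(original - permuted))  # the size of the intervention's effect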
  41. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics.Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments (...)
    2 citations
  42. Algorithmic Profiling as a Source of Hermeneutical Injustice.Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only (...)
    2 citations
  43. Mechanizing Induction.Ronald Ortner & Hannes Leitgeb - 2009 - In Dov Gabbay (ed.), The Handbook of the History of Logic. Elsevier. pp. 719-772.
    In this chapter we will deal with “mechanizing” induction, i.e. with ways in which theoretical computer science approaches inductive generalization. In the field of Machine Learning, algorithms for induction are developed. Depending on the form of the available data, the nature of these algorithms may be very different. Some of them combine geometric and statistical ideas, while others use classical reasoning based on logical formalism. However, we are not so much interested in the algorithms themselves, but more on the (...)
    5 citations
  44. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it (...)
    3 citations
  45. Machine intelligence: a chimera.Mihai Nadin - 2019 - AI and Society 34 (2):215-242.
    The notion of computation has changed the world more than any previous expressions of knowledge. However, as know-how in its particular algorithmic embodiment, computation is closed to meaning. Therefore, computer-based data processing can only mimic life’s creative aspects, without being creative itself. AI’s current record of accomplishments shows that it automates tasks associated with intelligence, without being intelligent itself. Mistaking the abstract for the concrete has led to the religion of “everything is an output of computation”—even the humankind that (...)
    6 citations
  46. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine.Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The lack of explainability of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, considered as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation established for the mathematical and causal modelling of a (...)
  47. A Revolutionary New Metaphysics, Based on Consciousness, and a Call to All Philosophers.Lorna Green - manuscript
    June 2022 A Revolutionary New Metaphysics, Based on Consciousness, and a Call to All Philosophers We are in a unique moment of our history unlike any previous moment ever. Virtually all human economies are based on the destruction of the Earth, and we are now at a place in our history where we can foresee, if we continue on as we are, our own extinction. As I write, the planet is in deep trouble, heat, fires, great storms, and record flooding, (...)
  48. Optimizing Consumer Behaviour Analytics through Advanced Machine Learning Algorithms.S. Yoheswari - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):360-368.
    Consumer behavior analytics has become a pivotal aspect for businesses to understand and predict customer preferences and actions. The advent of machine learning (ML) algorithms has revolutionized this field by providing sophisticated tools for data analysis, enabling businesses to make data-driven decisions. However, the effectiveness of these ML algorithms significantly hinges on the optimization techniques employed, which can enhance model accuracy and efficiency. This paper explores the application of various optimization techniques in consumer behaviour analytics using machine learning (...)
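    As a sketch of the kind of optimization the paper discusses: a grid search over gradient-boosting hyperparameters with scikit-learn, on synthetic data standing in for purchase records (the real features and grids would be domain-specific).

      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import GridSearchCV

      # synthetic stand-in for customer purchase/churn records
      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

      search = GridSearchCV(
          GradientBoostingClassifier(random_state=0),
          param_grid={"n_estimators": [100, 300],
                      "learning_rate": [0.03, 0.1],
                      "max_depth": [2, 3]},
          cv=5,
          scoring="roc_auc",
      )
      search.fit(X, y)
      print(search.best_params_, round(search.best_score_, 3))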
  49. Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general (...)
    13 citations
  50. Saving the mutual manipulability account of constitutive relevance.Beate Krickel - 2018 - Studies in History and Philosophy of Science Part A 68:58-67.
    Constitutive mechanistic explanations are said to refer to mechanisms that constitute the phenomenon-to-be-explained. The most prominent approach of how to understand this constitution relation is Carl Craver’s mutual manipulability approach to constitutive relevance. Recently, the mutual manipulability approach has come under attack (Leuridan 2012; Baumgartner and Gebharter 2015; Romero 2015; Harinen 2014; Casini and Baumgartner 2016). Roughly, it is argued that this approach is inconsistent because it is spelled out in terms of interventionism (which is an approach to causation), (...)
    27 citations
1 — 50 / 980