Results for 'Black Box'

874 found
  1. The Black Box in Stoic Axiology.Michael Vazquez - 2023 - Pacific Philosophical Quarterly 104 (1):78–100.
    The ‘black box’ in Stoic axiology refers to the mysterious connection between the input of Stoic deliberation (reasons generated by the value of indifferents) and the output (appropriate actions). In this paper, I peer into the black box by drawing an analogy between Stoic and Kantian axiology. The value and disvalue of indifferents is intrinsic, but conditional. An extrinsic condition on the value of a token indifferent is that one's selection of that indifferent is sanctioned by context-relative ethical (...)
    1 citation
  2. Black-box assisted medical decisions: AI power vs. ethical physician care.Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they (...)
    7 citations
  3. Bundle Theory’s Black Box: Gap Challenges for the Bundle Theory of Substance.Robert Garcia - 2014 - Philosophia 42 (1):115-126.
    My aim in this article is to contribute to the larger project of assessing the relative merits of different theories of substance. An important preliminary step in this project is assessing the explanatory resources of one main theory of substance, the so-called bundle theory. This article works towards such an assessment. I identify and explain three distinct explanatory challenges an adequate bundle theory must meet. Each points to a putative explanatory gap, so I call them the Gap Challenges. I consider (...)
    12 citations
  4. Glanville’s ‘Black Box’: what can an Observer know?Lance Nizami - 2020 - Rivista Italiana di Filosofia del Linguaggio 14 (2):47-62.
    A ‘Black Box’ cannot be opened to reveal its mechanism. Rather, its operations are inferred through input from (and output to) an ‘observer’. All of us are observers, who attempt to understand the Black Boxes that are Minds. The Black Box and its observer constitute a system, differing from either component alone: a ‘greater’ Black Box to any further-external-observer. To Glanville (1982), the further-external-observer probes the greater-Black-Box by interacting directly with its core Black Box, (...)
  5. We might be afraid of black-box algorithms.Carissa Véliz, Milo Phillips-Brown, Carina Prunkl & Ted Lechterman - 2021 - Journal of Medical Ethics 47.
    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. Durán and Jongsma (2021) have recently sought to allay such fears. While some of their arguments are compelling, we still see reasons for fear.
    4 citations
  6. Peeking Inside the Black Box: A New Kind of Scientific Visualization.Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The visualization (...)
    7 citations
  7. Mind and Machine: at the core of any Black Box there are two (or more) White Boxes required to stay in.Lance Nizami - 2020 - Cybernetics and Human Knowing 27 (3):9-32.
    This paper concerns the Black Box. It is not the engineer’s black box that can be opened to reveal its mechanism, but rather one whose operations are inferred through input from (and output to) a companion observer. We are observers ourselves, and we attempt to understand minds through interactions with their host organisms. To this end, Ranulph Glanville followed W. Ross Ashby in elaborating the Black Box. The Black Box and its observer together form a system (...)
  8. Inferring causation in epidemiology: mechanisms, black boxes, and contrasts.Alex Broadbent - 2011 - In Phyllis McKay Illari & Federica Russo, Causality in the Sciences. Oxford University Press. pp. 45-69.
    This chapter explores the idea that causal inference is warranted if and only if the mechanism underlying the inferred causal association is identified. This mechanistic stance is discernible in the epidemiological literature, and in the strategies adopted by epidemiologists seeking to establish causal hypotheses. But the exact opposite methodology is also discernible, the black box stance, which asserts that epidemiologists can and should make causal inferences on the basis of their evidence, without worrying about the mechanisms that might underlie (...)
    34 citations
  9. Opening the black box of commodification: A philosophical critique of actor-network theory as critique.Henrik Rude Hvid - manuscript
    This article argues that actor-network theory, as an alternative to critical theory, has lost its critical impetus when examining commodification in healthcare. The paper claims that the reason for this is the way in which actor-network theory’s anti-essentialist ontology seems to black box 'intentionality' and ethics of human agency as contingent interests. The purpose of this paper is to open the normative black box of commodification, and compare how Marxism, Habermas and ANT can deal with commodification and ethics (...)
  10. Federated learning, ethics, and the double black box problem in medical AI.Joshua Hatherley, Anders Søgaard, Angela Ballantyne & Ruben Pauwels - manuscript
    Federated learning (FL) is a machine learning approach that allows multiple devices or institutions to collaboratively train a model without sharing their local data with a third-party. FL is considered a promising way to address patient privacy concerns in medical artificial intelligence. The ethical risks of medical FL systems themselves, however, have thus far been underexamined. This paper aims to address this gap. We argue that medical FL presents a new variety of opacity -- federation opacity -- that, in turn, (...)
  11. Towards Knowledge-driven Distillation and Explanation of Black-box Models.Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Troquard, Oliver Kutz & Daniele Porello - 2021 - In Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Troquard, Oliver Kutz & Daniele Porello, Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, (...)
  12. Clinical applications of machine learning algorithms: beyond the black box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    19 citations
  13. Tragbare Kontrolle: Die Apple Watch als kybernetische Maschine und Black Box algorithmischer Gouvernementalität.Anna-Verena Nosthoff & Felix Maschewski - 2020 - In Anna-Verena Nosthoff & Felix Maschewski, Black Boxes - Versiegelungskontexte und Öffnungsversuche. pp. 115-138.
    This contribution construes and analyzes the Apple Watch, against the background of its “aesthetics of existence,” as a biopolitical artifact and a dispositif of the society of control, but above all as a cybernetic black box. The aim of the essay is to show that this feedback-driven apparatus not only condenses fundamental discourses of the digital age (prevention, health, bio- and psychopolitical forms of regulation, etc.), but that, by virtue of its inherent logic, it generates transparency through opacity and simplicity (i.e., orientation) through complexity, and in doing so produces a very specific image of the human being (...)
    1 citation
  14. The Panda’s Black Box: Opening Up the Intelligent Design Controversy edited by Nathaniel C. Comfort. [REVIEW]W. Malcolm Byrnes - 2008 - The National Catholic Bioethics Quarterly 8 (2):385-387.
  15. Justice and the Grey Box of Responsibility.Carl Knight - 2010 - Theoria: A Journal of Social and Political Theory 57 (124):86-112.
    Even where an act appears to be responsible, and satisfies all the conditions for responsibility laid down by society, the response to it may be unjust where that appearance is false, and where those conditions are insufficient. This paper argues that those who want to place considerations of responsibility at the centre of distributive and criminal justice ought to take this concern seriously. The common strategy of relying on what Susan Hurley describes as a 'black box of responsibility' has (...)
    1 citation
  16. Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, (...)
    52 citations
  17. Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general (...)
    13 citations
  18. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of (...)
    2 citations
  19. Thermodynamics of an Empty Box.G. J. Schmitz, M. te Vrugt, T. Haug-Warberg, L. Ellingsen & P. Needham - 2023 - Entropy 25 (315):1-30.
    A gas in a box is perhaps the most important model system studied in thermodynamics and statistical mechanics. Usually, studies focus on the gas, whereas the box merely serves as an idealized confinement. The present article focuses on the box as the central object and develops a thermodynamic theory by treating the geometric degrees of freedom of the box as the degrees of freedom of a thermodynamic system. Applying standard mathematical methods to the thermodynamics of an empty box allows (...)
  21. In defence of post-hoc explanations in medical AI.Joshua Hatherley, Lauritz Munch & Jens Christian Bjerring - forthcoming - Hastings Center Report.
    Since the early days of the Explainable AI movement, post-hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post-hoc explanations are greatly exaggerated since they merely approximate, rather than replicate, the actual reasoning processes that black box systems take to arrive at their outputs. In this article, we aim to defend the value of (...)
  22. Deep opacity and AI: A threat to XAI and to privacy protection mechanisms.Vincent C. Müller - 2025 - In Martin Hähnel & Regina Müller, A Companion to Applied Philosophy of AI. Wiley-Blackwell.
    It is known that big data analytics and AI pose a threat to privacy, and that some of this is due to some kind of “black box problem” in AI. I explain how this becomes a problem in the context of justification for judgments and actions. Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does (“shallow opacity”), 2) the analysts do not know what the system does (“standard black box (...)
  23. A moving target in AI-assisted decision-making: Dataset shift, model updating, and the problem of update opacity.Joshua Hatherley - 2025 - Ethics and Information Technology 27 (2):20.
    Machine learning (ML) systems are vulnerable to performance decline over time due to dataset shift. To address this problem, experts often suggest that ML systems should be regularly updated to ensure ongoing performance stability. Some scholarly literature has begun to address the epistemic and ethical challenges associated with different updating methodologies. Thus far, however, little attention has been paid to the impact of model updating on the ML-assisted decision-making process itself, particularly in the AI ethics and AI epistemology literatures. This (...)
    1 citation
  24. Designing the Unjudgeable: Structural Anti-Resonance and the Ethics of Closure.Jinho Kim - manuscript
    This paper investigates whether it is possible to intentionally construct a structure that is judgementally inaccessible—i.e., a form that resists or negates the conditions of the Judgemental Triad: Constructivity, Coherence, and Resonance. We examine examples from architecture, cryptography, AI black-box systems, and abstract art to assess whether such structures truly lie beyond the reach of judgement, or whether their unjudgeability is a product of strategic opacity. We argue that the creation of judgementally closed systems is structurally possible but ontologically (...)
  25. Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
    70 citations
  26. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini, Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with (...)
    2 citations
  27. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from (...)
    4 citations
  28. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
    3 citations
  29. The epistemic imagination revisited.Arnon Levy & Ori Kinberg - 2023 - Philosophy and Phenomenological Research 107 (2):319-336.
    Recently, various philosophers have argued that we can obtain knowledge via the imagination. In particular, it has been suggested that we can come to know concrete, empirical matters of everyday significance by appropriately imagining relevant scenarios. Arguments for this thesis come in two main varieties: black box reliability arguments and constraints-based arguments. We suggest that both strategies are unsuccessful. Against black-box arguments, we point to evidence from empirical psychology, question a central case-study, and raise concerns about a (claimed) (...)
    10 citations
  30. From understanding to justifying: Computational reliabilism for AI-based forensic evidence evaluation.Juan Manuel Durán, David van der Vloed, Arnout Ruifrok & Rolf J. F. Ypma - 2024 - Forensic Science International: Synergy 9.
    Techniques from artificial intelligence (AI) can be used in forensic evidence evaluation and are currently applied in biometric fields. However, it is generally not possible to fully understand how and why these algorithms reach their conclusions. Whether and how we should include such ‘black box’ algorithms in this crucial part of the criminal law system is an open question that has not only scientific but also ethical, legal, and philosophical angles. Ideally, the question should be debated by people with (...)
    2 citations
  31. Scaffolding Natural Selection.Walter Veit - 2022 - Biological Theory 17 (2):163-180.
    Darwin provided us with a powerful theoretical framework to explain the evolution of living systems. Natural selection alone, however, has sometimes been seen as insufficient to explain the emergence of new levels of selection. The problem is one of “circularity” for evolutionary explanations: how to explain the origins of Darwinian properties without already invoking their presence at the level they emerge. That is, how does evolution by natural selection commence in the first place? Recent results in experimental evolution suggest a (...)
    7 citations
  32. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model (...)
    7 citations
  33. Realism and instrumentalism in Bayesian cognitive science.Danielle Williams & Zoe Drayson - 2023 - In Tony Cheng, Ryoji Sato & Jakob Hohwy, Expected Experiences: The Predictive Mind in an Uncertain World. Routledge.
    There are two distinct approaches to Bayesian modelling in cognitive science. Black-box approaches use Bayesian theory to model the relationship between the inputs and outputs of a cognitive system without reference to the mediating causal processes; while mechanistic approaches make claims about the neural mechanisms which generate the outputs from the inputs. This paper concerns the relationship between these two approaches. We argue that the dominant trend in the philosophical literature, which characterizes the relationship between black-box and mechanistic (...)
    2 citations
  34. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
    3 citations
  35. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI.Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum (...)
    1 citation
  36. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’ is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in (...)
    2 citations
  37. Beyond Human: Deep Learning, Explainability and Representation.M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):55-77.
    This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle (...)
    7 citations
  38. Applications of cybernetics to psychological theory: Historical and conceptual explorations.Shantanu Tilak, Michael Glassman, Irina Kuznetcova & Geoffrey Pelfrey - 2022 - Theory & Psychology 32 (2):298-325.
    This article outlines links between cybernetics and psychology through the black box metaphor using a tripartite narrative. The first part explores first-order cybernetic approaches to opening the black box. These developments run parallel to the decline of radical behaviorism and advancements in information processing theory and neuropsychology. We then describe how cybernetics migrates towards a second-order approach (expanding and questioning features of first-order inquiry), understanding applications of rule-based tools to sociocultural phenomena and dynamic mental models, inspiring radical constructivism, (...)
    4 citations
  39. Unexplainability and Incomprehensibility of Artificial Intelligence.Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain people would (...)
    1 citation
  40. The scope of inductive risk.P. D. Magnus - 2022 - Metaphilosophy 53 (1):17-24.
    The Argument from Inductive Risk (AIR) is taken to show that values are inevitably involved in making judgements or forming beliefs. After reviewing this conclusion, I pose cases which are prima facie counterexamples: the unreflective application of conventions, use of black-boxed instruments, reliance on opaque algorithms, and unskilled observation reports. These cases are counterexamples to the AIR posed in ethical terms as a matter of personal values. Nevertheless, it need not be understood in those terms. The values which load (...)
    3 citations
  41. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection.Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses of how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles (...)
    1 citation
  42. BTPK-based interpretable method for NER tasks based on Talmudic Public Announcement Logic.Yulin Chen, Beishui Liao, Bruno Bentzen, Bo Yuan, Zelai Yao, Haixiao Chi & Dov Gabbay - 2023 - In Bruno Bentzen, Beishui Liao, Davide Liga, Reka Markovich, Bin Wei, Minghui Xiong & Tianwen Xu, Logics for AI and Law: Joint Proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence and the International Workshop on Logic, AI and Law, September 8-9 and 11-12, 2023, Hangzhou. College Publications. pp. 127–133.
    As one of the basic tasks in natural language processing (NLP), named entity recognition (NER) is an important basic tool for downstream NLP tasks, such as information extraction, syntactic analysis, machine translation, and so on. The internal operation logic of current named entity recognition models is a black box to the user, so the user has no basis to determine which named entity makes more sense. Therefore, a user-friendly explainable recognition process would be very useful for many people. In (...)
  43. AI4Science and the Context Distinction.Moti Mizrahi - forthcoming - AI and Ethics.
    “AI4Science” refers to the use of Artificial Intelligence (AI) in scientific research. As AI systems become more widely used in science, we need guidelines for when such uses are acceptable and when they are unacceptable. To that end, I propose that the distinction between the context of discovery and the context of justification, which comes from philosophy of science, may provide a preliminary but still useful guideline for acceptable uses of AI in science. Given that AI systems used in scientific (...)
  44. Algorithmic Nudging: The Need for an Interdisciplinary Oversight.Christian Schmauder, Jurgis Karpus, Maximilian Moll, Bahador Bahrami & Ophelia Deroy - 2023 - Topoi 42 (3):799-807.
    Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods new possibilities emerge of how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to (...)
    1 citation
  45. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in the (...)
    1 citation
  46. Group Field Theories: Decoupling Spacetime Emergence from the Ontology of non-Spatiotemporal Entities.Marco Forgione - 2024 - European Journal for Philosophy of Science 14 (22):1-23.
    With the present paper I maintain that the group field theory (GFT) approach to quantum gravity can help us clarify and distinguish the problems of spacetime emergence from the questions about the nature of the quanta of space. I will show that the mechanism of phase transition suggests a form of indifference between scales (or phases) and that such an indifference allows us to black-box questions about the nature of the ontology of the fundamental levels of the theory. I (...)
  47. Free Will and Consciousness as functional phenomena.Gabriel Erez - manuscript
    This essay proposes a functional reinterpretation of the concepts of free will and consciousness. Rather than treating them as metaphysical properties or ontological facts, it analyzes them as perceived phenomena that serve practical cognitive and behavioral functions. Drawing on a structured input-output model using the “black box” analogy, the essay classifies different types of behavioral patterns—deterministic, unstable, unique, and adaptive—and associates each with intuitive labels such as determinism, madness, free will, and consciousness, respectively. It argues that these labels function (...)
  48. Can AI and humans genuinely communicate?Constant Bonard - 2024 - In Anna Strasser, Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1–3), I explore a way to answer this question that I call the ‘mental-behavioral methodology’ (§4–5). This methodology comprises three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test whether (...)
  49. Interpretable and accurate prediction models for metagenomics data.Edi Prifti, Antoine Danchin, Jean-Daniel Zucker & Eugeni Belda - 2020 - Gigascience 9 (3):giaa010.
    Background: Microbiome biomarker discovery for patient diagnosis, prognosis, and risk evaluation is attracting broad interest. Selected groups of microbial features provide signatures that characterize host disease states such as cancer or cardio-metabolic diseases. Yet, the current predictive models stemming from machine learning still behave as black boxes and seldom generalize well. Their interpretation is challenging for physicians and biologists, which makes them difficult to trust and use routinely in the physician-patient decision-making process. Novel methods that provide interpretability and biological (...)
  50. Economic Security of the Enterprise Within the Conditions of Digital Transformation.Yuliia Samoilenko, Igor Britchenko, Iaroslava Levchenko, Peter Lošonczi, Oleksandr Bilichenko & Olena Bodnar - 2022 - Economic Affairs 67 (04):619-629.
    In the context of the digital economy development, the priority component of the economic security of an enterprise is changing from material to digital, constituting an independent element of enterprise security. The relevance of the present research is driven by the need to solve the issue of modernizing the economic security of the enterprise taking into account the new risks and opportunities of digitalization. The purpose of the academic paper lies in identifying the features of preventing internal and external negative (...)
1 — 50 / 874