Results for 'Black-box Metaphor'

961 found
  1. Black-box assisted medical decisions: AI power vs. ethical physician care. Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they (...)
    7 citations
  2. Situated Intelligence: An Introspective Model of Consciousness. Stephen G. Perrin
    The model of consciousness developed here is a cooperative venture between mind, brain, body, nature, culture, community, and family. The overall unity of consciousness is provided by the loop of engagement that conducts intentional action into the ambient. Each successive round of engagement between subject and world generates a gap of disparity between remembrance of purpose or intent and the effect achieved on the operative level of understanding within a larger taxonomic scheme in experience. That gap sends a delta signal (...)
  3. The Black Box in Stoic Axiology. Michael Vazquez - 2023 - Pacific Philosophical Quarterly 104 (1):78-100.
    The ‘black box’ in Stoic axiology refers to the mysterious connection between the input of Stoic deliberation (reasons generated by the value of indifferents) and the output (appropriate actions). In this paper, I peer into the black box by drawing an analogy between Stoic and Kantian axiology. The value and disvalue of indifferents is intrinsic, but conditional. An extrinsic condition on the value of a token indifferent is that one's selection of that indifferent is sanctioned by context-relative ethical (...)
  4. Peeking Inside the Black Box: A New Kind of Scientific Visualization. Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The visualization (...)
    7 citations
  5. Bundle Theory’s Black Box: Gap Challenges for the Bundle Theory of Substance. Robert Garcia - 2014 - Philosophia 42 (1):115-126.
    My aim in this article is to contribute to the larger project of assessing the relative merits of different theories of substance. An important preliminary step in this project is assessing the explanatory resources of one main theory of substance, the so-called bundle theory. This article works towards such an assessment. I identify and explain three distinct explanatory challenges an adequate bundle theory must meet. Each points to a putative explanatory gap, so I call them the Gap Challenges. I consider (...)
    9 citations
  6. Glanville’s ‘Black Box’: what can an Observer know? Lance Nizami - 2020 - Rivista Italiana di Filosofia del Linguaggio 14 (2):47-62.
    A ‘Black Box’ cannot be opened to reveal its mechanism. Rather, its operations are inferred through input from (and output to) an ‘observer’. All of us are observers, who attempt to understand the Black Boxes that are Minds. The Black Box and its observer constitute a system, differing from either component alone: a ‘greater’ Black Box to any further-external-observer. To Glanville (1982), the further-external-observer probes the greater-Black-Box by interacting directly with its core Black Box, (...)
  7. Mind and Machine: at the core of any Black Box there are two (or more) White Boxes required to stay in. Lance Nizami - 2020 - Cybernetics and Human Knowing 27 (3):9-32.
    This paper concerns the Black Box. It is not the engineer’s black box that can be opened to reveal its mechanism, but rather one whose operations are inferred through input from (and output to) a companion observer. We are observers ourselves, and we attempt to understand minds through interactions with their host organisms. To this end, Ranulph Glanville followed W. Ross Ashby in elaborating the Black Box. The Black Box and its observer together form a system (...)
  8. We might be afraid of black-box algorithms. Carissa Veliz, Milo Phillips-Brown, Carina Prunkl & Ted Lechterman - 2021 - Journal of Medical Ethics 47.
    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. Durán and Jongsma (2021) have recently sought to allay such fears. While some of their arguments are compelling, we still see reasons for fear.
    4 citations
  9. Opening the black box of commodification: A philosophical critique of actor-network theory as critique. Henrik Rude Hvid - manuscript
    This article argues that actor-network theory, as an alternative to critical theory, has lost its critical impetus when examining commodification in healthcare. The paper claims that the reason for this is the way in which actor-network theory’s anti-essentialist ontology seems to black box 'intentionality' and the ethics of human agency as contingent interests. The purpose of this paper is to open the normative black box of commodification, and to compare how Marxism, Habermas and ANT can deal with commodification and ethics (...)
  10. Inferring causation in epidemiology: mechanisms, black boxes, and contrasts. Alex Broadbent - 2011 - In Phyllis McKay Illari & Federica Russo (eds.), Causality in the Sciences. Oxford University Press. pp. 45-69.
    This chapter explores the idea that causal inference is warranted if and only if the mechanism underlying the inferred causal association is identified. This mechanistic stance is discernible in the epidemiological literature, and in the strategies adopted by epidemiologists seeking to establish causal hypotheses. But the exact opposite methodology is also discernible, the black box stance, which asserts that epidemiologists can and should make causal inferences on the basis of their evidence, without worrying about the mechanisms that might underlie (...)
    34 citations
  11. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:I886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  12. Towards Knowledge-driven Distillation and Explanation of Black-box Models. Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello (eds.), Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, (...)
  13. Tragbare Kontrolle: Die Apple Watch als kybernetische Maschine und Black Box algorithmischer Gouvernementalität. Anna-Verena Nosthoff & Felix Maschewski - 2020 - In Anna-Verena Nosthoff & Felix Maschewski (eds.), Black Boxes - Versiegelungskontexte und Öffnungsversuche. pp. 115-138.
    This contribution conceives of and analyses the Apple Watch, against the background of its “aesthetics of existence”, as a biopolitical artefact and a dispositif of the control society, but above all as a cybernetic black box. The aim of the essay is to show that this feedback apparatus not only condenses fundamental discourses of the digital age (prevention, health, bio- and psychopolitical forms of regulation, etc.), but that, by virtue of its inherent logic, it generates transparency through opacity and simplicity (i.e. orientation) through complexity, and thereby not least a very specific image of the human being (...)
  14. The Panda’s Black Box: Opening Up the Intelligent Design Controversy, edited by Nathaniel C. Comfort. [REVIEW] W. Malcolm Byrnes - 2008 - The National Catholic Bioethics Quarterly 8 (2):385-387.
  15. Black Holes: Artistic metaphors for the contemporaneity. Gustavo Ruiz da Silva & Gustavo Ottero Gabetti - 2023 - Unigou Remote 2023.
    This paper investigates the cultural significance of black holes and suns as metaphors in continental European literature and art, drawing on theoretical insights from French continental authors such as Jean-François Lyotard and Ray Brassier. Lyotard suggests that black holes signify the ultimate form of the sublime, representing the displacement of humanity and our unease with our place in the cosmos. On the other hand, Brassier views black holes as a consequence of the entropic dissolution of matter, reflecting (...)
  16. Review of the book Algorithmic Desire: Toward a New Structuralist Theory of Social Media, by Matthew Flisfeder. [REVIEW] Jack Black - 2024 - Postdigital Science and Education 6 (2):691-704.
    It is this very contention that sits at the heart of Matthew Flisfeder’s Algorithmic Desire: Towards a New Structuralist Theory of Social Media (2021). In spite of the accusation that, today, our social media is in fact hampering democracy and subjecting us to increasing forms of online and offline surveillance, for Flisfeder (2021: 3), ‘[s]ocial media remains the correct concept for reconciling ourselves with the structural contradictions of our media, our culture, and our society’. With almost every aspect of our (...)
  17. Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, which we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general (...)
    13 citations
  18. Justice and the Grey Box of Responsibility. Carl Knight - 2010 - Theoria: A Journal of Social and Political Theory 57 (124):86-112.
    Even where an act appears to be responsible, and satisfies all the conditions for responsibility laid down by society, the response to it may be unjust where that appearance is false, and where those conditions are insufficient. This paper argues that those who want to place considerations of responsibility at the centre of distributive and criminal justice ought to take this concern seriously. The common strategy of relying on what Susan Hurley describes as a 'black box of responsibility' has (...)
    1 citation
  19. Artificial Intelligence and Patient-Centered Decision-Making. Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, (...)
    46 citations
  20. Generating and Interpreting Metaphors with NETMET. Eric Steinhart - 2005 - APA Newsletter on Philosophy and Computers 4 (2).
    The structural theory of metaphor (STM) uses techniques from possible worlds semantics to generate and interpret metaphors. STM is presented in detail in The Logic of Metaphor: Analogous Parts of Possible Worlds (Steinhart, 2001). STM is based on Kittay’s semantic field theory of metaphor (1987) and ultimately on Black’s interactionist theory (1962, 1979). STM uses an intensional calculus to specify truth-conditions for many grammatical forms of metaphor. The truth-conditional analysis in STM is inspired in part (...)
  21. Thermodynamics of an Empty Box. G. J. Schmitz, M. te Vrugt, T. Haug-Warberg, L. Ellingsen & P. Needham - 2023 - Entropy 25 (315):1-30.
    A gas in a box is perhaps the most important model system studied in thermodynamics and statistical mechanics. Usually, studies focus on the gas, whereas the box merely serves as an idealized confinement. The present article focuses on the box as the central object and develops a thermodynamic theory by treating the geometric degrees of freedom of the box as the degrees of freedom of a thermodynamic system. Applying standard mathematical methods to the thermodynamics of an empty box allows (...)
  22. Signs of Morality in David Bowie's "Black Star" Video Clip. May Kokkidou & Elvina Paschali - 2017 - Philosophy Study 7 (12).
    The “Black Star” music video was released two days before Bowie’s death. It bears various implications of dying, and the notion of mortality is both literal and metaphorical. It is highly autobiographical and serves as a theatrical stage for Bowie to act both as a music performer and as a self-conscious human being. In this paper, we discuss the signs of mortality in Bowie’s “Black Star” music video-clip. We focus on the video’s cinematic techniques and codes, on its motivic elements (...)
  23. NETMET: A Program for Generating and Interpreting Metaphors. Eric Steinhart - 1995 - Computers and the Humanities 28 (6):383-392.
    Metaphors have computable semantics. A program called NETMET both generates metaphors and produces partial literal interpretations of metaphors. NETMET is based on Kittay's semantic field theory of metaphor and Black's interaction theory of metaphor. Input to NETMET consists of a list of literal propositions. NETMET creates metaphors by finding topic and source semantic fields, producing an analogical map from source to topic, then generating utterances in which terms in the source are identified with or predicated of terms (...)
    1 citation
  24. Why Metaphors have no Meaning: Considering Metaphoric Meaning in Davidson. Ben Kotzé - 2001 - South African Journal of Philosophy 20 (3-4):291-308.
    Since the publication of Donald Davidson's essay “What Metaphors Mean” (1978) – in which he famously asserts that metaphor has no meaning – the views expressed in it have mostly met with criticism: prominently from Mary Hesse and Max Black. This article attempts to explain Davidson's surprise-move regarding metaphor by relating it to elements in the rest of his work in semantics, such as the principle of compositionality, radical interpretation and the principle of charity. I conclude that (...)
  25. Imaginative Frames for Scientific Inquiry: Metaphors, Telling Facts, and Just-So Stories. Elisabeth Camp - 2019 - In Arnon Levy & Peter Godfrey-Smith (eds.), The Scientific Imagination. New York, US: Oxford University Press USA. pp. 304-336.
    I distinguish among a range of distinct representational devices, which I call "frames", all of which have the function of providing a perspective on a subject: an overarching intuitive principle for noticing, explaining, and responding to it. Starting with Max Black's metaphor of metaphor as etched lines on smoked glass, I explain what makes frames in general powerful cognitive tools. I distinguish metaphor from some of its close cousins, especially telling details, just-so stories, and analogies, (...)
    1 citation
  26. The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of (...)
    4 citations
  27. The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3).
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of (...)
    1 citation
  28. The Color of Childhood: The Role of the Child/Human Binary in the Production of Anti-Black Racism. Toby Rollo - 2018 - Journal of Black Studies 49 (4):307-329.
    The binary between the figure of the child and the fully human being is invoked with regularity in analyses of race, yet its centrality to the conception of race has never been fully explored. For most commentators, the figure of the child operates as a metaphoric or rhetorical trope, a non-essential strategic tool in the perpetuation of White supremacy. As I show in the following, the child/human binary does not present a contingent or merely rhetorical construction but, rather, a central (...)
    2 citations
  29. Understanding life through metaphors. [REVIEW] Bartlomiej Swiatczak - 2022 - Metascience 2022:1-3.
    There is a deep-seated neopositivist view which regards the language of science as a neutral medium of communication, radically different from indirect symbolic forms of discourse characteristic of arts and humanities. But naturalists, like poets and social scientists, also draw on the dominant images in their culture to organize their thoughts and simplify complex concepts. By conceptualizing one thing in terms of another, metaphors in science not only aid mutual communication between researchers but also structure their understanding of experience and (...)
  30. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
    56 citations
  31. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model (...)
    7 citations
  32. Scaffolding Natural Selection. Walter Veit - 2022 - Biological Theory 17 (2):163-180.
    Darwin provided us with a powerful theoretical framework to explain the evolution of living systems. Natural selection alone, however, has sometimes been seen as insufficient to explain the emergence of new levels of selection. The problem is one of “circularity” for evolutionary explanations: how to explain the origins of Darwinian properties without already invoking their presence at the level they emerge. That is, how does evolution by natural selection commence in the first place? Recent results in experimental evolution suggest a (...)
    5 citations
  33. The epistemic imagination revisited. Arnon Levy & Ori Kinberg - 2023 - Philosophy and Phenomenological Research 107 (2):319-336.
    Recently, various philosophers have argued that we can obtain knowledge via the imagination. In particular, it has been suggested that we can come to know concrete, empirical matters of everyday significance by appropriately imagining relevant scenarios. Arguments for this thesis come in two main varieties: black box reliability arguments and constraints-based arguments. We suggest that both strategies are unsuccessful. Against black-box arguments, we point to evidence from empirical psychology, question a central case-study, and raise concerns about a (claimed) (...)
    8 citations
  34. Beyond Human: Deep Learning, Explainability and Representation. M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):55-77.
    This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle (...)
    2 citations
  35. Realism and instrumentalism in Bayesian cognitive science. Danielle Williams & Zoe Drayson - 2023 - In Tony Cheng, Ryoji Sato & Jakob Hohwy (eds.), Expected Experiences: The Predictive Mind in an Uncertain World. Routledge.
    There are two distinct approaches to Bayesian modelling in cognitive science. Black-box approaches use Bayesian theory to model the relationship between the inputs and outputs of a cognitive system without reference to the mediating causal processes; while mechanistic approaches make claims about the neural mechanisms which generate the outputs from the inputs. This paper concerns the relationship between these two approaches. We argue that the dominant trend in the philosophical literature, which characterizes the relationship between black-box and mechanistic (...)
    2 citations
  36. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from (...)
    1 citation
  37. The Unobserved Anatomy: Negotiating the Plausibility of AI-Based Reconstructions of Missing Brain Structures in Clinical MRI Scans. Paula Muhr - 2023 - In Antje Flüchter, Birte Förster, Britta Hochkirchen & Silke Schwandt (eds.), Plausibilisierung und Evidenz: Dynamiken und Praktiken von der Antike bis zur Gegenwart. Bielefeld University Press. pp. 169-192.
    Vast archives of fragmentary structural brain scans that are routinely acquired in medical clinics for diagnostic purposes have so far been considered to be unusable for neuroscientific research. Yet, recent studies have proposed that by deploying machine learning algorithms to fill in the missing anatomy, clinical scans could, in future, be used by researchers to gain new insights into various brain disorders. This chapter focuses on a study published in 2019, whose authors developed a novel unsupervised machine learning algorithm for synthesising (...)
  38. Unexplainability and Incomprehensibility of Artificial Intelligence. Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain people would (...)
    1 citation
  39. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots consists of large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  40. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
    3 citations
  41. The scope of inductive risk. P. D. Magnus - 2022 - Metaphilosophy 53 (1):17-24.
    The Argument from Inductive Risk (AIR) is taken to show that values are inevitably involved in making judgements or forming beliefs. After reviewing this conclusion, I pose cases which are prima facie counterexamples: the unreflective application of conventions, use of black-boxed instruments, reliance on opaque algorithms, and unskilled observation reports. These cases are counterexamples to the AIR posed in ethical terms as a matter of personal values. Nevertheless, it need not be understood in those terms. The values which load (...)
    2 citations
  42. Algorithmic Nudging: The Need for an Interdisciplinary Oversight. Christian Schmauder, Jurgis Karpus, Maximilian Moll, Bahador Bahrami & Ophelia Deroy - 2023 - Topoi 42 (3):799-807.
    Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods new possibilities emerge of how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to (...)
    1 citation
  43. Interventionist Methods for Interpreting Deep Neural Networks. Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini (ed.), Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with (...)
  44. The Spectacle of Reflection: On Dreams, Neural Networks and the Visual Nature of Thought. Magdalena Szalewicz - manuscript
    The article considers the problem of images and the role they play in our reflection, turning to evidence provided by two seemingly very distant theories of mind together with two sorts of corresponding visions: dreams as analyzed by Freud, who claimed that they are pictures of our thoughts, and their mechanical counterparts produced by neural networks designed for object recognition and classification. Freud’s theory of dreams has largely been ignored by philosophers interested in cognition, most of whom focused solely on (...)
  45. Kreativität: Eine Philosophische Analyse. Simone Mahrenholz - 2011 - Berlin, Germany: Akademie Verlag.
    “Creativity” is a very young concept and a very old phenomenon. It is regarded as an inexplicable riddle, a kind of “black box” of thinking. According to the collective consciousness, it is something rare and fleeting, strenuous to attain, and favouring only a lucky few. This book presents a basic logical idea of how the creatively new emerges, combining elements from logic, symbol theory, and the theories of information, communication, and media. This “formula” is tested against philosophical stations from antiquity to the present and (...)
    3 citations
  46. My Social Networking Profile: Copy, Resemblance, or Simulacrum? A Poststructuralist Interpretation of Social Information Systems. David Kreps - 2010 - European Journal of Information Systems 19:104-115.
    This paper offers an introduction to poststructuralist interpretivist research in information systems, through a poststructuralist theoretical reading of the phenomenon and experience of social networking websites, such as Facebook. This is undertaken through an exploration of how loyally a social networking profile can represent the essence of an individual, and whether Platonic notions of essence, and loyalty of copy, are disturbed by the nature of a social networking profile, in ways described by poststructuralist thinker Deleuze’s notions of the reversal of (...)
    1 citation
  47. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum (...)
  48. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability. Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in (...)
  49. The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity. Adrian Mróz - 2019 - Kultura i Historia 36 (2):17-40.
    By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers describe equivocally as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine problem (...)
  50. Modeling the invention of a new inference rule: The case of ‘Randomized Clinical Trial’ as an argument scheme for medical science. Jodi Schneider & Sally Jackson - 2018 - Argument and Computation 9 (2):77-89.
    A background assumption of this paper is that the repertoire of inference schemes available to humanity is not fixed, but subject to change as new schemes are invented or refined and as old ones are obsolesced or abandoned. This is particularly visible in areas like health and environmental sciences, where enormous societal investment has been made in finding ways to reach more dependable conclusions. Computational modeling of argumentation, at least for the discourse in expert fields, will require the possibility of (...)
Results 1-50 of 961