Results for 'Black-box medicine'

962 found
  1. Black-box assisted medical decisions: AI power vs. ethical physician care.Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they (...)
    7 citations
  2. We might be afraid of black-box algorithms.Carissa Veliz, Milo Phillips-Brown, Carina Prunkl & Ted Lechterman - 2021 - Journal of Medical Ethics 47.
    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. Durán and Jongsma (2021) have recently sought to allay such fears. While some of their arguments are compelling, we still see reasons for fear.
    4 citations
  3. Opening the black box of commodification: A philosophical critique of actor-network theory as critique.Henrik Rude Hvid - manuscript
    This article argues that actor-network theory, as an alternative to critical theory, has lost its critical impetus when examining commodification in healthcare. The paper claims that the reason for this is the way in which actor-network theory’s anti-essentialist ontology seems to black box 'intentionality' and the ethics of human agency as contingent interests. The purpose of this paper is to open the normative black box of commodification, and compare how Marxism, Habermas and ANT can deal with commodification and ethics (...)
  4. Clinical applications of machine learning algorithms: beyond the black box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    17 citations
  5. Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, (...)
    46 citations
  6. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of (...)
    4 citations
  7. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3).
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of (...)
    1 citation
  8. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it (...)
    3 citations
  9. The Black Box in Stoic Axiology.Michael Vazquez - 2023 - Pacific Philosophical Quarterly 104 (1):78–100.
    The ‘black box’ in Stoic axiology refers to the mysterious connection between the input of Stoic deliberation (reasons generated by the value of indifferents) and the output (appropriate actions). In this paper, I peer into the black box by drawing an analogy between Stoic and Kantian axiology. The value and disvalue of indifferents is intrinsic, but conditional. An extrinsic condition on the value of a token indifferent is that one's selection of that indifferent is sanctioned by context-relative ethical (...)
  10. Bundle Theory’s Black Box: Gap Challenges for the Bundle Theory of Substance.Robert Garcia - 2014 - Philosophia 42 (1):115-126.
    My aim in this article is to contribute to the larger project of assessing the relative merits of different theories of substance. An important preliminary step in this project is assessing the explanatory resources of one main theory of substance, the so-called bundle theory. This article works towards such an assessment. I identify and explain three distinct explanatory challenges an adequate bundle theory must meet. Each points to a putative explanatory gap, so I call them the Gap Challenges. I consider (...)
    9 citations
  11. Glanville’s ‘Black Box’: what can an Observer know?Lance Nizami - 2020 - Rivista Italiana di Filosofia del Linguaggio 14 (2):47-62.
    A ‘Black Box’ cannot be opened to reveal its mechanism. Rather, its operations are inferred through input from (and output to) an ‘observer’. All of us are observers, who attempt to understand the Black Boxes that are Minds. The Black Box and its observer constitute a system, differing from either component alone: a ‘greater’ Black Box to any further-external-observer. To Glanville (1982), the further-external-observer probes the greater-Black-Box by interacting directly with its core Black Box, (...)
  12. Inferring causation in epidemiology: mechanisms, black boxes, and contrasts.Alex Broadbent - 2011 - In Phyllis McKay Illari & Federica Russo (eds.), Causality in the Sciences. Oxford University Press. pp. 45-69.
    This chapter explores the idea that causal inference is warranted if and only if the mechanism underlying the inferred causal association is identified. This mechanistic stance is discernible in the epidemiological literature, and in the strategies adopted by epidemiologists seeking to establish causal hypotheses. But the exact opposite methodology is also discernible, the black box stance, which asserts that epidemiologists can and should make causal inferences on the basis of their evidence, without worrying about the mechanisms that might underlie (...)
    34 citations
  13. Peeking Inside the Black Box: A New Kind of Scientific Visualization.Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The visualization (...)
    7 citations
  14. Mind and Machine: at the core of any Black Box there are two (or more) White Boxes required to stay in.Lance Nizami - 2020 - Cybernetics and Human Knowing 27 (3):9-32.
    This paper concerns the Black Box. It is not the engineer’s black box that can be opened to reveal its mechanism, but rather one whose operations are inferred through input from (and output to) a companion observer. We are observers ourselves, and we attempt to understand minds through interactions with their host organisms. To this end, Ranulph Glanville followed W. Ross Ashby in elaborating the Black Box. The Black Box and its observer together form a system (...)
  15. Tragbare Kontrolle: Die Apple Watch als kybernetische Maschine und Black Box algorithmischer Gouvernementalität.Anna-Verena Nosthoff & Felix Maschewski - 2020 - In Anna-Verena Nosthoff & Felix Maschewski (eds.), Black Boxes - Versiegelungskontexte und Öffnungsversuche. pp. 115-138.
    This contribution conceives of and analyses the Apple Watch, against the backdrop of its “aesthetics of existence”, as a biopolitical artifact and an apparatus of the society of control, but above all as a cybernetic black box. The aim of the essay is to show that this feedback apparatus not only condenses fundamental discourses of the digital age (prevention, health, bio- and psychopolitical forms of regulation, etc.), but that, by virtue of its inherent logic, it generates transparency through opacity and simplicity (i.e., orientation) through complexity, and thereby, not least, a quite specific image of the human (...)
  16. Towards Knowledge-driven Distillation and Explanation of Black-box Models.Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello (eds.), Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, (...)
  17. The Panda’s Black Box: Opening Up the Intelligent Design Controversy edited by Nathaniel C. Comfort. [REVIEW]W. Malcolm Byrnes - 2008 - The National Catholic Bioethics Quarterly 8 (2):385-387.
  18. Adherence to the Request Criterion in Jurisdictions Where Assisted Dying is Lawful? A Review of the Criteria and Evidence in the Netherlands, Belgium, Oregon, and Switzerland.Penney Lewis & Isra Black - 2013 - Journal of Law, Medicine and Ethics 41 (4):885-898.
    Some form of assisted dying (voluntary euthanasia and/or assisted suicide) is lawful in the Netherlands, Belgium, Oregon, and Switzerland. In order to be lawful in these jurisdictions, a valid request must precede the provision of assistance to die. Non-adherence to the criteria for valid requests for assisted dying may be a trigger for civil and/or criminal liability, as well as disciplinary sanctions where the assistor is a medical professional. In this article, we review the criteria and evidence in respect of (...)
    1 citation
  19. Justice and the Grey Box of Responsibility.Carl Knight - 2010 - Theoria: A Journal of Social and Political Theory 57 (124):86-112.
    Even where an act appears to be responsible, and satisfies all the conditions for responsibility laid down by society, the response to it may be unjust where that appearance is false, and where those conditions are insufficient. This paper argues that those who want to place considerations of responsibility at the centre of distributive and criminal justice ought to take this concern seriously. The common strategy of relying on what Susan Hurley describes as a 'black box of responsibility' has (...)
    1 citation
  20. Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general (...)
    13 citations
  21. Thermodynamics of an Empty Box.G. J. Schmitz, M. te Vrugt, T. Haug-Warberg, L. Ellingsen & P. Needham - 2023 - Entropy 25 (315):1-30.
    A gas in a box is perhaps the most important model system studied in thermodynamics and statistical mechanics. Usually, studies focus on the gas, whereas the box merely serves as an idealized confinement. The present article focuses on the box as the central object and develops a thermodynamic theory by treating the geometric degrees of freedom of the box as the degrees of freedom of a thermodynamic system. Applying standard mathematical methods to the thermodynamics of an empty box allows (...)
  22. Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
    56 citations
  23. Can AI and humans genuinely communicate?Constant Bonard - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1–3), I explore a way to answer this question that I call the ‘mental-behavioral methodology’ (§4–5). This methodology follows the following three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test whether (...)
  24. Scaffolding Natural Selection.Walter Veit - 2022 - Biological Theory 17 (2):163-180.
    Darwin provided us with a powerful theoretical framework to explain the evolution of living systems. Natural selection alone, however, has sometimes been seen as insufficient to explain the emergence of new levels of selection. The problem is one of “circularity” for evolutionary explanations: how to explain the origins of Darwinian properties without already invoking their presence at the level they emerge. That is, how does evolution by natural selection commence in the first place? Recent results in experimental evolution suggest a (...)
    5 citations
  25. The epistemic imagination revisited.Arnon Levy & Ori Kinberg - 2023 - Philosophy and Phenomenological Research 107 (2):319-336.
    Recently, various philosophers have argued that we can obtain knowledge via the imagination. In particular, it has been suggested that we can come to know concrete, empirical matters of everyday significance by appropriately imagining relevant scenarios. Arguments for this thesis come in two main varieties: black box reliability arguments and constraints-based arguments. We suggest that both strategies are unsuccessful. Against black-box arguments, we point to evidence from empirical psychology, question a central case-study, and raise concerns about a (claimed) (...)
    9 citations
  26. Realism and instrumentalism in Bayesian cognitive science.Danielle Williams & Zoe Drayson - 2023 - In Tony Cheng, Ryoji Sato & Jakob Hohwy (eds.), Expected Experiences: The Predictive Mind in an Uncertain World. Routledge.
    There are two distinct approaches to Bayesian modelling in cognitive science. Black-box approaches use Bayesian theory to model the relationship between the inputs and outputs of a cognitive system without reference to the mediating causal processes; while mechanistic approaches make claims about the neural mechanisms which generate the outputs from the inputs. This paper concerns the relationship between these two approaches. We argue that the dominant trend in the philosophical literature, which characterizes the relationship between black-box and mechanistic (...)
    2 citations
  27. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model (...)
    7 citations
  28. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from (...)
    1 citation
  29. Algorithmic Nudging: The Need for an Interdisciplinary Oversight.Christian Schmauder, Jurgis Karpus, Maximilian Moll, Bahador Bahrami & Ophelia Deroy - 2023 - Topoi 42 (3):799-807.
    Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods new possibilities emerge of how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to (...)
    1 citation
  30. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’ is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in (...)
    1 citation
  31. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots consists of large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  32. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini (ed.), Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with (...)
  33. Unexplainability and Incomprehensibility of Artificial Intelligence.Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain people would (...)
    1 citation
  34. The scope of inductive risk.P. D. Magnus - 2022 - Metaphilosophy 53 (1):17-24.
    The Argument from Inductive Risk (AIR) is taken to show that values are inevitably involved in making judgements or forming beliefs. After reviewing this conclusion, I pose cases which are prima facie counterexamples: the unreflective application of conventions, use of black-boxed instruments, reliance on opaque algorithms, and unskilled observation reports. These cases are counterexamples to the AIR posed in ethical terms as a matter of personal values. Nevertheless, it need not be understood in those terms. The values which load (...)
    2 citations
  35. Beyond Human: Deep Learning, Explainability and Representation.M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):55-77.
    This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle (...)
    2 citations
  36. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI.Emily Sullivan - forthcoming - Proceedings of the 2024 Acm Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum (...)
  38. Ethical and legal race‐responsive vaccine allocation.Bastian Steuwer & Nir Eyal - 2023 - Bioethics 37 (8):814-821.
    In many countries, the COVID‐19 pandemic varied starkly between different racial and ethnic groups. Before vaccines were approved, some considered assigning priority access to worse‐hit racial groups. That debate can inform rationing in future pandemics and in some of the many areas outside COVID‐19 that admit of racial health disparities. However, concerns were raised that “race‐responsive” prioritizations would be ruled unlawful for allegedly constituting wrongful discrimination. This legal argument relies on an understanding of discrimination law as demanding color‐blindness. We argue (...)
  39. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability, and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem, dealt with using typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  40. The Unobserved Anatomy: Negotiating the Plausibility of AI-Based Reconstructions of Missing Brain Structures in Clinical MRI Scans.Paula Muhr - 2023 - In Antje Flüchter, Birte Förster, Britta Hochkirchen & Silke Schwandt (eds.), Plausibilisierung und Evidenz: Dynamiken und Praktiken von der Antike bis zur Gegenwart. Bielefeld University Press. pp. 169-192.
    Vast archives of fragmentary structural brain scans that are routinely acquired in medical clinics for diagnostic purposes have so far been considered to be unusable for neuroscientific research. Yet, recent studies have proposed that by deploying machine learning algorithms to fill in the missing anatomy, clinical scans could, in future, be used by researchers to gain new insights into various brain disorders. This chapter focuses on a study published in 2019, whose authors developed a novel unsupervised machine learning algorithm for synthesising (...)
  41. Group Field Theories: Decoupling Spacetime Emergence from the Ontology of non-Spatiotemporal Entities.Marco Forgione - 2024 - European Journal for Philosophy of Science 14 (22):1-23.
    With the present paper I maintain that the group field theory (GFT) approach to quantum gravity can help us clarify and distinguish the problems of spacetime emergence from the questions about the nature of the quanta of space. I will show that the mechanism of phase transition suggests a form of indifference between scales (or phases) and that such an indifference allows us to black-box questions about the nature of the ontology of the fundamental levels of the theory. I (...)
  42. Economic Security of the Enterprise Within the Conditions of Digital Transformation.Yuliia Samoilenko, Igor Britchenko, Iaroslava Levchenko, Peter Lošonczi, Oleksandr Bilichenko & Olena Bodnar - 2022 - Economic Affairs 67 (04):619-629.
    In the context of the digital economy development, the priority component of the economic security of an enterprise is changing from material to digital, constituting an independent element of enterprise security. The relevance of the present research is driven by the need to solve the issue of modernizing the economic security of the enterprise taking into account the new risks and opportunities of digitalization. The purpose of the academic paper lies in identifying the features of preventing internal and external negative (...)
  43. Kreativität: Eine Philosophische Analyse.Simone Mahrenholz - 2011 - Berlin, Germany: Akademie Verlag.
    "Creativity" is a very young concept and a very old phenomenon. It is regarded as an inexplicable riddle, a kind of "black box" of thinking. In the collective consciousness, it is something rare and fleeting, strenuous to attain, and favouring only a lucky few. This book presents a basic logical idea of how the creatively new comes into being, combining elements from logic, symbol theory, and the theories of information, communication, and media. This "formula" is tested against philosophical stations from antiquity to the present (...)
  44. Interpretable and accurate prediction models for metagenomics data.Edi Prifti, Antoine Danchin, Jean-Daniel Zucker & Eugeni Belda - 2020 - Gigascience 9 (3):giaa010.
    Background: Microbiome biomarker discovery for patient diagnosis, prognosis, and risk evaluation is attracting broad interest. Selected groups of microbial features provide signatures that characterize host disease states such as cancer or cardio-metabolic diseases. Yet, the current predictive models stemming from machine learning still behave as black boxes and seldom generalize well. Their interpretation is challenging for physicians and biologists, which makes them difficult to trust and use routinely in the physician-patient decision-making process. Novel methods that provide interpretability and biological (...)
  45. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in the (...)
  46. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection.Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses of how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias, and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles (...)
  47. Modeling the invention of a new inference rule: The case of ‘Randomized Clinical Trial’ as an argument scheme for medical science.Jodi Schneider & Sally Jackson - 2018 - Argument and Computation 9 (2):77-89.
    A background assumption of this paper is that the repertoire of inference schemes available to humanity is not fixed, but subject to change as new schemes are invented or refined and as old ones are obsolesced or abandoned. This is particularly visible in areas like health and environmental sciences, where enormous societal investment has been made in finding ways to reach more dependable conclusions. Computational modeling of argumentation, at least for the discourse in expert fields, will require the possibility of (...)
  48. Commentaries on David Hodgson's "a plain person's free will".Graham Cairns-Smith, Thomas W. Clark, Ravi Gomatam, Robert H. Kane, Nicholas Maxwell, J. J. C. Smart, Sean A. Spence & Henry P. Stapp - 2005 - Journal of Consciousness Studies 12 (1):20-75.
    Remarks on Evolution and Time-Scales, Graham Cairns-Smith; Hodgson's Black Box, Thomas Clark; Do Hodgson's Propositions Uniquely Characterize Free Will?, Ravi Gomatam; What Should We Retain from a Plain Person's Concept of Free Will?, Gilberto Gomes; Isolating Disparate Challenges to Hodgson's Account of Free Will, Liberty Jaswal; Free Agency and Laws of Nature, Robert Kane; Science versus Realization of Value, Not Determinism versus Choice, Nicholas Maxwell; Comments on Hodgson, J.J.C. Smart; The View from Within, Sean Spence; Commentary on Hodgson, Henry (...)
  49. A Promethean Philosophy of External Technologies, Empiricism, & the Concept: Second-Order Cybernetics, Deep Learning, and Predictive Processing.Ekin Erkan - 2020 - Media Theory 4 (1):87-146.
    Beginning with a survey of the shortcomings of theories of organology/media-as-externalization of mind/body, a philosophical-anthropological tradition that stretches from Plato through Ernst Kapp and finds its contemporary proponent in Bernard Stiegler, I propose that the phenomenological treatment of media as an outpouching and extension of mind qua intentionality is not sufficient to counter the "black-box" mystification of today's deep learning algorithms. Focusing on a close study of Simondon's On the Existence of Technical Objects and Individuation, I argue that the process-philosophical work of Gilbert (...)
  50. My Social Networking Profile: Copy, Resemblance, or Simulacrum? A Poststructuralist Interpretation of Social Information Systems.David Kreps - 2010 - European Journal of Information Systems 19:104-115.
    This paper offers an introduction to poststructuralist interpretivist research in information systems, through a poststructuralist theoretical reading of the phenomenon and experience of social networking websites, such as Facebook. This is undertaken through an exploration of how loyally a social networking profile can represent the essence of an individual, and whether Platonic notions of essence, and loyalty of copy, are disturbed by the nature of a social networking profile, in ways described by poststructuralist thinker Deleuze’s notions of the reversal of (...)