Contents
8 found
  1. Deep Learning as Method-Learning: Pragmatic Understanding, Epistemic Strategies and Design-Rules.Phillip H. Kieval & Oscar Westerblad - manuscript
    We claim that scientists working with deep learning (DL) models exhibit a form of pragmatic understanding that is not reducible to or dependent on explanation. This pragmatic understanding comprises a set of learned methodological principles that underlie DL model design-choices and secure their reliability. We illustrate this action-oriented pragmatic understanding with a case study of AlphaFold2, highlighting the interplay between background knowledge of a problem and methodological choices involving techniques for constraining how a model learns from data. Building successful models (...)
  2. Defining Generative Artificial Intelligence: An Attempt to Resolve the Confusion about Diffusion.Raphael Ronge, Markus Maier & Benjamin Rathgeber - manuscript
    The concept of Generative Artificial Intelligence (GenAI) is ubiquitous in the public and semi-technical domain, yet rarely defined precisely. We clarify main concepts that are usually discussed in connection to GenAI and argue that one ought to distinguish between the technical and the public discourse. In order to show its complex development and associated conceptual ambiguities, we offer a historical-systematic reconstruction of GenAI and explicitly discuss two exemplary cases: the generative status of the Large Language Model BERT and the differences (...)
  3. Exploiting the In-Distribution Embedding Space with Deep Learning and Bayesian Inference for Detection and Classification of an Out-of-Distribution Malware (Extended Abstract).Tosin Ige, Christopher Kiekintveld & Aritran Piplai - forthcoming - AAAI Conference Proceedings.
    Current state-of-the-art out-of-distribution algorithms do not address the variation in dynamic and static behavior between malware variants from the same family, as evidenced by their poor performance against out-of-distribution malware attacks. We aim to address this limitation by: 1) exploiting the in-distribution embedding space between variants from the same malware family to account for all variations; 2) exploiting the inter-dimensional space between different malware families; 3) building a deep learning-based model with a shallow neural network with maximum (...)
  4. Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs.Shun Iizuka - forthcoming - Techné: Research in Philosophy and Technology.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  5. Morality First?Nathaniel Sharadin - forthcoming - AI and Society:1-13.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first”, introduces a new set of challenges and highlights an important but otherwise obscured problem (...)
  6. Sources of Richness and Ineffability for Phenomenally Conscious States.Xu Ji, Eric Elmoznino, George Deane, Axel Constant, Guillaume Dumas, Guillaume Lajoie, Jonathan A. Simon & Yoshua Bengio - 2024 - Neuroscience of Consciousness 2024 (1).
    Conscious states (states such that there is something it is like to be in them) seem both rich, or full of detail, and ineffable, or hard to fully describe or recall. The problem of ineffability, in particular, is a longstanding issue in philosophy that partly motivates the explanatory gap: the belief that consciousness cannot be reduced to underlying physical processes. Here, we provide an information theoretic dynamical systems perspective on the richness and ineffability of consciousness. In our framework, the richness of conscious experience corresponds (...)
  7. Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable.Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  8. Imagine This: Opaque DLMs are Reliable in the Context of Justification.Logan Carter - manuscript
    Artificial intelligence (AI) and machine learning (ML) models have undoubtedly become useful tools in science. In general, scientists and ML developers are optimistic – perhaps rightfully so – about the potential that these models have in facilitating scientific progress. The philosophy of AI literature carries a different mood. The attention of philosophers remains on potential epistemological issues that stem from the so-called “black box” features of ML models. For instance, Eamon Duede (2023) argues that opacity in deep learning models (DLMs) (...)