  1. Why and how to construct an epistemic justification of machine learning? Petr Spelda & Vit Stritecky - 2024 - Synthese 204 (2):1-24.
    Consider a set of shuffled observations drawn from a fixed probability distribution over some instance domain. What enables learning of inductive generalizations that proceed from such a set of observations? The scenario is worthwhile because it epistemically characterizes most of machine learning. This kind of learning from observations is also inverse and ill-posed. What reduces the non-uniqueness of its result, and thus the problem for its epistemic justification, which stems from a one-to-many relation between the observations and the many learnable generalizations? The paper (...)
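    As background to the ill-posedness point above: in learning theory, the one-to-many relation between a finite sample and the generalisations fitting it is standardly broken by an inductive bias, for instance a regulariser added to empirical risk minimisation. The objective below is that textbook formulation; the notation is supplied here as background, not drawn from the paper itself.

    ```latex
    % Regularised empirical risk minimisation: the regulariser \Omega encodes an
    % inductive bias that restores (near-)uniqueness of the learned hypothesis.
    \[
    \hat{h} \;=\; \operatorname*{arg\,min}_{h \in H}\;
    \frac{1}{m} \sum_{i=1}^{m} \ell\big(h(x_i),\, y_i\big) \;+\; \lambda\, \Omega(h),
    \qquad \lambda > 0 .
    \]
    ```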
  2. Learnability of state spaces of physical systems is undecidable. Petr Spelda & Vit Stritecky - 2024 - Journal of Computational Science 83 (December 2024):1-7.
    Despite the increasing role of machine learning in science, there is a lack of results on the limits of empirical exploration aided by machine learning. In this paper, we construct one such limit by proving the undecidability of learnability of state spaces of physical systems. We characterize state spaces as binary hypothesis classes of the computable Probably Approximately Correct learning framework. This leads to identifying the first limit for learnability of state spaces in the agnostic setting. Further, using the fact that finiteness (...)
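    For readers unfamiliar with the framework the entry above builds on, agnostic PAC learnability of a binary hypothesis class is the standard textbook criterion; it is stated here as background, not as a result of the paper.

    ```latex
    % Agnostic PAC learnability of a binary hypothesis class H: a learner A
    % succeeds if, for every distribution D over X x {0,1}, a sample of size
    % m(epsilon, delta) suffices to get epsilon-close to the best h in H.
    \[
    \Pr_{S \sim D^{m}}\!\Big[\, L_{D}\big(A(S)\big) \;\le\; \min_{h \in H} L_{D}(h) + \varepsilon \,\Big] \;\ge\; 1 - \delta,
    \qquad L_{D}(h) = \Pr_{(x,y) \sim D}\big[\, h(x) \neq y \,\big].
    \]
    ```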
  3. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - 2021 - ACM Computing Surveys 54 (3):1-18.
    As our epistemic ambitions grow, common and scientific endeavours alike are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and a testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
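    The experimental paradigm the survey describes can be made concrete with a short sketch. The dataset, model, and 80/20 split ratio below are illustrative assumptions, not choices made in the paper:

    ```python
    # A minimal sketch of the train/test-split paradigm described above.
    # Dataset, model, and the 80/20 split are illustrative assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = load_breast_cancer(return_X_y=True)

    # Hold out 20% of the samples; the model never sees them during training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)

    # Held-out accuracy is the a-posteriori measure of generalisation.
    print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
    ```

    The printed held-out accuracy is exactly the quantity on which the "a posteriori contract" between humans and the model rests.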
  4. What Can Artificial Intelligence Do for Scientific Realism? Petr Spelda & Vit Stritecky - 2020 - Axiomathes 31 (1):85-104.
    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement of it, since it rejects the retrospective interpretations of scientific progress which brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for (...)
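    The adversarial machine learning invoked above standardly probes a model's evidence space with small loss-maximising perturbations; the fast gradient sign method of Goodfellow et al. (2015) is the canonical example, given here as general background rather than as the paper's own construction.

    ```latex
    % Fast gradient sign method: perturb input x within an epsilon-ball in the
    % direction that maximally increases the model's loss L(theta, x, y).
    \[
    x_{\mathrm{adv}} \;=\; x \;+\; \varepsilon \cdot
    \operatorname{sign}\!\big( \nabla_{x}\, L(\theta, x, y) \big).
    \]
    ```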
  5. No-Regret Learning Supports Voters’ Competence. Petr Spelda, Vit Stritecky & John Symons - 2024 - Social Epistemology 38 (5):543-559.
    Procedural justifications of democracy emphasize inclusiveness and respect, and by doing so come into conflict with instrumental justifications that depend on voters’ competence. This conflict raises questions about jury theorems and makes their standing in democratic theory contested. We show that a type of no-regret learning called meta-induction can help to satisfy the competence assumption without excluding voters or diverse opinion leaders on an a priori basis. Meta-induction assigns weights to opinion leaders based on their past predictive performance to determine (...)
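    Meta-induction weights forecasters by past predictive success; the exponentially weighted average forecaster below is one standard no-regret rule of this kind, sketched with synthetic leaders and losses. It is not necessarily the exact attractivity-based weighting the paper analyses:

    ```python
    import numpy as np

    # Exponentially weighted average forecaster: a standard no-regret rule.
    # Leaders' forecasts and outcomes are synthetic; eta is a free parameter.
    rng = np.random.default_rng(0)
    n_leaders, n_rounds, eta = 5, 200, 0.5

    weights = np.ones(n_leaders)
    total_loss, leader_losses = 0.0, np.zeros(n_leaders)

    for t in range(n_rounds):
        preds = rng.random(n_leaders)        # leaders' probability forecasts
        outcome = float(rng.random() < 0.7)  # binary event with bias 0.7

        # Follow the leaders in proportion to their current weights.
        forecast = weights @ preds / weights.sum()

        losses = (preds - outcome) ** 2      # squared loss per leader
        total_loss += (forecast - outcome) ** 2
        leader_losses += losses

        # Downweight each leader exponentially in its incurred loss.
        weights *= np.exp(-eta * losses)

    regret = total_loss - leader_losses.min()
    print(f"regret vs. best leader after {n_rounds} rounds: {regret:.2f}")
    ```

    Because the cumulative regret of this scheme grows sublinearly in the number of rounds, the weighted forecast asymptotically matches the best opinion leader, which is the kind of competence guarantee the argument leverages.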
  6. Expanding Observability via Human-Machine Cooperation. Petr Spelda & Vit Stritecky - 2022 - Axiomathes 32 (3):819-832.
    We ask how to use machine learning to expand observability, which presently depends on human learning that informs conceivability. The issue is engaged by considering the question of correspondence between conceived observability counterfactuals and observable, yet so far unobserved or unconceived, states of affairs. A possible answer lies in importing out-of-reference-frame content, which could provide the means for conceiving further observability counterfactuals. These counterfactuals allow us to define high-fidelity observability, increasing the level of correspondence in question. To achieve high-fidelity (...)
  7. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs. Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion of the environmental impacts of ML/AI has lacked a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al. (2019) in the Green AI initiative, our argument considers two interlinked phenomena: the gratuitous generalisation capability, and a future in which ML/AI performs the majority of quantifiable inductive inferences. The (...)