5 found
  1. No-Regret Learning Supports Voters’ Competence. Petr Spelda, Vit Stritecky & John Symons - forthcoming - Social Epistemology:1-17.
    Procedural justifications of democracy emphasize inclusiveness and respect, and by doing so come into conflict with instrumental justifications that depend on voters’ competence. This conflict raises questions about jury theorems and makes their standing in democratic theory contested. We show that a type of no-regret learning called meta-induction can help satisfy the competence assumption without excluding voters or diverse opinion leaders on an a priori basis. Meta-induction assigns weights to opinion leaders based on their past predictive performance to determine (...)
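The weighting scheme the abstract describes belongs to the exponential-weights family of no-regret learners. The sketch below illustrates that family, not the authors' exact meta-inductive method: predictor names, the learning rate `eta`, and the squared-error loss are all illustrative assumptions.

```python
import math

def meta_induction(predictions, outcomes, eta=0.5):
    """Aggregate 'opinion leaders' with exponential weights, a standard
    no-regret scheme (an illustration of the family the abstract invokes,
    not the paper's exact construction).

    predictions: per-round lists of each leader's forecast in [0, 1]
    outcomes:    per-round realised outcomes in [0, 1]
    eta:         learning rate (assumed; not from the paper)
    """
    n = len(predictions[0])          # number of opinion leaders
    weights = [1.0] * n              # start with uniform trust
    aggregated = []
    for preds, y in zip(predictions, outcomes):
        total = sum(weights)
        # weighted-average forecast from the current credences
        forecast = sum(w * p for w, p in zip(weights, preds)) / total
        aggregated.append(forecast)
        # exponentially down-weight each leader by its squared error,
        # so persistently poor predictors lose influence over time
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
    return aggregated, weights
```

On a run where one leader is always right and another always wrong, the aggregate forecast drifts toward the reliable leader without excluding the other a priori, which is the property the abstract highlights.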
  2. What Can Artificial Intelligence Do for Scientific Realism? Petr Spelda & Vit Stritecky - 2020 - Axiomathes 31 (1):85-104.
    The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting the epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement, since it rejects the retrospective interpretations of scientific progress which brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for (...)
  3. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs. Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    Environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion of the environmental impacts of ML/AI has lacked a perspective reaching beyond quantitative measurements of energy-related research costs. Building on the foundations laid down by Schwartz et al. (2019) in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive inferences. The (...)
  4. Expanding Observability via Human-Machine Cooperation. Petr Spelda & Vit Stritecky - 2022 - Axiomathes 32 (3):819-832.
    We ask how to use machine learning to expand observability, which presently depends on human learning that informs conceivability. The issue is engaged by considering the question of correspondence between conceived observability counterfactuals and observable, yet so far unobserved or unconceived, states of affairs. A possible answer lies in importing out-of-reference-frame content, which could provide the means for conceiving further observability counterfactuals. These allow us to define high-fidelity observability, increasing the level of correspondence in question. To achieve high-fidelity (...)
  5. Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - forthcoming - ACM Computing Surveys.
    As our epistemic ambitions grow, common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm: splitting the available data into a training set and a testing set, and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
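The experimental paradigm described in the last abstract — hold out a test set, train on the rest, and read off generalisation accuracy — can be sketched as follows. The function and parameter names (`model_fit`, `model_predict`, `test_frac`) are illustrative assumptions, not terminology from the paper.

```python
import random

def split_and_evaluate(data, labels, model_fit, model_predict,
                       test_frac=0.2, seed=0):
    """Sketch of the single experimental paradigm the survey describes:
    split the available data, train on one part, and measure accuracy
    on the held-out part as a proxy for generalisation to unseen samples.
    All names here are assumed for illustration."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)                          # randomise before splitting
    cut = int(len(idx) * (1 - test_frac))
    train_idx, test_idx = idx[:cut], idx[cut:]
    # train only on the training portion
    params = model_fit([data[i] for i in train_idx],
                       [labels[i] for i in train_idx])
    # the 'a posteriori contract': accuracy on samples never seen in training
    correct = sum(model_predict(params, data[i]) == labels[i]
                  for i in test_idx)
    return correct / len(test_idx)
```

Any fit/predict pair can be plugged in; the returned test-set accuracy is exactly the quantity whose adequacy as a warrant for deployment the abstract goes on to question.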