Mechanical Turkeys

Journal of Philosophical Logic (forthcoming)

Abstract

Some learning strategies that work well when computational considerations are abstracted away from become severely limiting when such considerations are taken into account. We illustrate this phenomenon for agents who attempt to extrapolate patterns in binary data streams chosen from among a countable family of possibilities. If computational constraints are ignored, then two strategies that will always work are learning by enumeration (enumerate the possibilities---in order of simplicity, say---then search for the one earliest in the ordering that agrees with your data and use it to predict the next data point) and Bayesian learning. But there are many types of computable data streams that, although they can be successfully extrapolated by computable agents, cannot be handled by any computable learner by enumeration. And while there is a sense in which Bayesian learning is a fully general strategy for computable learners, the ability to mimic powerful learners comes at a price for Bayesians: they cannot, in general, become highly confident of their predictions in the limit of large data sets and they cannot, in general, use priors that incorporate all relevant background knowledge.
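The enumeration strategy described in the abstract can be made concrete with a minimal sketch. Assume a hypothetical countable family of hypotheses, each represented as a function from a position n to the n-th bit of the data stream it describes; the learner scans the enumeration for the earliest hypothesis consistent with the data seen so far and uses it to predict the next bit. The toy family below is purely illustrative, not from the paper.

```python
# A minimal sketch of learning by enumeration for binary data streams.
# Each hypothesis h maps a position n to the n-th bit of the stream it
# describes. The learner returns the prediction of the first hypothesis
# (in enumeration order) that agrees with all observed data.

def extrapolate_by_enumeration(hypotheses, data):
    """Predict the next bit using the earliest consistent hypothesis,
    or return None if no hypothesis in the family fits the data."""
    for h in hypotheses:
        if all(h(i) == bit for i, bit in enumerate(data)):
            return h(len(data))
    return None

# A toy enumeration (illustrative only): simple computable streams,
# ordered by an informal notion of simplicity.
hypotheses = [
    lambda n: 0,      # the all-zeros stream
    lambda n: 1,      # the all-ones stream
    lambda n: n % 2,  # the alternating stream 0, 1, 0, 1, ...
]

# The data 0, 1, 0, 1 rules out the first two hypotheses, so the
# alternating stream is selected and the next bit is predicted to be 0.
print(extrapolate_by_enumeration(hypotheses, [0, 1, 0, 1]))
```

Note that the sketch presupposes the consistency check is effective; the paper's point is precisely that for many computably extrapolable families, no computable learner can run this enumerate-and-check procedure successfully.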

Author's Profile

Gordon Belot
University of Michigan, Ann Arbor

Added to PP
2024-11-29