PDP Learnability and Innate Knowledge of Language

David Kirsh
University of California, San Diego

Connectionism 3:297-322 (1992)

Abstract

It is sometimes argued that if PDP networks can be trained to make correct judgements of grammaticality, we have an existence proof that there is enough information in the stimulus to permit learning grammar by inductive means alone. Superficially this seems inconsistent with Gold's theorem, and at a deeper level with the fact that networks are designed on the basis of assumptions about the domain of the function to be learned. To clarify the issue I consider what we should learn from Gold's theorem, and then inquire into what it means to say that knowledge is domain specific. I first try to sharpen the intuitive notion of domain-specific knowledge by reviewing the alleged difference between processing limitations due to shortage of resources and those due to shortage of knowledge. After rejecting several formulations of this idea, I suggest that a model is language specific if it transparently refers to entities and facts about language, as opposed to entities and facts of more general mathematical domains. This is a useful but not a necessary condition. I then suggest that a theory is domain specific if it belongs to a model family that is attuned in a law-like way to domain regularities. This leads to a comparison of PDP and parameter-setting models of language learning. I conclude with a novel version of the poverty of stimulus argument.
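
The abstract's pivot is Gold's (1967) theorem on identification in the limit. As a rough illustration (my own sketch, not anything from Kirsh's paper), the Python below shows the basic asymmetry the theorem builds on: a learner that simply conjectures "the language is exactly the strings seen so far" identifies any finite language from positive data alone, but its conjectures never stabilize on an infinite language.

```python
# Hypothetical sketch of Gold-style identification in the limit.
# The learner conjectures that the target language is exactly the
# set of example strings it has seen so far.

def learner(examples):
    """Conjecture: the target language is exactly the examples seen."""
    return frozenset(examples)

def converges(language, text):
    """True if the learner's conjecture becomes correct at some point in
    this presentation and never changes afterwards."""
    seen, stable_from = [], None
    target = frozenset(language)
    for i, s in enumerate(text):
        seen.append(s)
        if learner(seen) == target:
            if stable_from is None:
                stable_from = i          # first point of correct conjecture
        else:
            stable_from = None           # conjecture changed; reset
    return stable_from is not None

# A finite language is identified once every member has appeared:
finite_lang = {"a", "ab", "abb"}
fair_text = ["a", "ab", "abb"] + ["a"] * 20   # every member shows up
print(converges(finite_lang, fair_text))       # True

# For the infinite language {a, aa, aaa, ...} every conjecture is a
# proper finite subset, so the guess grows forever and never locks on:
prefix = ["a" * n for n in range(1, 500)]
sizes = [len(learner(prefix[:n])) for n in (10, 100, 499)]
print(sizes)                                   # [10, 100, 499] -- still growing
```

Gold's result generalizes this: any class of languages containing all the finite languages plus at least one infinite language is not identifiable in the limit from positive data, which is why the abstract treats trainable grammaticality judgements as prima facie in tension with the theorem.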

