  • Does rank have its privilege? Inductive inferences within folkbiological taxonomies. John D. Coley, Douglas L. Medin & Scott Atran - 1997 - Cognition 64 (1):73-112.
  • Robust reasoning: integrating rule-based and similarity-based reasoning. Ron Sun - 1995 - Artificial Intelligence 75 (2):241-295.
  • Extracting the coherent core of human probability judgement: a research program for cognitive psychology. Daniel Osherson, Eldar Shafir & Edward E. Smith - 1994 - Cognition 50 (1-3):299-313.
  • Extrapolating human probability judgment. Daniel Osherson, Edward E. Smith, Tracy S. Myers, Eldar Shafir & Michael Stob - 1994 - Theory and Decision 36 (2):103-129.
    We advance a model of human probability judgment and apply it to the design of an extrapolation algorithm. Such an algorithm examines a person's judgment about the likelihood of various statements and is then able to predict the same person's judgments about new statements. The algorithm is tested against judgments produced by thirty undergraduates asked to assign probabilities to statements about mammals.
  • A Source of Bayesian Priors. Daniel Osherson, Edward E. Smith, Eldar Shafir, Antoine Gualtierotti & Kevin Biolsi - 1995 - Cognitive Science 19 (3):377-405.
    Establishing reasonable prior distributions remains a significant obstacle for the construction of probabilistic expert systems. Human assessment of chance is often relied upon for this purpose, but has the drawback of being inconsistent with the axioms of probability. This article advances a method for extracting a coherent probability distribution from human judgment. The method is based on a psychological model of probabilistic reasoning, followed by a correction phase using linear programming.
  • The Emergence of Organizing Structure in Conceptual Representation. Brenden M. Lake, Neil D. Lawrence & Joshua B. Tenenbaum - 2018 - Cognitive Science 42 (S3):809-832.
    Both scientists and children make important structural discoveries, yet their computational underpinnings are not well understood. Structure discovery has previously been formalized as probabilistic inference about the right structural form, where form could be a tree, ring, chain, grid, etc. Although this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge. Here we (...)
  • Structured statistical models of inductive reasoning. Charles Kemp & Joshua B. Tenenbaum - 2009 - Psychological Review 116 (1):20-58.
  • Context Matters: Recovering Human Semantic Structure from Machine Learning Analysis of Large‐Scale Text Corpora. Marius Cătălin Iordan, Tyler Giallanza, Cameron T. Ellis, Nicole M. Beckage & Jonathan D. Cohen - 2022 - Cognitive Science 46 (2):e13085.
    Applying machine learning algorithms to automatically infer relationships between concepts from large-scale collections of documents presents a unique opportunity to investigate at scale how human semantic knowledge is organized, how people use it to make fundamental judgments (“How similar are cats and bears?”), and how these judgments depend on the features that describe concepts (e.g., size, furriness). However, efforts to date have exhibited a substantial discrepancy between algorithm predictions and human empirical judgments. Here, we introduce a novel approach to generating (...)
  • The Opposite of Republican: Polarization and Political Categorization. Evan Heit & Stephen P. Nicholson - 2010 - Cognitive Science 34 (8):1503-1516.
    Two experiments examined the typicality structure of contrasting political categories. In Experiment 1, two separate groups of participants rated the typicality of 15 individuals, including political figures and media personalities, with respect to the categories Democrat or Republican. The relation between the two sets of ratings was negative, linear, and extremely strong, r = −.9957. Essentially, one category was treated as a mirror image of the other. Experiment 2 replicated this result, showing some boundary conditions, and extending the result to (...)
  • Categorical induction from uncertain premises: Jeffrey's doesn't completely rule. Constantinos Hadjichristidis, Steven A. Sloman & David E. Over - 2014 - Thinking and Reasoning 20 (4):405-431.
    Studies of categorical induction typically examine how belief in a premise (e.g., Falcons have an ulnar artery) projects on to a conclusion (e.g., Robins have an ulnar artery). We study induction in cases in which the premise is uncertain (e.g., There is an 80% chance that falcons have an ulnar artery). Jeffrey's rule is a normative model for updating beliefs in the face of uncertain evidence. In three studies we tested the descriptive validity of Jeffrey's rule and a related probability (...)
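    Background note on the rule named in this entry (a standard statement, not part of the abstract): Jeffrey's rule updates belief in a conclusion C after receiving uncertain evidence E as P_new(C) = P(C | E) · P_new(E) + P(C | ¬E) · P_new(¬E); when the evidence becomes certain (P_new(E) = 1), this reduces to ordinary Bayesian conditioning.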
  • What should default reasoning be, by default? Jeff Pelletier - unknown.
    This is a position paper concerning the role of empirical studies of human default reasoning in the formalization of AI theories of default reasoning. We note that AI motivates its theoretical enterprise by reference to human skill at default reasoning, but that the actual research does not make any use of this sort of information and instead relies on intuitions of individual investigators. We discuss two reasons theorists might not consider human performance relevant to formalizing default reasoning: (a) that intuitions (...)