References

  • Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. Christopher Manning - manuscript
    Almost no NLP task is truly standalone: the end-to-end performance of natural language processing systems depends on upstream stages such as word segmentation for languages like Chinese. Most current systems for higher-level, aggregate tasks take the single best output of each earlier stage as given, so errors cascade; the paper proposes approximate Bayesian inference over the whole annotation pipeline instead.
  • Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy. Thomas Icard - 2020 - Journal of Mathematical Psychology 95.
    A probabilistic Chomsky–Schützenberger hierarchy of grammars is introduced and studied, with the aim of understanding the expressive power of generative models. We offer characterizations of the distributions definable at each level of the hierarchy, including probabilistic regular, context-free, (linear) indexed, context-sensitive, and unrestricted grammars, each corresponding to familiar probabilistic machine classes. Special attention is given to distributions on (unary notations for) positive integers. Unlike in the classical case where the "semi-linear" languages all collapse into the regular languages, using analytic tools (...) (A toy sketch of the regular case follows this entry.)
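    A toy sketch of the lowest level of this hierarchy (my own illustration, not code from the paper; the function names and the parameter p are assumptions): the two-rule probabilistic regular grammar S -> "1" S (probability p), S -> "1" (probability 1 - p) puts a geometric distribution on unary notations of the positive integers, P(1^n) = p^(n-1) * (1 - p).

      import random

      def sample_unary(p=0.5):
          # Derive a string from S: apply S -> "1" S with probability p,
          # otherwise apply S -> "1" and stop.
          s = "1"
          while random.random() < p:
              s += "1"
          return s

      def prob_unary(n, p=0.5):
          # Probability of the unary string 1^n: n - 1 recursive steps, one stop.
          return p ** (n - 1) * (1 - p)

      print(sample_unary())                              # e.g. "111"
      print(sum(prob_unary(n) for n in range(1, 60)))    # close to 1.0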
  • Appellate Court Modifications Extraction for Portuguese. William Paulo Ducca Fernandes, Luiz José Schirmer Silva, Isabella Zalcberg Frajhof, Guilherme da Franca Couto Fernandes de Almeida, Carlos Nelson Konder, Rafael Barbosa Nasser, Gustavo Robichez de Carvalho, Simone Diniz Junqueira Barbosa & Hélio Côrtes Vieira Lopes - 2020 - Artificial Intelligence and Law 28 (3):327-360.
    Appellate Court Modifications Extraction consists of, given an Appellate Court decision, identifying the proposed modifications by the upper Court of the lower Court judge’s decision. In this work, we propose a system to extract Appellate Court Modifications for Portuguese. Information extraction for legal texts has been previously addressed using different techniques and for several languages. Our proposal differs from previous work in two ways: our corpus is composed of Brazilian Appellate Court decisions, in which we look for a set of (...)
  • Expectation-based syntactic comprehension. Roger Levy - 2008 - Cognition 106 (3):1126-1177.
  • The Stanford typed dependencies representation. Christopher D. Manning - unknown
    This paper examines the Stanford typed dependencies representation, which was designed to provide a straightforward description of grammatical relations for any user who could benefit from automatic text understanding. For such purposes, we argue that dependency schemes must follow a simple design and provide semantically contentful information, as well as offer an automatic procedure to extract the relations. We consider the underlying design principles of the Stanford scheme from this perspective, and compare it to the GR and PARC representations. Finally, (...) (A toy illustration of the representation follows this entry.)
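    As a toy illustration of what the representation looks like (my own example, not one from the paper; the sentence and the tuple encoding are assumptions, while det, nsubj, prep, pobj, and the collapsed prep_on are Stanford relation names), here are the typed dependencies for "The dog sat on the mat." written as (relation, governor, dependent) triples:

      # Basic typed dependencies for "The dog sat on the mat."
      basic = [
          ("det",   "dog-2", "The-1"),
          ("nsubj", "sat-3", "dog-2"),
          ("prep",  "sat-3", "on-4"),
          ("det",   "mat-6", "the-5"),
          ("pobj",  "on-4",  "mat-6"),
      ]

      # The collapsed variant folds the preposition into the relation name,
      # giving a more semantically contentful link between content words.
      collapsed = [
          ("det",     "dog-2", "The-1"),
          ("nsubj",   "sat-3", "dog-2"),
          ("det",     "mat-6", "the-5"),
          ("prep_on", "sat-3", "mat-6"),
      ]

      for rel, gov, dep in collapsed:
          print(f"{rel}({gov}, {dep})")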
  • Finding contradictions in text. Christopher Manning - manuscript
    With Marie-Catherine de Marneffe and Anna N. Rafferty (Stanford University).
  • A note on the expressive power of probabilistic context free grammars. Gabriel Infante-Lopez & Maarten De Rijke - 2006 - Journal of Logic, Language and Information 15 (3):219-231.
    We examine the expressive power of probabilistic context free grammars (PCFGs), with a special focus on the use of probabilities as a mechanism for reducing ambiguity by filtering out unwanted parses. Probabilities in PCFGs induce an ordering relation among the set of trees that yield a given input sentence. PCFG parsers return the trees bearing the maximum probability for a given sentence, discarding all other possible trees. This mechanism is naturally viewed as a way of defining a new class of (...) (A toy example of this filtering mechanism follows this entry.)
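    A toy example of the filtering mechanism described above (my own sketch; the grammar, the sentence, and all probabilities are invented): rule probabilities induce an ordering over the parse trees of an ambiguous sentence, and a PCFG parser returns only the maximum-probability tree.

      from math import prod

      # A toy PCFG (invented numbers) for the classic attachment ambiguity
      # in "I saw the man with a telescope".
      P = {
          ("S",  ("NP", "VP")):      1.0,
          ("VP", ("V", "NP", "PP")): 0.3,   # PP modifies the seeing event
          ("VP", ("V", "NP")):       0.7,
          ("NP", ("NP", "PP")):      0.2,   # PP modifies "the man"
          ("NP", ("D", "N")):        0.4,
          ("NP", ("Pro",)):          0.4,
          ("PP", ("P", "NP")):       1.0,
          ("Pro", ("I",)):   1.0, ("V", ("saw",)): 1.0, ("P", ("with",)): 1.0,
          ("D", ("the",)):   0.5, ("D", ("a",)):   0.5,
          ("N", ("man",)):   0.5, ("N", ("telescope",)): 0.5,
      }

      LEX = [("Pro", ("I",)), ("V", ("saw",)), ("D", ("the",)), ("N", ("man",)),
             ("P", ("with",)), ("D", ("a",)), ("N", ("telescope",))]

      # Each candidate parse, written out as the multiset of rules it uses.
      verb_attach = [("S", ("NP", "VP")), ("NP", ("Pro",)), ("VP", ("V", "NP", "PP")),
                     ("NP", ("D", "N")), ("PP", ("P", "NP")), ("NP", ("D", "N"))] + LEX
      noun_attach = [("S", ("NP", "VP")), ("NP", ("Pro",)), ("VP", ("V", "NP")),
                     ("NP", ("NP", "PP")), ("NP", ("D", "N")), ("PP", ("P", "NP")),
                     ("NP", ("D", "N"))] + LEX

      def tree_prob(rules):
          # The probability of a tree is the product of its rule probabilities.
          return prod(P[r] for r in rules)

      candidates = {"verb attachment": verb_attach, "noun attachment": noun_attach}
      for name, rules in candidates.items():
          print(f"{name}: {tree_prob(rules):.6f}")
      best = max(candidates, key=lambda name: tree_prob(candidates[name]))
      print("parser returns:", best)   # the lower-probability tree is filtered out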
  • Unsupervised and few-shot parsing from pretrained language models. Zhiyuan Zeng & Deyi Xiong - 2022 - Artificial Intelligence 305 (C):103665.
  • Recurrent neural network-based models for recognizing requisite and effectuation parts in legal texts. Truong-Son Nguyen, Le-Minh Nguyen, Satoshi Tojo, Ken Satoh & Akira Shimazu - 2018 - Artificial Intelligence and Law 26 (2):169-199.
    This paper proposes several recurrent neural network-based models for recognizing requisite and effectuation parts in legal texts. Firstly, we propose a modification of the BiLSTM-CRF model that allows the use of external features to improve the performance of deep learning models in case large annotated corpora are not available. However, this model can only recognize RE parts which are not overlapped. Secondly, we propose two approaches for recognizing overlapping RE parts including the cascading approach which uses the sequence of BiLSTM-CRF models (...) (A minimal sketch of the BiLSTM backbone follows this entry.)
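    A minimal PyTorch sketch of the sequence-labelling backbone used in such models (my own illustration, not the authors' code; the class name, tag set, vocabulary size, and dimensions are assumptions, and the CRF output layer is replaced here by a plain per-token argmax rather than joint Viterbi decoding):

      import torch
      import torch.nn as nn

      class BiLSTMTagger(nn.Module):
          """Tags each token, e.g. with B-R/I-R/B-E/I-E/O labels for
          requisite (R) and effectuation (E) parts."""
          def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, emb_dim)
              self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
              # A real BiLSTM-CRF would add a CRF layer here so that tag
              # transitions are scored jointly; we only emit per-token scores.
              self.out = nn.Linear(2 * hidden_dim, num_tags)

          def forward(self, token_ids):                 # (batch, seq_len)
              h, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
              return self.out(h)                        # (batch, seq_len, num_tags)

      model = BiLSTMTagger(vocab_size=5000, num_tags=5)
      tokens = torch.randint(0, 5000, (2, 12))          # a fake batch of 2 sentences
      tags = model(tokens).argmax(dim=-1)               # greedy decode, not Viterbi
      print(tags.shape)                                 # torch.Size([2, 12])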
  • Natural Logic for Textual Inference. Christopher D. Manning - unknown
    This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds (...) (A toy monotonicity example follows this entry.)
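    A toy illustration of the monotonicity reasoning such systems handle (my own sketch, not the paper's system; the tiny hypernym list and the function names are invented): "every" is downward monotone in its restrictor and upward monotone in its body, so specializing the restrictor or generalizing the body preserves truth.

      # A tiny "is-a" hierarchy (invented for illustration).
      HYPERNYM = {"poodle": "dog", "dog": "animal", "run": "move"}

      def entails(a, b):
          """True if concept a is at least as specific as concept b."""
          while a != b:
              if a not in HYPERNYM:
                  return False
              a = HYPERNYM[a]
          return True

      def every_entails(premise, hypothesis):
          """Does 'every X1 Y1' entail 'every X2 Y2'?  The restrictor X is a
          downward-monotone position (X2 may be more specific than X1); the
          body Y is upward monotone (Y2 may be more general than Y1)."""
          (x1, y1), (x2, y2) = premise, hypothesis
          return entails(x2, x1) and entails(y1, y2)

      # "Every dog runs" entails "Every poodle moves" ...
      print(every_entails(("dog", "run"), ("poodle", "move")))   # True
      # ... but not "Every animal runs".
      print(every_entails(("dog", "run"), ("animal", "run")))    # False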
  • Generating Typed Dependency Parses from Phrase Structure Parses. Christopher Manning - unknown
    This paper describes a system for extracting typed dependency parses of English sentences from phrase structure parses. In order to capture inherent relations occurring in corpus texts that can be critical in real-world applications, many NP relations are included in the set of grammatical relations used. We provide a comparison of our system with Minipar and the Link parser. The typed dependency extraction facility described here is integrated in the Stanford Parser, available for download. (A simplified sketch of such a conversion follows this entry.)
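    A simplified sketch of the general idea (my own toy, not the Stanford converter; real head rules and the full relation inventory are far richer, and the HEAD/REL tables here are invented): pick a head child for each constituent via head rules, percolate head words upward, and link every non-head child's head word to the head word of its parent with a labelled relation.

      # Each tree node is (label, children); preterminals have a single string leaf.
      TREE = ("S",
              [("NP", [("DT", ["The"]), ("NN", ["dog"])]),
               ("VP", [("VBD", ["chased"]),
                       ("NP", [("DT", ["the"]), ("NN", ["cat"])])])])

      # Very crude head rules: which child label heads each constituent.
      HEAD = {"S": "VP", "NP": "NN", "VP": "VBD"}
      # Crude relation labels keyed on (parent label, child label).
      REL = {("S", "NP"): "nsubj", ("VP", "NP"): "dobj", ("NP", "DT"): "det"}

      def head_word(node):
          label, children = node
          if isinstance(children[0], str):          # preterminal: the leaf is the head
              return children[0]
          head_child = next(c for c in children if c[0] == HEAD[label])
          return head_word(head_child)

      def dependencies(node, deps=None):
          deps = [] if deps is None else deps
          label, children = node
          if isinstance(children[0], str):
              return deps
          h = head_word(node)
          for child in children:
              if child[0] != HEAD[label]:
                  rel = REL.get((label, child[0]), "dep")
                  deps.append((rel, h, head_word(child)))
              dependencies(child, deps)
          return deps

      for rel, gov, dep in dependencies(TREE):
          print(f"{rel}({gov}, {dep})")
      # nsubj(chased, dog)  det(dog, The)  dobj(chased, cat)  det(cat, the)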
  • Efficient, Feature-based, Conditional Random Field Parsing. Christopher D. Manning - unknown
    Discriminative feature-based methods are widely used in natural language processing, but sentence parsing is still dominated by generative methods. While prior feature-based dynamic programming parsers have restricted training and evaluation to artificially short sentences, we present the first general, feature-rich discriminative parser, based on a conditional random field model, which has been successfully scaled to the full WSJ parsing data. Our efficiency is primarily due to the use of stochastic optimization techniques, as well as parallelization and chart prefiltering. On WSJ15, (...)
  • Max-Margin parsing. Christopher Manning - manuscript
    With Ben Taskar, Dan Klein, and Michael Collins.
  • Unsupervised discovery of a statistical verb lexicon. Christopher Manning - manuscript
    Determining the semantic roles of a verb’s dependents is an important step in natural language understanding.
  • The infinite tree. Christopher Manning - manuscript
    Proposes tree models in which the number of hidden categories is not fixed in advance but can grow with the amount of training data, building on nonparametric Bayesian methods for settings where the number of hidden states is unknown (Beal et al., 2002; Teh et al., 2006). (A toy sketch of this nonparametric idea follows this entry.)
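    A toy sketch of the nonparametric intuition only (my own illustration, not the paper's hierarchical Dirichlet-process model over trees; the function name and concentration parameter alpha are assumptions): under a Chinese-restaurant-process prior, each new observation either joins an existing hidden category or opens a new one, so the number of categories is unbounded and tends to grow with the amount of data.

      import random
      from collections import Counter

      def crp_categories(num_items, alpha=1.0, seed=0):
          """Assign items to categories with a Chinese restaurant process:
          item n joins an existing category k with probability count[k]/(n+alpha)
          and opens a brand-new category with probability alpha/(n+alpha)."""
          rng = random.Random(seed)
          counts = Counter()
          assignments = []
          for n in range(num_items):
              # Weights for existing categories plus one potential new category.
              cats = list(counts) + [len(counts)]
              weights = [counts[c] for c in counts] + [alpha]
              k = rng.choices(cats, weights=weights)[0]
              counts[k] += 1
              assignments.append(k)
          return assignments, len(counts)

      for n in (10, 100, 1000):
          _, num_cats = crp_categories(n)
          print(f"{n} items -> {num_cats} categories")   # grows roughly like log(n)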
  • Syntax-aware entity representations for neural relation extraction. Zhengqiu He, Wenliang Chen, Zhenghua Li, Wei Zhang, Hao Shao & Min Zhang - 2019 - Artificial Intelligence 275 (C):602-617.
  • Semantic sensitive tensor factorization. Makoto Nakatsuji, Hiroyuki Toda, Hiroshi Sawada, Jin Guang Zheng & James A. Hendler - 2016 - Artificial Intelligence 230 (C):224-245.
  • Faster shift-reduce constituent parsing with a non-binary, bottom-up strategy. Daniel Fernández-González & Carlos Gómez-Rodríguez - 2019 - Artificial Intelligence 275 (C):559-574.
  • Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics? Christopher D. Manning - unknown
    I examine what would be necessary to move part-of-speech tagging performance from its current level of about 97.3% token accuracy (56% sentence accuracy) to close to 100% accuracy. I suggest that it must still be possible to greatly increase tagging performance and examine some useful improvements that have recently been made to the Stanford Part-of-Speech Tagger. However, an error analysis of some of the remaining errors suggests that there is limited further mileage to be had either from better machine learning (...)
  • Modeling Semantic Containment and Exclusion in Natural Language Inference. Christopher D. Manning - unknown
    We propose an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We greatly extend past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. Our system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; (...)
  • Learning to recognize features of valid textual entailments. Christopher Manning - unknown
    Argues that finding an alignment between text and hypothesis should be separated from evaluating entailment, in contrast with current approaches to semantic inference in question answering and related tasks.
  • When variables align: A Bayesian multinomial mixed-effects model of English permissive constructions. Natalia Levshina - 2016 - Cognitive Linguistics 27 (2):235-268.