On this page we list semi-supervised learning work appearing at ACL this year. For each paper, we give its abstract rather than our own summary. If there is one take-away message from this page, it's that semi-supervised learning for NLP is an interesting and active research area. We encourage you to attend the talks and posters for each of the papers below!
Terry Koo, Xavier Carreras, and Michael Collins. Simple Semi-supervised Dependency Parsing.
We present a simple and effective semi-supervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02% to 93.16%, and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13% to 87.13%. In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance.
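The core idea of cluster-based lexical features can be sketched roughly as follows. This is an illustrative sketch only: the bit-string cluster map (such as one produced by Brown clustering over the unannotated corpus) and the feature names are made up for this example, not taken from the paper's code.

```python
# Hypothetical word -> bit-string cluster map, e.g. from Brown clustering.
CLUSTERS = {
    "the": "0010",
    "dog": "1011",
    "barks": "1100",
}

def cluster_features(head, modifier, prefix_lengths=(2, 4)):
    """Emit head/modifier cluster features at several prefix granularities,
    so the parser can generalize across words in the same cluster."""
    feats = []
    for k in prefix_lengths:
        h = CLUSTERS.get(head, "UNK")[:k]
        m = CLUSTERS.get(modifier, "UNK")[:k]
        feats.append(f"hc{k}={h}|mc{k}={m}")
    return feats

print(cluster_features("barks", "dog"))
# → ['hc2=11|mc2=10', 'hc4=1100|mc4=1011']
```

Short cluster prefixes give coarse word classes and longer prefixes give finer ones, which is what lets the same feature fire for rare words that share a cluster with frequent ones.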
Percy Liang and Dan Klein. Analyzing the Errors of Unsupervised Learning.
We identify four types of errors that unsupervised induction systems make and study each one in turn. Our contributions include (1) using a meta-model to analyze the incorrect biases of a model in a systematic way, (2) providing an efficient and robust method of measuring distance between two parameter settings of a model, and (3) showing that local optima issues which typically plague EM can be somewhat alleviated by increasing the number of training examples. We conduct our analyses on three models: the HMM, the PCFG, and a simple dependency model.
Gideon Mann and Andrew McCallum. Generalized Expectation Criteria for Semi-supervised Learning of Conditional Random Fields.
This paper presents a semi-supervised training method for linear-chain conditional random fields that makes use of labeled features rather than labeled instances. This is accomplished by using generalized expectation criteria to express a preference for parameter settings in which the model's distribution on unlabeled data matches a target distribution. We induce target conditional probability distributions of labels given features from both annotated feature occurrences in context and ad hoc feature majority label assignment. The use of generalized expectation criteria allows for a dramatic reduction in annotation time by shifting from traditional instance-labeling to feature-labeling, and the methods presented outperform traditional CRF training and other semi-supervised methods when limited human effort is available.
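The central mechanism, penalizing divergence between the model's label distribution on unlabeled data carrying a labeled feature and the annotator's target distribution, can be sketched as below. The distributions and the exact KL penalty form here are illustrative stand-ins, not the paper's full CRF objective.

```python
import math

def ge_penalty(model_dist, target_dist):
    """KL(target || model): a generalized-expectation-style penalty that
    pushes the model's label distribution, computed over unlabeled tokens
    carrying a labeled feature, toward the annotator's target."""
    return sum(t * math.log(t / max(m, 1e-12))
               for t, m in zip(target_dist, model_dist) if t > 0)

# Toy example: the annotator labels a feature as 90% one label, 10% the
# other; the model currently predicts 60/40 on unlabeled occurrences.
target = [0.9, 0.1]
model = [0.6, 0.4]
print(ge_penalty(target, target))            # → 0.0 (no penalty when matched)
print(round(ge_penalty(model, target), 3))   # → 0.226 (penalized divergence)
```

Minimizing this penalty alongside the usual likelihood term is what lets feature labels, rather than instance labels, drive training.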
David McClosky and Eugene Charniak. Self-Training for Biomedical Parsing.
Parser self-training is the technique of taking an existing parser, parsing extra data and then creating a second parser by treating the extra data as further training data. Here we apply this technique to parser adaptation. In particular, we self-train the standard Charniak/Johnson Penn Treebank parser using unlabeled biomedical abstracts. This achieves an f-score of 84.3% on a standard test set of biomedical abstracts from the Genia corpus. This is a 20% error reduction over the best previous result on biomedical data (80.2% on the same test set).
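The self-training recipe described in the first sentence is simple enough to state as a loop. The `train` and `parse` callables below are hypothetical stand-ins for illustration (the paper itself uses the Charniak/Johnson parser):

```python
def self_train(labeled, unlabeled, train, parse):
    """Train a first parser, parse the extra data, then retrain on the
    union of gold and automatically parsed material."""
    model1 = train(labeled)
    auto = [(s, parse(model1, s)) for s in unlabeled]
    return train(labeled + auto)

# Toy stand-ins so the loop runs end to end (not real parsing):
toy_train = lambda data: dict(data)            # "model" = memorized pairs
toy_parse = lambda model, s: model.get(s, "flat-tree")
model2 = self_train([("a", "tree_a")], ["b", "c"], toy_train, toy_parse)
print(model2)   # → {'a': 'tree_a', 'b': 'flat-tree', 'c': 'flat-tree'}
```

What makes the paper interesting is not the loop itself but that the unlabeled data comes from a different domain (biomedical abstracts), turning self-training into a domain-adaptation method.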
Marius Pasca and Benjamin Van Durme. Weakly-Supervised Acquisition of Open-Domain Classes and Class Attributes from Web Documents and Query Logs.
A new approach to large-scale information extraction exploits both Web documents and query logs to acquire thousands of open-domain classes of instances, along with relevant sets of open-domain class attributes at precision levels previously obtained only on small-scale, manually-assembled classes.
Jun Suzuki and Hideki Isozaki. Semi-Supervised Sequential Labeling and Segmentation using Giga-word Scale Unlabeled Data.
This paper provides evidence that the use of more unlabeled data in semi-supervised learning can improve the performance of NLP tasks, such as part-of-speech tagging, syntactic chunking, and named entity recognition. We first propose a simple yet powerful semi-supervised discriminative model appropriate for handling large scale unlabeled data. Then, we describe experiments performed on widely used test collections, namely, PTB III data, CoNLL’00 and ’03 shared task data for the above three NLP tasks, respectively. We incorporate up to 1G-words (one billion tokens) of unlabeled data, which is the largest amount of unlabeled data ever used for these tasks, to explore the performance improvement. In addition, our results surpass the best reported results for all of the above test collections.
Qin Iris Wang, Dale Schuurmans, and Dekang Lin. Semi-supervised Convex Training for Dependency Parsing.
We present a novel semi-supervised training algorithm for learning dependency parsers. By combining a supervised large margin loss with an unsupervised least squares loss, a discriminative, convex, semi-supervised learning algorithm can be obtained that is applicable to large-scale problems. To demonstrate the benefits of this approach, we apply the technique to learning dependency parsers from combined labeled and unlabeled corpora. Using a stochastic gradient descent algorithm, a parsing model can be efficiently learned from semi-supervised data that significantly outperforms corresponding supervised methods.
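The shape of the combined objective, a supervised large-margin loss plus an unsupervised least-squares loss, both convex in the weights, can be sketched as follows. The feature vectors, the zero target in the least-squares term, and the mixing weight `alpha` are illustrative, not the paper's exact structured formulation.

```python
def dot(w, x):
    return sum(a * b for a, b in zip(w, x))

def combined_loss(w, labeled, unlabeled, alpha=0.5):
    # Large-margin (hinge) loss: the correct structure should outscore
    # the incorrect one by a margin of 1 (convex in w).
    hinge = sum(max(0.0, 1.0 - (dot(w, good) - dot(w, bad)))
                for good, bad in labeled)
    # Least-squares loss on unlabeled data, also convex in w, so the
    # combined objective stays convex and SGD-friendly.
    lsq = sum(dot(w, x) ** 2 for x in unlabeled)
    return hinge + alpha * lsq

w = [1.0, -0.5]
labeled = [([2.0, 0.0], [0.0, 2.0])]   # one (good, bad) feature-vector pair
unlabeled = [[1.0, 1.0]]
print(combined_loss(w, labeled, unlabeled))   # → 0.125
```

Convexity is the selling point: unlike EM-style semi-supervised objectives, a sum of convex terms has no local optima, so stochastic gradient descent reaches the global solution.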