2009 SemiSupervisedSequenceLabelingWithSelfLearnedFeatures


Subject Headings: Semi-Supervised Learning, Sequence Labeling, Self-Learned Features (SLF), Named Entity Recognition, Part-of-Speech Tagging.

Notes

Cited by

Quotes

Abstract

Typical information extraction (IE) systems can be seen as performing tasks that assign labels to words in a natural language sequence, and their performance is restricted by the availability of labeled words. To tackle this issue, we propose a semi-supervised approach that improves the sequence labeling procedure in IE through a class of algorithms with self-learned features (SLF). A supervised classifier is trained on annotated text sequences and used to classify each word in a large set of unannotated sentences. By averaging the predicted labels over all occurrences in the unlabeled corpus, SLF training builds a class-label distribution for each word (or word attribute) in the dictionary and iteratively re-trains the current model with these distributions added as extra word features. Basic SLF models how likely a word is to be assigned to each target class type. Several extensions are proposed, such as learning words' class-boundary distributions. SLF exhibits robust and scalable behavior and is easy to tune. We applied this approach to four classical IE tasks: named entity recognition (German and English), part-of-speech tagging (English), and gene name recognition (one corpus). Experimental results show improvements over the supervised baselines on all tasks. In addition, when compared with the closely related self-training idea, this approach shows clear advantages.
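The SLF loop the abstract describes — train a supervised tagger, predict labels for every word occurrence in an unlabeled corpus, average those predictions into a per-word class-label distribution, and re-train with that distribution appended as extra word features — can be sketched as follows. This is a minimal illustration only: `NaiveTagger`, `slf_train`, and the data layout are assumptions made for the sketch, and the toy frequency-based tagger stands in for the paper's actual classifier.

```python
from collections import defaultdict

class NaiveTagger:
    """Per-word frequency tagger standing in for the paper's supervised
    classifier (an illustrative assumption, not the authors' model)."""
    def __init__(self, num_classes):
        self.num_classes = num_classes
        self.counts = defaultdict(lambda: [0.0] * num_classes)

    def _key(self, word, extra):
        # Bucket the continuous SLF features so this counting model
        # can consume them as discrete features.
        return (word, tuple(round(x, 1) for x in extra[word]))

    def fit(self, labeled, extra):
        for sent, tags in labeled:
            for word, tag in zip(sent, tags):
                self.counts[self._key(word, extra)][tag] += 1.0
        return self

    def predict_proba(self, word, extra):
        dist = self.counts.get(self._key(word, extra))
        if not dist or sum(dist) == 0.0:
            # Unseen word: fall back to a uniform label distribution.
            return [1.0 / self.num_classes] * self.num_classes
        z = sum(dist)
        return [c / z for c in dist]

def slf_train(labeled, unlabeled, num_classes, iterations=3):
    """Alternate between supervised training and building per-word
    class-label distributions from predictions on the unlabeled corpus."""
    extra = defaultdict(lambda: [0.0] * num_classes)  # word -> SLF features
    model = None
    for _ in range(iterations):
        # 1. (Re)train with the current SLF features as extra word features.
        model = NaiveTagger(num_classes).fit(labeled, extra)
        # 2. Classify each word occurrence in the unlabeled corpus.
        sums = defaultdict(lambda: [0.0] * num_classes)
        seen = defaultdict(int)
        for sent in unlabeled:
            for word in sent:
                for c, p in enumerate(model.predict_proba(word, extra)):
                    sums[word][c] += p
                seen[word] += 1
        # 3. Average the predicted labels over all occurrences of each word.
        extra = defaultdict(lambda: [0.0] * num_classes,
                            {w: [s / seen[w] for s in sums[w]] for w in sums})
    return model, extra

if __name__ == "__main__":
    labeled = [(["John", "lives", "in", "Paris"], [1, 0, 0, 2])]  # 0=O 1=PER 2=LOC
    unlabeled = [["Paris", "is", "big"], ["John", "met", "Mary"]]
    model, slf_features = slf_train(labeled, unlabeled, num_classes=3)
    print(slf_features["Paris"])  # class-label distribution learned for "Paris"
```

In the paper the underlying tagger is a trained sequence classifier and the averaged distributions are appended to each word's input representation; the feature-bucketing trick above exists only so the toy counting model can consume continuous features.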


Author: Yanjun Qi, Pavel Kuksa, Ronan Collobert, Kunihiko Sadamasa, Koray Kavukcuoglu, Jason Weston
Title: Semi-Supervised Sequence Labeling with Self-Learned Features
Proceedings: ICDM 2009 Proceedings
DOI: 10.1109/ICDM.2009.40
Year: 2009