2010 LearningWordClassLatticesforDef
- (Navigli & Velardi, 2010) ⇒ Roberto Navigli, and Paola Velardi. (2010). “Learning Word-class Lattices for Definition and Hypernym Extraction.” In: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-2010).
Subject Headings: Textual Definition, Definition Extraction.
Notes
Cited By
- Google Scholar: ~ 177 Citations, Retrieved: 2020-11-08.
- http://dl.acm.org/citation.cfm?id=1858681.1858815&preflayout=flat#citedby
2013
- (Boella & Di Caro, 2013) ⇒ Guido Boella, and Luigi Di Caro. (2013). “Extracting Definitions and Hypernym Relations Relying on Syntactic Dependencies and Support Vector Machines.” In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL-2013).
2012
- (Navigli & Ponzetto, 2012) ⇒ Roberto Navigli, and Simone Paolo Ponzetto. (2012). “BabelNet: The Automatic Construction, Evaluation and Application of a Wide-coverage Multilingual Semantic Network.” In: Artificial Intelligence Journal, 193. doi:10.1016/j.artint.2012.07.001
2011
- (Navigli et al., 2011) ⇒ Roberto Navigli, Paola Velardi, and Stefano Faralli. (2011). “A Graph-based Algorithm for Inducing Lexical Taxonomies from Scratch.” In: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three. doi:10.5591/978-1-57735-516-8/IJCAI11-313
Quotes
Abstract
Definition extraction is the task of automatically identifying definitional sentences within texts. The task has proven useful in many research areas including ontology learning, relation extraction and question answering. However, current approaches – mostly focused on lexico-syntactic patterns – suffer from both low recall and precision, as definitional sentences occur in highly variable syntactic structures. In this paper, we propose Word-Class Lattices (WCLs), a generalization of word lattices that we use to model textual definitions. Lattices are learned from a dataset of definitions from Wikipedia. Our method is applied to the task of definition and hypernym extraction and compares favorably to other pattern generalization methods proposed in the literature.
1 Introduction
Textual definitions constitute a fundamental source to look up when the meaning of a term is sought. Definitions are usually collected in dictionaries and domain glossaries for consultation purposes. However, manually constructing and updating glossaries requires the cooperative effort of a team of domain experts. Further, in the presence of new words or usages, and – even worse – new domains, such resources are of no help. Nonetheless, terms are attested in texts and some (usually few) of the sentences in which a term occurs are typically definitional, that is, they provide a formal explanation for the term of interest. While it is not feasible to manually search texts for definitions, this task can be automated by means of Machine Learning (ML) and Natural Language Processing (NLP) techniques.
Automatic definition extraction is useful not only in the construction of glossaries, but also in many other NLP tasks. In ontology learning, definitions are used to create and enrich concepts with textual information (Gangemi et al., 2003), and extract taxonomic and non-taxonomic relations (Snow et al., 2004; Navigli and Velardi, 2006; Navigli, 2009a). Definitions are also harvested in Question Answering to deal with "what is"
questions (Cui et al., 2007; Saggion, 2004). In eLearning, they are used to help students assimilate knowledge (Westerhout and Monachesi, 2007), etc.
Much of the current literature focuses on the use of lexico-syntactic patterns, inspired by Hearst’s (1992) seminal work. However, these methods suffer both from low recall and precision, as definitional sentences occur in highly variable syntactic structures, and because the most frequent definitional pattern – “X is a Y” – is inherently very noisy.
In this paper, we propose a generalized form of word lattices, called Word-Class Lattices (WCLs), as an alternative to lexico-syntactic pattern learning. A lattice is a directed acyclic graph (DAG), a subclass of non-deterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. In computational linguistics, lattices have been used to model in a compact way many sequences of symbols, each representing an alternative hypothesis. Lattice-based methods differ in the types of nodes (words, phonemes, concepts), the interpretation of links (representing either a sequential or hierarchical ordering between nodes), their means of creation, and the scoring method used to extract the best consensus output from the lattice (Schroeder et al., 2009). In speech processing, phoneme or word lattices (Campbell et al., 2007; Mathias and Byrne, 2006; Collins et al., 2004) are used as an interface between speech recognition and understanding. Lattices are adopted also in Chinese word segmentation (Jiang et al., 2008), decompounding in German (Dyer, 2009), and to represent classes of translation models in machine translation (Dyer et al., 2008; Schroeder et al., 2009). In more complex text processing tasks, such as information retrieval, information extraction and summarization, the use of word lattices has been postulated but is considered unrealistic because of the dimension of the hypothesis space.
To reduce this problem, concept lattices have been proposed (Carpineto and Romano, 2005; Klein, 2008; Zhong et al., 2008). Here links represent hierarchical relations, rather than the sequential order of symbols like in word/phoneme lattices, and nodes are clusters of salient words aggregated using synonymy, similarity, or subtrees of a thesaurus. However, salient word selection and aggregation is non-obvious and furthermore it falls into word sense disambiguation, a notoriously AI-hard problem (Navigli, 2009b).
In definition extraction, the variability of patterns is higher than for “traditional” applications of lattices, such as translation and speech; however, it is not as high as in unconstrained sentences. The methodology that we propose to align patterns is based on the use of star (wildcard *) characters to facilitate sentence clustering. Each cluster of sentences is then generalized to a lattice of word classes (each class being either a frequent word or a part of speech). A key feature of our approach is its inherent ability to both identify definitions and extract hypernyms. The method is tested on an annotated corpus of Wikipedia sentences and a large Web corpus, in order to demonstrate the independence of the method from the annotated dataset. WCLs are shown to generalize over lexico-syntactic patterns, and outperform well-known approaches to definition and hypernym extraction.
The paper is organized as follows: Section 2 discusses related work, WCLs are introduced in Section 3 and illustrated by means of an example in Section 4, experiments are presented in Section 5. We conclude the paper in Section 6.
2 Related Work
Definition Extraction. A great deal of work is concerned with definition extraction in several languages (Klavans and Muresan, 2001; Storrer and Wellinghoff, 2006; Gaudio and Branco, 2007; Iftene et al., 2007; Westerhout and Monachesi, 2007; Przepiórkowski et al., 2007; Degórski et al., 2008). The majority of these approaches use symbolic methods that depend on lexico-syntactic patterns or features, which are manually crafted or semi-automatically learned (Zhang and Jiang, 2009; Hovy et al., 2003; Fahmi and Bouma, 2006; Westerhout, 2009). Patterns are either very simple sequences of words (e.g. “refers to”, “is defined as”, “is a”) or more complex sequences of words, parts of speech and chunks. A fully automated method is instead proposed by Borg et al. (2009): they use genetic programming to learn simple features to distinguish between definitions and non-definitions, and then they apply a genetic algorithm to learn individual weights of features. However, rules are learned for only one category of patterns, namely “is” patterns. As we already remarked, most methods suffer from both low recall and precision, because definitional sentences occur in highly variable and potentially noisy syntactic structures. Higher performance (around 60-70% F1-measure) is obtained only for specific domains (e.g., an ICT corpus) and patterns (Borg et al., 2009).
Only a few papers try to cope with the generality of patterns and domains in real-world corpora (like the Web). In the GlossExtractor web-based system (Velardi et al., 2008), to improve precision while keeping pattern generality, candidates are pruned using more refined stylistic patterns and lexical filters. Cui et al. (2007) propose the use of probabilistic lexico-semantic patterns, called soft patterns, for definitional question answering in the TREC contest[1]. The authors describe two soft matching models: one is based on an n-gram language model (with the Expectation Maximization algorithm used to estimate the model parameter), the other on Profile Hidden Markov Models (PHMM). Soft patterns generalize over lexico-syntactic “hard” patterns in that they allow a partial matching by calculating a generative degree-of-match probability between the test instance and the set of training instances. Thanks to its generalization power, this method is the most closely related to our work; however, the task of definitional question answering to which it is applied is slightly different from that of definition extraction, so a direct performance comparison is not possible.[2] In fact, the TREC evaluation datasets cannot be considered true definitions, but rather text fragments providing some relevant fact about a target term. For example, sentences like: “Bollywood is a Bombay-based film industry” and “700 or more films produced by India with 200 or more from Bollywood” are both “vital” answers for the question “Bollywood”, according to the TREC classification, but the second sentence is not a definition.
Hypernym Extraction. The literature on hypernym extraction offers a higher variability of methods, from simple lexical patterns (Hearst, 1992; Oakes, 2005) to statistical and machine learning techniques (Agirre et al., 2000; Caraballo, 1999; Dolan et al., 1993; Sanfilippo and Poznański, 1992; Ritter et al., 2009). One of the highest-coverage methods is proposed by Snow et al. (2004). They first search sentences that contain two terms which are known to be in a taxonomic relation (term pairs are taken from WordNet (Miller et al., 1990)); then they parse the sentences, and automatically extract patterns from the parse trees. Finally, they train a hypernym classifier based on these features. Lexico-syntactic patterns are generated for each sentence relating a term to its hypernym, and a dependency parser is used to represent them.
3 Word-Class Lattices
3.1 Preliminaries
Notion of definition. In our work, we rely on a formal notion of textual definition. Specifically, given a definition, e.g.: “In computer science, a closure is a first-class function with free variables that are bound in the lexical environment”, we assume that it contains the following fields (Storrer and Wellinghoff, 2006):
- The DEFINIENDUM field (DF): this part of the definition includes the definiendum (that is, the word being defined) and its modifiers (e.g., “In computer science, a closure”);
- The DEFINITOR field (VF): it includes the verb phrase used to introduce the definition (e.g., “is”);
- The DEFINIENS field (GF): it includes the genus phrase (usually including the hypernym, e.g., “a first-class function”);
- The REST field (RF): it includes additional clauses that further specify the differentia of the definiendum with respect to its genus (e.g., “with free variables that are bound in the lexical environment”).
Further examples of definitional sentences annotated with the above fields are shown in Table 1. For each sentence, the definiendum (that is, the word being defined) and its hypernym are marked in bold and italic, respectively. Given the lexico-syntactic nature of the definition extraction models we experiment with, training and test sentences are part-of-speech tagged with the TreeTagger system, a part-of-speech tagger available for many languages (Schmid, 1995).
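The four fields above can be thought of as a simple annotation record per training sentence. The following minimal Python sketch shows the paper's running closure example in that form; the class name and string-valued fields are assumptions of this illustration, not the paper's data format:

```python
from dataclasses import dataclass

@dataclass
class AnnotatedDefinition:
    """Illustrative container for the four definition fields of Section 3.1."""
    definiendum: str  # DF: the defined term and its modifiers
    definitor: str    # VF: the verb phrase introducing the definition
    definiens: str    # GF: the genus phrase, usually containing the hypernym
    rest: str         # RF: clauses specifying the differentia
    hypernym: str     # the hypernym marked inside the genus phrase

closure = AnnotatedDefinition(
    definiendum="In computer science, a closure",
    definitor="is",
    definiens="a first-class function",
    rest="with free variables that are bound in the lexical environment",
    hypernym="function",
)
```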
Word Classes and Generalized Sentences. We now introduce our notion of word class, on which our learning model is based. Let T be the set of training sentences, manually bracketed with the DF, VF, GF and RF fields. We first determine the set F of words in T whose frequency is above a threshold θ (e.g., the, a, is, of, refer, etc.). In our training sentences, we replace the term being defined with (TARGET), thus this frequent token is also included in F.
We use the set of frequent words F to generalize words to “word classes”. We define a word class as either a word itself or its part of speech. Given a sentence [math]\displaystyle{ s = w_1, w_2, \ldots, w_{|s|} }[/math], where w_i is the i-th word of s, we generalize its words w_i to word classes [math]\displaystyle{ \omega_i }[/math] as follows:
- [math]\displaystyle{ \omega_i = \begin{cases} w_i & \text{if } w_i \in F \\ POS(w_i) & \text{otherwise} \end{cases} }[/math]
that is, a word w_i is left unchanged if it occurs frequently in the training corpus (i.e., [math]\displaystyle{ w_i \in F }[/math]) or is transformed to its part of speech ([math]\displaystyle{ POS(w_i) }[/math]) otherwise. As a result, we obtain a generalized sentence [math]\displaystyle{ s' = \omega_1, \omega_2, \ldots, \omega_{|s|} }[/math]. For instance, given the first sentence in Table 1, we obtain the corresponding generalized sentence: “In NN, a (TARGET) is a JJ NN”, where NN and JJ indicate the noun and adjective classes, respectively.
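As an illustration of this generalization step, here is a minimal Python sketch. It uses NLTK's default tagger as a stand-in for the TreeTagger system used in the paper, and the frequent-word set F shown is hypothetical; the threshold and tokenization are likewise assumptions of the sketch:

```python
from collections import Counter
from typing import List, Set

import nltk  # stand-in POS tagger; requires the 'averaged_perceptron_tagger' data


def frequent_words(training_sentences: List[List[str]], threshold: int) -> Set[str]:
    """Collect the set F of words whose frequency in the training corpus exceeds the threshold."""
    counts = Counter(w.lower() for sent in training_sentences for w in sent)
    return {w for w, c in counts.items() if c > threshold}


def generalize(sentence: List[str], frequent: Set[str]) -> List[str]:
    """Map each word to its word class: the word itself if frequent, its POS tag otherwise."""
    tagged = nltk.pos_tag(sentence)  # Penn Treebank tags such as NN, JJ
    return [w if w.lower() in frequent else tag for w, tag in tagged]


# Hypothetical frequent-word set; in the paper F is estimated from the training
# sentences, and the defined term is replaced by the frequent token (TARGET).
F = {"in", ",", "a", "an", "the", "is", "of", "(target)"}
print(generalize("In computer science , a (TARGET) is a first-class function".split(), F))
# roughly: ['In', 'NN', 'NN', ',', 'a', '(TARGET)', 'is', 'a', 'JJ', 'NN']
```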
3.2 Algorithm
We now describe our learning algorithm based on Word-Class Lattices. …
…
3.2.1 Star Patterns
…
3.2.2 Sentence Clustering
…
3.2.3 Word-Class Lattice Construction
Finally, the third step consists of the construction of a Word-Class Lattice for each sentence cluster. Given such a cluster [math]\displaystyle{ C_i \in C }[/math], we apply a greedy algorithm that iteratively constructs the WCL.
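The details of the greedy construction are omitted in this excerpt; the sketch below only illustrates the general idea of merging a cluster of generalized sentences into a DAG with per-node support counts, using difflib alignment as a stand-in for the alignment step of the actual algorithm:

```python
import difflib
from collections import defaultdict


def build_wcl(generalized_sentences):
    """Merge a cluster of generalized sentences into a DAG of word-class nodes.

    Illustrative sketch only: nodes are (anchor position, word class) pairs,
    edges follow sentence order, and each node keeps a support count.
    """
    base = generalized_sentences[0]
    support = defaultdict(int)
    edges = defaultdict(set)

    def add_path(path):
        for node in path:
            support[node] += 1
        for a, b in zip(path, path[1:]):
            edges[a].add(b)

    add_path([(i, cls) for i, cls in enumerate(base)])
    for sent in generalized_sentences[1:]:
        matcher = difflib.SequenceMatcher(a=base, b=sent, autojunk=False)
        path = []
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op == "equal":  # shared word classes collapse onto the base nodes
                path.extend((i1 + k, base[i1 + k]) for k in range(i2 - i1))
            else:              # divergent word classes become alternative nodes
                path.extend((i1, sent[j]) for j in range(j1, j2))
        add_path(path)
    return edges, support
```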
…
Figure 1: The Word-Class Lattice for the sentences in Table 1. The support of each word class is reported beside the corresponding node.
…
3.2.4 Variants of the WCL Model
So far, we have assumed that our WCL model learns lattices from the training sentences in their entirety (we call this model WCL-1). We now propose a second model that learns separate WCLs for each field of the definition, namely: the DEFINIENDUM (DF), DEFINITOR (VF) and DEFINIENS (GF) fields (see Section 3.1). We refer to this latter model as WCL-3. Rather than applying the WCL algorithm to the entire sentence, the very same method is applied to the sentence fragments tagged with one of the three definition fields. The reason for introducing the WCL-3 model is that, while definitional patterns are highly variable, DF, VF and GF individually exhibit a lower variability, thus WCL-3 should improve the generalization power.
3.2.5 Classification
Once the learning process is over, a set of WCLs is produced. Given a test sentence s, the classification phase for the WCL-1 model consists of determining whether there exists a lattice that matches s. In the case of WCL-3, we consider any combination …
where s is the candidate sentence, [math]\displaystyle{ l_{DF} }[/math], [math]\displaystyle{ l_{VF} }[/math] and [math]\displaystyle{ l_{GF} }[/math] are three lattices, one for each definition field, coverage is the fraction of words of the input sentence covered by the three lattices, and support is the sum of the number of sentences in the star patterns corresponding to the three lattices. Finally, when a sentence is classified as a definition, its hypernym is extracted by selecting the words in the input sentence that are marked as “hypernyms” in the WCL-1 lattice (or in the WCL-3 GF lattice).
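A small Python sketch of this combination step is given below. The match() method, the support attribute, and the particular score (coverage weighted by log-support) are assumptions of the illustration; the excerpt above does not reproduce the exact scoring formula:

```python
import math
from itertools import product


def classify_wcl3(tokens, df_lattices, vf_lattices, gf_lattices):
    """Illustrative selection of the best (l_DF, l_VF, l_GF) combination.

    Each lattice object is assumed to expose match(tokens) -> set of covered
    token indices and a support attribute (number of training sentences).
    """
    best, best_score = None, 0.0
    for l_df, l_vf, l_gf in product(df_lattices, vf_lattices, gf_lattices):
        covered, support = set(), 0
        for lattice in (l_df, l_vf, l_gf):
            covered |= lattice.match(tokens)
            support += lattice.support
        coverage = len(covered) / len(tokens)
        score = coverage * math.log(1 + support)  # placeholder combination of the two factors
        if score > best_score:
            best, best_score = (l_df, l_vf, l_gf), score
    return best  # None: no combination matches, i.e. the sentence is not a definition
```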
4 Example
5 Experiments
5.1 Experimental Setup
Datasets. We conducted experiments on two different datasets:
- A corpus of 4,619 Wikipedia sentences, containing 1,908 definitional and 2,711 non-definitional sentences. The former were obtained from a random selection of the first sentences of Wikipedia articles[3]. The defined terms belong to different Wikipedia domain categories[4], so as to capture a representative and cross-domain sample of lexical and syntactic patterns for definitions. These sentences were manually annotated with DEFINIENDUM, DEFINITOR, DEFINIENS and REST fields by an expert annotator, who also marked the hypernyms. The associated set of negative examples (“syntactically plausible” false definitions) was obtained by extracting, from the same Wikipedia articles, sentences in which the page title occurs.
- A subset of the ukWaC Web corpus (Ferraresi et al., 2008), a large corpus of the English language constructed by crawling the .uk domain of the Web. The subset includes over 300,000 sentences containing any of 239 terms selected from the terminology of four different domains (COMPUTER SCIENCE, ASTRONOMY, CARDIOLOGY, AVIATION).
The reason for using the ukWaC corpus is that, unlike the “clean” Wikipedia dataset, in which relatively simple patterns can achieve good results, ukWaC represents a real-world test, with many complex cases. For example, there are sentences that should be classified as definitional according to Section 3.1 but are rather uninformative, like “dynamic programming was the brainchild of an american mathematician”, as well as informative sentences that are not definitional (e.g., they do not have a hypernym), like “cubism was characterised by muted colours and fragmented images”. Even more frequently, the dataset includes sentences which are not definitions but have a definitional pattern (“A Pacific Northwest tribe’s saga refers to a young woman who [..]”), or sentences with very complex definitional patterns (“white body cells are the body’s clean up squad” and “joule is also an expression of electric energy”). These cases can be correctly handled only with fine-grained patterns. Additional details on the corpus and a more thorough linguistic analysis of complex cases can be found in Navigli et al. (2010).
Systems. For definition extraction, we experiment with the following systems:
- WCL-1 and WCL-3: these two classifiers are based on our Word-Class Lattice model. WCL-1 learns from the training set a lattice for each cluster of sentences, whereas WCL-3 identifies clusters (and lattices) separately for each sentence field (DEFINIENDUM, DEFINITOR and DEFINIENS) and classifies a sentence as a definition if any combination from the three sets of lattices matches (cf. Section 3.2.4, the best combination is selected).
- Star patterns: a simple classifier based on the patterns learned as a result of step 1 of our WCL learning algorithm (cf. Section 3.2.1): a sentence is classified as a definition if it matches any of the star patterns in the model.
- Bigrams: an implementation of the bigram classifier for soft pattern matching proposed by Cui et al. (2007). The classifier selects as definitions all the sentences whose probability is above a specific threshold. The probability is calculated as a mixture of bigram and unigram probabilities, with Laplace smoothing on the latter. We use the very same settings of Cui et al. (2007), including threshold values. While the authors propose a second soft-pattern approach based on Profile HMM (cf. Section 2), their results do not show significant improvements over the bigram language model.
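For concreteness, minimal Python sketches of the last two baselines follow. The first reads “matches” for star patterns as exact identity between the test sentence's star pattern and a learned one, which is an assumption of this illustration (a wildcard-style interpretation would be equally plausible), and the frequent-word set shown is hypothetical:

```python
def star_pattern(tokens, frequent):
    """Replace every token outside the frequent-word set F with a '*' wildcard."""
    return tuple(tok if tok.lower() in frequent else "*" for tok in tokens)


def is_definition_star(tokens, frequent, learned_patterns):
    """Star-pattern classifier: a sentence is a definition iff its star pattern
    occurs among the patterns extracted from the training definitions."""
    return star_pattern(tokens, frequent) in learned_patterns


# Hypothetical frequent-word set and learned pattern set.
F = {"in", ",", "a", "an", "the", "is", "of", "(target)"}
learned = {star_pattern("In anatomy , a (TARGET) is a small organ".split(), F)}
print(is_definition_star("In mathematics , a (TARGET) is a continuous map".split(), F, learned))
# True: both reduce to ('In', '*', ',', 'a', '(TARGET)', 'is', 'a', '*', '*')
```

The second sketches a bigram soft-pattern scorer in the spirit of Cui et al. (2007); the interpolation weight, the length normalization, and the decision threshold are placeholders of this illustration, not the settings used in the paper:

```python
import math
from collections import Counter


class BigramSoftPattern:
    """Minimal bigram soft-pattern scorer: interpolates bigram and
    Laplace-smoothed unigram probabilities over token sequences from
    training definitions, then thresholds the per-token log-probability."""

    def __init__(self, training_sequences, lam=0.7):
        self.lam = lam
        self.unigrams = Counter()
        self.bigrams = Counter()
        for seq in training_sequences:
            self.unigrams.update(seq)
            self.bigrams.update(zip(seq, seq[1:]))
        self.total = sum(self.unigrams.values())
        self.vocab = len(self.unigrams)

    def _p_unigram(self, tok):
        # Laplace smoothing on the unigram model
        return (self.unigrams[tok] + 1) / (self.total + self.vocab)

    def _p_bigram(self, prev, tok):
        denom = self.unigrams[prev]
        return self.bigrams[(prev, tok)] / denom if denom else 0.0

    def log_prob(self, seq):
        lp = math.log(self._p_unigram(seq[0]))
        for prev, tok in zip(seq, seq[1:]):
            p = self.lam * self._p_bigram(prev, tok) + (1 - self.lam) * self._p_unigram(tok)
            lp += math.log(p)
        return lp / len(seq)  # length normalization is a choice of this sketch

    def is_definition(self, seq, threshold=-4.0):
        return self.log_prob(seq) > threshold
```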
For hypernym extraction, we compared WCL-1 and WCL-3 with Hearst’s patterns, a system that extracts hypernyms from sentences based on the lexico-syntactic patterns specified in [[Hearst’s seminal work (1992)]]. These include (hypernym in italic): “such NP as {NP,}* {(or | and)} NP”, “NP {, NP}* {,} or other NP”, “NP {,} including {NP,}* {or | and} NP”, “NP {,} especially {NP,}* {or | and} NP”, and variants thereof. However, it should be noted that hypernym extraction methods in the literature do not extract hypernyms from definitional sentences, like we do, but rather from specific patterns like “X such as Y”. Therefore a direct comparison with these methods is not possible. Nonetheless, we decided to implement Hearst’s patterns for the sake of completeness. We could not replicate the more refined approach by Snow et al. (2004) because it requires the annotation of a possibly very large dataset of sentence fragments. In any case, Snow et al. (2004) reported the following performance figures on a corpus of dimension and complexity comparable to ukWaC: the recall-precision graph indicates a precision of 85% at a recall of 10% and a precision of 25% at a recall of 30% for the hypernym classifier. A variant of the classifier that includes evidence from coordinate terms (terms with a common ancestor in a taxonomy) obtains an increased precision of 35% at recall 30%. We see no reason why these figures should vary dramatically on ukWaC.
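As a rough illustration of surface matching with such patterns, the sketch below uses regular expressions over raw text, with a crude one-to-three-word stand-in for NP chunks; the regexes, the NP approximation, and the example output are assumptions of this sketch, not the implementation used in the paper:

```python
import re

# Crude noun-phrase approximation (one to three word tokens); a real
# implementation would match NPs identified from part-of-speech tags or a chunker.
NP = r"[A-Za-z][\w-]*(?:\s+[A-Za-z][\w-]*){0,2}"

HEARST = [
    re.compile(rf"such\s+(?P<hyper>{NP})\s+as\s+(?P<hypo>{NP})", re.I),
    re.compile(rf"(?P<hypo>{NP})\s*,?\s*or other\s+(?P<hyper>{NP})", re.I),
    re.compile(rf"(?P<hyper>{NP})\s*,?\s*(?:including|especially)\s+(?P<hypo>{NP})", re.I),
]


def hearst_pairs(sentence):
    """Return (hyponym, hypernym) candidates found by the surface patterns."""
    return [(m.group("hypo"), m.group("hyper"))
            for regex in HEARST for m in regex.finditer(sentence)]


# e.g. hearst_pairs("works by such authors as Herrick and Shakespeare")
# roughly yields [('Herrick and Shakespeare', 'authors')].
```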
Finally, we compare all systems with the random baseline, which classifies a sentence as a definition with probability [math]\displaystyle{ \frac{1}{2} }[/math].
Measures. To assess the performance of our systems, we calculated the following measures:
- precision – the number of definitional sentences correctly retrieved by the system over the number of sentences marked by the system as definitional;
- recall – the number of definitional sentences correctly retrieved by the system over the number of definitional sentences in the dataset;
- F1-measure – the harmonic mean of precision (P) and recall (R), given by [math]\displaystyle{ \frac{2PR}{P+R} }[/math];
- accuracy – the number of correctly classified sentences (either as definitional or non-definitional) over the total number of sentences in the dataset.
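These four measures can be computed directly from the binary classification decisions; a minimal sketch follows (the function name and the boolean-list representation are assumptions of this illustration):

```python
def evaluation_metrics(gold, predicted):
    """Compute precision, recall, F1 and accuracy for binary definition labels.

    `gold` and `predicted` are parallel lists of booleans (True = definitional).
    """
    tp = sum(g and p for g, p in zip(gold, predicted))
    fp = sum((not g) and p for g, p in zip(gold, predicted))
    fn = sum(g and (not p) for g, p in zip(gold, predicted))
    tn = sum((not g) and (not p) for g, p in zip(gold, predicted))

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(gold)
    return {"precision": precision, "recall": recall, "F1": f1, "accuracy": accuracy}
```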
5.2 Results and Discussion
Definition Extraction. …
Hypernym Extraction. …
6 Conclusions
In this paper, we have presented a lattice-based approach to definition and hypernym extraction. The novelty of our approach is:
- The use of a lattice structure to generalize over lexico-syntactic definitional patterns;
- The ability of the system to jointly identify definitions and extract hypernyms;
- The generality of the method, which applies to generic Web documents in any domain and style, and needs no parameter tuning;
- The high performance as compared with the best-known methods for both definition and hypernym extraction. Our approach outperforms the other systems particularly where the task is more complex, as in real-world documents (i.e., the ukWaC corpus).
Even though definitional patterns are learned from a manually annotated dataset, the dimension and heterogeneity of the training dataset ensure that training need not be repeated for specific domains[5], as demonstrated by the cross-domain evaluation on the ukWaC corpus.
The datasets used in our experiments are available from http://lcl.uniroma1.it/wcl. We also plan to release our system to the research community. In the near future, we aim to apply the output of our classifiers to the task of automated taxonomy building, and to test the WCL approach on other information extraction tasks, like hypernym extraction from generic sentence fragments, as in Snow et al. (2004).
Footnotes
- ↑ Text REtrieval Conferences: http://trec.nist.gov
- ↑ In the paper, a 55% recall and 34% precision are achieved in the best experiment on TREC-13 data. Furthermore, the classifier of Cui et al. (2007) is based on soft patterns but also on a bag-of-words relevance heuristic. However, the relative influence of the two methods on the final performance is not discussed.
- ↑ The first sentence of Wikipedia entries is, in the large majority of cases, a definition of the page title.
- ↑ http://en.wikipedia.org/wiki/Wikipedia:Categories
- ↑ Of course, it would need some additional work if applied to languages other than English. However, the approach does not need to be adapted to the language of interest.
References
- 1. Eneko Agirre, Olatz Ansa, Xabier Arregi, Xabier Artola, Arantza Díaz de Ilarraza Sánchez, Mikel Lersundi, David Martínez, Kepa Sarasola, and Ruben Urizar. 2000. Extraction of Semantic Relations from a Basque Monolingual Dictionary Using Constraint Grammar. In Proceedings of Euralex.
- 2. Claudia Borg, Mike Rosner, Gordon Pace, Evolutionary Algorithms for Definition Extraction, Proceedings of the 1st Workshop on Definition Extraction, p.26-32, September 18-18, 2009, Borovets, Bulgaria
- 3. William M. Campbell, M. F. Richardson, and D. A. Reynolds. 2007. Language Recognition with Word Lattices and Support Vector Machines. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), Pages 989--992, Honolulu, HI.
- 4. Sharon A. Caraballo, Automatic Construction of a Hypernym-labeled Noun Hierarchy from Text, Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, p.120-126, June 20-26, 1999, College Park, Maryland doi:10.3115/1034678.1034705
- 5. Claudio Carpineto, Giovanni Romano, Using Concept Lattices for Text Retrieval and Mining, Formal Concept Analysis: Foundations and Applications, Springer-Verlag, Berlin, Heidelberg, 2005
- 6. Christopher Collins, Bob Carpenter, Gerald Penn, Head-driven Parsing for Word Lattices, Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, p.231-es, July 21-26, 2004, Barcelona, Spain doi:10.3115/1218955.1218985
- 7. Thomas T. Cormen, Charles E. Leiserson, Ronald L. Rivest, Introduction to Algorithms, MIT Press, Cambridge, MA, 1990
- 8. Hang Cui, Min-Yen Kan, Tat-Seng Chua, Soft Pattern Matching Models for Definitional Question Answering, ACM Transactions on Information Systems (TOIS), v.25 n.2, p.8-es, April 2007 doi:10.1145/1229179.1229182
- 9. Łukasz Degórski, Michał Marcinczuk, and Adam Przepiórkowski. 2008. Definition Extraction Using a Sequential Combination of Baseline Grammars and Machine Learning Classifiers. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco.
- 10. William Dolan, Lucy Vanderwende, and Stephen D. Richardson. 1993. Automatically Deriving Structured Knowledge Bases from on-line Dictionaries. In Proceedings of the First Conference of the Pacific Association for Computational Linguistics, Pages 5--14.
- 11. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing Word Lattice Translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2008), Pages 1012--1020, Columbus, Ohio, USA.
- 12. Chris Dyer, Using a Maximum Entropy Model to Build Segmentation Lattices for MT, Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, May 31-June 05, 2009, Boulder, Colorado
- 13. Ismail Fahmi and Gosse Bouma. 2006. Learning to Identify Definitions Using Syntactic Features. In Proceedings of the EACL 2006 Workshop on Learning Structured Information in Natural Language Applications, Pages 64--71, Trento, Italy.
- 14. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and Evaluating Ukwac, a Very Large Web-derived Corpus of English. In Proceedings of the 4th Web As Corpus Workshop (WAC-4), Marrakech, Morocco.
- 15. Aldo Gangemi, Roberto Navigli, and Paola Velardi. 2003. The OntoWordNet Project: Extension and Axiomatization of Conceptual Relations in WordNet. In Proceedings of the International Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2003), Pages 820--838, Catania, Italy.
- 16. Rosa Del Gaudio, António Branco, Automatic Extraction of Definitions in Portuguese: A Rule-based Approach, Proceedings of the 13th Portuguese Conference on Progress in Artificial Intelligence, December 03-07, 2007, Guimarães, Portugal
- 17. Marti A. Hearst, Automatic Acquisition of Hyponyms from Large Text Corpora, Proceedings of the 14th Conference on Computational Linguistics, August 23-28, 1992, Nantes, France doi:10.3115/992133.992154
- 18. Eduard Hovy, Andrew Philpot, Judith Klavans, Ulrich Germann, Peter Davis, Samuel Popper, Extending Metadata Definitions by Automatically Extracting and Organizing Glossary Definitions, Proceedings of the 2003 Annual National Conference on Digital Government Research, p.1-6, May 18-21, 2003, Boston, MA
- 19. Adrian Iftene, Diana Trandabă, and Ionut Pistol. 2007. Natural Language Processing and Knowledge Representation for Elearning Environments. In: Proc. of Applications for Romanian. Proceedings of RANLP Workshop, Pages 19--25.
- 20. Wenbin Jiang, Haitao Mi, Qun Liu, Word Lattice Reranking for Chinese Word Segmentation and Part-of-speech Tagging, Proceedings of the 22nd International Conference on Computational Linguistics, p.385-392, August 18-22, 2008, Manchester, United Kingdom
- 21. Judith Klavans and Smaranda Muresan. 2001. Evaluation of the DEFINDER System for Fully Automatic Glossary Construction. In: Proc. of the American Medical Informatics Association (AMIA) Symposium.
- 22. Michael Tully Klein. 2008. Understanding English with Lattice-Learning, Master Thesis. MIT, Cambridge, MA, USA.
- 23. Lambert Mathias and William Byrne. 2006. Statistical Phrase-based Speech Translation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006), Toulouse, France.
- 24. George A. Miller, R. T. Beckwith, Christiane D. Fellbaum, D. Gross, and K. Miller. 1990. WordNet: An Online Lexical Database. International Journal of Lexicography, 3(4):235--244.
- 25. Roberto Navigli, Paola Velardi, Ontology Enrichment through Automatic Semantic Annotation of on-line Glossaries, Proceedings of the 15th International Conference on Managing Knowledge in a World of Networks, October 02-06, 2006, Poděbrady, Czech Republic doi:10.1007/11891451_14
- 26. Roberto Navigli, Paola Velardi, and Juana María Ruiz-Martínez. 2010. An Annotated Dataset for Extracting Definitions and Hypernyms from the Web. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta.
- 27. Roberto Navigli, Using Cycles and Quasi-cycles to Disambiguate Dictionary Glosses, Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, p.594-602, March 30-April 03, 2009, Athens, Greece
- 28. Roberto Navigli, Word Sense Disambiguation: A Survey, ACM Computing Surveys (CSUR), v.41 n.2, p.1-69, February 2009 doi:10.1145/1459352.1459355
- 29. Michael P. Oakes. 2005. Using Hearst's Rules for the Automatic Acquisition of Hyponyms for Mining a Pharmaceutical Corpus. In Proceedings of the Workshop Text Mining Research.
- 30. Adam Przepiórkowski, Łukasz Degórski, Beata Wójtowicz, Miroslav Spousta, Vladislav Kuboň, Kiril Simov, Petya Osenova, Lothar Lemnitzer, Towards the Automatic Extraction of Definitions in Slavic, Proceedings of the Workshop on Balto-Slavonic Natural Language Processing: Information Extraction and Enabling Technologies, June 29-29, 2007, Prague, Czech Republic
- 31. Alan Ritter, Stephen Soderland, and Oren Etzioni. 2009. What is This, Anyway: Automatic Hypernym Discovery. In Proceedings of the 2009 AAAI Spring Symposium on Learning by Reading and Learning to Read, Pages 88--93.
- 32. Horacio Saggion. 2004. Identifying Definitions in Text Collections for Question Answering. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal.
- 33. Antonio Sanfilippo, Victor Poznański, The Acquisition of Lexical Knowledge from Combined Machine-readable Dictionary Sources, Proceedings of the Third Conference on Applied Natural Language Processing, March 31-April 03, 1992, Trento, Italy doi:10.3115/974499.974514
- 34. Helmut Schmid. 1995. Improvements in Part-of-speech Tagging with An Application to German. In Proceedings of the ACL SIGDAT-Workshop, Pages 47--50.
- 35. Josh Schroeder, Trevor Cohn, Philipp Koehn, Word Lattices for Multi-source Translation, Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, p.719-727, March 30-April 03, 2009, Athens, Greece
- 36. Rion Snow, Dan Jurafsky, and Andrew Y. Ng. 2004. Learning Syntactic Patterns for Automatic Hypernym Discovery. In Proceedings of Advances in Neural Information Processing Systems, Pages 1297--1304.
- 37. Angelika Storrer and Sandra Wellinghoff. 2006. Automated Detection and Annotation of Term Definitions in German Text Corpora. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC 2006), Genova, Italy.
- 38. Paola Velardi, Roberto Navigli, Pierluigi D'Amadio, Mining the Web to Create Specialized Glossaries, IEEE Intelligent Systems, v.23 n.5, p.18-25, September 2008 doi:10.1109/MIS.2008.88
- 39. Eline Westerhout and Paola Monachesi. 2007. Extraction of Dutch Definitory Contexts for ELearning Purposes. In Proceedings of CLIN.
- 40. Eline Westerhout, Definition Extraction Using Linguistic and Structural Features, Proceedings of the 1st Workshop on Definition Extraction, p.61-67, September 18-18, 2009, Borovets, Bulgaria
- 41. Chunxia Zhang and Peng Jiang. 2009. Automatic Extraction of Definitions. In Proceedings of 2nd IEEE International Conference on Computer Science and Information Technology, Pages 364--368.
- 42. Zhao-man Zhong, Zong-tian Liu, Yan Guan, Precise Information Extraction from Text Based on Two-Level Concept Lattice, Proceedings of the 2008 International Symposiums on Information Processing, p.275-279, May 23-25, 2008 doi:10.1109/ISIp.2008.40
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Roberto Navigli, Paola Velardi | | 2010 | Learning Word-class Lattices for Definition and Hypernym Extraction | | | | | | 2010