Named Entity Disambiguation (NED) System
A Named Entity Disambiguation (NED) System is a text processing system that can solve an NED task by implementing an NED algorithm.
- AKA: Entity Linking System, Named Entity Linking System, Entity Disambiguation System, Named Entity Recognition and Disambiguation System, Named Entity Normalization System.
- Context:
- It ranges from being an Unsupervised Named Entity Disambiguation System to being a Supervised Named Entity Disambiguation System.
- Example(s):
- Counter-Example(s):
- See: Semantic Search System, Document To Entity Record Resolution System, Entity Mention Coreference Resolution System, Wikipedia, Knowledge Base.
References
2019
- (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Entity_linking Retrieved:2019-6-16.
- In natural language processing, entity linking, named entity linking (NEL), named entity disambiguation (NED), named entity recognition and disambiguation (NERD) or named entity normalization (NEN) [1] is the task of determining the identity of entities mentioned in text. For example, given the sentence "Paris is the capital of France", the idea is to determine that "Paris" refers to the city of Paris and not to Paris Hilton or any other entity that could be referred as "Paris". NED is different from named entity recognition (NER) in that NER identifies the occurrence or mention of a named entity in text but it does not identify which specific entity it is.
Entity linking requires a knowledge base containing the entities to which entity mentions can be linked. A popular choice for entity linking on open domain text are knowledge bases based on Wikipedia, [2] in which each page is regarded as a named entity. NED using Wikipedia entities has also been called wikification (see Wikify!, an early entity linking system [3]). A knowledge base may also be induced automatically from text [4] or manually built.
Named entity mentions can be highly ambiguous; any entity linking method must address this inherent ambiguity. Various approaches to tackle this problem have been tried to date. In the seminal approach of Milne and Witten, supervised learning is employed using the anchor texts of Wikipedia entities as training data. [5] Other approaches also collected training data based on unambiguous synonyms. Kulkarni et al. exploited the common property that topically coherent documents refer to entities belonging to strongly related types.
Entity linking has been used to improve the performance of information retrieval systems and to improve search performance on digital libraries. [6] [7] NED is also a key input for Semantic Search. [8]
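The "Paris" example above can be sketched as a minimal dictionary-based disambiguator: look up the mention's candidate entities, then pick the candidate whose context profile best overlaps the surrounding sentence. The candidate lists and context words below are toy assumptions, not drawn from any real knowledge base.

```python
# Toy candidate dictionary: surface form -> possible entities.
CANDIDATES = {
    "Paris": ["Paris_(city)", "Paris_Hilton", "Paris_(mythology)"],
}

# Toy context profiles: words that tend to co-occur with each entity.
CONTEXT_PROFILES = {
    "Paris_(city)": {"capital", "france", "city", "seine"},
    "Paris_Hilton": {"celebrity", "heiress", "hotel", "hilton"},
    "Paris_(mythology)": {"troy", "helen", "trojan", "myth"},
}

def disambiguate(mention, sentence):
    """Pick the candidate whose context profile overlaps the sentence most."""
    words = set(sentence.lower().replace(",", " ").split())
    best, best_score = None, -1
    for entity in CANDIDATES.get(mention, []):
        score = len(CONTEXT_PROFILES[entity] & words)
        if score > best_score:
            best, best_score = entity, score
    return best

print(disambiguate("Paris", "Paris is the capital of France"))
# Paris_(city)
```

Real systems replace the hand-built profiles with statistics mined from a knowledge base such as Wikipedia, but the core step, scoring candidates against document context, is the same.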
- ↑ M. A. Khalid, V. Jijkoun and M. de Rijke (2008). The impact of named entity normalization on information retrieval for question answering. Proc. ECIR.
- ↑ (Han et al., 2011) ⇒ Xianpei Han, Le Sun, and Jun Zhao. (2011). “Collective Entity Linking in Web Text: A Graph-based Method.” In: Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval. doi:10.1145/2009916.2010019
- ↑ (Mihalcea & Csomai, 2007) ⇒ Rada Mihalcea, and Andras Csomai. (2007). “Wikify!: Linking documents to encyclopedic knowledge.” In: Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management (CIKM 2007). doi:10.1145/1321440.1321475
- ↑ Aaron M. Cohen (2005). Unsupervised gene/protein named entity normalization using automatically extracted dictionaries. Proc. ACL-ISMB Workshop on Linking Biological Literature, Ontologies and Databases: Mining Biological Semantics, pp. 17–24.
- ↑ (Milne & Witten, 2008a) ⇒ David N. Milne, and Ian H. Witten. (2008). “Learning to Link with Wikipedia.” In: Proceeding of the 17th ACM Conference on Information and Knowledge Management, (CIKM 2008). doi:10.1145/1458082.1458150
- ↑ Hui Han, Hongyuan Zha, C. Lee Giles, "Name disambiguation in author citations using a K-way spectral clustering method," ACM/IEEE Joint Conference on Digital Libraries 2005 (JCDL 2005): 334-343, 2005
- ↑ [1]
- ↑ STICS
2013
- (Hachey et al., 2013) ⇒ Ben Hachey, Will Radford, Joel Nothman, Matthew Honnibal, and James R. Curran. (2013). “Evaluating Entity Linking with Wikipedia.” In: Artificial Intelligence, 194.
2011a
- (Han et al., 2011) ⇒ Xianpei Han, Le Sun, and Jun Zhao. (2011). “Collective Entity Linking in Web Text: A Graph-based Method.” In: Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval. doi:10.1145/2009916.2010019
- QUOTE: The key issue is to correctly link the name mentions in a document with their referent entities in the knowledge base, which is usually referred to as Entity Linking (EL for short). For example, in Figure 1 an entity linking system should link the name mentions “Bulls”, “Jordan” and “Space Jam” to their corresponding referent entities Chicago Bulls, Michael Jordan and Space Jam in the knowledge base.
Figure 1. An illustration of entity linking.
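The collective linking idea in the quote above can be sketched as searching over joint assignments of mentions to candidates and keeping the assignment whose chosen entities are most coherent. The candidate sets and pairwise relatedness scores below are invented for illustration; real systems estimate relatedness from the knowledge base's link structure.

```python
from itertools import product

# Toy candidate sets for the mentions in the "Space Jam" example.
candidates = {
    "Bulls": ["Chicago_Bulls", "Bull_(animal)"],
    "Jordan": ["Michael_Jordan", "Jordan_(country)"],
    "Space Jam": ["Space_Jam"],
}

# Assumed symmetric relatedness between entity pairs (unlisted pairs: 0).
REL = {
    frozenset({"Chicago_Bulls", "Michael_Jordan"}): 0.9,
    frozenset({"Michael_Jordan", "Space_Jam"}): 0.8,
    frozenset({"Chicago_Bulls", "Space_Jam"}): 0.5,
}

def coherence(assignment):
    """Sum pairwise relatedness over all chosen entity pairs."""
    ents = list(assignment.values())
    return sum(REL.get(frozenset({a, b}), 0.0)
               for i, a in enumerate(ents) for b in ents[i + 1:])

mentions = list(candidates)
best = max((dict(zip(mentions, combo))
            for combo in product(*(candidates[m] for m in mentions))),
           key=coherence)
print(best)
# {'Bulls': 'Chicago_Bulls', 'Jordan': 'Michael_Jordan', 'Space Jam': 'Space_Jam'}
```

Exhaustive search is exponential in the number of mentions, which is why Han et al. and related work use graph algorithms and approximations instead.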
2011b
- (Ji et al., 2011) ⇒ Heng Ji, and Ralph Grishman. (2011). “Knowledge Base Population: Successful Approaches and Challenges.” In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies.
- QUOTE: The main goal of KBP is to promote research in discovering facts about entities and augmenting a knowledge base (KB) with these facts. This is done through two tasks, Entity Linking -- linking names in context to entities in the KB -- and Slot Filling -- adding information about an entity to the KB.
2009
- (Kulkarni et al., 2009) ⇒ Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. (2009). “Collective Annotation of Wikipedia Entities in Web Text.” In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2009). doi:10.1145/1557019.1557073
- QUOTE: We proposed new models and algorithms for a highly motivated problem: annotating unstructured (Web) text with entity IDs from an entity catalog (Wikipedia). Unlike prior work that is biased toward specific entity types like persons and places, with low recall and high precision, our intention is aggressive, high-recall open-domain annotation for indexing and mining tasks downstream.
2008a
- (Milne & Witten, 2008) ⇒ David N. Milne, and Ian H. Witten. (2008). “Learning to Link with Wikipedia.” In: Proceeding of the 17th ACM Conference on Information and Knowledge Management, (CIKM 2008). doi:10.1145/1458082.1458150
- QUOTE: We have developed a machine-learning approach to disambiguation that uses the links found within Wikipedia articles for training. For every link, a Wikipedian has manually—and probably with some effort—selected the correct destination to represent the intended sense of the anchor. This provides millions of manually-defined ground truth examples to learn from.
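The anchor-link training signal described above yields, among other things, a "commonness" prior: how often a given anchor text points at each destination. A minimal sketch, with fabricated counts standing in for harvested Wikipedia links:

```python
from collections import Counter

# (anchor_text, destination) pairs as they might be harvested from
# Wikipedia article links. These counts are fabricated for illustration.
links = [
    ("Jaguar", "Jaguar_Cars"), ("Jaguar", "Jaguar_Cars"),
    ("Jaguar", "Jaguar_(animal)"),
]

counts = Counter(links)
totals = Counter(anchor for anchor, _ in links)

def commonness(anchor, entity):
    """P(entity | anchor) estimated from link frequencies."""
    return counts[(anchor, entity)] / totals[anchor]

print(commonness("Jaguar", "Jaguar_Cars"))
# 0.6666666666666666
```

In Milne and Witten's system this prior is combined with context-based relatedness features inside a trained classifier; the prior alone is already a strong baseline for frequent senses.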
2008b
- (Leitner et al., 2008) ⇒ Florian Leitner, Martin Krallinger, Carlos Rodriguez-Penagos, Jörg Hakenberg, Conrad Plake, Cheng-Ju Kuo, Chun-Nan Hsu, Richard Tzong-Han Tsai, Hsi-Chuan Hung, William W. Lau, Calvin A. Johnson, Rune Sætre, Kazuhiro Yoshida, Yan Hua Chen, Sun Kim, Soo-Yong Shin, Byoung-Tak Zhang, William A. Baumgartner Jr, Lawrence Hunter, Barry Haddow, Michael Matthews, Xinglong Wang, Patrick Ruch, Frédéric Ehrler, Arzucan Özgür, Günes Erkan, Dragomir Radev, Michael Krauthammer, ThaiBinh Luong, Robert Hoffmann, Chris Sander, and Alfonso Valencia. (2008). “Introducing Meta-Services for Biomedical Information Extraction.” In: Genome Biology, 9(Suppl 2):S6 doi:10.1186/gb-2008-9-s2-s6
- QUOTE: Entity mention normalization is based on large lexicon of known names and synonyms, which are kept in main memory at all times for efficiency. Once a potential named entity has been found, we further identify it using context profiles in case multiple entities share the same name [15];
Gene/protein normalization (GN): detect which genes or proteins are mentioned, assigning sequence database identifiers to the text.
The annotations we currently provide are gene mention normalization (32,795 human genes from EntrezGene)
We introduce the first meta-service for information extraction in molecular biology, the BioCreative MetaServer (BCMS; http://bcms.bioinfo.cnio.es/). This prototype platform is a joint effort of 13 research groups and provides automatically generated annotations for PubMed/Medline abstracts. Annotation types cover gene names, gene IDs, species, and protein-protein interactions. The annotations are distributed by the meta-server in both human and machine readable formats (HTML/XML). This service is intended to be used by biomedical researchers and database annotators, and in biomedical language processing. The platform allows direct comparison, unified access, and result aggregation of the annotations.
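The in-memory lexicon lookup that the Leitner et al. quote describes can be sketched as a case-insensitive synonym dictionary mapping gene names to database identifiers. The entries below are illustrative (7157 and 672 are the Entrez Gene IDs for human TP53 and BRCA1); a production lexicon would hold tens of thousands of names plus a context-profile step for ambiguous ones.

```python
# Toy synonym lexicon: lowercase surface form -> gene database identifier.
LEXICON = {
    "tp53": "GeneID:7157",
    "p53": "GeneID:7157",
    "tumor protein p53": "GeneID:7157",
    "brca1": "GeneID:672",
}

def normalize(mention):
    """Map a gene/protein mention to its database ID, or None if unknown."""
    return LEXICON.get(mention.lower())

print(normalize("P53"))
# GeneID:7157
```

Keeping the lexicon in main memory, as the paper notes, makes each lookup a constant-time dictionary access.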
2008c
- (Jijkoun et al., 2008) ⇒ Valentin Jijkoun, Mahboob Alam Khalid, Maarten Marx, and Maarten de Rijke. (2008). “Named Entity Normalization in User Generated Content.” In: Proceedings of the Second Workshop on Analytics for Noisy Unstructured Text Data (AND 2008).
- QUOTE: Research on named entity extraction and normalization has been carried out in both restricted and open domains. For example, for the case of scientific articles on genomics, where gene and protein names can be both synonymous and ambiguous, Cohen [3] normalizes entities using dictionaries automatically extracted from gene databases.
2007
- (Cucerzan, 2007) ⇒ Silviu Cucerzan. (2007). “Large-Scale Named Entity Disambiguation Based on Wikipedia Data.” In: Proceedings of Empirical Methods in Natural Language Processing Conference (EMNLP 2007).
- QUOTE: The system discussed in this paper performs both named entity identification and disambiguation. The entity identification and in-document coreference components resemble the Nominator system (Wacholder et al., 1997). However, while Nominator made heavy use of heuristics and lexical clues to solve the structural ambiguity of entity mentions, we employ statistics extracted from Wikipedia and Web search results. The disambiguation component, which constitutes the main focus of the paper, employs a vast amount of contextual and category information automatically extracted from Wikipedia over a space of 1.4 million distinct entities/concepts, making extensive use of the highly interlinked structure of this collection. We augment the Wikipedia category information with information automatically extracted from Wikipedia list pages and use it in conjunction with the context information in a vectorial model that employs a novel disambiguation method.
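The vectorial model Cucerzan describes can be sketched as scoring each candidate entity by the agreement between its context/category vector and the document's vector. The feature weights below are invented for illustration, and a plain dot product stands in for the paper's disambiguation scoring.

```python
def dot(u, v):
    """Dot product of two sparse vectors represented as dicts."""
    return sum(u.get(feature, 0) * weight for feature, weight in v.items())

# Assumed document context vector extracted from the surrounding text.
doc_vector = {"basketball": 2, "nba": 1, "chicago": 1}

# Assumed context/category vectors for two candidate entities.
entity_vectors = {
    "Michael_Jordan": {"basketball": 3, "nba": 2, "bulls": 1},
    "Jordan_(country)": {"amman": 2, "middle_east": 1},
}

best = max(entity_vectors, key=lambda e: dot(doc_vector, entity_vectors[e]))
print(best)
# Michael_Jordan
```

In Cucerzan's system the vectors are built automatically from Wikipedia contexts, categories, and list pages over 1.4 million entities; the sketch only shows the final comparison step.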