INEX Entity Ranking Track
An INEX Entity Ranking Track is a track within the INEX initiative that focuses on Entity Retrieval (rather than Text Retrieval).
- Context:
- The track uses Wikipedia data, and systems may exploit the category metadata associated with entities when retrieving them (see the illustrative sketch below).
- For example, consider the category "Dutch politicians". Relevant entities are assumed to be labelled with this category, or with a closely related category in the categorization hierarchy, e.g. "politicians".
- The Entity Ranking (XER) task requires returning entities that satisfy a topic described in natural-language text. It can incorporate the List Completion (LC) task, in which a number of example entities are given and the aim is to complete this partial list of answers. The List Completion runs contribute to the pool of documents used for the relevance assessments.
- The Entity Relation Search (ERS) task focuses on the relations between two types of entities. One example: find US city A and US state B such that A is the capital of B. Here it is specified that A is a US city, B is a US state, and the relation is "capital", meaning that A is the capital of B.
- See: INEX Efficiency Track.
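The use of Wikipedia category metadata can be made concrete with a small sketch. The Python snippet below is purely illustrative and not an actual INEX submission: it assumes each candidate entity arrives with a text-retrieval score and a set of Wikipedia categories, and re-ranks candidates by a linear combination of the text score and the overlap with the topic's target categories (the `alpha` weight, the overlap measure, and the toy data are all assumptions).

```python
def category_overlap(entity_categories, target_categories):
    """Fraction of the topic's target categories that the entity is
    labelled with (a crude stand-in for closeness in the hierarchy)."""
    if not target_categories:
        return 0.0
    matches = sum(1 for c in target_categories if c in entity_categories)
    return matches / len(target_categories)

def rank_entities(candidates, target_categories, alpha=0.7):
    """Rank (name, text_score, categories) candidates by a linear mix of
    text relevance and category overlap; alpha is an assumed weight."""
    scored = [
        (alpha * text_score
         + (1.0 - alpha) * category_overlap(categories, target_categories),
         name)
        for name, text_score, categories in candidates
    ]
    return [name for _, name in sorted(scored, reverse=True)]

# Toy candidates for a topic such as "Dutch politicians" (data is illustrative).
candidates = [
    ("Mark Rutte",   0.70, {"Dutch politicians", "Politicians"}),
    ("Amsterdam",    0.80, {"Cities in the Netherlands"}),
    ("Johan Cruyff", 0.60, {"Dutch footballers"}),
]
print(rank_entities(candidates, {"Dutch politicians", "Politicians"}))
```

In this toy run, the category signal promotes "Mark Rutte" above "Amsterdam" even though the latter has a higher text score; actual track participants combined text and category evidence in more elaborate ways.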
References
2012
- (Raviv et al., 2012) ⇒ Hadas Raviv, David Carmel, and Oren Kurland. (2012). “A Ranking Framework for Entity Oriented Search using Markov Random Fields.” In: Proceedings of the 1st Joint International Workshop on Entity-oriented and Semantic Search (JIWES 2012)
- … We evaluated the performance of our model using the INEX datasets. Our results show that our ranking model significantly outperforms leading INEX systems in the tracks of 2007 and 2008, and is equivalent to the best results achieved in the 2009 track. …
2008
- (Vercoustre et al., 2008) ⇒ Anne-Marie Vercoustre, James A. Thom, and Jovan Pehcevski. (2008). “Entity Ranking in Wikipedia.” In: Proceedings of the 2008 ACM Symposium on Applied Computing (SAC 2008).