Context Vector
A Context Vector is a Text-Item Vector that contains context information.
- AKA: Contextual Sequence Vector, Context Vector Representation, Vectorized Context Representation.
- Context:
- It is usually a fixed-length vector representation.
- It can be used as input vector in NLP, NLG, NLU, and NMT tasks.
- Example(s):
- Counter-Example(s):
- See: Word Embedding, Context Vector Disambiguation Task, Lexical Context Window, Context, Vectorized Word Representation, Automatic Context Vector Generation Task, MatchPlus System, Word Sense Disambiguation, Information Retrieval.
References
2018
- (Khatri et al., 2018) ⇒ Chandra Khatri, Gyanit Singh, and Nish Parikh (2018). "Abstractive and Extractive Text Summarization using Document Context Vector and Recurrent Neural Networks". In: arXiv preprint arXiv:1807.08000.
- QUOTE: Context vector $C_{s}$ is induced from seller-provided metadata $m_s = \{$title, subtitle, taxon, other key-valued metadata such as brand, color, etc.$\}$. For $w \in V$, let $C_{s}(w)$ be the value of dimension $w$ in $C_s$. It is defined as follows,
$C_{s}(w)=\left\{\begin{array}{ll} \text { frequency }(w) \text { in } m_{s}, & \text { if } w \in \text { seller metadata } \\ 0, & \text { otherwise } \end{array}\right.$
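The frequency-based definition above can be sketched in Python (a minimal illustration, not the authors' code; the example tokens, vocabulary, and function name are assumed for illustration):

```python
from collections import Counter

def context_vector(metadata_tokens, vocabulary):
    """Build the frequency-based context vector C_s over a fixed vocabulary V.

    C_s(w) = frequency of w in the seller metadata if w occurs there, else 0.
    """
    counts = Counter(metadata_tokens)
    return {w: counts.get(w, 0) for w in vocabulary}

# Hypothetical tokens drawn from title/brand/color metadata fields.
tokens = ["red", "leather", "wallet", "red"]
vocab = ["red", "leather", "wallet", "blue"]
print(context_vector(tokens, vocab))  # {'red': 2, 'leather': 1, 'wallet': 1, 'blue': 0}
```

Because the vector is indexed by a fixed vocabulary $V$, every document yields a representation of the same length, which is what makes it usable as input to standard models.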
2013
- (Bouamor et al., 2013) ⇒ Dhouha Bouamor, Nasredine Semmar, and Pierre Zweigenbaum (2013). "Context Vector Disambiguation for Bilingual Lexicon Extraction from Comparable Corpora". In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
- QUOTE: Once translated into the target language, the context vectors disambiguation process intervenes. This process operates locally on each context vector and aims at finding the most prominent translations of polysemous words.
2003
- (Caid & Carleton, 2003) ⇒ William R. Caid, and Joel L. Carleton (2003). "Context Vector-Based Text Retrieval". In: Fair Isaac White Paper, 1-20.
- QUOTE: A context vector is associated with every unique word in the training corpus. A self organization-based learning approach is used to derive these context vectors such that vectors for words that are used in similar contexts will point in similar directions. Unlike other vector space techniques that associate vectors with words, the MatchPlus approach exploits local rather than document-wide context for learning these similarity of usage relationships. As a result, the MatchPlus approach can learn word usages and can perform word sense disambiguation.
Once word context vectors have been learned, context vectors for documents and queries can be computed as a weighted sum of word context vectors. Document retrieval is performed based on Euclidean distance between a query context vector and document context vectors. Document context vectors can be clustered to form self organized subject indices. Index contents can be identified (summarized) by determining word vectors that are close to cluster centers.
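The retrieval scheme described above (document vectors as weighted sums of word context vectors, ranked by Euclidean distance to the query vector) can be sketched as follows; the 2-d vectors and helper names are illustrative assumptions, not the MatchPlus implementation:

```python
import numpy as np

def document_vector(word_vectors, weights):
    """Document context vector as a weighted sum of word context vectors."""
    return sum(w * v for w, v in zip(weights, word_vectors))

def retrieve(query_vec, doc_vecs):
    """Return the index of the document closest to the query (Euclidean distance)."""
    dists = [np.linalg.norm(query_vec - d) for d in doc_vecs]
    return int(np.argmin(dists))

# Hypothetical 2-d word context vectors: "car" and "auto" point in similar
# directions because they occur in similar local contexts; "hippo" does not.
words = {
    "car":   np.array([1.0, 0.0]),
    "auto":  np.array([0.9, 0.1]),
    "hippo": np.array([0.0, 1.0]),
}
doc_a = document_vector([words["car"], words["auto"]], [1.0, 1.0])
doc_b = document_vector([words["hippo"]], [1.0])
query = document_vector([words["car"]], [1.0])
print(retrieve(query, [doc_a, doc_b]))  # 0 — the car/auto document is closer
```

Clustering the resulting document vectors (e.g., by k-means) then yields the self-organized subject indices the quote mentions, with cluster centers summarized by their nearest word vectors.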
1998
- (Gallant, 1998) ⇒ Stephen I. Gallant (1998). "Context Vectors: A Step Toward a "Grand Unified Representation"". In: International Workshop on Hybrid Neural Systems (pp. 204-210). Springer, Berlin, Heidelberg.
- QUOTE: Context Vectors are fixed-length vector representations useful for document retrieval and word sense disambiguation. Context vectors were motivated by four goals:
1. Capture “similarity of use” among words (“car” is similar to “auto”, but not similar to “hippopotamus”).
2. Quickly find constituent objects (e.g., documents that contain specified words).
3. Generate context vectors automatically from an unlabeled corpus.
4. Use context vectors as input to standard learning algorithms.