SentEval Library

A SentEval Library is a sentence embedding evaluation system (for evaluating the quality of sentence embeddings across a variety of downstream transfer tasks).



References

2024

  • https://github.com/facebookresearch/SentEval
    • QUOTE: SentEval is a library for evaluating the quality of sentence embeddings. We assess their generalization power by using them as features on a broad and diverse set of "transfer" tasks. SentEval currently includes 17 downstream tasks. We also include a suite of 10 probing tasks which evaluate what linguistic properties are encoded in sentence embeddings. Our goal is to ease the study and the development of general-purpose fixed-size sentence representations.
    • NOTES
      • SentEval evaluates the quality of sentence embeddings by using them as features on a broad and diverse set of "transfer" tasks, comprising 17 downstream tasks and a suite of 10 probing tasks that analyze the linguistic properties encoded in the embeddings.
      • It has been updated with additional probing tasks to provide a more comprehensive assessment of the linguistic properties encoded in sentence embeddings.
      • It provides example scripts for three sentence encoders (SkipThought-LN, GenSen, and Google-USE), illustrating how different sentence encoding methods can be plugged in.
      • It requires Python 2/3 with NumPy/SciPy, PyTorch (version 0.4 or later), and scikit-learn (version 0.18.0 or later), reflecting its Python-based machine learning ecosystem.
      • Its downstream tasks include movie review sentiment analysis, product review classification, subjectivity detection, and natural language inference, among others, covering a wide range of evaluation contexts.
      • Its probing tasks evaluate specific linguistic properties encoded in sentence embeddings, such as sentence length prediction, word content recovery, and verb tense prediction.
      • It requires the user to implement two functions, prepare and batcher, to adapt SentEval to their own sentence encoder, making the evaluation framework flexible and customizable (see the usage sketch below).
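    • USAGE SKETCH: A minimal sketch of how a user-supplied encoder plugs into SentEval, assuming the prepare/batcher/engine.SE interface and the task names documented in the repository README; the random-vector "encoder", the data path, and the classifier settings below are illustrative placeholders rather than part of SentEval itself.

        import numpy as np
        import senteval

        PATH_TO_DATA = 'data'  # assumed location of the downloaded SentEval task data

        def prepare(params, samples):
            # Called once per task with all sentences; build vocabularies or
            # load model resources here. The toy encoder below needs nothing.
            return

        def batcher(params, batch):
            # batch is a list of tokenized sentences (lists of words); the
            # function must return an array of shape (len(batch), dim).
            embeddings = []
            for sent in batch:
                sent = sent if sent != [] else ['.']
                # Placeholder encoder: map each word to a fixed random vector
                # (seeded by its hash) and average over the sentence.
                vecs = [np.random.RandomState(abs(hash(w)) % (2 ** 32)).randn(128)
                        for w in sent]
                embeddings.append(np.mean(vecs, axis=0))
            return np.vstack(embeddings)

        # Fast-prototyping settings along the lines of the README examples.
        params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5,
                  'classifier': {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,
                                 'tenacity': 3, 'epoch_size': 2}}

        se = senteval.engine.SE(params, batcher, prepare)
        # A mix of downstream tasks (MR, CR, ...) and probing tasks (Length, ...).
        transfer_tasks = ['MR', 'CR', 'SUBJ', 'SST2', 'TREC', 'SICKEntailment',
                          'Length', 'WordContent', 'Tense']
        results = se.eval(transfer_tasks)
        print(results)

      The batcher function is where the user's actual sentence encoder would be called; SentEval then trains simple classifiers (logistic regression or an MLP) on top of the returned embeddings for each selected task.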

2018a

  • (Conneau & Kiela, 2018) ⇒ Alexis Conneau and Douwe Kiela. “SentEval: An Evaluation Toolkit for Universal Sentence Representations.” In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).