2023 LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain
- (Niklaus et al., 2023) ⇒ Joel Niklaus, Veton Matoshi, Pooja Rani, Andrea Galassi, Matthias Stürmer, and Ilias Chalkidis. (2023). “LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain.” In: Findings of the Association for Computational Linguistics: EMNLP 2023. doi:10.18653/v1/2023.findings-emnlp.200
Subject Headings: LEXTREME.
Notes
- Also available as the arXiv preprint arXiv:2301.13126.
- It introduces LEXTREME, a multi-lingual and multi-task benchmark specifically designed for the legal domain.
- It includes 11 datasets encompassing 24 different languages, addressing the multilingual aspect of legal NLP.
- It proposes two aggregate scores for comprehensive model evaluation: a dataset aggregate score and a language aggregate score (see the sketch after this list).
- It emphasizes the challenges unique to legal language, such as specialized terminologies and complex sentence structures.
- It establishes baseline evaluations for several models, providing a comparative framework for future research in legal NLP.
- It makes LEXTREME publicly available on the Hugging Face platform, encouraging community participation and development.
- It releases the code required for model evaluation and a public Weights & Biases project for transparency and reproducibility (a data-loading sketch follows this list).
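The aggregate scores are only summarized above; below is a minimal sketch of how such dataset and language aggregates can be computed. The per-(dataset, language) scores and the harmonic-mean aggregation are assumptions for illustration; the paper defines the exact aggregation procedure.

```python
from statistics import harmonic_mean

# Hypothetical per-(dataset, language) macro-F1 scores for one model;
# real numbers come from the evaluation code released with LEXTREME.
scores = {
    ("brazilian_court_decisions", "pt"): 0.62,
    ("swiss_judgment_prediction", "de"): 0.58,
    ("swiss_judgment_prediction", "fr"): 0.55,
    ("multi_eurlex", "en"): 0.67,
    ("multi_eurlex", "de"): 0.63,
}

def aggregate(scores, group_by):
    """Mean over groups of the mean inside each group.

    group_by=0 groups by dataset (dataset aggregate score);
    group_by=1 groups by language (language aggregate score).
    """
    groups = {}
    for key, value in scores.items():
        groups.setdefault(key[group_by], []).append(value)
    # A harmonic mean penalizes models that score well on average but
    # collapse on individual datasets or languages (an assumption here;
    # see the paper for the exact aggregation function).
    return harmonic_mean([harmonic_mean(vals) for vals in groups.values()])

print(f"dataset aggregate score:  {aggregate(scores, group_by=0):.3f}")
print(f"language aggregate score: {aggregate(scores, group_by=1):.3f}")
```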
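Since the benchmark and evaluation code are public, getting started amounts to loading a configuration with the Hugging Face `datasets` library. A minimal sketch; the Hub ID `joelito/lextreme` and the configuration name `swiss_judgment_prediction` are assumptions, so check the LEXTREME page on the Hub for the exact identifiers.

```python
from datasets import load_dataset

# Hub ID and configuration name are assumptions; verify them on the
# LEXTREME page of the Hugging Face Hub before running.
dataset = load_dataset("joelito/lextreme", "swiss_judgment_prediction")

print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # a single example: input text plus label
```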
Cited By
Quotes
Abstract
Lately, propelled by the phenomenal advances around the transformer architecture, the legal NLP field has enjoyed spectacular growth. To measure progress, well curated and challenging benchmarks are crucial. However, most benchmarks are English only and in legal NLP specifically there is no multilingual benchmark available yet. Additionally, many benchmarks are saturated, with the best models clearly outperforming the best humans and achieving near perfect scores. We survey the legal NLP literature and select 11 datasets covering 24 languages, creating LEXTREME. To provide a fair comparison, we propose two aggregate scores, one based on the datasets and one on the languages. The best baseline (XLM-R large) achieves both a dataset aggregate score and a language aggregate score of 61.3. This indicates that LEXTREME is still very challenging and leaves ample room for improvement. To make it easy for researchers and practitioners to use, we release LEXTREME on huggingface together with all the code required to evaluate models and a public Weights and Biases project with all the runs.
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Ilias Chalkidis; Joel Niklaus; Veton Matoshi; Pooja Rani; Andrea Galassi; Matthias Stürmer | | | LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain | | Findings of the Association for Computational Linguistics: EMNLP 2023 | | 10.18653/v1/2023.findings-emnlp.200 | | 2023