Legal (Natural) Language Processing Task


A Legal (Natural) Language Processing Task is a Natural Language Processing Task that involves the interpretation and analysis of legal language and legal documents.



References

2023b

| Cognitive Level | ID | Task | Data Source | Metric | Type |
|---|---|---|---|---|---|
| Legal Knowledge Memorization | 1-1 | Article Recitation | FLK | Rouge-L | Generation |
| Legal Knowledge Memorization | 1-2 | Knowledge Question Answering | JEC_QA | Accuracy | SLC |
| Legal Knowledge Understanding | 2-1 | Document Proofreading | CAIL2022 | F0.5 | Generation |
| Legal Knowledge Understanding | 2-2 | Dispute Focus Identification | LAIC2021 | F1 | MLC |
| Legal Knowledge Understanding | 2-3 | Marital Disputes Identification | AIStudio | F1 | MLC |
| Legal Knowledge Understanding | 2-4 | Issue Topic Identification | CrimeKgAssitant | Accuracy | SLC |
| Legal Knowledge Understanding | 2-5 | Reading Comprehension | CAIL2019 | rc-F1 | Extraction |
| Legal Knowledge Understanding | 2-6 | Named-Entity Recognition | CAIL2022 | soft-F1 | Extraction |
| Legal Knowledge Understanding | 2-7 | Opinion Summarization | CAIL2021 | Rouge-L | Generation |
| Legal Knowledge Understanding | 2-8 | Argument Mining | CAIL2022 | Accuracy | SLC |
| Legal Knowledge Understanding | 2-9 | Event Detection | LEVEN | F1 | MLC |
| Legal Knowledge Understanding | 2-10 | Trigger Word Extraction | LEVEN | soft-F1 | Extraction |
| Legal Knowledge Applying | 3-1 | Fact-based Article Prediction | CAIL2018 | F1 | MLC |
| Legal Knowledge Applying | 3-2 | Scene-based Article Prediction | LawGPT | Rouge-L | Generation |
| Legal Knowledge Applying | 3-3 | Charge Prediction | CAIL2018 | F1 | MLC |
| Legal Knowledge Applying | 3-4 | Prison Term Prediction w.o. Article | CAIL2018 | nLog-distance | Regression |
| Legal Knowledge Applying | 3-5 | Prison Term Prediction w. Article | CAIL2018 | nLog-distance | Regression |
| Legal Knowledge Applying | 3-6 | Case Analysis | JEC_QA | Accuracy | SLC |
| Legal Knowledge Applying | 3-7 | Criminal Damages Calculation | LAIC2021 | Accuracy | Regression |
| Legal Knowledge Applying | 3-8 | Consultation | hualv.com | Rouge-L | Generation |
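
The minimal sketch below (not part of the benchmark's released tooling) illustrates how a task inventory like the one above could be represented programmatically: each task record carries its cognitive level, data source, metric, and output type, and a scoring function is chosen accordingly. The `LegalTask` dataclass, the sample registry entries, and the metric stubs are illustrative assumptions, not the benchmark's actual implementation.

```python
# Illustrative sketch of a legal NLP task registry and two common scorers.
# Task records mirror the table's columns; names and metric stubs are assumptions.
from dataclasses import dataclass

@dataclass
class LegalTask:
    task_id: str        # e.g. "3-3"
    name: str           # e.g. "Charge Prediction"
    level: str          # Memorization / Understanding / Applying
    source: str         # e.g. "CAIL2018"
    metric: str         # e.g. "F1"
    output_type: str    # SLC / MLC / Extraction / Generation / Regression

TASKS = [
    LegalTask("1-1", "Article Recitation", "Memorization", "FLK", "Rouge-L", "Generation"),
    LegalTask("2-9", "Event Detection", "Understanding", "LEVEN", "F1", "MLC"),
    LegalTask("3-3", "Charge Prediction", "Applying", "CAIL2018", "F1", "MLC"),
]

def accuracy(preds, golds):
    """Exact-match accuracy for single-label classification (SLC) tasks."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def micro_f1(pred_sets, gold_sets):
    """Micro-averaged F1 for multi-label classification (MLC) tasks."""
    tp = sum(len(p & g) for p, g in zip(pred_sets, gold_sets))
    pred_total = sum(len(p) for p in pred_sets)
    gold_total = sum(len(g) for g in gold_sets)
    precision = tp / pred_total if pred_total else 0.0
    recall = tp / gold_total if gold_total else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

if __name__ == "__main__":
    # Toy multi-label example for a charge-prediction-style (MLC) task.
    gold = [{"theft"}, {"fraud", "forgery"}]
    pred = [{"theft"}, {"fraud"}]
    print("micro-F1:", round(micro_f1(pred, gold), 3))
```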

2020

  • (Chalkidis et al., 2020) ⇒ Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. (2020). “LEGAL-BERT: The Muppets Straight Out of Law School.” arXiv preprint arXiv:2010.02559. DOI:10.48550/arXiv.2010.02559.
    • ABSTRACT: BERT has achieved impressive performance in several NLP tasks. However, there has been limited investigation on its adaptation guidelines in specialised domains. Here we focus on the legal domain, where we explore several approaches for applying BERT models to downstream legal tasks, evaluating on multiple datasets. Our findings indicate that the previous guidelines for pre-training and fine-tuning, often blindly followed, do not always generalize well in the legal domain. Thus we propose a systematic investigation of the available strategies when applying BERT in specialised domains. These are: (a) use the original BERT out of the box, (b) adapt BERT by additional pre-training on domain-specific corpora, and (c) pre-train BERT from scratch on domain-specific corpora. We also propose a broader hyper-parameter search space when fine-tuning for downstream tasks and we release LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.
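
As a rough illustration of strategies (a) and (b) from the abstract, the sketch below loads a LEGAL-BERT checkpoint from the Hugging Face Hub and attaches a freshly initialized classification head for a downstream legal task. It is a minimal sketch, not the paper's experimental setup: the three-way label count and the example sentences are toy assumptions, and the code assumes the `nlpaueb/legal-bert-base-uncased` checkpoint published by the authors is available for download.

```python
# Minimal sketch: load a LEGAL-BERT checkpoint and run a (still untrained)
# sequence-classification head on two toy legal sentences.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "nlpaueb/legal-bert-base-uncased"  # authors' released checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID,
    num_labels=3,  # toy 3-way label set (assumption, not from the paper)
)

texts = [
    "The defendant unlawfully entered the premises and removed property.",
    "The parties dispute the interpretation of clause 4.2 of the agreement.",
]
inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Predicted label indices; meaningless until the head is fine-tuned on a legal task.
print(logits.argmax(dim=-1))
```

Fine-tuning this head on a dataset such as those listed in the task table above (e.g., a charge-prediction corpus) would follow the standard sequence-classification recipe; the paper's point is that the usual pre-training and fine-tuning defaults may need a broader hyper-parameter search in the legal domain.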