2021 CUADAnExpertAnnotatedNlpDataset

From GM-RKB
* ([[2021_CUADAnExpertAnnotatedNlpDataset|Hendrycks et al., 2021]]) &rArr; [[author::Dan Hendrycks]], [[author::Collin Burns]], [[author::Anya Chen]], and [[author::Spencer Ball]]. ([[year::2021]]). &ldquo;[https://arxiv.org/pdf/2103.06268.pdf Cuad: An Expert-annotated Nlp Dataset for Legal Contract Review].&rdquo; In: arXiv preprint arXiv:2103.06268. [http://dx.doi.org/10.48550/arXiv.2103.06268 doi:10.48550/arXiv.2103.06268]


<B>Subject Headings:</B>


== Notes ==


== Cited By ==
* http://scholar.google.com/scholar?q=%222021%22+Cuad%3A+An+Expert-annotated+Nlp+Dataset+for+Legal+Contract+Review


== Quotes ==


=== Abstract ===


Many specialized domains remain untouched by deep learning, as large labeled datasets require expensive expert annotators. We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new dataset for legal contract review. CUAD was created with dozens of legal experts from The Atticus Project and consists of over 13,000 annotations. The task is to highlight salient portions of a contract that are important for a human to review. We find that Transformer models have nascent performance, but that this performance is strongly influenced by model design and training dataset size. Despite these promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
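The review task described in the abstract amounts to span extraction over contract text: each expert annotation marks the character range of a clause that a reviewer should see. The toy sketch below illustrates that framing; the `(label, answer_start, text)` record layout is an assumption for illustration (in the spirit of the SQuAD-style extractive-QA format the dataset is distributed in), not CUAD's exact schema.

```python
def extract_annotated_spans(contract_text, annotations):
    """Return (clause_label, span_text) pairs for each annotation,
    checking that the stored text matches the recorded offset."""
    spans = []
    for ann in annotations:
        start = ann["answer_start"]          # character offset into the contract
        end = start + len(ann["text"])
        span = contract_text[start:end]
        if span != ann["text"]:              # guard against misaligned offsets
            raise ValueError(f"offset mismatch for {ann['label']!r}")
        spans.append((ann["label"], span))
    return spans

# Hypothetical single-annotation example (not drawn from the dataset):
contract = "This Agreement shall be governed by the laws of Delaware."
annotations = [
    {"label": "Governing Law", "answer_start": 40, "text": "laws of Delaware"},
]
print(extract_annotated_spans(contract, annotations))
# → [('Governing Law', 'laws of Delaware')]
```

A model for this task predicts the `answer_start`/`text` fields given the contract and a clause category; the offset check above is the invariant any such prediction must satisfy.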


== References ==
{{#ifanon:|



Revision as of 16:41, 7 August 2023

 @article{2021_CUADAnExpertAnnotatedNlpDataset,
   author    = {Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
   title     = {CUAD: An Expert-annotated Nlp Dataset for Legal Contract Review},
   journal   = {arXiv preprint arXiv:2103.06268},
   year      = {2021},
   doi       = {10.48550/arXiv.2103.06268}
 }
}}