Legal Contract Review Benchmark Task
A Legal Contract Review Benchmark Task is a legal contract review task that is a legal benchmark task.
- Context:
- It can be solved by a Contract Review System, such as an AI-assisted contract review system.
- ...
- Example(s):
- a Contract Understanding Atticus Dataset (CUAD).
- a Contract Risk Identification Benchmark Task.
- a Contract Error Identification Benchmark Task.
- a Contract Inconsistency Identification Benchmark Task.
- a Contract Improvement Suggestion Benchmark Task.
- a Contract Compliance Review Benchmark Task.
- a Contract-Type Specific Review Benchmark Task, such as: an NDA Review Benchmark Task or a Purchase Agreement Review Benchmark Task.
- ...
- Counter-Example(s):
- See: Contract, Contract Management, Legal Technology, ContractNLI, Contract Agreement, e-Discovery.
References
2021
- (Hendrycks et al., 2021) ⇒ Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. (2021). “CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review.” In: arXiv preprint arXiv:2103.06268. doi:10.48550/arXiv.2103.06268
- ABSTRACT: Many specialized domains remain untouched by deep learning, as large labeled datasets require expensive expert annotators. We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new dataset for legal contract review. CUAD was created with dozens of legal experts from The Atticus Project and consists of over 13,000 annotations. The task is to highlight salient portions of a contract that are important for a human to review. We find that Transformer models have nascent performance, but that this performance is strongly influenced by model design and training dataset size. Despite these promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
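- NOTE: The CUAD task described in this abstract is framed as extractive span selection: given a contract and a clause category, the system highlights the spans a human reviewer should examine. Below is a minimal sketch of inspecting that format, assuming the dataset is mirrored on the Hugging Face Hub under the id "cuad" in a SQuAD-style question-answering layout (the field names are an assumption based on that layout):

    # Minimal sketch: inspect one CUAD example, assuming the "cuad" dataset
    # on the Hugging Face Hub uses a SQuAD-style extractive-QA layout.
    from datasets import load_dataset

    cuad = load_dataset("cuad", split="train")

    example = cuad[0]
    print(example["question"])        # clause category to locate (e.g., a governing-law clause)
    print(example["context"][:300])   # the contract text to be reviewed
    print(example["answers"])         # gold spans: {"text": [...], "answer_start": [...]}

- Each question/contract pair asks whether, and where, a given clause category appears, so standard extractive QA models can be evaluated directly against the annotated spans.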
2021
- (Koreeda & Manning, 2021) ⇒ Yuta Koreeda, and Christopher D. Manning. (2021). “ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts.” doi:10.48550/arXiv.2110.01799