Law-Related Question Answering (QA) Task
A Law-Related Question Answering (QA) Task is a law-related query-replying task that is a domain-specific question answering task.
- Context:
- Input: a Law-Related Question.
- It can be solved by a Law-Related QA System (that implements a law-related QA algorithm).
- ...
- Example(s):
- a Legal Contract-Related QA, such as “What is the obligation duration of this contract?”
- ...
- Counter-Example(s):
- a Medical QA Task.
- ...
- See: Domain-Specific Question Answering Task.
References
2024d
- (Roegiest & Chitta, 2024) ⇒ [[::Adam Roegiest]], and [[::Radha Chitta]]. ([[::2024]]). “Answering Questions in Stages: Prompt Chaining for Contract QA.” doi:10.48550/arXiv.2410.12840
- NOTES: The paper contributes to Law-Related Question Answering (QA) by introducing a novel two-stage prompt chaining approach for complex legal clauses. The method aims to improve the performance of large language models in generating structured answers to multiple-choice and multiple-select questions about contracts. By first summarizing relevant legal text and then mapping this summary to predefined answer options, the approach shows promise in handling nuanced legal language, particularly for questions about change of control, assignment, and insurance clauses. However, it also reveals limitations when dealing with high linguistic variation, as seen in force majeure clauses.
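- The two-stage chain described in these notes can be pictured with a minimal sketch, assuming the OpenAI chat completions API; the prompt wording, option lists, and helper names below are illustrative assumptions, not the paper's actual templates.

```python
# Minimal sketch of a two-stage prompt chain for contract QA:
# stage 1 summarizes the clause, stage 2 maps the summary onto
# predefined answer options. Prompts and options are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

def answer_contract_question(clause_text: str, question: str, options: list[str]) -> str:
    # Stage 1: condense the clause into a short, question-focused summary.
    summary = ask(
        f"Summarize the following contract clause, focusing on details "
        f"relevant to the question '{question}':\n\n{clause_text}"
    )
    # Stage 2: map the summary onto one of the predefined answer options.
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return ask(
        f"Question: {question}\n"
        f"Clause summary: {summary}\n"
        f"Answer options:\n{numbered}\n"
        f"Reply with the single option that best matches the summary, verbatim."
    )

# Hypothetical use for a change-of-control question:
# answer_contract_question(clause, "Does a change of control require consent?",
#                          ["Consent is required", "Notice only", "Silent on change of control"])
```

- Keeping the two calls separate lets the mapping step operate on a short, question-focused summary rather than the full clause text, which is the motivation for prompt chaining described in the notes above.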
2023
- (Roegiest et al., 2023) ⇒ Adam Roegiest, Radha Chitta, Jonathan Donnelly, Maya Lash, Alexandra Vtyurina, and François Longtin. (2023). “A Search for Prompts: Generating Structured Answers from Contracts.” In: arXiv preprint arXiv:2310.10141. doi:10.48550/arXiv.2310.10141
- QUOTE:
- "In many legal processes, lawyers look to take an action on a particular answer option rather than a summary or the raw text of a clause itself."
- "An ideal system would produce a concise and consistent answer when asked “who indemnifies whom?” with respect to the clause."
- "After showing that unstructured generative question answering can have questionable outcomes for such a task, we discuss our exploration methodology for legal question answering prompts using OpenAI's GPT-3.5-Turbo and provide a summary of insights."
- "Using insights gleaned from our qualitative experiences, we compare our proposed template prompts against a common semantic matching approach and find that our prompt templates are far more accurate despite being less reliable in the exact response return."
- "We are able to further improve the performance of our proposed strategy while maximizing the reliability of responses as best we can."
- NOTES:
- It investigates Structured-Output Legal QA (structured output QA); see the sketch after these notes.
- Answering "who indemnifies whom?" for environmental indemnity clauses in contracts. The possible structured answers are predefined options like "Tenant indemnifies Landlord" or "There is mutual indemnification."
- Answering "For what purpose are the parties sharing information according to the clause?" for confidential information sharing clauses. Possible structured answers include options like "For use in the ordinary course of business" or "For regulatory compliance purposes."