Question-Answering Benchmark Task
A Question-Answering Benchmark Task is a question-answering task that is also a benchmark task.
- Context:
- It can (typically) include a QA Benchmark Dataset.
- It can (typically) include a QA Performance Measure.
- ...
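The two typical components above (a QA benchmark dataset and a QA performance measure) can be illustrated with a minimal sketch, assuming SQuAD-style exact-match and token-F1 scoring; the toy dataset and predictions below are purely illustrative, not from any real benchmark.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and English articles, collapse whitespace (SQuAD-style)."""
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Harmonic mean of token-level precision and recall after normalization."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A benchmark task pairs a dataset of (question, gold answer) items
# with an aggregate performance measure over system predictions.
dataset = [
    ("Who wrote Hamlet?", "William Shakespeare"),
    ("What is the capital of France?", "Paris"),
]
predictions = ["Shakespeare", "The capital is Paris"]

em = sum(exact_match(p, g) for p, (_, g) in zip(predictions, dataset)) / len(dataset)
f1 = sum(token_f1(p, g) for p, (_, g) in zip(predictions, dataset)) / len(dataset)
```

Real benchmarks usually allow multiple gold answers per question and take the maximum score over them; that detail is omitted here for brevity.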
- Example(s):
- a TREC QA Task.
- a Jeopardy! Game.
- a VQA Benchmark Task.
- a SQuAD Benchmark Task.
- …
- Counter-Example(s):
- an NER Benchmark Task (for NER tasks).
- an IE Benchmark Task (for IE tasks).
- a Parsing Benchmark Task (for NL parsing tasks).
- See: QA System.
References
2023
- (Aksitov et al., 2023) ⇒ Renat Aksitov, Sobhan Miryoosefi, Zonglin Li, Daliang Li, Sheila Babayan, Kavya Kopparapu, Zachary Fisher, Ruiqi Guo, Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer, Felix Yu, and Sanjiv Kumar. (2023). “ReST Meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent.” doi:10.48550/arXiv.2312.10003
- QUOTE: ... Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model that achieves comparable performance on challenging compositional question-answering benchmarks with two orders of magnitude fewer parameters.