SQuADRUn Dataset
A SQuADRUn Dataset is a reading comprehension dataset that combines the SQuAD 1.1 dataset with a new set of 53,775 unanswerable questions about the same paragraphs.
- AKA: SQuAD with adveRsarial UNanswerable questions (SQuADRUn) Dataset, SQuAD 2.0 Dataset.
- Context:
- Datasets available online at: https://rajpurkar.github.io/SQuAD-explorer/ (a minimal loading sketch follows this list).
- Example(s):
- Counter-Example(s):
- a CoQA Dataset,
- a SQuAD Dataset,
- a RACE Dataset.
- See: Question-Answering System, Natural Language Processing Task, Natural Language Understanding Task, Natural Language Generation Task.
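
The released JSON files follow the SQuAD 1.1 layout, extended with an `is_impossible` flag on each question (unanswerable questions carry an empty `answers` list and, in the released files, a `plausible_answers` list). A minimal sketch for counting answerable versus unanswerable questions, assuming the official `dev-v2.0.json` file has been downloaded from the explorer site above (the local path is illustrative):

```python
import json

# Assumes dev-v2.0.json was downloaded from
# https://rajpurkar.github.io/SQuAD-explorer/ (path is illustrative).
with open("dev-v2.0.json") as f:
    squad = json.load(f)

answerable = unanswerable = 0
for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            # SQuAD 2.0 marks adversarially written questions with
            # is_impossible; these have no gold answer span.
            if qa.get("is_impossible", False):
                unanswerable += 1
            else:
                answerable += 1

print(f"answerable: {answerable}, unanswerable: {unanswerable}")
```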
References
2020a
- (CodaLab, 2020) ⇒ https://worksheets.codalab.org/worksheets/0x9a15a170809f4e2cb7940e1f256dee55/ Retrieved: 2020-12-13.
- QUOTE: This paper introduces SQuADRUn, the 2.0 release of the Stanford Question Answering Dataset (SQuAD). SQuADRUn combines the existing data in SQuAD 1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to appear answerable. To succeed on SQuADRUn, models must both answer questions when possible, and abstain from answering when no answer is supported by the text.
2020b
- (Rajpurkar, 2020) ⇒ https://rajpurkar.github.io/SQuAD-explorer/ Retrieved: 2020-12-13.
- QUOTE: SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
2018
- (Rajpurkar et al., 2018) ⇒ Pranav Rajpurkar, Robin Jia, and Percy Liang. (2018). “Know What You Don't Know: Unanswerable Questions for SQuAD”. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), Volume 2: Short Papers.
- QUOTE: In this work, we construct SQuADRUn[1], a new dataset that combines the existing questions in SQuAD with 53,775 new, unanswerable questions about the same paragraphs. Crowdworkers crafted these questions so that (1) they are relevant to the paragraph, and (2) the paragraph contains a plausible answer—something of the same type as what the question asks for.
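
In SQuAD 2.0 evaluation, a system abstains by predicting the empty string for a question it judges unanswerable. One common decision rule, in the spirit of the answer-or-abstain requirement quoted above, compares the model's best span score against a no-answer score using a tunable threshold; a minimal sketch, with all names illustrative rather than taken from the paper:

```python
def predict_or_abstain(best_span_text: str, span_score: float,
                       no_answer_score: float, threshold: float = 0.0) -> str:
    """Return the predicted answer, or "" to abstain (SQuAD 2.0 convention).

    span_score and no_answer_score are assumed model outputs; the
    threshold would be tuned on the dev set.
    """
    if no_answer_score - span_score > threshold:
        return ""  # no answer is supported by the paragraph
    return best_span_text
```

The threshold trades off precision and recall on unanswerable questions and is typically tuned to maximize dev-set F1.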