2016 SQuAD: 100,000+ Questions for Machine Comprehension of Text
- (Rajpurkar et al., 2016) ⇒ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. (2016). “SQuAD: 100,000+ Questions for Machine Comprehension of Text.” In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), Austin, Texas, USA, November 1-4, 2016.
Subject Headings: Natural Language Understanding; Question-Answering System; SQuAD.
Notes
Cited By
Quotes
Abstract
We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL.
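Because SQuAD answers are text spans, the F1 figures quoted above (51.0% for the logistic regression model, 86.8% for humans) are computed at the token level between a predicted span and a gold span. The following is a minimal Python sketch of such a span-level F1 metric, modeled on the normalization steps (lowercasing, stripping punctuation and articles) used by the official SQuAD evaluation script; the example strings are illustrative only.

```python
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, drop punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def span_f1(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between a predicted answer span and a gold answer span."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# Partial overlap between a predicted and a gold span scores between 0 and 1.
print(span_f1("the Stanford Question Answering Dataset",
              "Stanford Question Answering Dataset (SQuAD)"))
```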
References
| Page | Author | Title | Year |
|---|---|---|---|
| 2016 SQuAD100000QuestionsforMachineC | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | SQuAD: 100,000+ Questions for Machine Comprehension of Text | 2016 |