Question Answering (QA) from a Corpus Task
A Question Answering (QA) from a Corpus Task is a question answering task in which answers must be derived from a given corpus of documents rather than from a structured knowledge base.
- Context:
- It can be solved by a QA from Corpus System (that solves a QA from a Corpus Task).
- It can range from being a QA from a Single Document Task to being a QA from Multiple Documents Task.
- It can (typically) require reading comprehension and reasoning abilities.
- It can (typically) be supported by Information Retrieval, Information Extraction, Context Understanding, and Entity Relationship Recognition (see the retrieve-then-read sketch after this list).
- ...
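The interplay between retrieval and reading can be made concrete with a minimal retrieve-then-read sketch. The snippet below is an illustration only: it assumes a toy in-memory corpus, a TF-IDF retriever, and a word-overlap "reader"; the `answer` function and the corpus strings are hypothetical and do not describe any particular QA from Corpus System.

```python
# Minimal retrieve-then-read sketch (toy illustration, not a reference system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # toy stand-in for a real document collection
    "SQuAD is a reading comprehension dataset built from Wikipedia articles.",
    "TriviaQA contains trivia questions paired with evidence documents.",
    "MS MARCO is built from real anonymized Bing search queries.",
]

def answer(question: str, docs: list[str]) -> str:
    # Retrieval step: rank documents by TF-IDF cosine similarity to the question.
    vectorizer = TfidfVectorizer().fit(docs + [question])
    doc_vectors = vectorizer.transform(docs)
    query_vector = vectorizer.transform([question])
    best_doc = docs[cosine_similarity(query_vector, doc_vectors).argmax()]
    # Reading step (toy): return the sentence with the largest word overlap.
    question_tokens = set(question.lower().split())
    sentences = [s.strip() for s in best_doc.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(question_tokens & set(s.lower().split())))

print(answer("Which dataset is built from Bing queries?", corpus))
```

In a real QA from Corpus System, the retriever would index a large document collection (for example with an inverted index or dense embeddings) and the reader would be a trained machine reading comprehension model that extracts or generates the answer.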
- Example(s):
- one based on SQuAD (Stanford Question Answering Dataset) - a large-scale reading comprehension dataset of crowd-sourced questions over Wikipedia passages (see the loading snippet after this list).
- one based on TriviaQA - a large-scale QA dataset of trivia question-answer pairs coupled with evidence documents.
- one based on MS MARCO (Microsoft MAchine Reading COmprehension) - a dataset built from real user search queries with human-generated answers.
- ...
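To see what a single QA-from-corpus example looks like, one can inspect a SQuAD instance. The sketch below assumes the Hugging Face `datasets` library is installed and relies on its published "squad" schema (`question`, `context`, and `answers` fields); it is a usage illustration, not part of the dataset's own tooling.

```python
# Hedged sketch: inspect one SQuAD example with the Hugging Face `datasets`
# library (assumes `pip install datasets` and access to the dataset hub).
from datasets import load_dataset

squad = load_dataset("squad", split="validation")
example = squad[0]
print(example["question"])         # natural-language question
print(example["context"])          # supporting Wikipedia passage
print(example["answers"]["text"])  # gold answer span(s) found in the context
```

Each record pairs a question with a supporting passage from the corpus and one or more answer spans, which is the typical supervision format for extractive QA from a corpus.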
- Counter-Example(s):
- QA from Knowledge Base - answering questions using a structured knowledge base rather than a corpus of documents.
- See: Reading Documents, Information Retrieval, Machine Reading Comprehension.
References
2016
- (Miller et al., 2016) ⇒ Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. (2016). “Key-Value Memory Networks for Directly Reading Documents.” In: arXiv preprint arXiv:1606.03126.
- QUOTE: Directly reading documents and being able to answer questions from them is an unsolved challenge.