Question Answering (QA) Task
A question answering (QA) task is a query-replying task that requires producing answers to questions (produced by a question act).
- Context:
- Input: a Question Text Item (illustrated in the sketch after this list).
- optional: a Question Type.
- optional: a Question-Answer Pair sample set.
- optional: a Corpus.
- optional: a Knowledge Base.
- Output: a Question's Answer.
- optional: evidence (often in the form of relevant text passages).
- It can range from being a Human-solved QA Task to being an Automated QA Task (solved by a QA system that implements a QA algorithm).
- It can range from being a Domain-Specific Question Answering Task to being an Open-Domain Question Answering Task.
- It can range from being a Batch QA Task to being a Real-Time QA Task.
- It can range from being a Simple QA Task (such as factoid QA) to being a Complex QA Task.
- It can range from being a Single-Query QA Task to being a Multi-Query QA Task.
- It can range from being a Stateless QA Task to being a Stateful QA Task.
- It can range from being a Natural Language QA Task to being a Structured QA Task.
- It can range from being a User-Initiated QA Task to being a System-Initiated QA Task.
- It can range from being a Text-Based QA Task to being a Multimodal QA Task.
- It can range from being a Public Information-based QA Task to being a Private Information-based QA Task.
- It can be a member of a Question Answering Set (such as a QA dataset).
- …
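The input/output structure listed above can be sketched as a pair of simple data containers. This is a minimal, hypothetical Python sketch; the class and field names are illustrative assumptions, not part of any standard QA interface.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class QATaskInput:
    """Illustrative container for a QA task's inputs (field names are assumptions)."""
    question_text: str                                   # required: the Question Text Item
    question_type: Optional[str] = None                  # optional: e.g., "factoid", "list", "definition"
    qa_pair_samples: List[Tuple[str, str]] = field(default_factory=list)  # optional: (question, answer) examples
    corpus: Optional[List[str]] = None                   # optional: documents to draw evidence from
    knowledge_base: Optional[dict] = None                # optional: structured facts

@dataclass
class QATaskOutput:
    """Illustrative container for a QA task's outputs."""
    answer: str                                          # the Question's Answer
    evidence: List[str] = field(default_factory=list)    # optional: supporting text passages

# Example instance for a factoid question from the Example(s) list below.
task_input = QATaskInput(question_text="How tall is Mount Everest?", question_type="factoid")
task_output = QATaskOutput(answer="about 8,849 metres",
                           evidence=["Mount Everest rises to roughly 8,849 m above sea level."])
```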
- Example(s):
- a Factoid QA, for factoid questions, such as: “How many calories are there in a Big Mac?” and “How tall is Mount Everest?”
- a List QA, for list-style questions, such as: “List the names of chewing gums.”
- a Definition QA, for definitional questions, such as: “What is a golden parachute?”
- a Visual QA Task.
- a QA from Corpus Task.
- a QA from Knowledge Base Task.
- a QA Benchmark Task, such as: TREC Question Answering Track.
- a Contract-Related Question Answering (QA) Task.
- …
- Counter-Example(s):
- See: QA Dataset, Natural Language Processing, Question & Answering Dialogue, Lambda Calculus, Formal Meaning Representation, Semantic Construction Mechanism, Dependency-Based Compositional Semantics (DCS), Combinatory Categorial Grammar (CCG), Word Sense Disambiguation (WSD), Hidden Markov Model (HMM).
References
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Question_answering#Types_of_question_answering Retrieved:2023-9-10.
- Question-answering research attempts to develop ways of answering a wide range of question types, including fact, list, definition, how, why, hypothetical, semantically constrained, and cross-lingual questions.
- Answering questions related to an article in order to evaluate reading comprehension is one of the simpler forms of question answering, since a given article is relatively short compared to the domains of other types of question-answering problems. An example of such a question is "What did Albert Einstein win the Nobel Prize for?" after an article about this subject is given to the system.
- Closed-book question answering is when a system has memorized some facts during training and can answer questions without explicitly being given a context. This is similar to humans taking closed-book exams.
- Closed-domain question answering deals with questions under a specific domain (for example, medicine or automotive maintenance) and can exploit domain-specific knowledge frequently formalized in ontologies. Alternatively, "closed-domain" might refer to a situation where only a limited type of questions are accepted, such as questions asking for descriptive rather than procedural information. Question answering systems in the context of machine reading applications have also been constructed in the medical domain, for instance related to Alzheimer's disease (Roser Morante, Martin Krallinger, Alfonso Valencia, and Walter Daelemans. "Machine Reading of Biomedical Texts about Alzheimer's Disease." CLEF 2012 Evaluation Labs and Workshop, September 17, 2012).
- Open-domain question answering deals with questions about nearly anything and can only rely on general ontologies and world knowledge. Systems designed for open-domain question answering usually have much more data available from which to extract the answer. An example of an open-domain question is "What did Albert Einstein win the Nobel Prize for?" while no article about this subject is given to the system. ...
2015a
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/question_answering Retrieved:2015-9-10.
- Question Answering (QA) is a computer science discipline within the fields of information retrieval and natural language processing (NLP), which is concerned with building systems that automatically answer questions posed by humans in a natural language.
A QA implementation, usually a computer program, may construct its answers by querying a structured database of knowledge or information, usually a knowledge base. More commonly, QA systems can pull answers from an unstructured collection of natural language documents.
Some examples of natural language document collections used for QA systems include:
- a local collection of reference texts
- internal organization documents and web pages
- compiled newswire reports
- a set of Wikipedia pages
- a subset of World Wide Web pages
- QA research attempts to deal with a wide range of question types including: fact, list, definition, how, why, hypothetical, semantically constrained, and cross-lingual questions.
- Closed-domain question answering deals with questions under a specific domain (for example, medicine or automotive maintenance), and can be seen as an easier task because NLP systems can exploit domain-specific knowledge frequently formalized in ontologies. Alternatively, closed-domain might refer to a situation where only a limited type of questions are accepted, such as questions asking for descriptive rather than procedural information. QA systems in the context of machine reading applications have also been constructed in the medical domain, for instance related to Alzheimer's disease.[1]
- Open-domain question answering deals with questions about nearly anything, and can only rely on general ontologies and world knowledge. On the other hand, these systems usually have much more data available from which to extract the answer.
- ↑ Roser Morante, Martin Krallinger, Alfonso Valencia, and Walter Daelemans. "Machine Reading of Biomedical Texts about Alzheimer's Disease." CLEF 2012 Evaluation Labs and Workshop. September 17, 2012.
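The retrieval-based setting described in the 2015 Wikipedia excerpt above (pulling answers from an unstructured collection of natural language documents) can be sketched as follows. This is a toy illustration only: the keyword-overlap retriever and the function name are assumptions, and real systems use proper indexing plus an answer-extraction step.

```python
# Toy sketch of corpus-based QA: retrieve the best-matching passage from an
# unstructured document collection and return it as answer plus evidence.
# Real systems use inverted indexes or dense retrieval and then extract a
# precise answer span; keyword overlap here is purely illustrative.

def answer_from_corpus(question: str, corpus: list[str]) -> tuple[str, str]:
    """Return (answer_passage, evidence_passage) for a question over a document collection."""
    q_terms = set(question.lower().split())
    # Score each passage by crude keyword overlap with the question.
    scored = [(len(q_terms & set(passage.lower().split())), passage) for passage in corpus]
    best_score, best_passage = max(scored)
    # In this sketch the retrieved passage doubles as both answer and evidence.
    return best_passage, best_passage

corpus = [
    "Albert Einstein won the Nobel Prize in Physics for the photoelectric effect.",
    "Mount Everest is Earth's highest mountain above sea level.",
]
print(answer_from_corpus("What did Albert Einstein win the Nobel Prize for?", corpus))
```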
2015b
- (Liu et al., 2015) ⇒ Kang Liu, Jun Zhao, Shizhu He, and Yuanzhe Zhang. (2015). “Question Answering over Knowledge Bases.” In: Intelligent Systems, IEEE, 30(5). doi:10.1109/MIS.2015.70
- QUOTE: Question answering over knowledge bases is a challenging task for next-generation search engines. The core of this task is to understand the meaning of questions and translate them into structured language-based queries.
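The step Liu et al. describe, translating a natural-language question into a structured query over a knowledge base, can be illustrated with a small sketch. The question pattern, the predicate name, and the SPARQL-like output are hypothetical assumptions; actual KB-QA systems rely on learned semantic parsers and a real KB schema.

```python
# Toy illustration of translating a question into a structured query, per the
# Liu et al. (2015) description. Pattern and predicate names are hypothetical.
import re

def question_to_sparql(question: str) -> str:
    """Map a narrow class of 'How tall is X?' questions to a SPARQL-like query string."""
    match = re.match(r"How tall is (?P<entity>.+)\?", question)
    if not match:
        raise ValueError("question pattern not supported by this toy parser")
    entity = match.group("entity").replace(" ", "_")
    # Hypothetical predicate; a real knowledge base defines its own schema.
    return f"SELECT ?height WHERE {{ :{entity} :hasHeight ?height . }}"

print(question_to_sparql("How tall is Mount Everest?"))
# SELECT ?height WHERE { :Mount_Everest :hasHeight ?height . }
```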
2006
- (Strzalkowski & Harabagiu, 2006) ⇒ Tomek Strzalkowski (editor), and Sanda M. Harabagiu (editor). (2006). “Advances in Open Domain Question Answering.” Springer. doi:10.1007/978-1-4020-4746-6 ISBN:978-1-4020-4744-2
- QUOTE: Automatic question answering has long been studied in artificial intelligence research. Early systems were constrained to answering questions about limited domains, for example, baseball statistics or lunar rocks. By the 1990s, question answering has become a less active field of research. During the 1990s the U.S. Government funded the Tipster program for research on information retrieval and information extraction. As discussed by Maiorano in his chapter “Question answering: Technology for intelligence analysis,” it was hoped that the template-filling of Tipster’s information extraction tasks would require researchers to develop systems capable of deep natural language understanding. Instead, shallow techniques were able to perform well on those tasks. Similarly, the AQUAINT program is seen as building on information retrieval and information-extraction technology to provide systems that can extract answers from open-domain free text for information seekers, rather than just ranked lists of documents that might answer the question when read. Again, there is the view that achieving a question-answering capability will require deep natural language understanding. Although some of the chapters in this volume describe long-range goals of achieving levels of question answering requiring deep understanding, much of the research described here so far has focused on simpler question-answering tasks, such as the factoid question answering of the TREC QA track. As the book shows, however, factoid, or slightly more complex, question answering can benefit from a variety of more or less deep approaches.
2003
- (Voorhees, 2003) ⇒ Ellen Voorhees. (2003). “Overview of the TREC 2003 Question Answering Track.” In: Proceedings of the TREC-12 Conference. NIST.
- QUOTE: TREC introduced the first question answering (QA) track in TREC-8 (1999). The goal of the track is to foster research on systems that retrieve answers rather than documents in response to a question, with particular emphasis on systems that can function in unrestricted domains. The tasks in the track have evolved over the years to focus research on particular aspects of the problem deemed important to improving the state-of-the-art.
2002
- (Rinaldi et al., 2002) ⇒ Fabio Rinaldi, James Dowdall, Michael Hess, Diego Molla, and Rolf Schwitter. (2002). “Towards Answer Extraction: An application to technical domains.” In: Proceedings of the 15th European Conference on Artificial Intelligence.
- QUOTE: Answer Extraction (also called Question Answering, or QA) is a recently developed field, which tries to solve some of the problems described above. Answer Extraction systems typically allow the user to ask arbitrary questions and aim at retrieving, in a given corpus, a small snippet of text which provides an answer to that question. Research in this area has been promoted in the past couple of years by, in particular, the QA track of the TREC competitions ...
2001
- (Voorhees, 2001) ⇒ Ellen M. Voorhees. (2001). “Overview of the TREC 2001 Question Answering Track.” In: Proceedings of the TREC-10 Conference. NIST.
- QUOTE: As mentioned above, one of the goals for the TREC 2001 QA track was to require systems to assemble an answer from information located in multiple documents. Such questions are harder to answer than the questions used in the main task since information duplicated in the documents must be detected and reported only once. (...)
List results were evaluated using accuracy, the number of distinct responses divided by the target number of instances. Note that since unsupported responses could be marked distinct, the reported accuracy is a lenient evaluation. Table 4 gives the average accuracy scores for all of the list task submissions. Given the way the questions were constructed for the list task, the list task questions were intrinsically easier than the questions in the main task. Most systems found at least one instance for most questions. Each system returned some duplicate responses, but duplication was not a major source of error for any of the runs. (Each run contained many more wrong responses than duplicate responses.) With just 18 runs, there is not enough data to know if the lack of duplication is because the systems are good at recognizing and eliminating duplicate responses, or if there simply wasn't all that much duplication in the document set. (...)
The list task will be repeated in essentially the same form as TREC 2001. NIST will attempt to find naturally occurring list questions in logs, but appropriate questions are rare, so some constructed questions may also be used. We hope also to have a new context task, though the exact nature of that task is still undefined.
The main focus of the ARDA AQUAINT program is to move beyond the simple factoid questions that have been the focus of the TREC tracks. Of particular concern for evaluation is how to score responses that cannot be marked simply correct/incorrect, but instead need to incorporate a fine-grained measure of the quality of the response.
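The list-task accuracy described in the quote above (the number of distinct responses divided by the target number of instances) reduces to a single division once human assessors have judged which responses are distinct. The sketch below only performs that final step; the function name is an illustrative assumption.

```python
def list_accuracy(num_distinct_responses: int, target_num_instances: int) -> float:
    """Accuracy for a TREC list question: distinct responses / target number of instances.

    Per the quote above, unsupported responses could still be marked distinct,
    so this is a lenient measure; judging distinctness is a manual assessment
    step, and this sketch only computes the final ratio.
    """
    return num_distinct_responses / target_num_instances

# e.g., 3 distinct instances returned for a question whose target is 4 instances:
print(list_accuracy(3, 4))  # 0.75
```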
1977
- (Lehnert, 1977) ⇒ Wendy G. Lehnert. (1977). “The Process of Question Answering - A Computer Simulation of Cognition.” Yale University. ISBN:0-470-26485-3
- ABSTRACT: Problems in computational question answering assume a new perspective when question answering is viewed as a problem in natural language processing. A theory of question answering has been proposed which relies on ideas in conceptual information processing and theories of human memory organization. This theory of question answering has been implemented in a computer program, QUALM, currently being used by two story understanding systems to complete a natural language processing system which reads stories and answers questions about what was read. The processes in QUALM are divided into 4 phases: (1) Conceptual categorization which guides subsequent processing by dictating which specific inference mechanisms and memory retrieval strategies should be invoked in the course of answering a question; (2) Inferential analysis which is responsible for understanding what the questioner really meant when a question should not be taken literally; (3) Content specification which determines how much of an answer should be returned in terms of detail and elaborations, and (4) Retrieval heuristics which do the actual digging to extract an answer from memory.
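The four QUALM phases enumerated in the abstract can be read as a sequential pipeline. The skeleton below is only a schematic rendering of that description under that reading; all function names and placeholder bodies are assumptions, not Lehnert's QUALM implementation.

```python
# Schematic rendering of the four QUALM phases listed in the abstract above.
# All names are placeholders; this is not Lehnert's implementation.

def answer_question(question: str, memory: dict) -> str:
    category = conceptually_categorize(question)               # (1) pick inference/retrieval strategies
    intended_question = inferential_analysis(question)         # (2) recover what the questioner really meant
    detail_level = content_specification(intended_question)    # (3) decide how much detail to return
    return retrieve_answer(intended_question, memory, category, detail_level)  # (4) dig the answer out of memory

# Placeholder phase implementations so the skeleton runs end to end.
def conceptually_categorize(question):
    return "causal" if question.lower().startswith("why") else "factual"

def inferential_analysis(question):
    return question  # a real system would reinterpret indirect or non-literal questions

def content_specification(question):
    return "brief"

def retrieve_answer(question, memory, category, detail_level):
    return memory.get(question, "no answer found in memory")

memory = {"What did Albert Einstein win the Nobel Prize for?": "the photoelectric effect"}
print(answer_question("What did Albert Einstein win the Nobel Prize for?", memory))
```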