2016 LearningExecutableSemanticParse
- (Liang, 2016) ⇒ Percy Liang. (2016). “Learning Executable Semantic Parsers for Natural Language Understanding.” In: Communications of the ACM, 59(9). doi:10.1145/2866568
Subject Headings: Natural Language Understanding, Semantic Parsing.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222016%22+Learning+Executable+Semantic+Parsers+for+Natural+Language+Understanding
- http://dl.acm.org/citation.cfm?id=2991470.2866568&preflayout=flat#citedby
Quotes
Abstract
Semantic parsing is a rich fusion of the logical and the statistical worlds.
Body
A long-standing goal of artificial intelligence (AI) is to build systems capable of understanding natural language. To focus the notion of "understanding" a bit, let us say the system must produce an appropriate action upon receiving an input utterance from a human. For example: ... We are interested in utterances such as the ones listed here, which require deep understanding and reasoning. This article focuses on semantic parsing, an area within the field of natural language processing (NLP), which has been growing over the last decade. Semantic parsers map input utterances into semantic representations called logical forms that support this form of reasoning. For example, the first utterance listed previously would map onto the logical form max(primes ∩ (−∞, 10)). We can think of the logical form as a program that is executed to yield the desired behavior (for example, answering 7). The second utterance would map onto a database query; the third, onto an invocation of a calendar API.
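The idea of a logical form as an executable program can be made concrete with a minimal sketch. All names below (`primes_below`, `denotation`) are illustrative, not from the article; they simply execute max(primes ∩ (−∞, 10)) against a computable context:

```python
def primes_below(limit):
    """Enumerate primes less than `limit` by trial division."""
    found = []
    for n in range(2, limit):
        if all(n % p for p in found):
            found.append(n)
    return set(found)

# Denotation of `primes ∩ (−∞, 10)`: the primes intersected with
# the numbers smaller than 10.
denotation = primes_below(10)

# Executing `max(...)` over that denotation answers the utterance
# "What is the largest prime less than 10?".
answer = max(denotation)  # → 7
```

Note that nothing here depends on the English utterance itself: the parser's job ends once the logical form is produced, and execution takes over from there.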
Semantic parsing is rooted in formal semantics, pioneered by the logician Richard Montague,[25] who famously argued that there is "no important theoretical difference between natural languages and the artificial languages of logicians." Semantic parsing, by residing in the practical realm, is more exposed to the differences between natural language and logic, but it inherits two general insights from formal semantics: The first idea is model theory, which states that expressions (for example, primes) are mere symbols that only obtain their meaning or denotation (for example, {2, 3, 5, ...}) by executing the expression with respect to a model, or in our terminology, a context. This property allows us to factor out the understanding of language (semantic parsing) from world knowledge (execution). Indeed, one can understand the utterance "What is the largest prime less than 10?" without actually computing the answer. The second idea is compositionality, a principle often attributed to Gottlob Frege, which states that the denotation of an expression is defined recursively in terms of the denotations of its subexpressions. For example, primes denotes the set of primes, (−∞, 10) denotes the set of numbers smaller than 10, and so primes ∩ (−∞, 10) denotes the intersection of those two sets. This compositionality is what allows us to have a succinct characterization of meaning for a combinatorial range of possible utterances.
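Compositionality can be sketched as a recursive interpreter: the denotation of each node of a logical-form tree is computed from the denotations of its children, relative to a context. The representation and names below (`denote`, `UNIVERSE`, the tuple-encoded operators) are assumptions for illustration, not an API prescribed by the article:

```python
UNIVERSE = set(range(50))  # a small finite context standing in for the model

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

def denote(expr):
    """Recursively map a logical-form tree to its denotation in UNIVERSE."""
    op = expr[0]
    if op == "primes":                  # base expression: the set of primes
        return {n for n in UNIVERSE if is_prime(n)}
    if op == "lt":                      # (−∞, k): numbers smaller than k
        return {n for n in UNIVERSE if n < expr[1]}
    if op == "intersect":               # denotation built from subdenotations
        return denote(expr[1]) & denote(expr[2])
    if op == "max":
        return max(denote(expr[1]))
    raise ValueError(f"unknown operator: {op}")

# max(primes ∩ (−∞, 10)) denotes 7 in this context.
logical_form = ("max", ("intersect", ("primes",), ("lt", 10)))
```

Swapping in a different `UNIVERSE` changes the denotations without touching the interpreter, which mirrors the model-theoretic separation of language understanding from world knowledge described above.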
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Percy Liang | 59(9) | 2016 | Learning Executable Semantic Parsers for Natural Language Understanding | | Communications of the ACM | | 10.1145/2866568 | | 2016