Automated Text Understanding (NLU) System
An Automated Text Understanding (NLU) System is a natural language processing system that implements an NLU algorithm to solve an automated text understanding task.
- AKA: NL Comprehender.
- Context:
- It can be composed of a Text Intent Classification System, an Entity Mention Segmentation System, an Entity Mention Classification System, and an Entity Mention Disambiguation System (see the pipeline sketch after this list).
- It can range from being a Shallow Text Understanding System to being a Deep Text Understanding System.
- It can range from being a Language-Dependent Text Understanding System (such as an English Text Understanding System) to being a Language-Independent Text Understanding System.
- It can be supported by an NLU Service.
- …
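A minimal sketch of how these four subsystems might be composed into a single pipeline; all class and function names below are hypothetical placeholders, with toy rule-based logic standing in for the learned components a real system would use:

```python
from dataclasses import dataclass

# Hypothetical component interfaces; names are illustrative only.

@dataclass
class EntityMention:
    text: str
    start: int
    end: int
    label: str = ""   # filled in by the classification step
    kb_id: str = ""   # filled in by the disambiguation step

def classify_intent(utterance: str) -> str:
    """Text Intent Classification: map the utterance to an intent label."""
    return "book_flight" if "flight" in utterance.lower() else "unknown"

def segment_mentions(utterance: str) -> list[EntityMention]:
    """Entity Mention Segmentation: locate candidate entity spans."""
    mentions = []
    for token in ("Paris", "Monday"):   # toy gazetteer lookup
        idx = utterance.find(token)
        if idx != -1:
            mentions.append(EntityMention(token, idx, idx + len(token)))
    return mentions

def classify_mention(m: EntityMention) -> EntityMention:
    """Entity Mention Classification: assign a type to each span."""
    m.label = "LOCATION" if m.text == "Paris" else "DATE"
    return m

def disambiguate_mention(m: EntityMention) -> EntityMention:
    """Entity Mention Disambiguation: link the span to a KB entry."""
    m.kb_id = "Q90" if m.text == "Paris" else ""   # e.g., a Wikidata ID
    return m

def understand(utterance: str) -> dict:
    """Compose the four subsystems into one NLU pipeline."""
    mentions = [disambiguate_mention(classify_mention(m))
                for m in segment_mentions(utterance)]
    return {"intent": classify_intent(utterance), "mentions": mentions}

print(understand("Book a flight to Paris on Monday"))
```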
- Example(s):
- a Natural Language Inference (NLI) System, such as a legal NLI system (see the sketch after this list).
- a commercial NLU service, such as Nuance's NLU, Amazon Lex's NLU, IBM Watson's NLU, or Artificial Solutions' NLU.
- a Machine Reading System (one that implements a machine reading algorithm).
- a system that performs well on the General Language Understanding Evaluation (GLUE) Benchmark.
- …
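A minimal sketch of the NLI case, using one publicly available NLI model (roberta-large-mnli on the Hugging Face Hub); the premise/hypothesis pair below is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# "roberta-large-mnli" is a publicly available NLI model.
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A dog is running through the park."   # illustrative example pair
hypothesis = "An animal is outdoors."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# This model's labels are CONTRADICTION, NEUTRAL, and ENTAILMENT.
probs = logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```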
- Counter-Example(s):
- a Natural Language Generation (NLG) System.
- See: NL Semantic Analysis System, NELL System, Linguistic Pragmatics.
References
2015
- (Liang, 2015) ⇒ Percy Liang. (2015). “Natural Language Understanding: Foundations and State-of-the-Art.” Tutorial at ICML-2015.
- ABSTRACT: Building systems that can understand human language — being able to answer questions, follow instructions, carry on dialogues — has been a long-standing challenge since the early days of AI. Due to recent advances in machine learning, there is again renewed interest in taking on this formidable task. A major question is how one represents and learns the semantics (meaning) of natural language, to which there are only partial answers. The goal of this tutorial is (i) to describe the linguistic and statistical challenges that any system must address; and (ii) to describe the types of cutting edge approaches and the remaining open problems. Topics include distributional semantics (e.g., word vectors), frame semantics (e.g., semantic role labeling), model-theoretic semantics (e.g., semantic parsing), the role of context, grounding, neural networks, latent variables, and inference. The hope is that this unified presentation will clarify the landscape, and show that this is an exciting time for the machine learning community to engage in the problems in natural language understanding.