Shallow Semantic Text Parsing Task
A Shallow Semantic Text Parsing Task is a semantic text parsing task that is a shallow parsing task (that requires a shallow meaning representation).
- Context:
- Input: Natural Language Artifact.
- Output: Shallow Semantic Representation.
- It can be solved by a Shallow Semantic Parsing from Text System (that implements a Shallow Semantic Parsing from Text Algorithm).
- It can range from being a Propositional Semantic Parsing Task to being an Extra-Propositional Semantic Parsing Task.
- ...
- Example(s):
- Propositional Semantic Parsing Tasks, such as:
- Extra-Propositional Semantic Parsing Tasks, such as:
- …
- Counter-Example(s):
- See: Information Extraction from Text.
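A shallow semantic representation of the kind this task produces can be illustrated as a frame (intent) plus role-labelled spans, without any executable logical form. The frame and slot names below are invented for illustration, not taken from any real schema:

```python
# Illustrative sketch (assumed frame/slot names, not from a real schema):
# a shallow semantic representation maps an utterance to an evoked frame
# plus role-labelled entity spans, rather than to a full logical form.
utterance = "book a flight to Boston on Friday"

shallow_parse = {
    "frame": "BookFlight",            # intent / evoked frame
    "slots": {
        "destination": "Boston",      # role-labelled spans from the utterance
        "departure_date": "Friday",
    },
}

def slot(rep, role):
    """Look up a role filler in the shallow representation."""
    return rep["slots"].get(role)

print(slot(shallow_parse, "destination"))  # -> Boston
```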
References
2024
- (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Semantic_parsing#Shallow_Semantic_Parsing Retrieved:2024-1-24.
- Shallow semantic parsing is concerned with identifying entities in an utterance and labelling them with the roles they play. Shallow semantic parsing is sometimes known as slot-filling or frame semantic parsing, since its theoretical basis comes from frame semantics, wherein a word evokes a frame of related concepts and roles. Slot-filling systems are widely used in virtual assistants in conjunction with intent classifiers, which can be seen as mechanisms for identifying the frame evoked by an utterance.[1] [2] Popular architectures for slot-filling are largely variants of an encoder-decoder model, wherein two recurrent neural networks (RNNs) are trained jointly to encode an utterance into a vector and to decode that vector into a sequence of slot labels. [3] This type of model is used in the Amazon Alexa spoken language understanding system.[1]
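The encoder-decoder shape described above can be sketched minimally: an RNN encodes the utterance token by token, and each hidden state is decoded into one slot label. The vocabulary, slot set, dimensions, and weights below are illustrative assumptions (random and untrained), not any real system's parameters:

```python
import numpy as np

# Toy sketch of an encoder-decoder slot-filling model (untrained, random
# weights). Vocabulary, slot labels, and sizes are illustrative assumptions.
rng = np.random.default_rng(0)

VOCAB = {"book": 0, "a": 1, "flight": 2, "to": 3, "boston": 4}
SLOTS = ["O", "B-dest", "I-dest", "B-action"]

emb_dim, hid_dim = 8, 16
E = rng.normal(size=(len(VOCAB), emb_dim))      # token embeddings
W_xh = rng.normal(size=(emb_dim, hid_dim))      # input -> hidden
W_hh = rng.normal(size=(hid_dim, hid_dim))      # hidden -> hidden (recurrence)
W_hy = rng.normal(size=(hid_dim, len(SLOTS)))   # hidden -> slot logits

def encode(token_ids):
    """Run a vanilla RNN over the utterance, returning all hidden states."""
    h = np.zeros(hid_dim)
    states = []
    for t in token_ids:
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
        states.append(h)
    return states

def label_slots(tokens):
    """Decode one slot label per input token (aligned decoding)."""
    states = encode([VOCAB[w] for w in tokens])
    return [SLOTS[int(np.argmax(h @ W_hy))] for h in states]

print(label_slots(["book", "a", "flight", "to", "boston"]))
```

In a trained system the weights would be learned jointly with an intent classifier over the final encoder state; here the point is only the utterance-in, label-sequence-out shape.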
2013
- (Berant et al., 2013) ⇒ Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. (2013). “Semantic Parsing on Freebase from Question-Answer Pairs.” In: Proceedings of EMNLP (EMNLP-2013).
- QUOTE: We focus on the problem of semantic parsing natural language utterances into logical forms that can be executed to produce denotations. Traditional semantic parsers (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Kwiatkowski et al., 2010) have two limitations: (i) they require annotated logical forms as supervision, and (ii) they operate in limited domains with a small number of logical predicates. Recent developments aim to lift these limitations, either by reducing the amount of supervision (Clarke et al., 2010; Liang et al., 2011; Goldwasser et al., 2011; Artzi and Zettlemoyer, 2011) or by increasing the number of logical predicates (Cai and Yates, 2013). The goal of this paper is to do both: learn a semantic parser without annotated logical forms that scales to the large number of predicates on Freebase.
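The quoted phrase "executed to produce denotations" can be made concrete with a toy example: a logical form is a structured query, and its denotation is the result of running it against a knowledge base. The predicates and facts below are invented for illustration (not Freebase):

```python
# Toy sketch of executing a logical form against a knowledge base to
# produce a denotation. Predicates and facts are invented, not Freebase.
KB = {
    ("capital_of", "france"): "paris",
    ("capital_of", "italy"): "rome",
}

def execute(logical_form):
    """Return the denotation of a (predicate, argument) logical form."""
    return KB.get(logical_form)

# "What is the capital of France?" parses to ("capital_of", "france")
print(execute(("capital_of", "france")))  # -> paris
```

This contrasts with shallow semantic parsing above: here the output must be executable, whereas a slot-filled frame is not.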
- ↑ 1.0 1.1 Kumar, Anjishnu, et al. "Just ASK: Building an Architecture for Extensible Self-Service Spoken Language Understanding." arXiv preprint arXiv:1711.00549 (2017).
- ↑ Bapna, Ankur, et al. "Towards zero-shot frame semantic parsing for domain scaling." arXiv preprint arXiv:1707.02363 (2017).
- ↑ Liu, Bing, and Ian Lane. "Attention-based recurrent neural network models for joint intent detection and slot filling." arXiv preprint arXiv:1609.01454 (2016).