Neural Structured Learning (NSL) Task
A Neural Structured Learning (NSL) Task is a Neural Network Training Task that is based on a Structured-Output Learning Task.
- Context:
- It can use structured signals, such as an explicit graph or implicit adversarial perturbations, in addition to feature inputs.
- It can be solved by a Neural Structured Learning System that implements a Neural Structured Learning Algorithm.
- Example(s):
- a Neural Graph Learning Task, in which the structure is an explicit graph over samples.
- an Adversarial Learning Task, in which the structure is implicitly induced by adversarial perturbation.
- Counter-Example(s):
- See: Neural Network Training System, Unlabeled Data, Labeled Data, Machine Learning, Artificial Neural Network, Neural Graph Learning Task, Adversarial Learning Task.
References
2020
- (TensorFlow, 2020) ⇒ (2020). "Neural Structured Learning: Training with Structured Signals". In: Abadi et al. (2015). "TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems". Software available from tensorflow.org. Retrieved: 2020-02-28.
- QUOTE: Neural Structured Learning (NSL) is a new learning paradigm to train neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit as represented by a graph or implicit as induced by adversarial perturbation.
Structured signals are commonly used to represent relations or similarity among samples that may be labeled or unlabeled. Therefore, leveraging these signals during neural network training harnesses both labeled and unlabeled data, which can improve model accuracy, particularly when the amount of labeled data is relatively small. Additionally, models trained with samples that are generated by adding adversarial perturbation have been shown to be robust against malicious attacks, which are designed to mislead a model's prediction or classification.
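As a concrete illustration of the implicit (adversarial) case described above, the following minimal sketch wraps a plain Keras model with the TensorFlow NSL AdversarialRegularization wrapper, so that adversarially perturbed copies of each training batch contribute an extra regularization loss. The model architecture, the feature/label key names, the hyperparameter values, and the random data are illustrative, not prescribed by the source:

```python
import numpy as np
import tensorflow as tf
import neural_structured_learning as nsl

# A plain Keras base model (architecture is illustrative).
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Configure the implicit structure: adversarial perturbations of each input.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)

# Wrap the base model; the wrapper adds an adversarial loss term during training.
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=['label'], adv_config=adv_config)

adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# The wrapper expects dictionary-style batches keyed by feature/label names
# (random data stands in for a real dataset here).
x_train = np.random.rand(128, 28, 28).astype('float32')
y_train = np.random.randint(0, 10, size=(128,))
adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=1)
```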
2019a
- (Juan & Ravi, 2019) ⇒ Da-Cheng Juan and Sujith Ravi (September 2019). "Introducing Neural Structured Learning in TensorFlow". In: TensorFlow Blog.
- QUOTE: In Neural Structured Learning (NSL), the structured signals - whether explicitly defined as a graph or implicitly learned as adversarial examples - are used to regularize the training of a neural network, forcing the model to learn accurate predictions (by minimizing supervised loss), while at the same time maintaining the similarity among inputs from the same structure (by minimizing the neighbor loss, see the figure above). This technique is generic and can be applied on arbitrary neural architectures, such as Feed-forward NNs, Convolutional NNs and Recurrent NNs.
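The loss decomposition in this quote can be written as total loss = supervised loss + α · neighbor loss. Below is a minimal hand-rolled sketch of that objective for the explicit-graph case; the function name, the use of model outputs as the regularized representation, the squared-distance neighbor term, and the single-neighbor-per-sample batching are all simplifying assumptions (in the NSL library itself this is handled by the nsl.keras.GraphRegularization wrapper):

```python
import tensorflow as tf

def nsl_style_loss(model, x, y, x_neighbor, alpha=0.1):
    """Total loss = supervised loss + alpha * neighbor loss.

    x_neighbor[i] is assumed to be a graph neighbor of x[i]; the model
    is assumed to output logits.
    """
    logits = model(x)
    neighbor_logits = model(x_neighbor)

    # Supervised term: fit the labels (minimizing supervised loss).
    supervised_loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(
            y, logits, from_logits=True))

    # Neighbor term: keep representations of connected samples similar
    # (minimizing the neighbor loss).
    neighbor_loss = tf.reduce_mean(
        tf.reduce_sum(tf.square(logits - neighbor_logits), axis=-1))

    return supervised_loss + alpha * neighbor_loss
```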
2019b
- (Pahuja et al., 2019) ⇒ Vardaan Pahuja, Jie Fu, Sarath Chandar, and Christopher J. Pal (2019). "Structure Learning for Neural Module Networks". In: arXiv:1905.11532.
- QUOTE: In order to answer a visual reasoning question, the model needs to execute modules in a tree-structured layout. In order to facilitate this sort of compositional behavior, a differentiable memory pool to store and retrieve intermediate attention maps is used. A memory stack (with length denoted by $L$) stores $H \times W$ dimensional attention maps, where $H$ and $W$ are the height and width of image feature maps respectively. Depending on the number of attention maps required as input by the module, it pops them from the stack and later pushes the result back to the stack. The model performs soft module execution by executing all modules at each time step. The updated stack and stack pointer at each subsequent time-step are obtained by a weighted average of those corresponding to each module using the weights $w^{(t)}$ predicted by the module controller.
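A minimal sketch of the differentiable memory stack described in this quote appears below. The soft push/pop mechanics (pointer shifting via tf.roll) and all tensor shapes are assumptions for illustration; the quote only specifies that the stack holds $H \times W$ attention maps and that the updated stack and pointer are a weighted average over the candidates produced by each module, using the controller weights $w^{(t)}$:

```python
import tensorflow as tf

# Illustrative shapes: L stack slots, H x W attention maps, M modules.

def soft_push(stack, pointer, att_map):
    """Write att_map at the (soft) next slot and advance the soft pointer."""
    new_pointer = tf.roll(pointer, shift=1, axis=0)      # pointer + 1
    write_mask = new_pointer[:, None, None]              # [L, 1, 1]
    new_stack = stack * (1.0 - write_mask) + att_map[None] * write_mask
    return new_stack, new_pointer

def soft_pop(stack, pointer):
    """Read the attention map under the pointer and retreat the pointer."""
    att_map = tf.reduce_sum(stack * pointer[:, None, None], axis=0)
    return att_map, tf.roll(pointer, shift=-1, axis=0)   # pointer - 1

def soft_step(candidates, w):
    """Soft module execution: every module produces a candidate
    (stack, pointer) pair, and the next state is their weighted
    average under the controller weights w^(t) ([M] tensor)."""
    stacks, pointers = zip(*candidates)
    new_stack = tf.einsum('m,mlhw->lhw', w, tf.stack(stacks))
    new_pointer = tf.einsum('m,ml->l', w, tf.stack(pointers))
    return new_stack, new_pointer
```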
2017
- (Iyyer et al., 2017) ⇒ Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang (July 2017). "Search-based Neural Structured Learning for Sequential Question Answering". In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). DOI:10.18653/v1/P17-1167.
- QUOTE: Section 3 describes our novel dynamic neural semantic parsing framework (DynSP), a weakly supervised structured-output learning approach based on reward-guided search that is designed for solving sequential QA.