Compositional Generalization (CG) Task
A Compositional Generalization (CG) Task is a generalization task that requires a system to understand and produce unseen combinations of seen components.
- Example(s):
- CGT(“run”, “run twice”, “jump”)? ⇒ “jump twice”, i.e. given training examples of “run”, “run twice”, and “jump”, produce the unseen combination “jump twice” (see the sketch below).
- a Compositional Generalization in Natural Language Benchmark Task, such as:
- SCAN Benchmark (Lake and Baroni 2018), which consists of input natural language commands (e.g., “jump and look left twice”) paired with output action sequences (e.g., “JUMP LTURN LOOK LTURN LOOK”).
- Compositional Freebase Questions (CFQ) (Keysers et al., 2020), which contains input natural language questions paired with their output meaning representations (SPARQL queries against the Freebase knowledge graph).
- a Compositional Generalization for Visual Question Answering Task, such as in: (Hudson & Manning, 2018).
- a Compositional Generalization for Algebraic Compositionality Task, such as in: (Veldhoen et al., 2016; Saxton et al., 2019).
- a Compositional Generalization for Logic Inference Task, such as in: (Bowman, Manning, and Potts, 2015; Mul & Zuidema, 2019).
- …
- See: SCAN Dataset, Compositional Freebase Questions (CFQ) Benchmark Task.
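The “jump twice” example above can be made concrete in code. Below is a minimal Python sketch of a SCAN-style compositional split; the toy interpret function, the command inventory, and the hold-out rule are all invented for illustration and are not the official SCAN generation code:

```python
# A minimal sketch (not the official SCAN code) of a compositional split:
# the primitive "jump" and the modifier "twice" each appear in training,
# but their combination "jump twice" is held out for the test set.

# Hypothetical toy mapping from SCAN-like primitives to output actions.
PRIMITIVES = {"run": "RUN", "jump": "JUMP", "look": "LOOK", "walk": "WALK"}

def interpret(command: str) -> str:
    """Translate a command such as 'jump twice' into an action sequence."""
    tokens = command.split()
    action = PRIMITIVES[tokens[0]]
    repeat = 2 if len(tokens) > 1 and tokens[1] == "twice" else 1
    return " ".join([action] * repeat)

# Every primitive alone and with the modifier "twice".
all_commands = [f"{p} {m}".strip() for p in PRIMITIVES for m in ("", "twice")]

# Hold out the combination "jump twice" while keeping "jump", "twice",
# and all other primitive+modifier pairs in training.
train = [c for c in all_commands if c != "jump twice"]
test = [c for c in all_commands if c == "jump twice"]

for c in test:
    print(c, "->", interpret(c))  # jump twice -> JUMP JUMP
```

A model trained on pairs like (“run twice”, “RUN RUN”) and (“jump”, “JUMP”) generalizes compositionally exactly when it produces “JUMP JUMP” for the held-out “jump twice”.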
References
2020
- (Keysers et al., 2020) ⇒ Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. (2020). “Measuring Compositional Generalization: A Comprehensive Method on Realistic Data.” In: Proceedings of the International Conference on Learning Representations (ICLR-2020).
- ABSTRACT: State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
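As a rough illustration of the divergence measure described in this abstract, the sketch below computes a Chernoff-coefficient-based divergence, D_α(P‖Q) = 1 − Σ_k p_k^α · q_k^(1−α), between train-set and test-set frequency distributions. The α values (0.5 for atom divergence, 0.1 for compound divergence) follow the paper; the toy atom and compound counts are invented for illustration:

```python
# Sketch of the atom/compound divergence idea from Keysers et al. (2020);
# the frequency counts below are toy values invented for illustration.
from collections import Counter

def divergence(p_counts: Counter, q_counts: Counter, alpha: float) -> float:
    """D_alpha(P||Q) = 1 - sum_k p_k^alpha * q_k^(1 - alpha)."""
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values())
    keys = set(p_counts) | set(q_counts)
    coeff = sum((p_counts[k] / p_total) ** alpha *
                (q_counts[k] / q_total) ** (1 - alpha)
                for k in keys)
    return 1.0 - coeff

# Atoms (e.g., words) are shared between train and test ...
train_atoms = Counter({"jump": 5, "run": 5, "twice": 5})
test_atoms = Counter({"jump": 5, "run": 5, "twice": 5})
# ... while compounds (rule combinations) are disjoint.
train_compounds = Counter({"run+twice": 5, "jump+alone": 5})
test_compounds = Counter({"jump+twice": 5})

print(divergence(train_atoms, test_atoms, alpha=0.5))          # 0.0 (atoms overlap fully)
print(divergence(train_compounds, test_compounds, alpha=0.1))  # 1.0 (compounds disjoint)
```

A split with low atom divergence but high compound divergence, as in this toy case, is exactly the regime the paper's construction method targets.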
2018a
- (Hudson & Manning, 2018) ⇒ Drew A. Hudson, and Christopher D. Manning. (2018). “Compositional Attention Networks for Machine Reasoning.” In: Proceedings of the International Conference on Learning Representations (ICLR-2018).
2018b
- (Lake & Baroni, 2018) ⇒ Brenden Lake, and Marco Baroni. (2018). “Generalization Without Systematicity: On the Compositional Skills of Sequence-to-sequence Recurrent Networks.” In: Proceedings of the International Conference on Machine Learning (ICML-2018), pp. 2873-2882. PMLR.
1988
- (Fodor & Pylyshyn, 1988) ⇒ Jerry A. Fodor, and Zenon W. Pylyshyn. (1988). “Connectionism and Cognitive Architecture: A Critical Analysis.” In: Cognition, 28(1-2), pp. 3-71.
1965
- (Chomsky, 1965) ⇒ Noam Chomsky. (1965). “Aspects of the Theory of Syntax.” MIT Press.