Artificial Grammar Learning (AGL) Task
An Artificial Grammar Learning (AGL) Task is a Natural Language Learning Task that requires learning the grammatical rules of an artificial language.
- AKA: Grammar Learning Task.
- Context:
- It can be solved by an AGL System that implements an AGL Algorithm.
- Example(s):
- GraSp,
- Wolff's SNPR,
- …
- Counter-Example(s):
- See: Formal Language Theory, Cognitive Psychology, Linguistics, Implicit Learning, Language Learning Aptitude, Syntax, Cottontop Tamarin, Lexicon.
References
2021
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Artificial_grammar_learning Retrieved:2021-7-4.
- Artificial grammar learning (AGL) is a paradigm of study within cognitive psychology and linguistics. Its goal is to investigate the processes that underlie human language learning by testing subjects' ability to learn a made-up grammar in a laboratory setting. It was developed to evaluate the processes of human language learning but has also been utilized to study implicit learning in a more general sense. The area of interest is typically the subjects' ability to detect patterns and statistical regularities during a training phase and then use their new knowledge of those patterns in a testing phase. The testing phase can either use the symbols or sounds used in the training phase or transfer the patterns to another set of symbols or sounds as surface structure.
Many researchers propose that the rules of the artificial grammar are learned on an implicit level since the rules of the grammar are never explicitly presented to the participants. The paradigm has also recently been utilized for other areas of research such as language learning aptitude, structural priming and to investigate which brain structures are involved in syntax acquisition and implicit learning.
Apart from humans, the paradigm has also been used to investigate pattern learning in other species, e.g. cottontop tamarins and starlings.
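The training/testing design described above can be sketched in code. The snippet below uses a small Reber-style finite-state grammar (a hypothetical transition table chosen for illustration, not one from any specific study) to generate grammatical training strings and to check whether test strings conform to the grammar, mirroring the grammaticality judgments collected in the testing phase:

```python
import random

# Hypothetical Reber-style finite-state grammar: each state maps to a list of
# (symbol, next_state) transitions; None marks the accepting (final) state.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 2), ("S", None), ("V", None)],
}

def generate(rng):
    """Generate one grammatical string via a random walk through the grammar."""
    state, out = 0, []
    while state is not None:
        symbol, state = rng.choice(GRAMMAR[state])
        out.append(symbol)
    return "".join(out)

def is_grammatical(s):
    """Check whether the grammar can produce string s (depth-first search)."""
    def step(state, i):
        if state is None:
            return i == len(s)      # accepted only if all symbols consumed
        if i == len(s):
            return False            # symbols exhausted before a final state
        return any(sym == s[i] and step(nxt, i + 1)
                   for sym, nxt in GRAMMAR[state])
    return step(0, 0)

rng = random.Random(0)
training = [generate(rng) for _ in range(10)]  # training-phase stimuli
print(training[0], is_grammatical(training[0]))
print(is_grammatical("TTT"))  # an ungrammatical foil
```

In an actual experiment, participants would see only the training strings; the `is_grammatical` check stands in for the experimenter's scoring of their endorsements of novel grammatical strings versus ungrammatical foils.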
2020
- (Trotter et al., 2020) ⇒ Antony S. Trotter, Padraic Monaghan, Gabriel J. L. Beckers, and Morten H. Christiansen (2020). "Exploring Variation Between Artificial Grammar Learning Experiments: Outlining a Meta‐Analysis Approach". In: Topics in Cognitive Science, 12(3), 875-893.
- QUOTE: Artificial grammar learning (AGL) has become an important tool used to understand aspects of human language learning and whether the abilities underlying learning may be unique to humans or found in other species. Successful learning is typically assumed when human or animal participants are able to distinguish stimuli generated by the grammar from those that are not at a level better than chance. However, the question remains as to what subjects actually learn in these experiments. Previous studies of AGL have frequently introduced multiple potential contributors to performance in the training and testing stimuli, but meta-analysis techniques now enable us to consider these multiple information sources for their contribution to learning — enabling intended and unintended structures to be assessed simultaneously. We present a blueprint for meta-analysis approaches to appraise the effect of learning in human and other animal studies for a series of artificial grammar learning experiments, focusing on studies that examine auditory and visual modalities (...)
2014
- (MacWhinney, 2014) ⇒ Brian MacWhinney (Ed.). (1987-2014). "Mechanisms of Language Acquisition: The 20th Annual Carnegie Mellon Symposium on Cognition". In: Psychology Press.
2012
- (Fitch & Friederici, 2012) ⇒ W. Tecumseh Fitch, and Angela D. Friederici (2012). "Artificial Grammar Learning Meets Formal Language Theory: An Overview". In: Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1598), 1933-1955.
2007
- (Pothos, 2007) ⇒ Emmanuel M. Pothos (2007). "Theories of Artificial Grammar Learning". In: Psychological Bulletin, 133(2), 227.
2002
- (Henrichsen, 2002) ⇒ Peter Juel Henrichsen (2002). "GraSp: Grammar Learning from Unlabelled Speech Corpora". In: Proceedings of the 6th Conference on Natural Language Learning (CoNLL 2002) Held in cooperation with (COLING 2002).
- QUOTE: This paper presents the ongoing project Computational Models of First Language Acquisition, together with its current product, the learning algorithm GraSp. GraSp is designed specifically for inducing grammars from large, unlabelled corpora of spontaneous (i.e. unscripted) speech. The learning algorithm does not assume a predefined grammatical taxonomy; rather the determination of categories and their relations is considered as part of the learning task.
1982
- (Wolff, 1982) ⇒ J. Gerard Wolff (1982). "Language Acquisition, Data Compression and Generalization". In: Language & Communication, 2(1), 57-89.