Cumulative Learning Process
A Cumulative Learning Process is a learning process that is based on the accumulation and analysis of prior knowledge.
- AKA: Lifelong Learning, Layered Learning.
- Context:
- It can be implemented by a Cumulative Machine Learning System.
- It focuses on accumulating and analyzing prior knowledge over time.
- It typically occurs over longer periods, building upon existing knowledge.
- It often involves integrating new information with existing knowledge.
- It can be applied in various fields such as education, machine learning, and cognitive science.
- It can involve techniques like Incremental Learning and Multi-Task Learning.
- It emphasizes building upon previously acquired skills and knowledge rather than learning each task in isolation.
- It can range from simple knowledge aggregation to complex inductive transfer across tasks, as sketched below.
- ...
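As a minimal illustration of the knowledge-aggregation end of this range, the following Python sketch accumulates the weights learned on prior tasks and reuses their average as a warm start for each new task. The class and function names are hypothetical and not drawn from any of the references below; it is a sketch of the general idea, not a definitive implementation.

```python
import numpy as np

class CumulativeLearner:
    """Hypothetical sketch: accumulates per-task knowledge (here, linear-model
    weights) and reuses it as a warm start when a related task arrives."""

    def __init__(self, n_features):
        self.n_features = n_features
        self.task_weights = []          # accumulated prior knowledge

    def _warm_start(self):
        # Simple knowledge aggregation: average of previously learned weights.
        if not self.task_weights:
            return np.zeros(self.n_features)
        return np.mean(self.task_weights, axis=0)

    def learn_task(self, X, y, lr=0.1, epochs=50):
        # Logistic-regression-style gradient descent, initialized from prior knowledge.
        w = self._warm_start()
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / len(y)
        self.task_weights.append(w)     # consolidate for future tasks
        return w

# Usage sketch: two related synthetic tasks learned in sequence.
rng = np.random.default_rng(0)
learner = CumulativeLearner(n_features=5)
for _ in range(2):
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] > 0).astype(float)
    learner.learn_task(X, y)
```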
- Example(s):
- An Incremental Learning system that continuously updates its model as new data becomes available (see the sketch after this list).
- A Multi-task Learning system that uses knowledge from multiple related tasks to improve learning efficiency.
- ...
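A minimal sketch of the incremental-learning example above, using scikit-learn's `SGDClassifier.partial_fit` to keep updating a single model as new batches arrive. The data stream here is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])

# One linear model that is updated in place as each new batch of data arrives,
# instead of being retrained from scratch.
clf = SGDClassifier()

for batch in range(5):
    # Simulated data stream: a fresh batch becomes available at each step.
    X = rng.normal(size=(100, 10))
    y = (X[:, 0] > 0).astype(int)
    clf.partial_fit(X, y, classes=classes)  # classes must be given on the first call

X_test = rng.normal(size=(200, 10))
y_test = (X_test[:, 0] > 0).astype(int)
print("accuracy after 5 incremental updates:", clf.score(X_test, y_test))
```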
- Counter-Example(s):
- A Base Learning system that does not incorporate prior knowledge and learns each task independently.
- See: Reinforcement Learning System, Supervised Learning System, Incremental Learning System, Continuous Learning.
References
2018a
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Cumulative_learning Retrieved:2018-4-15.
- Cumulative learning is the cognitive process by which we accumulate knowledge and abilities that serve as building blocks for subsequent cognitive development. A very simple example is the saying 'you can't run before you can walk'; the procedural memory built while learning to walk is necessary before one can start to learn to run. Pronouncing words is impossible without first learning to pronounce the vowels and consonants that make them up (hence babies' babbling).
This is an essential cognitive capacity, allowing prior development to produce new foundations for further cognitive development. Cumulative learning consolidates the knowledge one has obtained through experiences, allowing it to be reproduced and exploited for subsequent learning situations through cumulative interaction between prior knowledge and new information.[1]
Arguably, all learning is cumulative learning, as all learning depends on previous learning [2] (except skills that are innate, such as breathing, swallowing, gripping, etc.).
2018b
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Multi-task_learning Retrieved:2018-4-15.
- Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.[3][4] Early versions of MTL were called "hints" (Abu-Mostafa, 1990).[5] In a widely cited 1997 paper (Caruana, R. (1997). "Multi-task learning". Machine Learning. 28: 41–75. doi:10.1023/A:1007379606734), Rich Caruana gave the following characterization:
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.
In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, but a Russian speaker would not. Yet there is a definite commonality in this classification task across users; for example, one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. Further examples of settings for MTL include multiclass classification and multi-label classification (Weinberger, "Multi-task Learning"). Multi-task learning works because the regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is when the tasks share significant commonalities and are generally slightly undersampled. However, MTL has also been shown to be beneficial for learning unrelated tasks (Romera-Paredes, B., Argyriou, A., Bianchi-Berthouze, N., & Pontil, M. (2012). "Exploiting Unrelated Tasks in Multi-Task Learning". http://jmlr.csail.mit.edu/proceedings/papers/v22/romera12/romera12.pdf).
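The shared-representation idea in Caruana's characterization is commonly realized through hard parameter sharing: a trunk network trained jointly by all tasks, plus a small task-specific head per task. The following PyTorch sketch (hypothetical module and variable names, synthetic data) illustrates this general setup; it is not an implementation from the cited papers.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hypothetical sketch of hard parameter sharing: one shared representation
    (trunk) plus a separate output head per task."""

    def __init__(self, n_features, n_tasks, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        z = self.shared(x)                       # representation learned jointly
        return [head(z) for head in self.heads]  # one prediction per task

# Joint training: the summed loss lets each task's signal regularize the shared trunk.
model = HardSharingMTL(n_features=20, n_tasks=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(64, 20)
ys = [torch.randint(0, 2, (64, 1)).float() for _ in range(3)]  # one label set per task
for _ in range(10):
    opt.zero_grad()
    outputs = model(x)
    loss = sum(loss_fn(out, y) for out, y in zip(outputs, ys))
    loss.backward()
    opt.step()
```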
2017
- (Michelucci & Oblinger, 2017) ⇒ Pietro Michelucci, and Daniel Oblinger (2017). "Cumulative Learning". In: Sammut, C., Webb, G.I. (eds.) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA.
- QUOTE: Cumulative learning (CL) exploits knowledge acquired on prior tasks to improve learning performance on subsequent related tasks. Consider, for example, a CL system that is learning to play chess. Here, one might expect the system to learn from prior games concepts (e.g., favorable board positions, standard openings, end games, etc.) that can be used for future learning. This is in contrast to base learning (Vilalta and Drissi 2002) in which a fixed learning algorithm is applied to a single task and performance tends to improve only with more exemplars. So, in CL there tends to be explicit reuse of learned knowledge to constrain new learning, whereas base learning depends entirely upon new external inputs.
Relevant techniques for CL operate over multiple tasks, often at higher levels of abstraction, such as new problem space representations, task-based selection of learning algorithms, dynamic adjustment of learning parameters, and iterative analysis and modification of the learning algorithms themselves. Though actual usage of this term is varied and evolving, CL typically connotes sequential inductive transfer. It should be noted that the word “inductive” in this connotation qualifies the transfer of knowledge to new tasks, not the underlying learning algorithms.
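The contrast Michelucci & Oblinger draw between cumulative learning and base learning can be illustrated with a small sketch: a logistic-regression model is trained on a prior task, and a related, data-poor task is then learned either from scratch (base learning) or starting from the prior task's weights (cumulative transfer). The function names and synthetic tasks below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, w_init, lr=0.1, epochs=20):
    # Plain gradient-descent logistic regression starting from w_init.
    w = w_init.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def make_task(w_true, n=200):
    # Synthetic labelled data for a linear task defined by w_true.
    X = rng.normal(size=(n, 10))
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
    return X, y

# Two related tasks: task B's true weights are a perturbation of task A's.
w_a_true = rng.normal(size=10)
w_b_true = w_a_true + 0.2 * rng.normal(size=10)

X_a, y_a = make_task(w_a_true)
X_b, y_b = make_task(w_b_true, n=40)          # task B is deliberately data-poor

w_a = train_logreg(X_a, y_a, np.zeros(10))    # knowledge learned on the prior task

# Base learning: task B from scratch.  Cumulative learning: reuse task A's weights.
w_base = train_logreg(X_b, y_b, np.zeros(10))
w_cum  = train_logreg(X_b, y_b, w_a)

# With related tasks and little task-B data, the warm start typically
# (though not always) yields the better held-out accuracy.
X_test, y_test = make_task(w_b_true, n=500)
acc = lambda w: np.mean(((X_test @ w) > 0) == y_test)
print(f"base learning: {acc(w_base):.2f}   cumulative (transfer): {acc(w_cum):.2f}")
```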
2012
- (Lee, 2012) ⇒ Lee J. (2012) Cumulative Learning. In: Seel N.M. (eds) Encyclopedia of the Sciences of Learning. Springer, Boston, MA
- QUOTE: Intelligent systems, human or artificial, accumulate knowledge and abilities that serve as building blocks for subsequent cognitive development. Cumulative learning (CL) deals with the gradual development of knowledge and skills that improve over time. In both educational psychology and artificial intelligence, such layered or sequential learning is considered to be an essential cognitive capacity, both in acquiring useful aggregations and abstractions that are conducive to intelligent behavior and in producing new foundations for further cognitive development. The primary benefit of CL is that it consolidates the knowledge one has obtained through the experiences, allowing it to be reproduced and exploited for subsequent learning situations through cumulative interaction between prior knowledge and new information.
2005
- (Swarup et al., 2005) ⇒ Swarup, S., Mahmud, M. M., Lakkaraju, K., & Ray, S. R. (2005). Cumulative learning: Towards designing cognitive architectures for artificial agents that have a lifetime (PDF).
- ABSTRACT: Cognitive architectures should be designed with learning performance as a central goal. A critical feature of intelligence is the ability to apply the knowledge learned in one context to a new context. A cognitive agent is expected to have a lifetime, in which it has to learn to solve several different types of tasks in its environment. In such a situation, the agent should become increasingly better adapted to its environment. This means that its learning performance on each new task should improve as it is able to transfer knowledge learned in previous tasks to the solution of the new task. We call this ability cumulative learning. Cumulative learning thus refers to the accumulation of learned knowledge over a lifetime, and its application to the learning of new tasks. We believe that creating agents that exhibit sophisticated, long-term, adaptive behavior is going to require this kind of approach.
- ↑ Lee, JungMi (1 January 2012). Seel, Prof Dr Norbert M., ed. Encyclopedia of the Sciences of Learning. Springer US. pp. 887–893. doi:10.1007/978-1-4419-1428-6_1660. Retrieved 3 June 2016 – via link.springer.com.
- ↑ Richey, Rita C. "The future role of Robert M. Gagne in instructional design." The Legacy of Robert M. Gagne (2000): 255–281.
- ↑ Baxter, J. (2000). "A model of inductive bias learning." Journal of Artificial Intelligence Research 12: 149–198.
- ↑ Thrun, S. (1996). "Is learning the n-th thing any easier than learning the first?" In Advances in Neural Information Processing Systems 8, pp. 640–646. MIT Press.
- ↑ Suddarth, S., & Kergosien, Y. (1990). "Rule-injection hints as a means of improving network performance and learning time." EURASIP Workshop: Neural Networks, pp. 120–129. Lecture Notes in Computer Science. Springer.