Cumulative Learning Process

A Cumulative Learning Process is a learning process that is based on the accumulation and analysis of prior knowledge.



References

2018a

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Cumulative_learning Retrieved:2018-4-15.
    • Cumulative learning is the cognitive process by which we accumulate knowledge and abilities that serve as building blocks for subsequent cognitive development. A very simple example is the saying 'you can't run before you can walk'; the procedural memory built while learning to walk is necessary before one can start to learn to run. Pronouncing words is impossible without first learning to pronounce the vowels and consonants that make them up (hence babies' babbling).

      This is an essential cognitive capacity, allowing prior development to produce new foundations for further cognitive development. Cumulative learning consolidates the knowledge one has obtained through experiences, allowing it to be reproduced and exploited for subsequent learning situations through cumulative interaction between prior knowledge and new information.[1]

      Arguably, all learning is cumulative learning, as all learning depends on previous learning [2] (except skills that are innate, such as breathing, swallowing, gripping, etc.).
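
A simple computational analogue of the "building blocks" idea in the quote above is incremental (online) training, where each update refines the parameters learned from all earlier data instead of retraining from scratch. Below is a minimal sketch using scikit-learn's SGDClassifier with partial_fit; the synthetic batches and dimensions are illustrative assumptions, not part of the cited sources.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier(random_state=0)  # linear model trained by SGD
    classes = np.array([0, 1])             # label set must be declared for partial_fit

    for step in range(5):
        # Each synthetic batch arrives only after the previous one was learned.
        X = rng.normal(size=(200, 10))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
        model.partial_fit(X, y, classes=classes)  # update weights; do not restart
        print(f"batch {step}: accuracy on this batch = {model.score(X, y):.2f}")

Each call to partial_fit builds on the weights established by earlier batches, so later learning rests on the foundations laid by prior learning, in the sense the quote describes.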

2018b

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Multi-task_learning Retrieved:2018-4-15.
    • Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately. [3] [4] [6] Early versions of MTL were called "hints". [5] In a widely cited 1997 paper, Rich Caruana gave the following characterization:

      Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.

      In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, but this is not so for Russian speakers. Yet there is a definite commonality in this classification task across users; for example, one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. [7] Further examples of settings for MTL include multiclass classification and multi-label classification. [8]

      Multi-task learning works because the regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is when the tasks share significant commonalities and are generally slightly undersampled. However, MTL has also been shown to be beneficial for learning unrelated tasks. [9]
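
The shared representation in Caruana's characterization is commonly realized as hard parameter sharing: one trunk network trained for all tasks, with a small task-specific head per task. The following is a minimal PyTorch sketch for the multi-user spam-filter example above; all names, dimensions, and the synthetic data are illustrative assumptions, not taken from the cited papers.

    import torch
    import torch.nn as nn

    n_users, n_features, hidden = 3, 100, 32

    # One trunk shared by every task (user): the shared representation.
    shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
    # One small output head per user: the task-specific part.
    heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_users))

    opt = torch.optim.Adam(list(shared.parameters()) + list(heads.parameters()), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    def mtl_step(batches):
        """One joint step: every task's loss backpropagates into the shared trunk."""
        opt.zero_grad()
        loss = sum(loss_fn(heads[i](shared(X)).squeeze(1), y)
                   for i, (X, y) in enumerate(batches))
        loss.backward()
        opt.step()
        return loss.item()

    # Synthetic per-user batches: 16 emails of 100 features each, binary spam labels.
    batches = [(torch.randn(16, n_features), torch.randint(0, 2, (16,)).float())
               for _ in range(n_users)]
    print(mtl_step(batches))

Because every user's gradient flows through the shared trunk, a regularity learned from one user's mail (such as money-transfer wording) becomes available to all the others, which is how learning the tasks in parallel can improve each of them.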

2012

  • (Lee, 2012) ⇒ Lee, J. (2012). "Cumulative Learning." In: Seel, N.M. (ed.), Encyclopedia of the Sciences of Learning. Springer, Boston, MA.
    • QUOTE: Intelligent systems, human or artificial, accumulate knowledge and abilities that serve as building blocks for subsequent cognitive development. Cumulative learning (CL) deals with the gradual development of knowledge and skills that improve over time. In both educational psychology and artificial intelligence, such layered or sequential learning is considered to be an essential cognitive capacity, both in acquiring useful aggregations and abstractions that are conducive to intelligent behavior and in producing new foundations for further cognitive development. The primary benefit of CL is that it consolidates the knowledge one has obtained through the experiences, allowing it to be reproduced and exploited for subsequent learning situations through cumulative interaction between prior knowledge and new information.
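
The consolidation-and-reuse idea in this quote has a rough analogue in continual machine learning: rehearsal (replay), where a buffer of examples from earlier experience is mixed into later training so that prior knowledge keeps interacting with new information. Below is a minimal Python sketch; the buffer size, reservoir-sampling policy, and toy task data are illustrative assumptions, not taken from Lee (2012).

    import random

    class ReplayBuffer:
        """Keeps a bounded, uniform sample of all examples seen so far."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []
            self.seen = 0

        def add(self, example):
            self.seen += 1
            if len(self.items) < self.capacity:
                self.items.append(example)
            else:
                # Reservoir sampling: replace a random slot with probability capacity/seen.
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.items[j] = example

        def sample(self, k):
            return random.sample(self.items, min(k, len(self.items)))

    buffer = ReplayBuffer(capacity=4)
    for task, data in enumerate([["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]):
        for example in data:
            buffer.add(example)
        # Train on the new task's data plus replayed examples from earlier tasks.
        batch = data + buffer.sample(2)
        print(f"task {task}: train on {batch}")

Reservoir sampling keeps the buffer an unbiased sample of everything seen so far, so no single task dominates what gets consolidated for later reuse.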


  1. Lee, JungMi (2012). "Cumulative Learning." In: Seel, Norbert M. (ed.), Encyclopedia of the Sciences of Learning. Springer US, pp. 887–893. doi:10.1007/978-1-4419-1428-6_1660.
  2. Richey, Rita C. (2000). "The Future Role of Robert M. Gagné in Instructional Design." The Legacy of Robert M. Gagné: 255–281.
  3. Baxter, J. (2000). "A Model of Inductive Bias Learning." Journal of Artificial Intelligence Research 12: 149–198.
  4. Thrun, S. (1996). "Is Learning the n-th Thing Any Easier Than Learning the First?" In: Advances in Neural Information Processing Systems 8, pp. 640–646. MIT Press.
  5. Suddarth, S., & Kergosien, Y. (1990). "Rule-Injection Hints as a Means of Improving Network Performance and Learning Time." EURASIP Workshop on Neural Networks, pp. 120–129. Lecture Notes in Computer Science. Springer.
  6. Caruana, R. (1997). "Multi-task Learning." Machine Learning 28: 41–75. doi:10.1023/A:1007379606734.
  7. Abu-Mostafa, Y. S. (1990). "Learning from Hints in Neural Networks." Journal of Complexity 6: 192–198. doi:10.1016/0885-064x(90)90006-y.
  8. Weinberger, Kilian. "Multi-task Learning."
  9. Romera-Paredes, B., Argyriou, A., Bianchi-Berthouze, N., & Pontil, M. (2012). "Exploiting Unrelated Tasks in Multi-Task Learning." http://jmlr.csail.mit.edu/proceedings/papers/v22/romera12/romera12.pdf