Incremental Machine Learning System

From GM-RKB

  • Counter-Example(s):
    • A Batch Learning System,
    • An Offline Learning System.
  • See: Active Learning; Decision Tree, Decision Rules, Artificial Neural Network, Support Vector Machine, Forecasting, Machine Learning, Supervised Learning, Unsupervised Learning, Data Stream, Big Data, Classification.

Latest revision as of 21:04, 9 May 2024

An Incremental Machine Learning System is a Machine Learning System, based on Incremental Learning, that implements an Incremental Machine Learning Algorithm to solve an Incremental Machine Learning Task.



References

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Incremental_learning Retrieved:2018-4-15.
    • In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e., to further train the model. It represents a dynamic technique of supervised learning and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms.

      Many traditional machine learning algorithms inherently support incremental learning; other algorithms can be adapted to facilitate this. Examples of incremental algorithms include decision trees (IDE4,[1] ID5R[2]), decision rules,[3] artificial neural networks (RBF networks,[4] Learn++,[5] Fuzzy ARTMAP,[6] TopoART,[7] and IGNG[8]), and the incremental SVM.[9] The aim of incremental learning is for the learning model to adapt to new data without forgetting its existing knowledge; it does not retrain the model. Some incremental learners have a built-in parameter or assumption that controls the relevancy of old data, while others, called stable incremental machine learning algorithms, learn representations of the training data that are not even partially forgotten over time. Fuzzy ART[10] and TopoART are two examples of this second approach.

      Incremental algorithms are frequently applied to data streams or big data, addressing issues in data availability and resource scarcity respectively. Stock trend prediction and user profiling are some examples of data streams where new data becomes continuously available. Applying incremental learning to big data aims to produce faster classification or forecasting times.
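      The training-on-a-stream behavior described above can be sketched with scikit-learn's `partial_fit` API, one common way to update a model on successive mini-batches without retraining from scratch. The data here is synthetic and the model choice (`SGDClassifier`) is illustrative, not prescribed by the article.

```python
# Minimal sketch of incremental (online) learning with scikit-learn's
# partial_fit API. Data arrives as mini-batches from a simulated stream;
# the model is updated in place rather than refit on all data seen so far.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # the full label set must be declared up front

for _ in range(10):  # simulate a stream of mini-batches
    X = rng.normal(size=(32, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic, linearly separable labels
    model.partial_fit(X, y, classes=classes)  # update, do not retrain

X_new = rng.normal(size=(5, 4))
preds = model.predict(X_new)
print(preds)
```

Note that `classes` is required on the first `partial_fit` call because, unlike batch fitting, the complete label set cannot be inferred from any single mini-batch.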

2017

  • (Utgoff, 2017) ⇒ Paul E. Utgoff. (2017). Incremental Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA
    • QUOTE: Incremental learning refers to any online learning process that learns the same model as would be learned by a batch learning algorithm.

      (...)

       Incremental learning is useful when the input to a learning process occurs as a stream of distinct observations spread out over time, with the need or desire to be able to use the result of learning at any point in time, based on the input observations received so far. In principle, the stream of observations may be infinitely long, or the next observation long delayed, precluding any hope of waiting until all the observations have been received. Without the ability to forestall learning, one must commit to a sequence of hypotheses or other learned artifacts based on the inputs observed up to the present. One would rather not simply accumulate and store all the inputs and, upon receipt of each new one, apply a batch learning algorithm to the entire sequence of inputs received so far. It would be preferable computationally if the existing hypothesis or other artifact of learning could be updated in response to each newly received input observation.
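      Utgoff's definition (an online process that arrives at the same model a batch learner would) can be illustrated with a deliberately tiny "model", the sample mean, updated one observation at a time and checked against a batch recomputation at every step. The example is a sketch, not drawn from the quoted text.

```python
# Incremental vs. batch learning of a trivial "model": the sample mean.
# The incremental update touches only the old estimate and the new
# observation; the batch version re-scans all observations received so far.
def incremental_mean(stream):
    """Yield the running mean after each observation (Welford-style update)."""
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        mean += (x - mean) / n  # update the old estimate; no re-scan of past data
        yield mean

data = [2.0, 4.0, 9.0, 1.0]
running = list(incremental_mean(data))
batch = [sum(data[: i + 1]) / (i + 1) for i in range(len(data))]
print(running)  # identical to the batch means at every step
```

At each point in the stream the incremental estimate equals what the batch computation over all inputs so far would produce, which is exactly the property Utgoff's definition demands.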


  1. Schlimmer, J. C., & Fisher, D. A case study of incremental concept induction. Fifth National Conference on Artificial Intelligence, 496-501. Philadelphia, 1986.
  2. Utgoff, P. E. Incremental induction of decision trees. Machine Learning, 4(2): 161-186, 1989.
  3. Ferrer-Troyano, F., Aguilar-Ruiz, J. S., & Riquelme, J. C. Incremental rule learning based on example nearness from numerical data streams. Proceedings of the 2005 ACM Symposium on Applied Computing. ACM, 2005.
  4. Bruzzone, L., & Fernàndez Prieto, D. An incremental-learning neural network for the classification of remote-sensing images. Pattern Recognition Letters: 1241-1248, 1999.
  5. Polikar, R., Udpa, L., Udpa, S., & Honavar, V. Learn++: An incremental learning algorithm for supervised neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 2001.
  6. Carpenter, G., Grossberg, S., Markuzon, N., Reynolds, J., & Rosen, D. Fuzzy ARTMAP: a neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks, 1992.
  7. Tscherepanow, M., Kortkamp, M., & Kammer, M. A Hierarchical ART Network for the Stable Incremental Learning of Topological Structures and Associations from Noisy Data. Neural Networks, 24(8): 906-916, 2011.
  8. Lamirel, J.-C., Boulila, Z., Ghribi, M., & Cuxac, P. A New Incremental Growing Neural Gas Algorithm Based on Clusters Labeling Maximization: Application to Clustering of Heterogeneous Textual Data. IEA/AIE 2010: Trends in Applied Intelligent Systems, 139-148, 2010.
  9. Diehl, C. P., & Cauwenberghs, G. SVM incremental learning, adaptation and optimization. Proceedings of the International Joint Conference on Neural Networks, Vol. 4. IEEE, 2003.
  10. Carpenter, G. A., Grossberg, S., & Rosen, D. B. Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system. Neural Networks, 4(6): 759-771, 1991.