Training Time Complexity
A Training Time Complexity is a Computational Time Complexity that corresponds to the time it takes for an algorithm to learn a model from training data (a minimal timing sketch appears after the list below).
- AKA: Training-Time Computational Cost, Training Time, Learning Time.
- Example(s):
- kNN Classification Training Time Complexity: $\Theta(\vert\mathbb{D}\vert L_{ave})$, where $\vert\mathbb{D}\vert$ is the number of training documents and $L_{ave}$ is their average length.
- …
- Counter-Example(s):
- a Test Time Complexity, the time it takes for an algorithm to apply a learned model to make predictions.
- See: Machine Learning System, Machine Learning Task, Test Dataset, Training Dataset, Evaluation Dataset, Computational Complexity, Analysis of Algorithms.
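The quantity being defined can be observed empirically as the wall-clock interval spent fitting a model. The following is a minimal sketch, assuming scikit-learn is available; the synthetic dataset and the choice of LogisticRegression are illustrative, not part of the concept:

```python
import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic training data (illustrative choice, not part of the definition).
X_train, y_train = make_classification(n_samples=10_000, n_features=20,
                                       random_state=0)
model = LogisticRegression(max_iter=1_000)

start = time.perf_counter()
model.fit(X_train, y_train)  # training time: learning a model from training data
training_time = time.perf_counter() - start

print(f"Training time: {training_time:.3f} s")
```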
References
2019
- (Choi et al., 2019) ⇒ Seok-Hwan Choi, Jin-Myeong Shin, and Yoon-Ho Choi (2019). "Dynamic Nonparametric Random Forest Using Covariance". In: Security and Communication Networks, 2019. DOI:10.1155/2019/3984031.
- QUOTE: In Figure 5, we show the learning time of each algorithm under the UNSW-NB15 dataset. For the UNSW-NB15 dataset, the learning time of the proposed C-DRF algorithm was 1,987.48 seconds on average, which was faster than 3,363.02 seconds and 2,258.81 seconds of the original RF algorithm and P. Latinne et al.’s algorithm, respectively.
2017
- (Sammut & Webb, 2017) ⇒ Claude Sammut, and Geoffrey I. Webb. (2017). "Training Time". In: Encyclopedia of Machine Learning and Data Mining. DOI:10.1007/978-1-4899-7687-1_975.
- QUOTE: A learning algorithm is typically applied at two distinct times. Training time refers to the time when an algorithm is learning a model from training data. Test time refers to the time when an algorithm is applying a learned model to make predictions. Lazy learning usually blurs the distinction between these two times, deferring most learning until test time.
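A small sketch of this distinction, assuming scikit-learn (pairing an eager DecisionTreeClassifier against a lazy KNeighborsClassifier is an illustrative choice): the eager learner pays most of its cost in fit, while the lazy kNN learner mostly stores the data at training time and defers the distance computations to predict.

```python
import time

from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)

for clf in (DecisionTreeClassifier(random_state=0), KNeighborsClassifier()):
    t0 = time.perf_counter()
    clf.fit(X, y)            # training time: learning (or merely storing) the model
    t1 = time.perf_counter()
    clf.predict(X[:1_000])   # test time: applying the learned model
    t2 = time.perf_counter()
    print(f"{type(clf).__name__}: train {t1 - t0:.3f} s, test {t2 - t1:.3f} s")
```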
2008
- (Manning et al., 2008) ⇒ Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. (2008). "Introduction to Information Retrieval". Cambridge University Press. ISBN:0521865719.
- QUOTE: Table 14.3 gives the time complexity of kNN. kNN has properties that are quite different from most other classification algorithms. Training a kNN classifier simply consists of determining $k$ and preprocessing documents. In fact, if we preselect a value for $k$ and do not preprocess, then kNN requires no training at all. In practice, we have to perform preprocessing steps like tokenization. It makes more sense to preprocess training documents once as part of the training phase rather than repeatedly every time we classify a new test document.
Test time is $\Theta(\vert \mathbb{D} \vert M_{ave} M_{a})$ for kNN. It is linear in the size of the training set as we need to compute the distance of each training document from the test document. Test time is independent of the number of classes $J$. kNN therefore has a potential advantage for problems with large $J$.
| kNN with preprocessing of training set | Time Complexity |
|---|---|
| training | $\Theta(\vert\mathbb{D}\vert L_{ave})$ |
| testing | $\Theta(L_{a} + \vert\mathbb{D}\vert M_{ave} M_{a}) = \Theta(\vert\mathbb{D}\vert M_{ave} M_{a})$ |

| kNN without preprocessing of training set | Time Complexity |
|---|---|
| training | $\Theta(1)$ |
| testing | $\Theta(L_{a} + \vert\mathbb{D}\vert L_{ave} M_{a}) = \Theta(\vert\mathbb{D}\vert L_{ave} M_{a})$ |
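To make the table concrete, here is a toy sketch in which the corpus, labels, and helper names are all hypothetical: training is exactly one preprocessing (tokenization) pass over the $\vert\mathbb{D}\vert$ training documents, i.e. $\Theta(\vert\mathbb{D}\vert L_{ave})$ in the total number of tokens, while testing preprocesses the test document ($\Theta(L_{a})$) and then compares it against every training document, giving the linear dependence on $\vert\mathbb{D}\vert$ shown in the table.

```python
from collections import Counter

# Hypothetical toy corpus and labels, for illustration only.
train_docs = ["the cat sat", "dogs bark loudly", "cats and dogs"]
labels = ["cat", "dog", "both"]

def preprocess(doc):
    # Tokenization: cost proportional to the document's length L.
    return Counter(doc.lower().split())

# Training: Theta(|D| * L_ave) -- one preprocessing pass per training document.
train_vectors = [preprocess(d) for d in train_docs]

def predict(test_doc, k=1):
    v = preprocess(test_doc)  # Theta(L_a): preprocess the test document
    # Testing loops over all |D| training documents; this loop is the
    # linear-in-|D| factor in the table (the per-pair comparison here is a
    # simplified sparse overlap, standing in for a distance computation).
    def neg_overlap(u):
        return -sum(min(u[t], v[t]) for t in v)
    nearest = sorted(range(len(train_vectors)),
                     key=lambda i: neg_overlap(train_vectors[i]))[:k]
    return [labels[i] for i in nearest]

print(predict("the dogs sat"))  # -> ['cat'] (nearest toy document)
```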