Feature Learning Task: Difference between revisions

=== 2008 ===
* ([[2008_ConvexMultiTaskFeatureLearning|Argyriou et al., 2008]]) ⇒ [[Andreas Argyriou]], [[Theodoros Evgeniou]], and [[Massimiliano Pontil]]. ([[2008]]). “[http://eprints.pascal-network.org/archive/00003419/01/mtl_feat.pdf Convex Multi-task Feature Learning].” In: Machine Learning Journal, 73(3). [http://dx.doi.org/10.1007/s10994-007-5040-8 doi:10.1007/s10994-007-5040-8]
** QUOTE: [[2008_ConvexMultiTaskFeatureLearning|We]] present a [[method for learning sparse representation]]s shared across [[multiple task]]s. … [[Learning common sparse representation]]s across [[multiple tasks]] or [[datasets]] may also be of interest for example for [[data compression]]. While the [[problem of learning (or selecting) sparse representation]]s has been extensively studied either for [[single-task supervised learning]] (e.g., using [[1-norm regularization]]) or for [[unsupervised learning]] (e.g., using [[principal component analysis (PCA)]] or [[independent component analysis (ICA)]]), there has been only limited work [3, 9, 31, 48] in the [[multi-task supervised learning setting]].
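
A minimal sketch of the joint sparse feature selection idea mentioned in this quote is given below. It is not the algorithm of [[2008_ConvexMultiTaskFeatureLearning|Argyriou et al., 2008]]; it simply uses scikit-learn's MultiTaskLasso (a group-sparse, L2,1-regularized model) as a stand-in, on hypothetical synthetic data, to show features being selected jointly across several related tasks.

<syntaxhighlight lang="python">
# Illustrative sketch only (not the Argyriou et al. algorithm): joint sparse
# feature selection across tasks via MultiTaskLasso, whose L2,1 penalty tends
# to zero out the same feature columns for every task.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.RandomState(0)
n_samples, n_features, n_tasks = 200, 30, 4

# Synthetic data: only the first 5 features are relevant, shared by all tasks.
X = rng.randn(n_samples, n_features)
W_true = np.zeros((n_features, n_tasks))
W_true[:5, :] = rng.randn(5, n_tasks)
Y = X @ W_true + 0.1 * rng.randn(n_samples, n_tasks)

model = MultiTaskLasso(alpha=0.5).fit(X, Y)

# coef_ has shape (n_tasks, n_features); a feature is "kept" only if its
# column is non-zero for the tasks jointly.
selected = np.flatnonzero(np.abs(model.coef_).sum(axis=0) > 1e-8)
print("features kept across all tasks:", selected)
</syntaxhighlight>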


----

Latest revision as of 07:26, 22 August 2024

A Feature Learning Task is a Machine Learning Task that learns useful data representations (features) directly from raw input data, rather than relying on manually engineered features.
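
As a minimal illustration (assuming scikit-learn and synthetic data; not a method prescribed by this page), the sketch below learns a low-dimensional representation from unlabeled inputs with PCA and then reuses the learned mapping to featurize new inputs.

<syntaxhighlight lang="python">
# Minimal unsupervised feature learning sketch: learn a low-dimensional
# representation from raw inputs with PCA, then apply it to new data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)

# Hypothetical raw inputs: 500 samples lying near a 3-dimensional subspace
# of a 20-dimensional space, plus noise.
basis = rng.randn(3, 20)
X = rng.randn(500, 3) @ basis + 0.05 * rng.randn(500, 20)

# Fit the feature learner on the input data alone (no labels needed).
learner = PCA(n_components=3)
Z = learner.fit_transform(X)          # learned 3-dimensional representation

print("input shape:", X.shape, "-> learned features:", Z.shape)
print("variance explained:", learner.explained_variance_ratio_.round(3))

# The learned mapping can now featurize unseen inputs for a downstream task.
X_new = rng.randn(10, 3) @ basis
Z_new = learner.transform(X_new)
</syntaxhighlight>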



References


2013

In the case of probabilistic models, a good representation is often one that captures the posterior distribution of the underlying explanatory factors for the observed input. A good representation is also one that is useful as input to a supervised predictor. Among the various ways of learning representations, this paper focuses on deep learning methods: those that are formed by the composition of multiple non-linear transformations, with the goal of yielding more abstract - and ultimately more useful - representations.
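
To make the phrase "composition of multiple non-linear transformations" concrete, the following NumPy sketch stacks two tanh layers over a batch of raw inputs. The weights here are random placeholders for illustration; a deep learning method would learn them from data (e.g., by backpropagation or layer-wise pretraining).

<syntaxhighlight lang="python">
# Sketch of a deep representation as a composition of non-linear transforms:
# h1 = tanh(W1 x + b1), h2 = tanh(W2 h1 + b2). Weights are random
# placeholders; a real method would learn them from data.
import numpy as np

rng = np.random.RandomState(0)

def layer(x, W, b):
    """One non-linear transformation: affine map followed by tanh."""
    return np.tanh(x @ W + b)

x = rng.randn(8, 64)                    # a batch of 8 raw 64-dimensional inputs

W1, b1 = rng.randn(64, 32) * 0.1, np.zeros(32)
W2, b2 = rng.randn(32, 16) * 0.1, np.zeros(16)

h1 = layer(x, W1, b1)                   # first-level representation
h2 = layer(h1, W2, b2)                  # more abstract, second-level representation

print([a.shape for a in (x, h1, h2)])   # (8, 64) -> (8, 32) -> (8, 16)
</syntaxhighlight>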

2008