Sparse Coding Task
A Sparse Coding Task is a vector coding task that requires representing each input as a sparse vector (one with few nonzero entries), typically as a linear combination of a small number of dictionary elements; a minimal optimization sketch follows the See: list below.
- Context:
- It can be solved by a Sparse Coding System (that implements a sparse coding algorithm).
- …
- Counter-Example(s):
- See: Neural Coding, Analog Signal, Digital Data, Autoencoder, Wavelet, Linear Combination, Basis (Linear Algebra), Normal Distribution, Visual Cortex, Sparse Coding, Associative Memory (Psychology), Neural Ensemble, Sparse Distributed Memory, Gero Miesenböck.
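As a minimal illustration of the task (not drawn from the references below), the following sketch recovers a sparse coefficient vector for a fixed dictionary by iterative soft-thresholding (ISTA) on the l1-regularized least-squares objective. The dictionary `D`, signal `x`, and all parameter values are illustrative assumptions, not part of any cited method.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D@a||^2 + lam*||a||_1 over a via iterative
    soft-thresholding (ISTA). D: (m, k) dictionary; x: (m,) signal."""
    step = 1.0 / (np.linalg.norm(D, ord=2) ** 2)  # 1/L, L = Lipschitz const. of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - step * (D.T @ (D @ a - x))        # gradient step on the smooth term
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold (l1 prox)
    return a

# Toy usage: a signal built from 3 of 128 unit-norm atoms gets a sparse code back.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.5, -2.0, 0.7]
x = D @ a_true
a_hat = ista_sparse_code(D, x, lam=0.05)
print("nonzero coefficients:", int(np.sum(np.abs(a_hat) > 1e-3)))
```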
References
2016
- (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/neural_coding#Sparse_coding Retrieved: 2016-3-31.
- The sparse code is when each item is encoded by the strong activation of a relatively small set of neurons. For each item to be encoded, this is a different subset of all available neurons.
As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, this may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since compared to traditional computers, information is massively distributed across neurons.

A major result in neural coding from Olshausen and Field [1] is that sparse coding of natural images produces wavelet-like oriented filters that resemble the receptive fields of simple cells in the visual cortex. The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system.
Given a potentially large set of input patterns, sparse coding algorithms (e.g. Sparse Autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols.
- ↑ Olshausen, Bruno A., and David J. Field. (1996). “Emergence of Simple-cell Receptive Field Properties by Learning a Sparse Code for Natural Images.” In: Nature, 381(6583), 607-609. http://www.cs.ubc.ca/~little/cpsc425/olshausen_field_nature_1996.pdf
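The quoted passage describes sparse coding operationally: learn a small set of representative patterns, then encode each input as a sparse combination of them. A hedged sketch of that two-step recipe using scikit-learn's DictionaryLearning; the random stand-in data and all parameter values are assumptions for illustration, not taken from the quoted source.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))   # stand-in for, e.g., flattened image patches

# Learn 32 "representative patterns" (dictionary atoms); each row of X is then
# re-expressed as a sparse linear combination of a few atoms.
dl = DictionaryLearning(n_components=32, alpha=1.0,
                        transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(X)          # sparse codes, shape (200, 32)
atoms = dl.components_               # learned dictionary, shape (32, 64)
print("avg. nonzeros per code:", float(np.mean(np.count_nonzero(codes, axis=1))))
```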
2012
- (Sindhwani & Ghoting, 2012) ⇒ Vikas Sindhwani, and Amol Ghoting. (2012). “Large-scale Distributed Non-negative Sparse Coding and Sparse Dictionary Learning.” In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2012). ISBN:978-1-4503-1462-6 doi:10.1145/2339530.2339610
- QUOTE: The proposed algorithms may be seen as parallel optimization procedures for constructing sparse non-negative factorizations of large, sparse matrices. Our approach alternates between a parallel sparse coding phase implemented using greedy or convex (l1-regularized) risk minimization procedures, …
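The sparse coding phase in this setting amounts to l1-regularized least squares with a non-negativity constraint. A minimal single-signal sketch via projected gradient descent; this is an illustrative reconstruction under that assumption, not the paper's distributed algorithm, and all names and values are hypothetical.

```python
import numpy as np

def nonneg_sparse_code(D, x, lam=0.1, n_iter=500):
    """Minimize 0.5*||x - D@a||^2 + lam*sum(a) subject to a >= 0 by
    projected gradient descent (sketch only, not the paper's parallel method)."""
    step = 1.0 / (np.linalg.norm(D, ord=2) ** 2)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x) + lam    # on a >= 0, the l1 penalty is linear
        a = np.maximum(a - step * grad, 0.0)  # project back onto the nonneg orthant
    return a
```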
2006
- (Lee et al., 2006) ⇒ Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. (2006). “Efficient Sparse Coding Algorithms.” In: Advances in Neural Information Processing Systems, pp. 801-808.
2002
- (Hoyer, 2002) ⇒ Patrik O. Hoyer. (2002). “Non-negative Sparse Coding.” In: Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing (NNSP 2002).
- ABSTRACT: Non-negative sparse coding is a method for decomposing multivariate data into non-negative sparse components. In this paper we briefly describe the motivation behind this type of data representation and its relation to standard sparse coding and non-negative matrix factorization. We then give a simple yet efficient multiplicative algorithm for finding the optimal values of the hidden components. In addition, we show how the basis vectors can be learned from the observed data. Simulations demonstrate the effectiveness of the proposed method.
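The "simple yet efficient multiplicative algorithm" for the hidden components (with the basis held fixed) can be sketched as below. Variable names follow the paper's setup of non-negative data X and basis A, but treat this as an illustrative reconstruction rather than the paper's exact code.

```python
import numpy as np

def nnsc_hidden_components(A, X, lam=0.1, n_iter=200, eps=1e-9):
    """Multiplicative updates for S in non-negative sparse coding:
    minimize 0.5*||X - A@S||_F^2 + lam*sum(S) with A, X, S all >= 0.
    The update S <- S * (A.T@X) / (A.T@A@S + lam) keeps S non-negative."""
    rng = np.random.default_rng(0)
    S = np.abs(rng.standard_normal((A.shape[1], X.shape[1])))  # non-negative init
    AtX, AtA = A.T @ X, A.T @ A
    for _ in range(n_iter):
        S *= AtX / (AtA @ S + lam + eps)   # eps guards against division by zero
    return S
```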
1997
- (Olshausen & Field, 1997) ⇒ Bruno A. Olshausen, and David J. Field . (1997). “Sparse Coding with An Overcomplete Basis Set: A Strategy Employed by V1?." Vision research 37, no. 23
- ABSTRACT: The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete — i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli.
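The sparse coding strategy this abstract refers to is usually stated as minimizing an energy that trades reconstruction error against a sparseness penalty on the coefficients. A sketch in the authors' customary notation; the particular penalty S shown is one common choice from their papers, included here as an assumption for illustration.

```latex
% Energy for Olshausen & Field-style sparse coding: image I(x,y),
% basis functions \phi_i(x,y), coefficients a_i, sparseness weight \lambda.
E = \sum_{x,y}\Big[ I(x,y) - \sum_i a_i\,\phi_i(x,y) \Big]^2
    + \lambda \sum_i S\!\left(\frac{a_i}{\sigma}\right),
\qquad \text{e.g. } S(u) = \log\!\left(1 + u^2\right)
```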