Linear Transformation Operation
A Linear Transformation Operation is a transformation operation that preserves the operations of vector addition and scalar multiplication.
- AKA: Linear Map, Linear Mapping.
- Context:
- It can be represented as follows: let [math]\displaystyle{ V }[/math] and [math]\displaystyle{ W }[/math] be vector spaces over the field [math]\displaystyle{ F }[/math]; a linear transformation from [math]\displaystyle{ V }[/math] into [math]\displaystyle{ W }[/math] is a function [math]\displaystyle{ T }[/math] from [math]\displaystyle{ V }[/math] into [math]\displaystyle{ W }[/math] such that [math]\displaystyle{ T(c\alpha + \beta)=c(T \alpha)+T \beta }[/math] for all [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math] in [math]\displaystyle{ V }[/math] and all scalars [math]\displaystyle{ c }[/math] in [math]\displaystyle{ F }[/math].
- It can be a Surjective Linear Transformation.
- It can be an Injective Linear Transformation.
- It can be produced by a Linear Mapping Task.
- It can range from being a Linear Function to a Linear Matrix Transformation.
- For two linear operators [math]\displaystyle{ T_1:V \to V }[/math] and [math]\displaystyle{ T_2:V \to V }[/math], the composite transformations [math]\displaystyle{ T_1T_2 }[/math] and [math]\displaystyle{ T_2T_1 }[/math] are also linear transformations from [math]\displaystyle{ V }[/math] to [math]\displaystyle{ V }[/math], but in general [math]\displaystyle{ T_1T_2 \neq T_2T_1 }[/math].
- For a linear transformation [math]\displaystyle{ T:V \to W }[/math], the collection of all elements [math]\displaystyle{ w \in W }[/math] such that [math]\displaystyle{ w=T(v) }[/math] for some [math]\displaystyle{ v \in V }[/math] is called the range of [math]\displaystyle{ T }[/math] and is denoted [math]\displaystyle{ ran(T) }[/math]. That is,
[math]\displaystyle{ ran(T)=\{T(v) \mid v \in V\} }[/math].
- For a linear transformation [math]\displaystyle{ T:V \to W }[/math], the set of all elements of [math]\displaystyle{ V }[/math] that are mapped to the zero element by [math]\displaystyle{ T }[/math] is called the kernel (or null space) of [math]\displaystyle{ T }[/math] and is denoted [math]\displaystyle{ ker(T) }[/math]. That is,
[math]\displaystyle{ ker(T)=\{v \in V \mid T(v)=0\} }[/math].
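The range and kernel definitions above can be checked numerically when a transformation is given by a matrix. The sketch below is a minimal pure-Python illustration (the matrix `A` and the sample vectors are chosen for illustration): it applies [math]\displaystyle{ T(v)=Av }[/math] and tests whether a vector lies in [math]\displaystyle{ ker(T) }[/math].

```python
# Minimal sketch: a linear transformation T(v) = A v represented by a
# matrix A, with a membership check for ker(T). Sample values are
# illustrative, not canonical.

def apply(A, v):
    """Apply the linear transformation with matrix A to vector v."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, v)) for row in A]

def in_kernel(A, v):
    """v is in ker(T) iff T(v) is the zero vector."""
    return all(w_i == 0 for w_i in apply(A, v))

# Sample map T: R^3 -> R^2 with T(x, y, z) = (y + z, y - z).
A = [[0, 1, 1],
     [0, 1, -1]]

print(apply(A, [5, 2, 3]))      # T(5, 2, 3) = (2+3, 2-3) = [5, -1], an element of ran(T)
print(in_kernel(A, [7, 0, 0]))  # vectors (x, 0, 0) map to zero: True
print(in_kernel(A, [0, 1, 0]))  # maps to (1, 1), nonzero: False
```

Any output of `apply` is, by construction, a witness for membership in [math]\displaystyle{ ran(T) }[/math].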
- Example(s):
- a Zero Map.
- an Identity Map.
- a Linear Transformation Addition Operation ([math]\displaystyle{ + }[/math]), where for [math]\displaystyle{ T_1:V \to W }[/math] and [math]\displaystyle{ T_2:V \to W }[/math], [math]\displaystyle{ T_1 + T_2 }[/math] is also a linear transformation (from [math]\displaystyle{ V \to W }[/math]).
- a Constant Multiplication Function, [math]\displaystyle{ x \mapsto cx }[/math], where [math]\displaystyle{ c }[/math] is a constant.
- a Fourier Transform.
- a Haar Transform.
- a Bilinear Function.
- a Linear Projection, such as an orthogonal projection.
- [math]\displaystyle{ T(x)=-x/2 }[/math], which combines a scale compression (by a factor of 2) with a reflection.
- a transformation [math]\displaystyle{ T:\R^3 \to \mathbb{R}^2 }[/math] defined by [math]\displaystyle{ T\left(\substack{ x \\y \\z}\right)=\left( \substack{ y+z \\y-z} \right) }[/math] is a linear transformation, since [math]\displaystyle{ T }[/math] satisfies the property [math]\displaystyle{ T(c\alpha + \beta)=c(T \alpha)+T \beta }[/math] for all [math]\displaystyle{ \alpha, \beta \in V }[/math] and all scalars [math]\displaystyle{ c \in F }[/math].
Let [math]\displaystyle{ \alpha=\left(\substack{ x_1 \\y_1 \\z_1}\right) \in V, \beta=\left(\substack{ x_2 \\y_2 \\z_2}\right) \in V }[/math] and let [math]\displaystyle{ c \in F }[/math] be a scalar.
So [math]\displaystyle{ T\alpha=T(\alpha)=T\left(\substack{ x_1 \\y_1 \\z_1}\right)=\left( \substack{ y_1+z_1 \\y_1-z_1} \right) \in \mathbb{R}^2=W }[/math] and [math]\displaystyle{ T \beta=T(\beta)=T\left(\substack{ x_2 \\y_2 \\z_2}\right)=\left( \substack{y_2+z_2 \\y_2-z_2} \right) \in \mathbb{R}^2=W }[/math].
Then [math]\displaystyle{ T(c\alpha+\beta)=T\left(\substack{ cx_1+x_2 \\cy_1+y_2 \\cz_1+z_2}\right)=\left( \substack{ cy_1+y_2+cz_1+z_2 \\cy_1+y_2-cz_1-z_2} \right)=\left( \substack{ c(y_1+z_1)+y_2+z_2 \\c(y_1-z_1)+y_2-z_2} \right)=c\left( \substack{ y_1+z_1 \\y_1-z_1} \right)+ \left( \substack{ y_2+z_2 \\y_2-z_2} \right)=c(T\alpha) + T \beta }[/math], which proves that [math]\displaystyle{ T }[/math] is a linear transformation.
With a little computation it can also be found that the transformation matrix is [math]\displaystyle{ T=\begin{pmatrix}0 & 1 & 1 \\ 0 & 1 & -1 \end{pmatrix} }[/math].
It can be verified that [math]\displaystyle{ T\left(\substack{ x \\y \\z}\right)=\begin{pmatrix}0 & 1 & 1 \\ 0 & 1 & -1 \end{pmatrix} \left(\substack{ x \\y \\z}\right)=\left( \substack{ y+z \\y-z} \right) }[/math].
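The derivation above can be spot-checked numerically. The sketch below is a minimal pure-Python check (the vectors [math]\displaystyle{ \alpha, \beta }[/math] and scalar [math]\displaystyle{ c }[/math] are arbitrary sample values):

```python
# Spot-check T(c*alpha + beta) == c*T(alpha) + T(beta) for the worked
# example T(x, y, z) = (y + z, y - z). Sample values are arbitrary.

def T(v):
    x, y, z = v
    return (y + z, y - z)

alpha = (1.0, 2.0, 3.0)   # sample alpha in R^3
beta = (4.0, -1.0, 0.5)   # sample beta in R^3
c = 2.5                   # sample scalar

lhs = T(tuple(c * a + b for a, b in zip(alpha, beta)))   # T(c*alpha + beta)
rhs = tuple(c * ta + tb for ta, tb in zip(T(alpha), T(beta)))  # c*T(alpha) + T(beta)
print(lhs == rhs)  # True: the linearity property holds for these values
```

One passing check is of course no proof; the algebraic argument above is what establishes linearity for all inputs.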
- Counter-Example(s):
- a Non-Linear Transformation, such as [math]\displaystyle{ x\mapsto x^2 }[/math].
- a Cosine Function, [math]\displaystyle{ x \mapsto \cos(x) }[/math].
- a transformation [math]\displaystyle{ T:\R^3 \to \mathbb{R}^1 }[/math] defined by [math]\displaystyle{ T\begin{pmatrix}x \\y \\z \end{pmatrix}=x^2+y^2+z^2 }[/math] is not a linear transformation, since [math]\displaystyle{ T }[/math] does not satisfy the property [math]\displaystyle{ T(c\alpha + \beta)=c(T \alpha)+T \beta }[/math] for all [math]\displaystyle{ \alpha, \beta \in V }[/math] and all scalars [math]\displaystyle{ c \in F }[/math].
The failure can be verified as follows.
Let [math]\displaystyle{ \alpha=\begin{pmatrix}x_1 \\y_1 \\z_1 \end{pmatrix} \in \mathbb{R}^3=V, \beta=\begin{pmatrix}x_2 \\y_2 \\z_2 \end{pmatrix} \in \mathbb{R}^3=V }[/math] and let [math]\displaystyle{ c \in F }[/math] be a scalar.
So [math]\displaystyle{ T\alpha=T(\alpha)=T\begin{pmatrix}x_1 \\y_1 \\z_1 \end{pmatrix}={x_1}^2+{y_1}^2+{z_1}^2 \in \mathbb{R}^1=W }[/math] and [math]\displaystyle{ T\beta=T(\beta)=T\begin{pmatrix}x_2 \\y_2 \\z_2 \end{pmatrix}={x_2}^2+{y_2}^2+{z_2}^2 \in \mathbb{R}^1=W }[/math],
but [math]\displaystyle{ T(c \alpha+\beta)=T\begin{pmatrix}cx_1+x_2 \\cy_1+y_2 \\cz_1+z_2 \end{pmatrix}={(cx_1+x_2)}^2+{(cy_1+y_2)}^2+{(cz_1+z_2)}^2 \neq cT(\alpha)+T(\beta) }[/math] in general.
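A single numerical counterexample is enough to confirm the failure. The sketch below (with sample vectors chosen for convenience) exhibits one:

```python
# Concrete failure of linearity for T(x, y, z) = x^2 + y^2 + z^2.
def T(v):
    x, y, z = v
    return x * x + y * y + z * z

alpha = (1, 0, 0)   # sample alpha
beta = (0, 1, 0)    # sample beta
c = 2               # sample scalar

lhs = T(tuple(c * a + b for a, b in zip(alpha, beta)))  # T(2, 1, 0) = 4 + 1 = 5
rhs = c * T(alpha) + T(beta)                            # 2*1 + 1 = 3
print(lhs, rhs)  # 5 3 -- unequal, so T is not linear
```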
- See: Linear Model Training, Homomorphism, Independent Component Analysis, Vector Space, Category Theory, Linear Independence, Linear Algebra Concept.
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/linear_map Retrieved:2015-1-30.
- In mathematics, a linear map (also called a linear mapping, linear transformation or, in some contexts, linear function) is a mapping V → W between two modules (including vector spaces) that preserves (in the sense defined below) the operations of addition and scalar multiplication. Linear maps can generally be represented as matrices, and simple examples include rotation and reflection linear transformations.
An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map, while in analytic geometry it does not.
A linear map always maps linear subspaces onto linear subspaces (possibly of a lower dimension); for instance it maps a plane through the origin to a plane, straight line or point.
In the language of abstract algebra, a linear map is a module homomorphism. In the language of category theory it is a morphism in the category of modules over a given ring.
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Linear_map#Definition_and_first_consequences Retrieved:2015-1-30.
- Let V and W be vector spaces over the same field K. A function f: V → W is said to be a linear map if for any two vectors x and y in V and any scalar α in K, the following two conditions are satisfied:
- [math]\displaystyle{ f(\mathbf{x}+\mathbf{y}) = f(\mathbf{x})+f(\mathbf{y}) }[/math] (additivity)
- [math]\displaystyle{ f(\alpha \mathbf{x}) = \alpha f(\mathbf{x}) }[/math] (homogeneity of degree 1)
- This is equivalent to requiring the same for any linear combination of vectors, i.e. that for any vectors [math]\displaystyle{ \mathbf{x}_1, \ldots, \mathbf{x}_m \in V }[/math] and scalars [math]\displaystyle{ a_1, \ldots, a_m \in K }[/math], the following equality holds:
[math]\displaystyle{ f(a_1 \mathbf{x}_1+\cdots+a_m \mathbf{x}_m) = a_1 f(\mathbf{x}_1)+\cdots+a_m f(\mathbf{x}_m). }[/math]
Denoting the zero elements of the vector spaces V and W by [math]\displaystyle{ \mathbf{0}_V }[/math] and [math]\displaystyle{ \mathbf{0}_W }[/math] respectively, it follows that [math]\displaystyle{ f(\mathbf{0}_V) = \mathbf{0}_W }[/math], because letting [math]\displaystyle{ \alpha = 0 }[/math] in the equation for homogeneity of degree 1,
[math]\displaystyle{ f(\mathbf{0}_{V}) = f(0 \cdot \mathbf{0}_{V}) = 0 \cdot f(\mathbf{0}_{V}) = \mathbf{0}_{W} . }[/math]
Occasionally, V and W can be considered to be vector spaces over different fields. It is then necessary to specify which of these ground fields is being used in the definition of "linear". If V and W are considered as spaces over the field K as above, we talk about K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear.
A linear map from V to K (with K viewed as a vector space over itself) is called a linear functional.
These statements generalize to any left-module RM over a ring R without modification.
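The closing remark about complex conjugation can be demonstrated directly: conjugation is additive and commutes with real scalars, but not with complex ones. A minimal sketch (sample values arbitrary):

```python
# Complex conjugation z -> conj(z) is R-linear but not C-linear.
x, y = 1 + 2j, 3 - 1j   # sample vectors in C

# Additivity holds:
print((x + y).conjugate() == x.conjugate() + y.conjugate())  # True

# Homogeneity holds for a real scalar:
r = 2.5
print((r * x).conjugate() == r * x.conjugate())  # True

# ...but fails for a complex scalar, since conj(a*x) = conj(a)*conj(x):
a = 1j
print((a * x).conjugate() == a * x.conjugate())  # False
```

So conjugation is a K-linear map only when the ground field K is taken to be R, not C.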
2012
- Mark V. Sapir. http://www.math.vanderbilt.edu/~msapir/msapir/feb19.html
- QUOTE: A function from [math]\displaystyle{ \R^n }[/math] to [math]\displaystyle{ \R^m }[/math] which takes every [math]\displaystyle{ n }[/math]-vector [math]\displaystyle{ v }[/math] to the [math]\displaystyle{ m }[/math]-vector [math]\displaystyle{ Av }[/math], where [math]\displaystyle{ A }[/math] is an [math]\displaystyle{ m }[/math] by [math]\displaystyle{ n }[/math] matrix, is called a linear transformation. The matrix [math]\displaystyle{ A }[/math] is called the standard matrix of this transformation. If [math]\displaystyle{ n=m }[/math] then the transformation is called a linear operator of the vector space [math]\displaystyle{ \R^n }[/math].
Notice that by the definition the linear transformation with a standard matrix [math]\displaystyle{ A }[/math] takes every vector [math]\displaystyle{ (x_1,\ldots,x_n) }[/math] from [math]\displaystyle{ \mathbb{R}^n }[/math] to the vector [math]\displaystyle{ (A(1,1)x_1+\ldots+A(1,n)x_n,\; A(2,1)x_1+\ldots+A(2,n)x_n,\; \ldots,\; A(m,1)x_1+\ldots+A(m,n)x_n) }[/math] from [math]\displaystyle{ \mathbb{R}^m }[/math], where [math]\displaystyle{ A(i,j) }[/math] are the entries of [math]\displaystyle{ A }[/math]. Conversely, every transformation from [math]\displaystyle{ \mathbb{R}^n }[/math] to [math]\displaystyle{ \mathbb{R}^m }[/math] given by a formula of this kind is a linear transformation, and the coefficients [math]\displaystyle{ A(i,j) }[/math] form the standard matrix of this transformation.
Examples. 1. Consider the transformation of [math]\displaystyle{ \mathbb{R}^2 }[/math] which takes each vector [math]\displaystyle{ (a,b) }[/math] to the opposite vector [math]\displaystyle{ (-a,-b) }[/math]. This is a linear operator with standard matrix [math]\displaystyle{ \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} }[/math].
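The coordinate formula in the quote above transcribes directly into code. A minimal pure-Python sketch (using 0-based indices in place of the quote's [math]\displaystyle{ A(i,j) }[/math] notation):

```python
# Apply a linear transformation via its standard matrix A (m x n),
# computing w_i = A(i,1)*x_1 + ... + A(i,n)*x_n for each row i.
def standard_matrix_apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# Example from the quote: the operator on R^2 taking (a, b) to (-a, -b).
A = [[-1, 0],
     [0, -1]]
print(standard_matrix_apply(A, [3, -4]))  # [-3, 4]
```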
2010
- http://mathworld.wolfram.com/LinearTransformation.html
- A linear transformation between two vector spaces [math]\displaystyle{ V }[/math] and [math]\displaystyle{ W }[/math] is a map [math]\displaystyle{ T:V \to W }[/math] such that the following hold:
- 1. [math]\displaystyle{ T(v_1+v_2)=T(v_1)+T(v_2) }[/math] for any vectors [math]\displaystyle{ v_1 }[/math] and [math]\displaystyle{ v_2 }[/math] in [math]\displaystyle{ V }[/math], and
- 2. [math]\displaystyle{ T(\alpha v)=\alpha T(v) }[/math] for any scalar [math]\displaystyle{ \alpha }[/math].
- A linear transformation may or may not be injective or surjective.
2009
- http://en.wiktionary.org/wiki/linear_transformation
- (linear algebra) A map between vector spaces which respects addition and multiplication.
2000
- (Hyvärinen & Oja, 2000) ⇒ Aapo Hyvärinen, and Erkki Oja. (2000). “Independent Component Analysis: Algorithms and Applications.” In: Neural Networks, 13(4-5). doi:10.1016/S0893-6080(00)00026-5.
- QUOTE: A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.
Another, very different application of ICA is on feature extraction. A fundamental problem in digital signal processing is to find suitable representations for image, audio or other kind of data for tasks like compression and denoising. Data representations are often based on (discrete) linear transformations. Standard linear transformations widely used in image processing are the Fourier, Haar, cosine transforms etc. Each of them has its own favorable properties (Gonzales and Wintz, 1987).
It would be most useful to estimate the linear transformation from the data itself, in which case the transform could be ideally adapted to the kind of data that is being processed.
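As an illustration of "estimating the linear transformation from the data itself", the sketch below computes a PCA-style transform for 2-D data in pure Python. The toy dataset is hypothetical, and the 2x2 symmetric eigendecomposition is done in closed form; this is a sketch of the idea, not a production PCA implementation.

```python
import math

# Hypothetical toy 2-D dataset: points roughly along the line y = x.
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.8), (5.0, 5.1)]

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n

# Sample covariance matrix [[a, b], [b, c]].
a = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
b = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
c = sum((p[1] - my) ** 2 for p in data) / (n - 1)

# Closed-form eigenvalues of a symmetric 2x2 matrix.
half_trace = (a + c) / 2.0
radius = math.hypot((a - c) / 2.0, b)
lam1, lam2 = half_trace + radius, half_trace - radius

# Unit eigenvector for the leading eigenvalue lam1
# ((lam1 - c, b) satisfies the characteristic equation; assumes b != 0).
vx, vy = lam1 - c, b
norm = math.hypot(vx, vy)
u = (vx / norm, vy / norm)

# The estimated linear transformation: each component of the new
# representation is a linear combination of the (centered) inputs.
def pc1_score(p):
    return (p[0] - mx) * u[0] + (p[1] - my) * u[1]

print([round(pc1_score(p), 2) for p in data])
```

For this dataset the leading axis points close to the direction (1, 1), and the scores order the points along it, which is exactly the "linear representation adapted to the data" the quote describes.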