Jacobian Matrix

From GM-RKB

A Jacobian Matrix is an m×n matrix that contains the first-order partial derivatives of a vector-valued function; when the matrix is square, its determinant is the Jacobian determinant.



References

2021a

  • (Wolfram Mathworld, 2021) ⇒ Eric W. Weisstein (1999-2021). "Jacobian". From MathWorld--A Wolfram Web Resource. Retrieved:2021-1-24.
    • QUOTE: Given a set $\mathbf{y=f(x)}$ of $n$ equations in $n$ variables $x_1, \cdots , x_n$, written explicitly as

$\mathbf{y} \equiv\left[\begin{array}{c} f_{1}(\mathbf{x}) \\ f_{2}(\mathbf{x}) \\ \vdots \\ f_{n}(\mathbf{x}) \end{array}\right]$

(1)
or more explicitly as

$\left\{\begin{array}{l} y_{1}=f_{1}\left(x_{1}, \ldots, x_{n}\right) \\ \vdots \\ y_{n}=f_{n}\left(x_{1}, \ldots, x_{n}\right) \end{array}\right.$

(2)
the Jacobian matrix, sometimes simply called “the Jacobian” (Simon and Blume 1994), is defined by

$\mathbf{J}\left(x_{1}, \ldots, x_{n}\right)=\left[\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_{n}}{\partial x_{1}} & \cdots & \frac{\partial y_{n}}{\partial x_{n}} \end{array}\right] .$

(3)
The determinant of $\mathbf{J}$ is the Jacobian determinant (confusingly, often called “the Jacobian” as well) and is denoted

$J=\left|\frac{\partial\left(y_{1}, \ldots, y_{n}\right)}{\partial\left(x_{1}, \ldots, x_{n}\right)}\right|$

(4)
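Equations (1)–(4) can be checked numerically. The sketch below, in Python with NumPy (a choice not prescribed by the source), approximates the Jacobian matrix and its determinant by central differences for a hand-picked system of two equations in two variables; the `jacobian` helper is illustrative, not part of the quoted material.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian matrix of f at x by central differences."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        # column j holds the partial derivatives with respect to x_j
        J[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return J

# n = 2 equations in n = 2 variables: y1 = x1^2 * x2, y2 = 5*x1 + sin(x2)
f = lambda x: np.array([x[0]**2 * x[1], 5 * x[0] + np.sin(x[1])])

J = jacobian(f, [1.0, 2.0])
# analytic Jacobian at (1, 2): [[2*x1*x2, x1^2], [5, cos(x2)]] = [[4, 1], [5, cos(2)]]
print(J)
print(np.linalg.det(J))   # the Jacobian determinant of eq. (4)
```

Central differences give roughly eight significant digits here; a symbolic package would recover the analytic entries exactly.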

2021b

  • (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant Retrieved:2021-1-24.
    • In vector calculus, the Jacobian matrix of a vector-valued function in several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian in the literature.

      Suppose f : ℝn → ℝm is a function such that each of its first-order partial derivatives exists on ℝn. This function takes a point x ∈ ℝn as input and produces the vector f(x) ∈ ℝm as output. Then the Jacobian matrix of f is defined to be an m×n matrix, denoted by J, whose (i,j)th entry is $\mathbf J_{ij} = \frac{\partial f_i}{\partial x_j}$, or explicitly: $\mathbf J = \begin{bmatrix} \dfrac{\partial \mathbf{f}}{\partial x_1} & \cdots & \dfrac{\partial \mathbf{f}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^T f_1 \\ \vdots \\ \nabla^T f_m \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}$ where $\nabla^T f_i$ is the transpose (row vector) of the gradient of the $i$-th component.

      This matrix, whose entries are functions of x, is denoted in various ways; common notations include Df, Jf, $\nabla \mathbf{f}$, and $\frac{\partial(f_1,\ldots,f_m)}{\partial(x_1,\ldots,x_n)}$. Some authors define the Jacobian as the transpose of the form given above.

      The Jacobian matrix represents the differential of f at every point where f is differentiable. In detail, if h is a displacement vector represented by a column matrix, the matrix product J(x) ⋅ h is another displacement vector, that is the best linear approximation of the change of f in a neighborhood of x, if f is differentiable at x. This means that the function that maps y to f(x) + J(x) ⋅ (y − x) is the best linear approximation of f(y) for all points y close to x. This linear function is known as the derivative or the differential of f at x.
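The claim that y ↦ f(x) + J(x) ⋅ (y − x) approximates f(y) near x can be seen concretely. In the NumPy sketch below, the map f : ℝ² → ℝ² and its Jacobian are hand-picked for illustration, not taken from the source.

```python
import numpy as np

# an example map f : R^2 -> R^2 and its analytic Jacobian
def f(v):
    x, y = v
    return np.array([x**2 * y, x + np.sin(y)])

def J(v):
    x, y = v
    return np.array([[2*x*y, x**2],
                     [1.0,   np.cos(y)]])

x0 = np.array([1.0, 2.0])
y0 = x0 + np.array([0.01, -0.02])        # a point close to x0

exact  = f(y0)
linear = f(x0) + J(x0) @ (y0 - x0)       # f(x) + J(x)·(y - x)
print(exact, linear)                     # the two agree to first order
```

The residual shrinks quadratically with the displacement, which is what "best linear approximation" means here.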

      When m = n, the Jacobian matrix is square, so its determinant is a well-defined function of x, known as the Jacobian determinant of f. It carries important information about the local behavior of f. In particular, the function f has, in a neighborhood of a point x, an inverse function that is differentiable if and only if the Jacobian determinant is nonzero at x (see inverse function theorem). The Jacobian determinant also appears when changing the variables in multiple integrals (see substitution rule for multiple variables).
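A standard instance of the change-of-variables role is the polar-to-Cartesian map, whose Jacobian determinant is the familiar factor r in ∬ f dx dy = ∬ f r dr dθ. A small NumPy check (the sample point is arbitrary):

```python
import numpy as np

# Jacobian of the polar-to-Cartesian map (x, y) = (r cos θ, r sin θ)
def J_polar(r, theta):
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, 0.7
det = np.linalg.det(J_polar(r, theta))
print(det)   # equals r, since cos^2 + sin^2 = 1
```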

      When m = 1, that is, when f : ℝn → ℝ is a scalar-valued function, the Jacobian matrix reduces to a row vector. This row vector of all first-order partial derivatives of f is the gradient of f, i.e. $\mathbf{J}_{f} = \nabla f$. Specialising further, when m = n = 1, that is, when f : ℝ → ℝ is a scalar-valued function of a single variable, the Jacobian matrix has a single entry. This entry is the derivative of the function f.
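The m = 1 case can be sketched numerically: the 1×n Jacobian row of a scalar function, computed by finite differences, matches the analytic gradient. The function below is a hand-picked example, not from the source.

```python
import numpy as np

# scalar-valued f : R^2 -> R, so the Jacobian is a 1×2 row: the gradient
def f(v):
    x, y = v
    return x**2 + 3*x*y          # analytic gradient: (2x + 3y, 3x)

def grad_f(v):
    x, y = v
    return np.array([2*x + 3*y, 3*x])

p = np.array([1.0, 2.0])
eps = 1e-6
# the 1×n Jacobian row by central differences, one entry per variable
num = np.array([(f(p + eps*e) - f(p - eps*e)) / (2*eps) for e in np.eye(2)])
print(num, grad_f(p))
```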

      These concepts are named after the mathematician Carl Gustav Jacob Jacobi (1804–1851).

2021c

  • (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant#Jacobian_matrix Retrieved:2021-1-24.
    • The Jacobian of a vector-valued function in several variables generalizes the gradient of a scalar-valued function in several variables, which in turn generalizes the derivative of a scalar-valued function of a single variable. In other words, the Jacobian matrix of a scalar-valued function in several variables is (the transpose of) its gradient and the gradient of a scalar-valued function of a single variable is its derivative.

      At each point where a function is differentiable, its Jacobian matrix can also be thought of as describing the amount of "stretching", "rotating" or "transforming" that the function imposes locally near that point. For example, if (x′, y′) = f(x, y) is used to smoothly transform an image, the Jacobian matrix Jf(x, y) describes how the image in the neighborhood of (x, y) is transformed.

      If a function is differentiable at a point, its differential is given in coordinates by the Jacobian matrix. However a function does not need to be differentiable for its Jacobian matrix to be defined, since only its first-order partial derivatives are required to exist.

      If f is differentiable at a point p in ℝn, then its differential is represented by Jf(p). In this case, the linear transformation represented by Jf(p) is the best linear approximation of f near the point p, in the sense that: $\mathbf f(\mathbf x) - \mathbf f(\mathbf p) = \mathbf J_{\mathbf f}(\mathbf p)(\mathbf x - \mathbf p) + o(\|\mathbf x - \mathbf p\|) \quad (\text{as } \mathbf{x} \to \mathbf{p}),$ where o(‖x − p‖) is a quantity that approaches zero much faster than the distance between x and p does as x approaches p. This approximation specializes to the approximation of a scalar function of a single variable by its Taylor polynomial of degree one, namely: $f(x) - f(p) = f'(p) (x - p) + o(x - p) \quad (\text{as } x \to p)$.
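The little-o claim is observable numerically: the remainder divided by ‖x − p‖ shrinks as x approaches p along a fixed direction. The map, point, and direction in this NumPy sketch are illustrative choices.

```python
import numpy as np

# remainder f(x) - f(p) - J(p)(x - p) should be o(‖x - p‖)
def f(v):
    x, y = v
    return np.array([np.exp(x) * y, x * y**2])

def J(v):
    x, y = v
    return np.array([[np.exp(x) * y, np.exp(x)],
                     [y**2,          2*x*y]])

p = np.array([0.5, 1.0])
d = np.array([1.0, -1.0]) / np.sqrt(2)   # fixed unit direction
ratios = []
for h in (1e-1, 1e-2, 1e-3):
    x = p + h * d
    rem = f(x) - f(p) - J(p) @ (x - p)
    ratios.append(np.linalg.norm(rem) / h)   # remainder per unit distance
print(ratios)   # decreases toward 0 as h shrinks
```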

      In this sense, the Jacobian may be regarded as a kind of “first-order derivative” of a vector-valued function of several variables. In particular, this means that the gradient of a scalar-valued function of several variables may be regarded as its "first-order derivative".

      Composable differentiable functions f : ℝn → ℝm and g : ℝm → ℝk satisfy the chain rule, namely $\mathbf{J}_{\mathbf{g} \circ \mathbf{f}}(\mathbf{x}) = \mathbf{J}_{\mathbf{g}}(\mathbf{f}(\mathbf{x})) \mathbf{J}_{\mathbf{f}}(\mathbf{x})$ for x in ℝn.
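The chain rule can be verified numerically: the matrix product of the analytic Jacobians agrees with a finite-difference Jacobian of the composition. The two maps below are hand-picked examples with n = m = k = 2.

```python
import numpy as np

def f(v):                      # f : R^2 -> R^2
    x, y = v
    return np.array([x * y, x + y])

def Jf(v):
    x, y = v
    return np.array([[y, x], [1.0, 1.0]])

def g(u):                      # g : R^2 -> R^2
    a, b = u
    return np.array([np.sin(a), a * b])

def Jg(u):
    a, b = u
    return np.array([[np.cos(a), 0.0], [b, a]])

x0 = np.array([1.0, 2.0])

# chain rule: J_{g∘f}(x) = J_g(f(x)) · J_f(x)
analytic = Jg(f(x0)) @ Jf(x0)

# compare with a central-difference Jacobian of the composition g∘f
eps = 1e-6
numeric = np.column_stack([
    (g(f(x0 + eps*e)) - g(f(x0 - eps*e))) / (2*eps) for e in np.eye(2)
])
print(np.max(np.abs(analytic - numeric)))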

      The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix, which in a sense is the “second derivative” of the function in question.
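This relationship gives a simple recipe: differentiating the gradient componentwise yields the Hessian. A NumPy sketch with a hand-picked scalar function (not from the source):

```python
import numpy as np

# the Hessian of f(x, y) = x^3 y + y^2 is the Jacobian of its gradient
def grad(v):
    x, y = v
    return np.array([3*x**2 * y, x**3 + 2*y])   # analytic gradient of f

p = np.array([1.0, 2.0])
eps = 1e-5
# Jacobian of grad by central differences, one column per variable
H = np.column_stack([
    (grad(p + eps*e) - grad(p - eps*e)) / (2*eps) for e in np.eye(2)
])
print(H)   # analytic Hessian at (1, 2): [[6xy, 3x^2], [3x^2, 2]] = [[12, 3], [3, 2]]
```

Note the symmetry of the result, as expected for a twice continuously differentiable function.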

1994

  • (Simon & Blume, 1994) ⇒ C. P. Simon, and L. E. Blume (1994). “Mathematics for Economists”. New York: W. W. Norton.