2012 MatrixComputations3rdEd
- (Golub & Van Loan, 2012) ⇒ Gene H. Golub and Charles F. Van Loan. (2012). “Matrix Computations (4th Ed.).” Johns Hopkins University Press. ISBN:1421408597
Subject Headings: Matrix Computation, Cholesky Factorization, Matrix Analysis, QR Algorithm, Singular Value Decomposition, Tridiagonal Matrix.
Notes
- Google Books Book Page: http://books.google.com/books?id=X5YfsuCWpxMC
- Book Site: http://www.cs.cornell.edu/cv/GVL4/golubandvanloan.htm
- Book M-Files: http://www.cs.cornell.edu/cv/GVL4/M-Files/M-Home.htm
Cited By
Quotes
Chapter 1 - Matrix Multiplication p.1
1.1 Basic Algorithms and Notation p.2
{{#ifanon:|
Matrix computations are built upon a hierarchy of linear algebraic operations. Dot products involve the scalar operations of addition and multiplication. Matrix-vector multiplication is made up of dot products. Matrix-matrix multiplication amounts to a collection of matrix-vector products. All of these operations can be described in algorithmic form or in the language of linear algebra. One of our goals is to show how these two styles of expression complement each other. Along the way we pick up notation and acquaint the reader with the kind of thinking that underpins the matrix computation area. The discussion revolves around the matrix multiplication problem, a computation that can be organized in several ways.
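As a concrete illustration of this hierarchy, here is a minimal Python/NumPy sketch (our own aside, not from the text) that builds each level out of the one below it:
```python
import numpy as np

def dot(x, y):
    # Level 1: a dot product is a sequence of scalar multiplies and adds.
    s = 0.0
    for i in range(len(x)):
        s += x[i] * y[i]
    return s

def matvec(A, x):
    # Level 2: a matrix-vector product is one dot product per row of A.
    return np.array([dot(A[i, :], x) for i in range(A.shape[0])])

def matmat(A, B):
    # Level 3: a matrix-matrix product is one matrix-vector product
    # per column of B.
    return np.column_stack([matvec(A, B[:, j]) for j in range(B.shape[1])])

A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
assert np.allclose(matmat(A, B), A @ B)   # agrees with NumPy's matmul
```
The explicit loops are for exposition only; in practice one calls an optimized BLAS routine for each level.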
1.1.1 Matrix Notation
Let [math]\displaystyle{ \R }[/math] designate the set of real numbers. We denote the vector space of all m-by-n real matrices by [math]\displaystyle{ \R^{m × n} }[/math]:
- [math]\displaystyle{ A \in \mathbb{R}^{m × n} \Leftrightarrow \ \ A = (a_{ij}) = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \ \ a_{ij} \in \R. }[/math]
If a capital letter is used to denote a matrix (e.g., A, B, Δ), then the corresponding lower case letter with subscript ij refers to the (i, j) entry (e.g., [math]\displaystyle{ a_{ij}, b_{ij}, \delta_{ij} }[/math]). Sometimes we designate the elements of a matrix with the notation [math]\displaystyle{ [A]_{ij} }[/math] or [math]\displaystyle{ A(i,j) }[/math].
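For instance, in a NumPy setting (a hypothetical aside, not from the text), entry access looks as follows, with the caveat that Python indexing is 0-based while the book's is 1-based:
```python
import numpy as np

A = np.array([[11, 12, 13],
              [21, 22, 23]])   # a 2-by-3 matrix

# The book's a_12, [A]_12, or A(1,2) is A[0, 1] here (0-based indexing).
a_12 = A[0, 1]
assert a_12 == 12
```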
1.1.2 Matrix Operations
Basic matrix operations include transposition ([math]\displaystyle{ \R^{m × n} → \mathbb{R}^{n × m} }[/math]),
- [math]\displaystyle{ C = A^T \Rightarrow c_{ij} = a_{ji}, }[/math]
addition ([math]\displaystyle{ \R^{m × n} × \mathbb{R}^{m × n} → \mathbb{R}^{m × n} }[/math]),
- [math]\displaystyle{ C = A + B \Rightarrow c_{ij} = a_{ij} + b_{ij}, }[/math]
scalar-matrix multiplication ([math]\displaystyle{ \R × \mathbb{R}^{m × n} → \mathbb{R}^{m × n} }[/math]),
- [math]\displaystyle{ C = \alpha A \Rightarrow c_{ij} = \alpha a_{ij}, }[/math]
and matrix-matrix multiplication ([math]\displaystyle{ \R^{m × p} × \mathbb{R}^{p × n} → \mathbb{R}^{m × n} }[/math]),
- [math]\displaystyle{ C = A B \Rightarrow c_{ij} = \sum^p_{k=1} a_{ik} b_{kj}. }[/math]
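A quick entrywise check of these definitions in NumPy (our illustration; the matrices are arbitrary):
```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
alpha = 2.0

C = A.T                      # transposition:   c_ij = a_ji
assert C[0, 1] == A[1, 0]

C = A + B                    # addition:        c_ij = a_ij + b_ij
C = alpha * A                # scalar multiple: c_ij = alpha * a_ij

C = A @ B                    # matrix product:  c_ij = sum_k a_ik * b_kj
assert np.isclose(C[0, 1], sum(A[0, k] * B[k, 1] for k in range(2)))
```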
Pointwise matrix operations are occasionally useful, especially pointwise multiplication ([math]\displaystyle{ \R^{m × n} × \mathbb{R}^{m × n} → \mathbb{R}^{m × n} }[/math]),
- [math]\displaystyle{ C = A \ .* B \Rightarrow c_{ij} = a_{ij} b_{ij}, }[/math]
and pointwise division ([math]\displaystyle{ \R^{m × n} × \mathbb{R}^{m × n} → \mathbb{R}^{m × n} }[/math]),
- [math]\displaystyle{ C = A \ ./ B \Rightarrow c_{ij} = a_{ij} / b_{ij}. }[/math]
Of course, for pointwise division to make sense, the “denominator matrix” must have nonzero entries.
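In NumPy the `*` and `/` operators are already pointwise, so the book's “.*” and “./” translate directly (a small sketch of our own):
```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 4.0], [6.0, 8.0]])

C = A * B    # pointwise product:  c_ij = a_ij * b_ij
D = A / B    # pointwise division: d_ij = a_ij / b_ij (each b_ij nonzero)
```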
1.1.3 Vector Notation
Let [math]\displaystyle{ \R^n }[/math] denote the vector space of real n-vectors:
- [math]\displaystyle{ x \in \mathbb{R}^n \Leftrightarrow \ \ x = \begin{bmatrix} x_{1} \\ \vdots \\ x_n \end{bmatrix} \ \ x_i \in \R. }[/math]
We refer to [math]\displaystyle{ x_i }[/math] as the ith component of [math]\displaystyle{ x }[/math]. Depending upon context, the alternative notations [math]\displaystyle{ [x]_i }[/math] and [math]\displaystyle{ x(i) }[/math] are sometimes used. Notice that we are identifying [math]\displaystyle{ \R^n }[/math] with [math]\displaystyle{ \R^{n × 1} }[/math] and so the members of [math]\displaystyle{ \R^n }[/math] are column vectors. On the other hand, the elements of [math]\displaystyle{ \R^{1 × n} }[/math] are row vectors: [math]\displaystyle{ x \in \mathbb{R}^{1 × n} \Leftrightarrow x = [x_1,...,x_n] }[/math]. If [math]\displaystyle{ x }[/math] is a column vector, then [math]\displaystyle{ y = x^T }[/math] is a row vector.
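The column/row distinction can be mirrored in NumPy with explicit 2-D shapes (an aside of ours; NumPy's 1-D arrays carry no orientation):
```python
import numpy as np

x = np.array([[1.0],
              [2.0],
              [3.0]])        # x in R^{3 x 1}: a column vector
y = x.T                      # y = x^T in R^{1 x 3}: a row vector
assert x.shape == (3, 1) and y.shape == (1, 3)
```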
1.1.4 Vector Operations
Assume that [math]\displaystyle{ a \in \R }[/math], [math]\displaystyle{ x \in \R^n }[/math], and [math]\displaystyle{ y \in \R^n }[/math]. Basic vector operations include scalar-vector multiplication,
- [math]\displaystyle{ z = ax \Rightarrow z_i = ax_i, }[/math]
vector addition,
- [math]\displaystyle{ z = x + y \Rightarrow z_i = x_i + y_i, }[/math]
and the inner product (or dot product),
- [math]\displaystyle{ c = x^Ty \Rightarrow c = \sum^n_{i=1} x_i y_i. }[/math]
A particularly important operation, which we write in update form, is the saxpy:
- [math]\displaystyle{ y = ax + y \Rightarrow y_i = ax_i + y_i. }[/math]
Here, the symbol “[math]\displaystyle{ = }[/math]” is used to denote assignment, not mathematical equality. The vector y is being updated. The name “saxpy” is used in LAPACK, a software package that implements many of the algorithms in this book. “Saxpy” is a mnemonic for “scalar a x plus y.” See LAPACK.
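A loop-level saxpy sketch in Python (our own, to make the update/overwrite semantics explicit; in vectorized NumPy this is just `y += a*x`):
```python
import numpy as np

def saxpy(a, x, y):
    # y = a*x + y in update form: y is overwritten in place, component
    # by component, rather than assigned to a fresh vector.
    for i in range(len(y)):
        y[i] = a * x[i] + y[i]

a = 2.0
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
saxpy(a, x, y)
assert np.allclose(y, [6.0, 9.0, 12.0])
```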
[[Pointwise vector operation]]s are also useful, including vector multiplication, [math]\displaystyle{ z = x \ .* y \Rightarrow z_i = x_iy_i }[/math], and vector division, [math]\displaystyle{ z = x \ ./ y \Rightarrow z_i = x_i / y_i }[/math].
…
}}
1.2 Structure and Efficiency p.14
1.3 Block Matrices and Algorithms p.22
1.4 Fast Matrix-Vector Products p.33
1.5 Vectorization and Locality p.43
1.6 Parallel Matrix Multiplication p.49
Chapter 2 - Matrix Analysis p.63
2.1 Basic Ideas from Linear Algebra p.64
2.2 Vector Norms p.68
2.3 Matrix Norms p.71
2.4 The Singular Value Decomposition p.76
2.5 Subspace Metrics p.81
2.6 Linear System Sensitivity p.87
2.7 Finite Precision Matrix Computations p.93
Chapter 3 - General Linear Systems p.105
3.1 Triangular Systems p.106
3.2 The LU Factorization p.111
3.3 Roundoff Error in Gaussian Elimination p.122
3.4 Pivoting p.125
3.5 Improving and Estimating Accuracy p.137
3.6 Parallel LU p.144
Chapter 4 - Special Linear Systems p.153
4.1 Diagonal Dominance and Symmetry p.154
4.2 Positive Definite Systems p.159
4.3 Banded Systems p.176
4.4 Symmetric Indefinite Systems p.186
4.5 Block Tridiagonal Systems p.196
4.6 Vandermonde Systems p.203
4.7 Classical Methods for Toeplitz Systems p.208
4.8 Circulant and Discrete Poisson Systems p.219
Chapter 5 - Orthogonalization and Least Squares p.233
5.1 Householder and Givens Transformations p.234
5.2 The QR Factorization p.246
5.3 The Full-Rank Least Squares Problem p.260
5.4 Other Orthogonal Factorizations p.274
5.5 The Rank-Deficient Least Squares Problem p.288
5.6 Square and Underdetermined Systems p.298
Chapter 6 - Modified Least Squares Problems p.303
6.1 Weighting and Regularization p.304
6.2 Constrained Least Squares p.313
6.3 Total Least Squares p.320
6.4 Subspace Computations with the SVD p.327
6.5 Updating Matrix Factorizations p.334
Chapter 7 - Unsymmetric Eigenvalue Problems p.347
7.1 Properties and Decompositions p.348
7.2 Perturbation Theory p.357
7.3 Power Iterations p.365
7.4 The Hessenberg and Real Schur Forms p.376
7.5 The Practical QR Algorithm p.385
7.6 Invariant Subspace Computations p.394
7.7 The Generalized Eigenvalue Problem p.405
7.8 Hamiltonian and Product Eigenvalue Problems p.420
7.9 Pseudospectra p.426
Chapter 8 - Symmetric Eigenvalue Problems p.439
8.1 Properties and Decompositions p.440
8.2 Power Iterations p.450
8.3 The Symmetric QR Algorithm p.458
8.4 More Methods for Tridiagonal Problems p.467
8.5 Jacobi Methods p.476
8.6 Computing the SVD p.486
8.7 Generalized Eigenvalue Problems with Symmetry p.497
Chapter 9 - Functions of Matrices p.513
9.1 Eigenvalue Methods p.514
9.2 Approximation Methods p.522
9.3 The Matrix Exponential p.530
9.4 The Sign, Square Root, and Log of a Matrix p.536
Chapter 10 - Large Sparse Eigenvalue Problems p.545
10.1 The Symmetric Lanczos Process p.546
10.2 Lanczos, Quadrature, and Approximation p.556
10.3 Practical Lanczos Procedures p.562
10.4 Large Sparse SVD Frameworks p.571
10.5 Krylov Methods for Unsymmetric Problems p.579
10.6 Jacobi-Davidson and Related Methods p.589
Chapter 11 - Large Sparse Linear System Problems p.597
11.1 Direct Methods p.598
11.2 The Classical Iterations p.611
11.3 The Conjugate Gradient Method p.625
11.4 Other Krylov Methods p.639
11.5 Preconditioning p.650
11.6 The Multigrid Framework p.670
Chapter 12 - Special Topics p.681
12.1 Linear Systems with Displacement Structure p.681
12.2 Structured-Rank Problems p.691
12.3 Kronecker Product Computations p.707
12.4 Tensor Unfoldings and Contractions p.719
12.5 Tensor Decompositions and Iterations p.731
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2012 MatrixComputations3rdEd | Gene H. Golub, Charles F. Van Loan | | 2012 | Matrix Computations (4th Ed.) | | | | | | 2012 |