A.1.4 Special Matrices
Any matrix D for which d_{ij} = 0 whenever i ≠ j is called a diagonal matrix. If x ∈ R^n is a vector, then D = diag(x) ∈ R^{n×n} is the diagonal matrix formed by the elements of x. For a matrix A ∈ R^{n×n} the elements a_{ii}, i = 1 : n, form the main diagonal of A, and we write

diag(A) = diag(a_{11}, a_{22}, ..., a_{nn}).
For k = 1 : n − 1 the elements a_{i,i+k} (a_{i+k,i}), i = 1 : n − k, form the kth superdiagonal (subdiagonal) of A. The elements a_{i,n−i+1}, i = 1 : n, form the (main) antidiagonal of A.
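In NumPy, for instance, diag works in both directions; the following is a small illustrative sketch (the variable names are ours):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
D = np.diag(x)                  # D = diag(x): x on the main diagonal, zeros elsewhere

A = np.arange(1.0, 10.0).reshape(3, 3)
main = np.diag(A)               # diag(A) = (a_11, a_22, a_33)
super1 = np.diag(A, 1)          # first superdiagonal: a_{i,i+1}
sub1 = np.diag(A, -1)           # first subdiagonal: a_{i+1,i}
anti = np.fliplr(A).diagonal()  # main antidiagonal: a_{i,n-i+1}
```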
The unit matrix I = I_n ∈ R^{n×n} is defined by

I_n = diag(1, 1, ..., 1) = (e_1, e_2, ..., e_n),

and the kth column of I_n is denoted by e_k. We have I_n = (δ_{ij}), where δ_{ij} is the Kronecker symbol: δ_{ij} = 1 if i = j, and δ_{ij} = 0 if i ≠ j. For all square matrices A of order n it holds that AI = IA = A. When needed, we indicate the order of the unit matrix by a subscript, e.g., I_n.
A matrix A for which all nonzero elements are located in consecutive diagonals is called a band matrix. A is said to have upper bandwidth r if r is the smallest integer such that
a_{ij} = 0,  j > i + r,
and similarly to have lower bandwidth s if s is the smallest integer such that
a_{ij} = 0,  i > j + s.
The number of nonzero elements in each row of A is then at most w = r + s + 1, which is the bandwidth of A. For a matrix A ∈ R^{m×n} that is not square, we define the bandwidth as

w = max_{1 ≤ i ≤ m} { j − k + 1 | a_{ij} ≠ 0 and a_{ik} ≠ 0 }.
Several classes of band matrices that occur frequently have special names. Thus, a matrix for which r = s = 1 is called tridiagonal; if r = 0, s = 1 (r = 1, s = 0), it is
called lower (upper) bidiagonal, etc. A matrix with s = 1 (r = 1) is called an upper (lower) Hessenberg matrix.
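The definitions above translate directly into a small test. In the NumPy sketch below (the helper name bandwidths is our own), a tridiagonal matrix yields r = s = 1, so w = r + s + 1 = 3:

```python
import numpy as np

def bandwidths(A, tol=0.0):
    """Return (r, s): the smallest upper and lower bandwidths such that
    a[i, j] == 0 for j > i + r and a[i, j] == 0 for i > j + s."""
    i, j = np.nonzero(np.abs(A) > tol)
    if len(i) == 0:                      # zero matrix: r = s = 0
        return 0, 0
    r = max(0, int((j - i).max()))       # farthest superdiagonal with a nonzero
    s = max(0, int((i - j).max()))       # farthest subdiagonal with a nonzero
    return r, s

T = np.diag([2.0] * 4) + np.diag([-1.0] * 3, 1) + np.diag([-1.0] * 3, -1)
print(bandwidths(T))                     # (1, 1): tridiagonal, bandwidth w = 3
```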
An upper triangular matrix is a matrix R for which r_{ij} = 0 whenever i > j. A square upper triangular matrix has the form

R = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ & r_{22} & \cdots & r_{2n} \\ & & \ddots & \vdots \\ & & & r_{nn} \end{pmatrix}.
If also r_{ii} = 0, i = 1 : n, then R is strictly upper triangular. Similarly, a matrix L is lower triangular if l_{ij} = 0, i < j, and strictly lower triangular if l_{ij} = 0, i ≤ j. Sums, products, and inverses of square upper (lower) triangular matrices are again triangular matrices of the same type.
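The closure property just stated is easy to check numerically. A minimal NumPy sketch (our own construction; the identity term only keeps the random factors nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
R1 = np.triu(rng.standard_normal((4, 4))) + 4 * np.eye(4)  # nonsingular upper triangular
R2 = np.triu(rng.standard_normal((4, 4))) + 4 * np.eye(4)

P = R1 @ R2
print(np.allclose(P, np.triu(P)))            # product is again upper triangular
R1_inv = np.linalg.inv(R1)
print(np.allclose(R1_inv, np.triu(R1_inv)))  # inverse is again upper triangular
```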
A square matrix A is called symmetric if its elements are symmetric about the main diagonal, i.e., a_{ij} = a_{ji}, or equivalently, A^T = A. The product AB of two symmetric matrices A and B is symmetric if and only if A and B commute, that is, AB = BA. If A^T = −A, then A is called skew-symmetric.
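A small NumPy check of the commutativity condition (the example matrices are our own): both matrices below are symmetric but do not commute, and their product is indeed not symmetric.

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])             # symmetric (diagonal)
B = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric, but AB != BA
C = A @ B
print(np.allclose(C, C.T))               # False: A and B do not commute
print(np.allclose(A @ A, (A @ A).T))     # True: A commutes with itself
```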
For any square matrix A there is a unique adjoint matrix A* such that, for all x and y,

(x, A*y) = (Ax, y).
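Anticipating the general inner product (x, y) = x^T G y described later in this section, substituting into the defining relation gives G A* = A^T G, i.e., A* = G^{-1} A^T G. A minimal NumPy check of this identity (matrices and names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
G = M @ M.T + n * np.eye(n)            # positive definite matrix defining (x, y) = x^T G y

def inner(x, y):
    return x @ G @ y                   # the G-inner product

A_star = np.linalg.solve(G, A.T @ G)   # A* = G^{-1} A^T G

x, y = rng.standard_normal(n), rng.standard_normal(n)
print(np.isclose(inner(x, A_star @ y), inner(A @ x, y)))   # True: (x, A*y) = (Ax, y)
```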
The matrix A ∈ C^{n×n} is called self-adjoint if A* = A. In particular, for A ∈ R^{n×n} with the standard inner product we have

(Ax, y) = (Ax)^T y = x^T A^T y = (x, A^T y).
Hence A* = A^T, the transpose of A, and A is self-adjoint if it is symmetric. A symmetric matrix A is called positive definite if

x^T A x > 0  for all x ∈ R^n, x ≠ 0,    (A.1.12)

and positive semidefinite if x^T A x ≥ 0 for all x ∈ R^n. Otherwise it is called indefinite.
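A standard practical test for (A.1.12), not specific to this text, is to attempt a Cholesky factorization, which exists precisely when the symmetric matrix is positive definite. A minimal sketch:

```python
import numpy as np

def is_positive_definite(A):
    """Cholesky succeeds iff the symmetric matrix A is positive definite."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, -1.0], [-1.0, 2.0]])  # x^T A x = x1^2 + (x1 - x2)^2 + x2^2 > 0
print(is_positive_definite(A))            # True
print(is_positive_definite(-A))           # False: -A is negative definite
```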
Similarly, A ∈ C^{n×n} is self-adjoint or Hermitian if A = A^H, the conjugate transpose of A. A Hermitian matrix has properties analogous to those of a real symmetric matrix. If A is Hermitian, then (x^H A x)^H = x^H A x is real, and A is positive definite if

x^H A x > 0  for all x ∈ C^n, x ≠ 0.    (A.1.13)

Any matrix A ∈ C^{n×n} can be written as the sum of its Hermitian and skew-Hermitian parts,
A = H(A) + S(A),  where

H(A) = (A + A^H)/2,    S(A) = (A − A^H)/2.
A is Hermitian if and only if S(A) = 0. It is easily seen that A is positive definite if and only if its Hermitian part H(A) is positive definite.
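A short NumPy check of the splitting (the test matrix is our own):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

H = (A + A.conj().T) / 2          # Hermitian part H(A)
S = (A - A.conj().T) / 2          # skew-Hermitian part S(A)

print(np.allclose(A, H + S))      # A = H(A) + S(A)
print(np.allclose(H, H.conj().T)) # H(A) is Hermitian
print(np.allclose(S, -S.conj().T))# S(A) is skew-Hermitian
```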
For the vector space R^n (C^n), any inner product can be written as

(x, y) = x^T G y    ((x, y) = x^H G y),
where the matrix G is positive definite.

Let q_1, ..., q_n ∈ R^m be orthonormal and form the matrix

Q = (q_1, ..., q_n) ∈ R^{m×n},  m ≥ n.
Then Q is called an orthogonal matrix, and Q^T Q = I_n. If Q is square (m = n), then it also holds that Q^{-1} = Q^T and Q Q^T = I_n.

Two vectors x and y in C^n are called orthogonal if x^H y = 0. A square matrix U for which U^H U = I is called unitary, and from (A.1.2) we find that
(Ux)^H (Uy) = x^H U^H U y = x^H y.
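Both facts are easy to verify numerically. The sketch below obtains orthonormal columns from a QR factorization (a standard construction, used here only for illustration) and checks that a unitary matrix preserves the standard inner product:

```python
import numpy as np

rng = np.random.default_rng(2)

# Orthonormal columns: Q from the QR factorization of a random 5x3 matrix.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))
print(np.allclose(Q.T @ Q, np.eye(3)))                   # Q^T Q = I_n

# A unitary matrix preserves the standard inner product x^H y.
U, _ = np.linalg.qr(rng.standard_normal((4, 4))
                    + 1j * rng.standard_normal((4, 4)))
x, y = rng.standard_normal(4), rng.standard_normal(4)
print(np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y)))  # (Ux)^H (Uy) = x^H y
```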