1.4.4 The Numerical Rank of a Matrix
Let A be a matrix of rank r < min(m, n), and E a matrix of small random elements. Then it is most likely that the perturbed matrix A + E has maximal rank min(m, n). However,
since A + E is close to a rank deficient matrix, it should be considered as having numerical rank equal to r. In general, the numerical rank assigned to a matrix should depend on some
tolerance δ, which reflects the error level in the data and/or the precision of the arithmetic used.
It can be shown that perturbations of an element of a matrix A result in perturbations of the same, or smaller, magnitude in its singular values. This motivates the following definition of numerical rank.
Definition 1.4.5.
A matrix A ∈ R^{m×n} is said to have numerical δ-rank equal to k if

σ_1 ≥ ··· ≥ σ_k > δ ≥ σ_{k+1} ≥ ··· ≥ σ_p,  p = min(m, n),  (1.4.28)

where σ_i are the singular values of A. Then the right singular vectors (v_{k+1}, ..., v_n) form an orthogonal basis for the numerical null space of A.
Definition 1.4.5 assumes that there is a well-defined gap between σ_k and σ_{k+1}. When this is not the case the numerical rank of A is not well defined.
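Definition 1.4.5 translates directly into a computation: take the singular values and count those strictly greater than the tolerance δ. A minimal sketch in NumPy; the matrix, perturbation size, and tolerance below are illustrative choices, not from the text:

```python
import numpy as np

def numerical_rank(A, delta):
    """Numerical delta-rank (Definition 1.4.5): the number of
    singular values strictly greater than the tolerance delta."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > delta))

# A 3x3 matrix of exact rank 2, perturbed by small random elements E.
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # multiple of row 1, so rank(A) = 2
              [1.0, 0.0, 1.0]])
E = 1e-10 * rng.standard_normal((3, 3))

# At machine precision the perturbed matrix has full rank 3, but with a
# tolerance above the perturbation level its numerical rank is still 2.
print(np.linalg.matrix_rank(A + E))       # full rank
print(numerical_rank(A + E, delta=1e-8))  # numerical delta-rank 2
```

This mirrors the remark above: A + E has maximal rank, yet with any δ above the perturbation level its numerical δ-rank equals the rank of the unperturbed A.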
Example 1.4.2.
Consider an integral equation of the first kind,
∫_{−1}^{1} k(s, t) f(s) ds = g(t),  k(s, t) = e^{−(s−t)²},

on −1 ≤ t ≤ 1. (The kernel must be nonseparable; with k(s, t) = e^{−(s−t)} the discretized matrix would have rank one.) In order to solve this equation numerically it must first be discretized. We introduce a uniform mesh for s and t on [−1, 1] with step size h = 2/n, s_i = −1 + ih, t_j = −1 + jh, i, j = 0 : n. Approximating the integral with the trapezoidal rule gives

h Σ_{i=0}^{n} w_i k(s_i, t_j) f(t_i) = g(t_j),  j = 0 : n,

where w_0 = w_n = 1/2 and w_i = 1 otherwise. These equations form a linear system Kf = g, K ∈ R^{(n+1)×(n+1)}, f, g ∈ R^{n+1}. For n = 100 the singular values σ_k of the matrix K were computed in IEEE double precision with a unit roundoff level of 1.11 · 10^{−16} (see Sec. 2.2.3). They are displayed in logarithmic scale in Figure 1.4.2. Note that for k > 30 all σ_k are close to roundoff level, so the numerical rank of K is certainly smaller than 30. This means that the linear system Kf = g is numerically underdetermined and has a meaningful solution only for special right-hand sides g.
Figure 1.4.2. Singular values of a numerically singular matrix.
The integral equation in Example 1.4.2 is a Fredholm integral equation of the first kind. It is known that such equations are ill-posed in the sense that the solution f does not depend continuously on the right-hand side g. This example illustrates how this inherent difficulty in the continuous problem carries over to the discretized problem.
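The experiment of Example 1.4.2 can be reproduced in a few lines of NumPy. This is a sketch under the assumption that the kernel is the Gaussian k(s, t) = e^{−(s−t)²}; the mesh and trapezoidal weights follow the text:

```python
import numpy as np

# Trapezoidal discretization of the first-kind integral equation
# on [-1, 1], as in Example 1.4.2 (Gaussian kernel assumed).
n = 100
h = 2.0 / n
s = -1.0 + h * np.arange(n + 1)
t = s.copy()
w = np.ones(n + 1)
w[0] = w[-1] = 0.5                      # trapezoidal end weights

# K[j, i] = h * w_i * k(s_i, t_j)
K = h * w * np.exp(-(s[None, :] - t[:, None]) ** 2)

sv = np.linalg.svd(K, compute_uv=False)
# The singular values decay rapidly; well before k = 30 the ratio
# sigma_k / sigma_1 has fallen many orders of magnitude, cf. Figure 1.4.2.
print(sv[0], sv[30] / sv[0])
```

Plotting sv on a logarithmic scale reproduces the qualitative picture of Figure 1.4.2: a steady geometric decay down to roundoff level.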
Review Questions
1.4.1 State the Gauss–Markov theorem.
1.4.2 Show that the matrix AᵀA ∈ R^{n×n} of the normal equations is symmetric and positive semidefinite, i.e., xᵀ(AᵀA)x ≥ 0 for all x.
1.4.3 Give two geometric conditions that are necessary and sufficient for x to be the pseudoinverse solution of Ax = b.
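The two conditions asked for in 1.4.3, that the residual r = b − Ax is orthogonal to R(A), and that x itself lies in R(Aᵀ), i.e., is orthogonal to N(A), can be verified numerically for the pseudoinverse solution. A sketch with an illustrative rank-deficient matrix (not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
A[:, 2] = A[:, 0] + A[:, 1]          # rank-deficient: rank(A) = 2
b = rng.standard_normal(5)

x = np.linalg.pinv(A) @ b            # pseudoinverse solution x = A^+ b
r = b - A @ x

# Condition 1: residual orthogonal to R(A), i.e. A^T r = 0.
print(np.linalg.norm(A.T @ r))
# Condition 2: x in R(A^T), i.e. x orthogonal to N(A).
# By construction z = (1, 1, -1)^T spans N(A) here.
z = np.array([1.0, 1.0, -1.0]) / np.sqrt(3.0)
print(abs(x @ z))
```

Both printed quantities are at roundoff level, confirming that x = A⁺b satisfies both geometric conditions simultaneously.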
1.4.4 (a) Which are the four fundamental subspaces of a matrix? Which relations hold between them?
(b) Show, using the SVD, that P_{R(A)} = AA† and P_{R(Aᵀ)} = A†A.
1.4.5 (a) Construct an example where (AB)† ≠ B†A†.
(b) Show that if A is an m×r matrix, B is an r×n matrix, and rank(A) = rank(B) = r, then (AB)† = B†A†.
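Both parts of 1.4.5 can be probed numerically: a 1×2 / 2×1 pair already gives a counterexample for (a), while random full-rank factors illustrate the identity in (b). The specific matrices below are illustrative choices, not from the text:

```python
import numpy as np

# (a) Counterexample: in general (AB)^+ != B^+ A^+.
A = np.array([[1.0, 0.0]])           # 1x2
B = np.array([[1.0], [1.0]])         # 2x1
lhs = np.linalg.pinv(A @ B)          # pinv([[1.]]) = [[1.]]
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)
print(lhs, rhs)                      # the two results differ

# (b) With A (m x r) of full column rank and B (r x n) of full row
# rank, the identity (AB)^+ = B^+ A^+ holds.
rng = np.random.default_rng(2)
A2 = rng.standard_normal((5, 3))     # full column rank (generically)
B2 = rng.standard_normal((3, 4))     # full row rank (generically)
err = np.linalg.norm(np.linalg.pinv(A2 @ B2)
                     - np.linalg.pinv(B2) @ np.linalg.pinv(A2))
print(err)                           # roundoff-level discrepancy
```

Note that in (a) the rank condition of (b) fails: A has rank 1 as a 1×2 matrix and B has rank 1 as a 2×1 matrix, but the inner dimension is r = 2.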
Problems and Computer Exercises
1.4.1 In order to estimate the height above sea level for three points A, B, and C, the difference in altitude was measured between these points and points D, E, and F at
sea level. The measurements obtained form a linear system in the heights x A ,x B , and x C of A, B, and C:
Determine the least squares solution and verify that the residual vector is orthogonal to all columns in A.
1.4.2 Consider the least squares problem min_x ‖Ax − b‖_2, where A has full column rank. Partition the problem as

min_{x_1, x_2} ‖A_1 x_1 + A_2 x_2 − b‖_2,  A = (A_1, A_2).

By a geometric argument show that the solution can be obtained as follows. First compute x_2 as the solution to the problem

min_{x_2} ‖P⊥_{A_1}(A_2 x_2 − b)‖_2,

where P⊥_{A_1} = I − P_{A_1} is the orthogonal projector onto N(A_1ᵀ). Then compute x_1 as the solution to the problem

min_{x_1} ‖A_1 x_1 − (b − A_2 x_2)‖_2.
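The two-step procedure of Problem 1.4.2 can be checked against a direct solve of the full partitioned problem. A sketch with random data; the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
A1 = rng.standard_normal((8, 3))
A2 = rng.standard_normal((8, 2))
b = rng.standard_normal(8)

# Orthogonal projector onto R(A1) and its complement N(A1^T).
P1 = A1 @ np.linalg.pinv(A1)         # P_{A1}
P1c = np.eye(8) - P1                 # P_{A1}^perp

# Step 1: solve min_{x2} || P_{A1}^perp (A2 x2 - b) ||_2.
x2 = np.linalg.lstsq(P1c @ A2, P1c @ b, rcond=None)[0]
# Step 2: solve min_{x1} || A1 x1 - (b - A2 x2) ||_2.
x1 = np.linalg.lstsq(A1, b - A2 @ x2, rcond=None)[0]

# Compare with solving the full problem min_x || (A1, A2) x - b ||_2.
x_full = np.linalg.lstsq(np.hstack([A1, A2]), b, rcond=None)[0]
print(np.linalg.norm(np.concatenate([x1, x2]) - x_full))
```

The geometric idea is that for any fixed x_2 the choice of x_1 annihilates the component of the residual in R(A_1), so only the component in N(A_1ᵀ), picked out by P⊥_{A_1}, remains to be minimized over x_2.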