2.1.2 Absolute and Relative Errors
Approximation is a central concept in almost all the uses of mathematics. One must often
be satisfied with approximate values of the quantities with which one works. Another type of approximation occurs when one ignores some quantities which are small compared to others. Such approximations are often necessary to ensure that the mathematical and numerical treatment of a problem does not become hopelessly complicated.
We make the following definition.
Definition 2.1.1.
Let x̃ be an approximate value whose exact value is x. Then the absolute error in x̃ is
Δx = x̃ − x,
and, if x ≠ 0, the relative error is
Δx/x = (x̃ − x)/x.
In some books the error is defined with the opposite sign to the one we use here. It makes almost no difference which convention one uses, as long as one is consistent. Note that x − x̃ is the correction which should be added to x̃ to get rid of the error. The correction and the absolute error thus have the same magnitude but opposite signs.
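As a concrete illustration of Definition 2.1.1 and the sign convention, here is a minimal Python sketch (the function name and the sample numbers are our own, not from the text):

```python
import math

def absolute_and_relative_error(x_approx, x_exact):
    """Return (absolute error, relative error) using the convention of this
    section: error = approximate value - exact value (signed)."""
    abs_err = x_approx - x_exact        # Delta x = x~ - x
    rel_err = abs_err / x_exact         # Delta x / x, requires x != 0
    return abs_err, rel_err

# Example: x~ = 3.14 approximating x = pi.
abs_err, rel_err = absolute_and_relative_error(3.14, math.pi)
correction = -abs_err                   # x - x~, what must be added to x~
print(abs_err, rel_err, correction)     # roughly -1.59e-3, -5.07e-4, +1.59e-3
```

The printed correction has the same magnitude as the error but the opposite sign, in agreement with the remark above.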
In many situations one wants to compute a strict or approximate bound for the absolute or relative error. Since it is sometimes rather hard to obtain an error bound that is both strict and sharp, one sometimes prefers to use less strict but often realistic error estimates. These can be based on the first neglected term in some expansion, or on some other asymptotic considerations.
The notation x = x̃ ± ε means, in this book, |x̃ − x| ≤ ε. For example, if x = 0.5876 ± 0.0014, then 0.5862 ≤ x ≤ 0.5890, and |x̃ − x| ≤ 0.0014. In other texts, the same plus–minus notation is sometimes used for the “standard error” (see Sec. 2.3.3) or some other statistical measure of deviation.
If x̃ is a vector, the absolute error bound and the relative error bound may be defined as bounds for
‖x̃ − x‖ and ‖x̃ − x‖/‖x‖,
respectively, where ‖·‖ denotes some vector norm. A relative error bound of ½ · 10⁻ᵖ implies that components x̃_i with |x̃_i| ≈ ‖x‖ have about p significant digits, but this is not true for components of smaller absolute value. An alternative is to use componentwise relative errors,
max_i |x̃_i − x_i| / |x_i|,    (2.1.1)
but this assumes that x_i ≠ 0 for all i.
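To make the difference between normwise and componentwise relative errors concrete, here is a small NumPy sketch (the vectors are invented for illustration): a component much smaller than ‖x‖ can have a large componentwise relative error even when the normwise relative error is tiny.

```python
import numpy as np

x  = np.array([1.0, 1.0e-6])      # "exact" vector; second component is tiny
xt = np.array([1.0001, 2.0e-6])   # approximation x~

# Normwise relative error  ||x~ - x|| / ||x||   (2-norm used here)
normwise = np.linalg.norm(xt - x) / np.linalg.norm(x)

# Componentwise relative error  max_i |x~_i - x_i| / |x_i|   (assumes x_i != 0)
componentwise = np.max(np.abs(xt - x) / np.abs(x))

print(f"normwise      = {normwise:.2e}")       # about 1e-4
print(f"componentwise = {componentwise:.2e}")  # about 1e+0: the tiny component has no correct digits
```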
We will distinguish between the terms accuracy and precision. By accuracy we mean
the absolute or relative error of an approximate quantity. The term precision will be reserved for the accuracy with which the basic arithmetic operations +, −, ∗, / are performed. For floating-point operations this is given by the unit roundoff; see (2.2.8). Numerical results which are not followed by any error estimate should often, though not always, be considered as having an uncertainty of 1/2 of a unit in the last decimal place; this convention and the unit roundoff are illustrated numerically after the list below. In presenting numerical results, it is a good habit, if one does not want to go to the trouble of giving an error estimate with each result, to add explanatory remarks such as
• “All the digits given are thought to be significant.”
• “The data have an uncertainty of at most three units in the last digit.”
• “For an ideal two-atom gas, c_P/c_V = 1.4 (exactly).”
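As a numerical illustration of the 1/2-unit convention and the unit roundoff mentioned above (the quoted value 2.75 and the choice of IEEE double precision are our own assumptions, not taken from the text):

```python
import numpy as np

# A value quoted to k decimals with no error estimate is usually read as
# having an uncertainty of half a unit in the last decimal place.
value, k = 2.75, 2
half_unit = 0.5 * 10.0 ** (-k)
print(f"{value} is read as {value} +- {half_unit}")   # 2.75 +- 0.005

# The precision of the basic floating-point operations is given by the
# unit roundoff; for IEEE double precision u = 2**(-53) = eps/2.
u = np.finfo(float).eps / 2
print(f"unit roundoff u = {u:.3e}")                   # about 1.110e-16
```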
We shall also introduce some notations, useful in practice, though their definitions are not exact in a mathematical sense:
a ≪ b (a ≫ b) is read “a is much smaller (much greater) than b.” What is meant by “much smaller” (or “much greater”) depends on the context—among other things, on the desired precision.
a ≈ b is read “a is approximately equal to b” and means the same as |a − b| ≪ c, where c is chosen appropriate to the context. We cannot generally say, for example, that 10⁻⁶ ≈ 0.
a ≲ b is read “a is less than or approximately equal to b” and means the same as “a ≤ b or a ≈ b.”
Occasionally we shall have use for the following more precisely defined mathematical concepts:
f(x) = O(g(x)), x → a, means that |f(x)/g(x)| is bounded as x → a (a can be finite, +∞, or −∞).
f(x) = o(g(x)), x → a, means that lim_{x→a} f(x)/g(x) = 0.
f(x) ∼ g(x), x → a, means that lim_{x→a} f(x)/g(x) = 1.
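These asymptotic notations can be checked numerically. The sketch below (our own illustration, not an example from the text) uses sin x = x − x³/6 + ⋯, so that sin x ∼ x and sin x − x = O(x³) as x → 0; the printed ratios approach 1 and −1/6, respectively.

```python
import math

# Numerical check of  sin x ~ x  and  sin x - x = O(x^3)  as x -> 0.
for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    ratio_sim = math.sin(x) / x               # tends to 1, so sin x ~ x
    ratio_big_o = (math.sin(x) - x) / x**3    # stays bounded (tends to -1/6)
    print(f"x = {x:.0e}   sin(x)/x = {ratio_sim:.10f}   (sin(x)-x)/x^3 = {ratio_big_o:.6f}")
```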