
Industrial and Applied Mathematics

Abul Hasan Siddiqi

Functional
Analysis and
Applications

Industrial and Applied Mathematics
Editor-in-chief
Abul Hasan Siddiqi, Sharda University, Greater Noida, India
Editorial Board
Zafer Aslan, International Centre for Theoretical Physics, Istanbul, Turkey
M. Brokate, Technical University, Munich, Germany
N.K. Gupta, Indian Institute of Technology Delhi, New Delhi, India
Akhtar Khan, Center for Applied and Computational Mathematics, Rochester, USA
Rene Lozi, University of Nice Sophia-Antipolis, Nice, France
Pammy Manchanda, Guru Nanak Dev University, Amritsar, India
M. Zuhair Nashed, University of Central Florida, Orlando, USA
Govindan Rangarajan, Indian Institute of Science, Bengaluru, India
K.R. Sreenivasan, Polytechnic School of Engineering, New York, USA


The Industrial and Applied Mathematics series publishes high-quality research-level
monographs, lecture notes and contributed volumes focusing on areas where
mathematics is used in a fundamental way, such as industrial mathematics,
bio-mathematics, financial mathematics, applied statistics, operations research and
computer science.

More information about this series at http://www.springer.com/series/13577

Abul Hasan Siddiqi

Functional Analysis
and Applications


Abul Hasan Siddiqi
School of Basic Sciences and Research
Sharda University
Greater Noida, Uttar Pradesh, India

ISSN 2364-6837
ISSN 2364-6845 (electronic)
Industrial and Applied Mathematics
ISBN 978-981-10-3724-5
ISBN 978-981-10-3725-2 (eBook)
https://doi.org/10.1007/978-981-10-3725-2
Library of Congress Control Number: 2018935211
© Springer Nature Singapore Pte Ltd. 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
Printed on acid-free paper
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
part of Springer Nature
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

To
My wife Azra

Preface

Functional analysis was invented and developed in the twentieth century. Besides
being an area of independent mathematical interest, it provides many fundamental
notions essential for modeling, analysis, numerical approximation, and computer
simulation processes of real-world problems. As science and technology are
increasingly refined and interconnected, the demand for advanced mathematics
beyond the basic vector algebra and differential and integral calculus has greatly
increased. There is no dispute on the relevance of functional analysis; however, there have been differences of opinion among experts about the level and
methodology of teaching functional analysis. In the recent past, its applied nature
has been gaining ground.
The main objective of this book is to present all those results of functional
analysis, which have been frequently applied in emerging areas of science and
technology.
Functional analysis provides basic tools and foundation for areas of vital
importance such as optimization, boundary value problems, modeling real-world
phenomena, finite and boundary element methods, variational equations and
inequalities, inverse problems, and wavelet and Gabor analysis. Wavelets, formally
invented in the mid-eighties, have found significant applications in image processing and partial differential equations. Gabor analysis, introduced in 1946, has been gaining popularity over the last decade among the signal processing community and mathematicians.
The book comprises 15 chapters, an appendix, and a comprehensive updated
bibliography. Chapter 1 is devoted to basic results of metric spaces, especially an
important fixed-point theorem called the Banach contraction mapping theorem, and
its applications to matrix, integral, and differential equations. Chapter 2 deals with
basic definitions and examples related to Banach spaces and operators defined on
such spaces. A sufficient number of examples are presented to make the ideas clear.
Algebras of operators and properties of convex functionals are discussed. Hilbert

space, an infinite-dimensional analogue of Euclidean space of finite dimension, is
introduced and discussed in detail in Chap. 3. In addition, important results such as
projection theorem, Riesz representation theorem, properties of self-adjoint,
positive, normal, and unitary operators, relationship between bounded linear
operator and bounded bilinear form, and Lax–Milgram lemma dealing with the
existence of solutions of abstract variational problems are presented. Applications
and generalizations of the Lax–Milgram lemma are discussed in Chaps. 7 and 8.
Chapter 4 is devoted to the Hahn–Banach theorem, Banach–Alaoglu theorem,
uniform boundedness principle, open mapping, and closed graph theorems along
with the concept of weak convergence and weak topologies. Chapter 5 provides an
extension of finite-dimensional classical calculus to infinite-dimensional spaces,
which is essential to understand and interpret various current developments of
science and technology. More precisely, derivatives in the sense of Gâteaux, Fréchet, Clarke (subgradient), and Schwartz (distributional derivative), along with Sobolev spaces, are the main themes of this chapter. Fundamental results concerning existence and uniqueness of solutions and algorithms for finding solutions of optimization problems are described in Chap. 6. Variational formulation and existence
of solutions of boundary value problems representing physical phenomena are
described in Chap. 7. Galerkin and Ritz approximation methods are also included.
Finite element and boundary element methods are introduced and several theorems
concerning error estimation and convergence are proved in Chap. 8. Chapter 9 is
devoted to variational inequalities. A comprehensive account of this elegant
mathematical model in terms of operators is given. Apart from existence and
uniqueness of solutions, error estimation and finite element methods for approximate solutions and parallel algorithms are discussed. The chapter is mainly based
on the work of one of its inventors, J. L. Lions, and his co-workers and research
students. Activities at the Stampacchia School of Mathematics, Erice, Italy, are
providing impetus to researchers in this field. Chapter 10 is devoted to rudiments of
spectral theory with applications to inverse problems. We present frame and basis
theory in Hilbert spaces in Chap. 11. Chapter 12 deals with wavelets. Broadly,
wavelet analysis is a refinement of Fourier analysis and has attracted the attention of
researchers in mathematics, physics, and engineering alike. Replacement of the
classical Fourier methods, wherever they have been applied, by emerging wavelet
methods has resulted in drastic improvements. In this chapter, a detailed account of
this exciting theory is presented. Chapter 13 presents an introduction to applications
of wavelet methods to partial differential equations and image processing. These are
emerging areas of current interest. There is still a wide scope for further research.

Models and algorithms for removal of an unwanted component (noise) of a signal
are discussed in detail. Error estimation of a given image with its wavelet representation in the Besov norm is given. Wavelet frames are a comparatively new
addition to wavelet theory. We discuss their basic properties in Chap. 14. Dennis
Gabor, Nobel Laureate of Physics (1971), introduced windowed Fourier analysis,
now called Gabor analysis, in 1946. Fundamental concepts of this analysis with
certain applications are presented in Chap. 15.
In the Appendix, we present a résumé of results from topology, real analysis,
calculus, and Fourier analysis which we often use in this book. Chapters 9, 12, 13,
and 15 contain recent results opening up avenues for further work.

The book is self-contained and provides examples, updated references, and
applications in diverse fields. Several problems are thought-provoking, and many
lead to new results and applications. The book is intended to be a textbook for
graduate or senior undergraduate students in mathematics. It could also be used for
an advanced course in system engineering, electrical engineering, computer engineering, and management sciences. The proofs of theorems and other items marked with an asterisk may be omitted for a senior undergraduate course or a course in other disciplines. Those who are mainly interested in applications of wavelets and
Gabor systems may study Chaps. 2, 3, and 11 to 15. Readers interested in variational inequalities and their applications may pursue Chaps. 3, 8, and 9. In brief, this book is
a handy manual of contemporary analytic and numerical methods in
infinite-dimensional spaces, particularly Hilbert spaces.
I have used a major part of the material presented in the book while teaching at
various universities of the world. I have also incorporated in this book the ideas that
emerged after discussion with some senior mathematicians including Prof. M. Z.
Nashed, Central Florida University; Prof. P. L. Butzer, Aachen Technical
University; Prof. Jochim Zowe and Prof. Michael Kovara, Erlangen University; and
Prof. Martin Brokate, Technical University, Munich.
I take this opportunity to thank Prof. P. Manchanda, Chairperson, Department of
Mathematics, Guru Nanak Dev University, Amritsar, India; Prof. Rashmi
Bhardwaj, Chairperson, Non-linear Dynamics Research Lab, Guru Gobind Singh
Indraprastha University, Delhi, India; and Prof. Q. H. Ansari, AMU/KFUPM, for
their valuable suggestions in editing the manuscript. I also express my sincere
thanks to Prof. M. Al-Gebeily, Prof. S. Messaoudi, Prof. K. M. Furati, and
Prof. A. R. Khan for reading carefully different parts of the book.
Greater Noida, India


Abul Hasan Siddiqi

Contents

1 Banach Contraction Fixed Point Theorem
    1.1 Objective
    1.2 Contraction Fixed Point Theorem by Stefan Banach
    1.3 Application of Banach Contraction Mapping Theorem
        1.3.1 Application to Matrix Equation
        1.3.2 Application to Integral Equation
        1.3.3 Existence of Solution of Differential Equation
    1.4 Problems

2 Banach Spaces
    2.1 Introduction
    2.2 Basic Results of Banach Spaces
        2.2.1 Examples of Normed and Banach Spaces
    2.3 Closed, Denseness, and Separability
        2.3.1 Introduction to Closed, Dense, and Separable Sets
        2.3.2 Riesz Theorem and Construction of a New Banach Space
        2.3.3 Dimension of Normed Spaces
        2.3.4 Open and Closed Spheres
    2.4 Bounded and Unbounded Operators
        2.4.1 Definitions and Examples
        2.4.2 Properties of Linear Operators
        2.4.3 Unbounded Operators
    2.5 Representation of Bounded and Linear Functionals
    2.6 Space of Operators
    2.7 Convex Functionals
        2.7.1 Convex Sets
        2.7.2 Affine Operator
        2.7.3 Lower Semicontinuous and Upper Semicontinuous Functionals
    2.8 Problems
        2.8.1 Solved Problems
        2.8.2 Unsolved Problems

3 Hilbert Spaces
    3.1 Introduction
    3.2 Fundamental Definitions and Properties
        3.2.1 Definitions, Examples, and Properties of Inner Product Space
        3.2.2 Parallelogram Law
    3.3 Orthogonal Complements and Projection Theorem
        3.3.1 Orthogonal Complements and Projections
    3.4 Orthogonal Projections and Projection Theorem
    3.5 Projection on Convex Sets
    3.6 Orthonormal Systems and Fourier Expansion
    3.7 Duality and Reflexivity
        3.7.1 Riesz Representation Theorem
        3.7.2 Reflexivity of Hilbert Spaces
    3.8 Operators in Hilbert Space
        3.8.1 Adjoint of Bounded Linear Operators on a Hilbert Space
        3.8.2 Self-adjoint, Positive, Normal, and Unitary Operators
        3.8.3 Adjoint of an Unbounded Linear Operator
    3.9 Bilinear Forms and Lax–Milgram Lemma
        3.9.1 Basic Properties
    3.10 Problems
        3.10.1 Solved Problems
        3.10.2 Unsolved Problems

4 Fundamental Theorems
    4.1 Introduction
    4.2 Hahn–Banach Theorem
    4.3 Topologies on Normed Spaces
        4.3.1 Compactness in Normed Spaces
        4.3.2 Strong and Weak Topologies
    4.4 Weak Convergence
        4.4.1 Weak Convergence in Banach Spaces
        4.4.2 Weak Convergence in Hilbert Spaces
    4.5 Banach–Alaoglu Theorem
    4.6 Principle of Uniform Boundedness and Its Applications
        4.6.1 Principle of Uniform Boundedness
    4.7 Open Mapping and Closed Graph Theorems
        4.7.1 Graph of a Linear Operator and Closedness Property
        4.7.2 Open Mapping Theorem
        4.7.3 The Closed Graph Theorem
    4.8 Problems
        4.8.1 Solved Problems
        4.8.2 Unsolved Problems

5 Differential and Integral Calculus in Banach Spaces
    5.1 Introduction
    5.2 The Gâteaux and Fréchet Derivatives
        5.2.1 The Gâteaux Derivative
        5.2.2 The Fréchet Derivative
    5.3 Generalized Gradient (Subdifferential)
    5.4 Some Basic Results from Distribution Theory and Sobolev Spaces
        5.4.1 Distributions
        5.4.2 Sobolev Space
        5.4.3 The Sobolev Embedding Theorems
    5.5 Integration in Banach Spaces
    5.6 Problems
        5.6.1 Solved Problems
        5.6.2 Unsolved Problems

6 Optimization Problems
    6.1 Introduction
    6.2 General Results on Optimization
    6.3 Special Classes of Optimization Problems
        6.3.1 Convex, Quadratic, and Linear Programming
        6.3.2 Calculus of Variations and Euler–Lagrange Equation
        6.3.3 Minimization of Energy Functional (Quadratic Functional)
    6.4 Algorithmic Optimization
        6.4.1 Newton Algorithm and Its Generalization
        6.4.2 Conjugate Gradient Method
    6.5 Problems

7 Operator Equations and Variational Methods
    7.1 Introduction
    7.2 Boundary Value Problems
    7.3 Operator Equations and Solvability Conditions
        7.3.1 Equivalence of Operator Equation and Minimization Problem
        7.3.2 Solvability Conditions
        7.3.3 Existence Theorem for Nonlinear Operators
    7.4 Existence of Solutions of Dirichlet and Neumann Boundary Value Problems
    7.5 Approximation Method for Operator Equations
        7.5.1 Galerkin Method
        7.5.2 Rayleigh–Ritz–Galerkin Method
    7.6 Eigenvalue Problems
        7.6.1 Eigenvalue of Bilinear Form
        7.6.2 Existence and Uniqueness
    7.7 Boundary Value Problems in Science and Technology
    7.8 Problems

8 Finite Element and Boundary Element Methods
    8.1 Introduction
    8.2 Finite Element Method
        8.2.1 Abstract Problem and Error Estimation
        8.2.2 Internal Approximation of H¹(Ω)
        8.2.3 Finite Elements
    8.3 Applications of the Finite Element Method in Solving Boundary Value Problems
    8.4 Introduction of Boundary Element Method
        8.4.1 Weighted Residuals Method
        8.4.2 Boundary Solutions and Inverse Problem
        8.4.3 Boundary Element Method
    8.5 Problems

9 Variational Inequalities and Applications
    9.1 Motivation and Historical Remarks
        9.1.1 Contact Problem (Signorini Problem)
        9.1.2 Modeling in Social, Financial and Management Sciences
    9.2 Variational Inequalities and Their Relationship with Other Problems
        9.2.1 Classes of Variational Inequalities
        9.2.2 Formulation of a Few Problems in Terms of Variational Inequalities
    9.3 Elliptic Variational Inequalities
        9.3.1 Lions–Stampacchia Theorem
        9.3.2 Variational Inequalities for Monotone Operators
    9.4 Finite Element Methods for Variational Inequalities
        9.4.1 Convergence and Error Estimation
        9.4.2 Error Estimation in Concrete Cases
    9.5 Evolution Variational Inequalities and Parallel Algorithms
        9.5.1 Solution of Evolution Variational Inequalities
        9.5.2 Decomposition Method and Parallel Algorithms
    9.6 Obstacle Problem
        9.6.1 Obstacle Problem
        9.6.2 Membrane Problem (Equilibrium of an Elastic Membrane Lying over an Obstacle)
    9.7 Problems

10 Spectral Theory with Applications
    10.1 The Spectrum of Linear Operators
    10.2 Resolvent Set of a Closed Linear Operator
    10.3 Compact Operators
    10.4 The Spectrum of a Compact Linear Operator
    10.5 The Resolvent of a Compact Linear Operator
    10.6 Spectral Theorem for Self-adjoint Compact Operators
    10.7 Inverse Problems and Self-adjoint Compact Operators
        10.7.1 Introduction to Inverse Problems
        10.7.2 Singular Value Decomposition
        10.7.3 Regularization
    10.8 Morozov's Discrepancy Principle
    10.9 Problems

11 Frame and Basis Theory in Hilbert Spaces
    11.1 Frame in Finite-Dimensional Hilbert Spaces
    11.2 Bases in Hilbert Spaces
        11.2.1 Bases
    11.3 Riesz Bases
    11.4 Frames in Infinite-Dimensional Hilbert Spaces
    11.5 Problems

12 Wavelet Theory
    12.1 Introduction
    12.2 Continuous and Discrete Wavelet Transforms
        12.2.1 Continuous Wavelet Transforms
        12.2.2 Discrete Wavelet Transform and Wavelet Series
    12.3 Multiresolution Analysis, and Wavelets Decomposition and Reconstruction
        12.3.1 Multiresolution Analysis (MRA)
        12.3.2 Decomposition and Reconstruction Algorithms
        12.3.3 Wavelets and Signal Processing
        12.3.4 The Fast Wavelet Transform Algorithm
    12.4 Wavelets and Smoothness of Functions
        12.4.1 Lipschitz Class and Wavelets
        12.4.2 Approximation and Detail Operators
        12.4.3 Scaling and Wavelet Filters
        12.4.4 Approximation by MRA-Associated Projections
    12.5 Compactly Supported Wavelets
        12.5.1 Daubechies Wavelets
        12.5.2 Approximation by Families of Daubechies Wavelets
    12.6 Wavelet Packets
    12.7 Problems

13 Wavelet Method for Partial Differential Equations and Image Processing
    13.1 Introduction
    13.2 Wavelet Methods in Partial Differential and Integral Equations
        13.2.1 Introduction
        13.2.2 General Procedure
        13.2.3 Miscellaneous Examples
        13.2.4 Error Estimation Using Wavelet Basis
    13.3 Introduction to Signal and Image Processing
    13.4 Representation of Signals by Frames
        13.4.1 Functional Analytic Formulation
        13.4.2 Iterative Reconstruction
    13.5 Noise Removal from Signals
        13.5.1 Introduction
        13.5.2 Model and Algorithm
    13.6 Wavelet Methods for Image Processing
        13.6.1 Besov Space
        13.6.2 Linear and Nonlinear Image Compression
    13.7 Problems

14 Wavelet Frames
    14.1 General Wavelet Frames
    14.2 Dyadic Wavelet Frames
    14.3 Frame Multiresolution Analysis
    14.4 Problems

15 Gabor Analysis
    15.1 Orthonormal Gabor System
    15.2 Gabor Frames
    15.3 HRT Conjecture for Wave Packets
    15.4 Applications

Appendix
References
Index
Notational Index

About the Author

Abul Hasan Siddiqi is a distinguished scientist and professor emeritus at the
School of Basic Sciences and Research, Sharda University, Greater Noida, India.
He has held several important administrative positions, such as Chairman, Department of Mathematics; Dean, Faculty of Science; and Pro-Vice-Chancellor of Aligarh Muslim University. He has been actively associated with the International
Centre for Theoretical Physics, Trieste, Italy (UNESCO’s organization), in different
capacities for more than 20 years; was Professor of Mathematics at King Fahd
University of Petroleum and Minerals, Saudi Arabia, for 10 years; and was
Consultant to Sultan Qaboos University, Oman, for five terms, Istanbul Aydin
University, Turkey, for 3 years, and the Institute of Micro-electronics, Malaysia, for
5 months. Having been awarded three German Academic Exchange Fellowships to
carry out mathematical research in Germany, he has also jointly published more
than 100 research papers with his research collaborators and five books and edited
proceedings of nine international conferences. He is the Founder Secretary of the
Indian Society of Industrial and Applied Mathematics (ISIAM), which celebrated
its silver jubilee in January 2016. He is editor-in-chief of the Indian Journal of
Industrial and Applied Mathematics, published by ISIAM, and of Springer's
book series Industrial and Applied Mathematics. Recently, he has been elected
President of ISIAM which represents India at the apex forum of industrial and
applied mathematics—ICIAM.


Chapter 1
Banach Contraction Fixed Point Theorem

Abstract The main goal of this chapter is to introduce the notion of distance between two points of an abstract set. This concept was studied by M. Fréchet, and the resulting structure is known as a metric space. The existence of a fixed point of a contraction mapping of a complete metric space into itself was proved by S. Banach around 1920. Applications of this theorem to the existence of solutions of matrix, differential, and integral equations are presented in this chapter.
Keywords Metric space · Complete metric space · Fixed point · Contraction
mapping · Hausdorff metric

1.1 Objective
The prime goal of this chapter is to discuss the existence and uniqueness of a fixed point of a special type of mapping of a metric space into itself, called a contraction mapping, along with applications.

1.2 Contraction Fixed Point Theorem by Stefan Banach
Definition 1.1 Let
d(·, ·) : X × X → R
be a real-valued function on X × X , where X is a nonempty set. d(·, ·) is called a
metric and (X, d) is called a metric space if d(·, ·) satisfies the following conditions:
1. d(x, y) ≥ 0 ∀ x, y ∈ X , d(x, y) = 0 if and only if x = y.
2. d(x, y) = d(y, x) for all x, y ∈ X ,
3. d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X .
Remark 1.1 d(x, y) is also known as the distance between x and y belonging to X.
It is a generalization of the distance between two points on the real line.

It may be noted that the positivity condition

d(x, y) ≥ 0 for all x, y ∈ X

follows from the remaining conditions. Indeed, d(x, y) ≤ d(x, z) + d(z, y) by condition (3). Choosing y = x, we get

d(x, x) ≤ d(x, z) + d(z, x), that is, 0 ≤ 2 d(x, z) for x, z ∈ X,

because d(x, x) = 0 by condition (1) and d(x, z) = d(z, x) by condition (2). Hence d(x, z) ≥ 0 for all x, z ∈ X, which is positivity.
Remark 1.2 A subset Y of a metric space (X, d) is itself a metric space: (Y, d_1) is a metric space if Y ⊆ X and

$$d_1(x, y) = d(x, y) \quad \text{for all } x, y \in Y. \tag{1.1}$$

Examples of Metric Spaces

Example 1.1 Let

d(·, ·) : R × R → R

be defined by

d(x, y) = |x − y| for all x, y ∈ R.

Then d(·, ·) is a metric on R (distance between two points of R) and (R, d) is a metric
space.
Example 1.2 Let R^2 denote the Euclidean space of dimension 2. Define a function d(·, ·) on R^2 as follows:

$$d(x, y) = \big((u_1 - u_2)^2 + (v_1 - v_2)^2\big)^{1/2}, \quad \text{where } x = (u_1, v_1),\ y = (u_2, v_2).$$

d(·, ·) is a metric on R^2, and (R^2, d) is a metric space.

Example 1.3 Let R^n denote the vector space of dimension n. For u = (u_1, u_2, …, u_n) ∈ R^n and v = (v_1, v_2, …, v_n) ∈ R^n, define d(·, ·) as follows:

(a) $$d(u, v) = \Big(\sum_{k=1}^{n} |u_k - v_k|^2\Big)^{1/2}.$$

(R^n, d) is a metric space.
Example 1.4 For a number p satisfying 1 ≤ p < ∞, let ℓ^p denote the space of infinite sequences u = (u_1, u_2, …, u_n, …) such that the series $\sum_{k=1}^{\infty} |u_k|^p$ is convergent. Then (ℓ^p, d(·, ·)) is a metric space, where d(·, ·) is defined by

$$d(u, v) = \Big(\sum_{k=1}^{\infty} |u_k - v_k|^p\Big)^{1/p}, \qquad u = (u_1, u_2, \ldots, u_k, \ldots),\ v = (v_1, v_2, \ldots, v_k, \ldots) \in \ell^p;$$

d(·, ·) is the distance between elements of ℓ^p.
Example 1.5 Suppose C[a, b] represents the set of all real continuous functions defined on the closed interval [a, b]. Let d(·, ·) be a function defined on C[a, b] × C[a, b] by:

(a) $$d(f, g) = \sup_{a \le x \le b} |f(x) - g(x)|, \quad \text{for all } f, g \in C[a, b];$$

(b) $$d(f, g) = \Big(\int_a^b |f(x) - g(x)|^2\, dx\Big)^{1/2}, \quad \text{for all } f, g \in C[a, b].$$

(C[a, b], d(·, ·)) is a metric space with respect to each of the metrics given in (a) and (b).

Example 1.6 Suppose L^2[a, b] denotes the set of all integrable functions f defined on [a, b] such that $\int_a^b |f|^2\, dx$ is finite. Then (L^2[a, b], d(·, ·)) is a metric space if

$$d(f, g) = \Big(\int_a^b |f(x) - g(x)|^2\, dx\Big)^{1/2}, \quad f, g \in L^2[a, b];$$

d(·, ·) is a metric on L^2[a, b].
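The metrics of Examples 1.3–1.5 are easy to compute for concrete data. The following Python sketch is an illustrative addition (not part of the book's text); the function names and the grid-based approximation of the sup metric are assumptions made only for this example.

```python
import numpy as np

def d_euclid(u, v):
    """Example 1.3: d(u, v) = (sum_k |u_k - v_k|^2)^(1/2) on R^n."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.sqrt(np.sum(np.abs(u - v) ** 2))

def d_lp(u, v, p):
    """Example 1.4: d(u, v) = (sum_k |u_k - v_k|^p)^(1/p), applied here to
    finitely many terms of the sequences u and v."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)

def d_sup(f, g, a, b, n=10001):
    """Example 1.5(a): sup metric on C[a, b], approximated on a uniform grid."""
    x = np.linspace(a, b, n)
    return np.max(np.abs(f(x) - g(x)))

print(d_euclid([1, 2, 3], [4, 6, 3]))     # 5.0
print(d_lp([1, 0, 0], [0, 1, 0], p=1))    # 2.0
print(d_sup(np.sin, np.cos, 0.0, np.pi))  # approximately sqrt(2)
```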
Definition 1.2 Let {u n } be a sequence of points in a metric space (X, d) which is
called a Cauchy sequence if for every ε > 0, there is an integer N such that
d(u m , u n ) < ε ∀ n, m > N .
It may be recalled that a sequence in a metric space is a function having domain as the
set of natural numbers and the range as a subset of the metric space. The definition
of Cauchy sequence means that the distance between two points u n and u m is very
small when n and m are very large.
Definition 1.3 Let {u_n} be a sequence in a metric space (X, d). It is called convergent with limit u in X if, for every ε > 0, there exists a natural number N having the property

d(u_n, u) < ε for all n > N.

If {u_n} converges to u, that is, u_n → u as n → ∞, then we write lim_{n→∞} u_n = u.

Definition 1.4 If every Cauchy sequence in a metric space (X, d) is convergent, then (X, d) is called a complete metric space.


Complete Metric Spaces
Example 1.7 (a) Spaces R, R 2 , R n , ℓ p , C[a, b] with metric (a) of Example 1.5 and
L 2 [a, b] are examples of complete metric spaces.
(b) (0, 1] is not a complete metric space.
(c) C[a, b] with integral metric is not a complete metric space.
(d) The set of rational numbers is not a complete metric space.
(e) C[a, b] with metric (b) of Example 1.5 is not a complete metric space.
Definition 1.5 (a) A subset M of a metric space (X, d) is said to be bounded if there exists a positive constant k such that d(u, v) ≤ k for all u, v belonging to M.
(b) A subset M of a metric space (X, d) is closed if every sequence {u_n} in M that converges in X has its limit in M.
(c) If every sequence in a subset M ⊂ (X, d) has a convergent subsequence, then M is called compact.
(d) Let T : (X, d) → (Y, d). T is called continuous if u_n → u implies that T(u_n) → T(u); that is, d(u_n, u) → 0 as n → ∞ implies that d(T(u_n), T(u)) → 0.
Remark 1.3 1. It may be noted that every bounded and closed subset of (R n , d) is
a compact subset.
2. It may be observed that each closed subset of a complete metric space is complete.
As we saw above, a metric gives the distance between two points. We now introduce the concept of distance between subsets of a set, for example, the distance between a line and a circle in R^2. This is called the Hausdorff metric.
Distance Between Two Subsets (Hausdorff Metric)
Let X be a set and H(X) the set of all subsets of X. Suppose d(·, ·) is a metric on X. Then the distance between a point u of X and a subset M of X is defined as

$$d(u, M) = \inf\{d(u, v) \mid v \in M\} = \inf_{v \in M} d(u, v).$$

Let M and N be two elements of H(X). The distance between M and N, denoted by d(M, N), is defined as

$$d(M, N) = \sup_{u \in M} \inf_{v \in N} d(u, v) = \sup_{u \in M} d(u, N).$$

It can be verified that, in general,

$$d(M, N) \ne d(N, M), \quad \text{where} \quad d(N, M) = \sup_{v \in N} \inf_{u \in M} d(v, u) = \sup_{v \in N} d(v, M).$$

Definition 1.6 The Hausdorff metric, or the distance between two elements M and N of a metric space (X, d), denoted by h(M, N), is defined as

$$h(M, N) = \max\{d(M, N), d(N, M)\}.$$

Remark 1.4 If H(X) denotes the set of all closed and bounded subsets of a metric space (X, d), then h(M, N) is a metric. If X = R^2, then H(R^2), the set of all compact subsets of R^2, is a metric space with respect to h(M, N).
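For finite subsets of the plane, the quantities d(M, N), d(N, M), and h(M, N) of Definition 1.6 can be computed directly. The following Python sketch is an illustrative addition, assuming finite point sets in R^2 with the Euclidean metric; it is not taken from the book.

```python
import numpy as np

def directed_distance(M, N):
    """d(M, N) = sup_{u in M} inf_{v in N} d(u, v) for finite sets M, N."""
    M, N = np.asarray(M, dtype=float), np.asarray(N, dtype=float)
    # pairwise Euclidean distances, shape (len(M), len(N))
    pairwise = np.linalg.norm(M[:, None, :] - N[None, :, :], axis=-1)
    return pairwise.min(axis=1).max()

def hausdorff(M, N):
    """Hausdorff metric h(M, N) = max{d(M, N), d(N, M)} (Definition 1.6)."""
    return max(directed_distance(M, N), directed_distance(N, M))

M = [(0.0, 0.0), (1.0, 0.0)]
N = [(0.0, 0.0), (5.0, 0.0)]
print(directed_distance(M, N))  # 1.0
print(directed_distance(N, M))  # 4.0, so d(M, N) != d(N, M) in general
print(hausdorff(M, N))          # 4.0
```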
Contraction Mapping
Definition 1.7 (Contraction Mapping) A mapping T : (X, d) → (X, d) is called a Lipschitz continuous mapping if there exists a number α such that

d(Tu, Tv) ≤ α d(u, v) for all u, v ∈ X.

If α lies in [0, 1), that is, 0 ≤ α < 1, then T is called a contraction mapping; α is called the contractivity factor of T.

Example 1.8 Let T : R → R be defined as T(u) = (1 + u)^{1/3}. Then finding a solution of the equation T(u) = u is equivalent to solving the equation u^3 − u − 1 = 0. T is a contraction mapping on I = [1, 2], where the contractivity factor is α = 3^{1/3} − 1.

Example 1.9 (a) Let T(u) = u/3, 0 ≤ u ≤ 1. Then T is a contraction mapping on [0, 1] with contractivity factor 1/3.
(b) Let S(u) = u + b, u ∈ R, where b is any fixed element of R. Then S is not a contraction mapping.

Example 1.10 Let I = [a, b] and f : [a, b] → [a, b], and suppose that f′(u) exists and |f′(u)| < 1 on I. Then f is a contraction mapping of I into itself.
Definition 1.8 (Fixed Point) Let T be a mapping of a metric space (X, d) into itself. A point u ∈ X is called a fixed point of T if

T(u) = u.
Theorem 1.1 (Existence of Fixed Point: Contraction Mapping Theorem by Stefan Banach) Let (X, d) be a complete metric space and let T be a contraction mapping of (X, d) into itself with contractivity factor α. Then there exists exactly one point u in X such that T(u) = u; that is, T has a unique fixed point. Furthermore, for any x ∈ X, the sequence x, T(x), T^2(x), …, T^k(x), … converges to this point u; that is,

$$\lim_{k \to \infty} T^k(x) = u.$$

Proof We know that T^2(x) = T(T(x)), …, T^k(x) = T(T^{k−1}(x)). For n > m,

$$\begin{aligned}
d(T^m(x), T^n(x)) &\le \alpha\, d(T^{m-1}(x), T^{n-1}(x)) \le \cdots \le \alpha^m d(x, T^{n-m}(x))\\
&\le \alpha^m \sum_{k=1}^{n-m} d(T^{k-1}(x), T^k(x)) \le \alpha^m \sum_{k=1}^{n-m} \alpha^{k-1} d(x, T(x)).
\end{aligned}$$

This we obtain by applying the contractivity condition (k − 1) times. It is clear that d(T^m(x), T^n(x)) → 0 as m, n → ∞, and so {T^m(x)} is a Cauchy sequence in the complete metric space (X, d). This sequence must be convergent, that is,

$$\lim_{m \to \infty} T^m(x) = u.$$

We show that u is a fixed point of T, that is, T(u) = u; in fact, we will show that u is the unique fixed point. T(u) = u is equivalent to showing that d(T(u), u) = 0. We have

$$d(T(u), u) = d(u, T(u)) \le d(u, T^k(x)) + d(T^k(x), T(u)) \le d(u, T^k(x)) + \alpha\, d(u, T^{k-1}(x)) \to 0 \ \text{as}\ k \to \infty,$$

since lim_{k→∞} d(u, T^k(x)) = 0 and lim_{k→∞} d(u, T^{k−1}(x)) = 0 (because u = lim_{k→∞} T^k(x)). Hence d(T(u), u) = 0 and T(u) = u.

Let v be another element in X such that T(v) = v. Then

$$d(u, v) = d(T(u), T(v)) \le \alpha\, d(u, v).$$

This implies d(u, v) = 0, or u = v (axiom (1) of the metric space). Thus, T has a unique fixed point.
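To see the iteration of Theorem 1.1 at work, the following Python sketch (an illustrative addition, not from the book) applies it to the contraction of Example 1.8, T(u) = (1 + u)^{1/3} on [1, 2], whose fixed point is the real root of u^3 − u − 1 = 0. The tolerance and iteration limit are arbitrary choices.

```python
def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = T(x_k) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

T = lambda u: (1.0 + u) ** (1.0 / 3.0)
u = fixed_point(T, x0=1.0)
print(u)               # approximately 1.3247179572, the real root of u^3 - u - 1 = 0
print(u**3 - u - 1.0)  # residual, close to 0
```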

1.3 Application of Banach Contraction Mapping Theorem
1.3.1 Application to Matrix Equation
Suppose we want to find the solution of a system of n linear algebraic equations in n unknowns:

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\ \,\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned} \tag{1.2}$$

The equivalent matrix formulation is Ax = b, where

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nn}\end{pmatrix}, \quad x = (x_1, x_2, \ldots, x_n)^T, \quad b = (b_1, b_2, \ldots, b_n)^T.$$

The system can be written as

$$\begin{aligned}
x_1 &= (1 - a_{11})x_1 - a_{12}x_2 - \cdots - a_{1n}x_n + b_1\\
x_2 &= -a_{21}x_1 + (1 - a_{22})x_2 - \cdots - a_{2n}x_n + b_2\\
&\ \,\vdots\\
x_n &= -a_{n1}x_1 - a_{n2}x_2 - \cdots + (1 - a_{nn})x_n + b_n
\end{aligned} \tag{1.3}$$

By letting α_{ij} = −a_{ij} + δ_{ij}, where

$$\delta_{ij} = \begin{cases} 1, & i = j,\\ 0, & i \ne j,\end{cases}$$

Eq. (1.2) can be written in the following equivalent form:

$$x_i = \sum_{j=1}^{n} \alpha_{ij} x_j + b_i, \quad i = 1, 2, \ldots, n. \tag{1.4}$$

If x = (x_1, x_2, …, x_n) ∈ R^n, then Eq. (1.2) can be written in the equivalent form

$$x - Ax + b = x. \tag{1.5}$$

Let Tx = x − Ax + b. Then the problem of finding the solution of the system Ax = b is equivalent to finding the fixed points of the map T.


Now, Tx − Tx′ = (I − A)(x − x′), and we show that T is a contraction under a reasonable condition on the matrix. In order to find a unique fixed point of T, i.e., a unique solution of the system of equations (1.2), we apply Theorem 1.1. In fact, we prove the following result: Eq. (1.2) has a unique solution if

$$\sum_{j=1}^{n} |\alpha_{ij}| = \sum_{j=1}^{n} |-a_{ij} + \delta_{ij}| \le k < 1, \quad i = 1, 2, \ldots, n.$$

For x = (x_1, x_2, …, x_n) and x′ = (x′_1, x′_2, …, x′_n), we have d(Tx, Tx′) = d(y, y′), where

$$y = (y_1, y_2, \ldots, y_n) \in R^n, \quad y' = (y'_1, y'_2, \ldots, y'_n) \in R^n,$$
$$y_i = \sum_{j=1}^{n} \alpha_{ij} x_j + b_i, \qquad y'_i = \sum_{j=1}^{n} \alpha_{ij} x'_j + b_i, \quad i = 1, 2, \ldots, n.$$

We have

$$\begin{aligned}
d(y, y') &= \sup_{1 \le i \le n} |y_i - y'_i|
= \sup_{1 \le i \le n} \Big|\sum_{j=1}^{n} \alpha_{ij} x_j + b_i - \sum_{j=1}^{n} \alpha_{ij} x'_j - b_i\Big|
= \sup_{1 \le i \le n} \Big|\sum_{j=1}^{n} \alpha_{ij} (x_j - x'_j)\Big|\\
&\le \sup_{1 \le i \le n} \sum_{j=1}^{n} |\alpha_{ij}|\, |x_j - x'_j|
\le \sup_{1 \le j \le n} |x_j - x'_j| \; \sup_{1 \le i \le n} \sum_{j=1}^{n} |\alpha_{ij}|
\le k \sup_{1 \le j \le n} |x_j - x'_j|.
\end{aligned}$$

Since $\sum_{j=1}^{n} |\alpha_{ij}| \le k < 1$ for i = 1, 2, …, n and $d(x, x') = \sup_{1 \le j \le n} |x_j - x'_j|$, we have d(Tx, Tx′) ≤ k d(x, x′), 0 ≤ k < 1; i.e., T is a contraction mapping of R^n into itself. Hence, by Theorem 1.1, there exists a unique fixed point x* of T in R^n; i.e., x* is the unique solution of system (1.2).
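The fixed-point iteration behind this argument can be carried out numerically. The sketch below is an illustrative addition (not from the book): the 2×2 matrix and right-hand side are hypothetical, chosen so that the row-sum condition $\sum_{j=1}^{n} |\delta_{ij} - a_{ij}| \le k < 1$ holds, and the iterate x_{k+1} = x_k − A x_k + b is applied as in Theorem 1.1.

```python
import numpy as np

def solve_by_contraction(A, b, tol=1e-12, max_iter=10000):
    """Iterate T(x) = x - A x + b; under the row-sum condition T is a
    contraction in the sup metric, so the iterates converge to the
    solution of A x = b."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_next = x - A @ x + b
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x

A = np.array([[1.0, 0.3],
              [0.2, 1.0]])   # row sums of |delta_ij - a_ij|: 0.3 and 0.2, both < 1
b = np.array([1.0, 2.0])
x = solve_by_contraction(A, b)
print(x)          # fixed point of T
print(A @ x - b)  # residual, close to 0
```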

1.3.2 Application to Integral Equation
Here, we prove the following existence theorem for integral equations.
Theorem 1.2 Let the function H(x, y) be defined and measurable in the square A = {(x, y) | a ≤ x ≤ b, a ≤ y ≤ b}. Further, let

$$\int_a^b \int_a^b |H(x, y)|^2\, dx\, dy < \infty$$

and g(x) ∈ L^2(a, b). Then the integral equation

$$f(x) = g(x) + \mu \int_a^b H(x, y) f(y)\, dy \tag{1.6}$$

possesses a unique solution f(x) ∈ L^2(a, b) for every sufficiently small value of the parameter μ.
Proof For applying Theorem 1.1, let X = L^2(a, b), and consider the mapping

$$T : L^2(a, b) \to L^2(a, b), \qquad Tf = h,$$

where $h(x) = g(x) + \mu \int_a^b H(x, y) f(y)\, dy \in L^2(a, b)$.

This definition is valid, i.e., for each f ∈ L^2(a, b) we have h ∈ L^2(a, b), and this can be seen as follows. Since g ∈ L^2(a, b) and μ is a scalar, it is sufficient to show that

$$\psi(x) = \int_a^b H(x, y) f(y)\, dy \in L^2(a, b).$$

By the Cauchy–Schwarz inequality,

$$\Big|\int_a^b H(x, y) f(y)\, dy\Big| \le \int_a^b |H(x, y) f(y)|\, dy \le \Big(\int_a^b |H(x, y)|^2\, dy\Big)^{1/2} \Big(\int_a^b |f(y)|^2\, dy\Big)^{1/2}.$$

Therefore

$$|\psi(x)|^2 = \Big|\int_a^b H(x, y) f(y)\, dy\Big|^2 \le \Big(\int_a^b |H(x, y)|^2\, dy\Big)\Big(\int_a^b |f(y)|^2\, dy\Big),$$

or

$$\int_a^b |\psi(x)|^2\, dx \le \int_a^b \Big(\int_a^b |H(x, y)|^2\, dy\Big)\Big(\int_a^b |f(y)|^2\, dy\Big) dx.$$

By the hypothesis,

$$\int_a^b \int_a^b |H(x, y)|^2\, dx\, dy < \infty \quad \text{and} \quad \int_a^b |f(y)|^2\, dy < \infty.$$

Thus

$$\psi(x) = \int_a^b H(x, y) f(y)\, dy \in L^2(a, b).$$

We know that L^2(a, b) is a complete metric space with metric

$$d(f, g) = \Big(\int_a^b |f(x) - g(x)|^2\, dx\Big)^{1/2}.$$

Now we show that T is a contraction mapping. We have d(Tf, Tf_1) = d(h, h_1), where

$$h_1(x) = g(x) + \mu \int_a^b H(x, y) f_1(y)\, dy,$$

and

$$d(h, h_1) = |\mu| \Big(\int_a^b \Big|\int_a^b H(x, y)[f(y) - f_1(y)]\, dy\Big|^2 dx\Big)^{1/2} \le |\mu| \Big(\int_a^b \int_a^b |H(x, y)|^2\, dx\, dy\Big)^{1/2} \Big(\int_a^b |f(y) - f_1(y)|^2\, dy\Big)^{1/2},$$

by using the Cauchy–Schwarz–Bunyakowski inequality. Hence,

$$d(Tf, Tf_1) \le |\mu| \Big(\int_a^b \int_a^b |H(x, y)|^2\, dx\, dy\Big)^{1/2} d(f, f_1),$$

since, by definition of the metric in L^2,

$$d(f, f_1) = \Big(\int_a^b |f(y) - f_1(y)|^2\, dy\Big)^{1/2}.$$

If

$$|\mu| < 1 \Big/ \Big(\int_a^b \int_a^b |H(x, y)|^2\, dx\, dy\Big)^{1/2},$$

then

$$d(Tf, Tf_1) \le k\, d(f, f_1),$$

where

$$0 \le k = |\mu| \Big(\int_a^b \int_a^b |H(x, y)|^2\, dx\, dy\Big)^{1/2} < 1.$$

Thus, T is a contraction, and so T has a unique fixed point; that is, there exists a unique f* ∈ L^2[a, b] such that Tf* = f*. Therefore, f* is the unique solution of Eq. (1.6).
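Theorem 1.2 also suggests a numerical procedure: for sufficiently small |μ|, the successive approximations f_{k+1} = T f_k converge in L^2(a, b). The following Python sketch is an illustrative addition; the kernel H(x, y) = e^{−|x−y|}, the function g, the value of μ, and the trapezoidal discretization are all assumptions made for this example, not data from the book.

```python
import numpy as np

a, b, n = 0.0, 1.0, 201
y = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5                                   # trapezoidal quadrature weights

H = np.exp(-np.abs(y[:, None] - y[None, :]))   # hypothetical kernel H(x, y)
g = np.sin(np.pi * y)                          # hypothetical g in L^2(0, 1)
mu = 0.3                                       # small enough for T to be a contraction

f = np.zeros(n)                                # starting point f_0 = 0
for _ in range(200):
    f_new = g + mu * (H * w) @ f               # discrete version of (T f)(x)
    if np.sqrt(np.sum(w * (f_new - f) ** 2)) < 1e-12:   # discrete L^2 distance
        f = f_new
        break
    f = f_new

residual = f - (g + mu * (H * w) @ f)
print(np.max(np.abs(residual)))                # close to 0: f satisfies f = T f
```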


1.3.3 Existence of Solution of Differential Equation
We prove Picard's theorem by applying the contraction mapping theorem of Banach.

Theorem 1.3 (Picard's Theorem) Let g(x, y) be a continuous function defined on a rectangle M = {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d} that satisfies the Lipschitz condition of order 1 in the variable y. Moreover, let (u_0, v_0) be an interior point of M. Then the differential equation

$$\frac{dy}{dx} = g(x, y) \tag{1.7}$$

has a unique solution, say y = f(x), which passes through (u_0, v_0).
Proof We observe first that finding the solution of Eq. (1.7) is equivalent to the problem of finding the solution of an integral equation. If y = f(x) satisfies (1.7) and satisfies the condition f(u_0) = v_0, then integrating (1.7) from u_0 to x, we have

$$f(x) - f(u_0) = \int_{u_0}^{x} g(t, f(t))\, dt,$$

that is,

$$f(x) = v_0 + \int_{u_0}^{x} g(t, f(t))\, dt. \tag{1.8}$$

Thus, solving (1.7) is equivalent to finding a unique solution of (1.8).

Solution of (1.8): We have |g(x, y_1) − g(x, y_2)| ≤ q |y_1 − y_2|, q > 0, as g(x, y) satisfies the Lipschitz condition of order 1 in the second variable y. Also, g(x, y) is bounded on M; that is, there exists a positive constant m such that |g(x, y)| ≤ m for all (x, y) ∈ M. This is true as g(x, y) is continuous on the compact subset M of R^2.

Find a positive constant p such that pq < 1 and the rectangle N = {(x, y) | −p + u_0 ≤ x ≤ p + u_0, −pm + v_0 ≤ y ≤ pm + v_0} is contained in M.

Suppose X is the set of all real-valued continuous functions y = f(x) defined on [−p + u_0, p + u_0] such that d(f(x), v_0) ≤ mp. It is clear that X is a closed subset of C[u_0 − p, u_0 + p] with the sup metric (Example 1.5(a)). It is a complete metric space by Remark 1.3.

Remark 1.5 Define a mapping T : X → X by Tf = h, where $h(x) = v_0 + \int_{u_0}^{x} g(t, f(t))\, dt$. T is well defined, as

$$d(h(x), v_0) = \sup_x \Big|\int_{u_0}^{x} g(t, f(t))\, dt\Big| \le m|x - u_0| \le mp,$$

so h(x) ∈ X.

For f, f_1 ∈ X,

$$d(Tf, Tf_1) = d(h, h_1) = \sup_x \Big|\int_{u_0}^{x} [g(t, f(t)) - g(t, f_1(t))]\, dt\Big| \le q \int_{u_0}^{x} |f(t) - f_1(t)|\, dt \le qp\, d(f, f_1),$$

or

$$d(Tf, Tf_1) \le \alpha\, d(f, f_1),$$

where 0 ≤ α = qp < 1.
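The successive approximations used in this proof can also be carried out numerically. The sketch below is an illustrative addition, not part of the book: the initial value problem y′ = y, y(0) = 1 on [0, 0.5] (so q = 1, p = 0.5, pq < 1) and the trapezoidal quadrature are assumptions made for this example.

```python
import numpy as np

def picard(g, u0, v0, p, n=2001, iterations=40):
    """Iterate f_{k+1}(x) = v0 + int_{u0}^{x} g(t, f_k(t)) dt, the integral
    operator of Eq. (1.8), on a uniform grid over [u0, u0 + p]."""
    x = np.linspace(u0, u0 + p, n)
    f = np.full(n, v0, dtype=float)            # start from the constant function v0
    for _ in range(iterations):
        integrand = g(x, f)
        # cumulative trapezoidal rule for the integral from u0 to each grid point
        increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)
        f = v0 + np.concatenate(([0.0], np.cumsum(increments)))
    return x, f

x, f = picard(lambda t, y: y, u0=0.0, v0=1.0, p=0.5)
print(np.max(np.abs(f - np.exp(x))))           # small: f approximates exp(x)
```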