3D Geometry for
Computer Graphics
Class 5
The plan today
Singular Value Decomposition
Basic intuition
Formal definition
Applications
2
Geometric analysis of linear transformations
We want to know what a linear transformation A does
Need some simple and “comprehensible” representation of the matrix of A.
Let’s look at what A does to some vectors
Since A(αv) = αA(v), it’s enough to look at vectors v of unit length
3
The geometry of linear transformations
A linear (non-singular) transform A always takes
hyper-spheres to hyper-ellipses.
4
The geometry of linear transformations
Thus, one good way to understand what A does is to find
which vectors are mapped to the “main axes” of the
ellipsoid.
5
Geometric analysis of linear transformations
If we are lucky: A = V Λ V^T, with V orthogonal (true if A is symmetric)
The eigenvectors of A are the axes of the ellipse
6
Symmetric matrix: eigen decomposition
In this case A is just a scaling matrix. The eigen
decomposition of A tells us which orthogonal axes it
scales, and by how much:
A = [v₁ v₂ … vₙ] · diag(λ₁, λ₂, …, λₙ) · [v₁ v₂ … vₙ]^T

A vᵢ = λᵢ vᵢ
7
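As a concrete illustration of this slide (the matrix A here is an arbitrary symmetric example, not one from the lecture), a minimal NumPy sketch of the eigen decomposition:

```python
import numpy as np

# An arbitrary symmetric example matrix: it scales along orthogonal axes.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)        # A = V @ diag(lam) @ V.T, V orthogonal

A_rec = V @ np.diag(lam) @ V.T    # reconstruct A from the decomposition
print(np.allclose(A, A_rec))      # True

# Each eigenvector is only scaled: A v_i = lam_i v_i
for i in range(2):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))
```

`np.linalg.eigh` is the right routine here because it is specialized to symmetric matrices and returns real eigenvalues with orthonormal eigenvectors.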
General linear transformations: SVD
In general A will also contain rotations, not just scales:
A = U Σ V^T

A = [u₁ u₂ … uₙ] · diag(σ₁, σ₂, …, σₙ) · [v₁ v₂ … vₙ]^T
8
General linear transformations: SVD
AV = UΣ   (the columns of V and the columns of U are orthonormal)

A [v₁ v₂ … vₙ] = [u₁ u₂ … uₙ] · diag(σ₁, σ₂, …, σₙ)

A vᵢ = σᵢ uᵢ,   σᵢ ≥ 0
9
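The relation A vᵢ = σᵢ uᵢ is easy to check numerically; a small NumPy sketch with a random example matrix (the data is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # a generic matrix: rotations and scales mixed

U, s, Vt = np.linalg.svd(A)       # A = U @ np.diag(s) @ Vt

# Each right singular vector maps to a scaled left singular vector.
for i in range(3):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))   # rows of Vt are the v_i
```

Note that NumPy returns V already transposed (`Vt`), so the right singular vectors vᵢ are its rows.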
SVD more formally
SVD exists for any matrix
Formal definition:
For square matrices A ∈ Rⁿˣⁿ, there exist orthogonal matrices U, V ∈ Rⁿˣⁿ and a diagonal matrix Σ, such that all the diagonal values σᵢ of Σ are non-negative and

A = U Σ V^T
10
SVD more formally
The diagonal values of Σ (σ₁, …, σₙ) are called the singular values. It is customary to sort them: σ₁ ≥ σ₂ ≥ … ≥ σₙ
The columns of U (u₁, …, uₙ) are called the left singular vectors. They are the axes of the ellipsoid.
The columns of V (v₁, …, vₙ) are called the right singular vectors. They are the preimages of the axes of the ellipsoid.

A = U Σ V^T
11
Reduced SVD
For rectangular matrices, we have two forms of
SVD. The reduced SVD looks like this:
The columns of U are orthonormal
Cheaper form for computation and storage
12
Full SVD
We can complete U to a full orthogonal matrix
and pad by zeros accordingly
13
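Both forms are available in NumPy via the `full_matrices` flag; a short sketch of the resulting shapes for a tall 5×3 example matrix (chosen for illustration):

```python
import numpy as np

A = np.random.default_rng(1).standard_normal((5, 3))   # a tall example matrix

# Reduced SVD: U has orthonormal columns but is not square.
U_r, s_r, Vt_r = np.linalg.svd(A, full_matrices=False)
print(U_r.shape, s_r.shape, Vt_r.shape)   # (5, 3) (3,) (3, 3)

# Full SVD: U is completed to a 5x5 orthogonal matrix, Sigma padded with zeros.
U_f, s_f, Vt_f = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((5, 3))
np.fill_diagonal(Sigma, s_f)

print(np.allclose(A, U_r @ np.diag(s_r) @ Vt_r))   # True
print(np.allclose(A, U_f @ Sigma @ Vt_f))          # True
```

Both forms reconstruct A exactly; the reduced one is what you want for storage and for most computations.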
Some history
SVD was discovered by the following people:
E. Beltrami (1835-1900)
C. Jordan (1838-1922)
E. Schmidt (1876-1959)
J. Sylvester (1814-1897)
H. Weyl (1885-1955)
14
SVD is the “workhorse” of linear algebra
There are numerical algorithms to compute
SVD. Once you have it, you have many things:
Matrix inverse: can solve square linear systems
Numerical rank of a matrix
Can solve least-squares systems
PCA
Many more…
15
Matrix inverse and solving linear systems
Matrix inverse:
A = U Σ V^T

A⁻¹ = (U Σ V^T)⁻¹ = (V^T)⁻¹ Σ⁻¹ U⁻¹ = V Σ⁻¹ U^T

Σ⁻¹ = diag(1/σ₁, …, 1/σₙ)

So, to solve Ax = b:

x = V Σ⁻¹ U^T b
16
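A hedged NumPy sketch of solving Ax = b this way (the random A is assumed non-singular, which holds here since all its singular values come out non-zero):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))   # assumed non-singular for this illustration
b = rng.standard_normal(4)

U, s, Vt = np.linalg.svd(A)

# x = V @ Sigma^{-1} @ U^T @ b; dividing by s applies Sigma^{-1}.
x = Vt.T @ ((U.T @ b) / s)

print(np.allclose(A @ x, b))                   # True
print(np.allclose(x, np.linalg.solve(A, b)))   # True
```

For a one-off solve, `np.linalg.solve` is cheaper; the SVD route pays off when you also need the rank, the conditioning, or a least-squares solution.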
Matrix rank
The rank of A is the number of non-zero singular
values
[Figure: SVD of a rectangular m×n matrix, A = U Σ V^T, with σ₁, σ₂, … on the diagonal of Σ]
17
Numerical rank
If there are very small singular values, then A is close to being singular. We can set a threshold t, so that numeric_rank(A) = #{ i | σᵢ > t }
If rank(A) < n then A is singular: it maps the entire space Rⁿ onto some subspace, like a plane (so A is some sort of projection).
18
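A small NumPy illustration of the thresholded rank count (the matrix and the threshold t are made-up examples):

```python
import numpy as np

# A made-up matrix whose third singular value is tiny: numerically rank 2.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1e-12]])

s = np.linalg.svd(A, compute_uv=False)   # singular values only
t = 1e-8                                 # an illustrative threshold
numeric_rank = int(np.sum(s > t))

print(numeric_rank)                      # 2
print(np.linalg.matrix_rank(A, tol=t))   # 2
```

NumPy's own `matrix_rank` uses exactly this singular-value thresholding, agreeing with the manual count.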
PCA
We wanted to find principal components
[Figure: point cloud with original axes x, y and principal axes x’, y’]
19
PCA
Move the center of mass to the origin
pᵢ’ = pᵢ − m

[Figure: the centered point cloud with original axes x, y and principal axes x’, y’]
20
PCA
Construct the matrix X of the data points:

X = [ p₁’ p₂’ … pₙ’ ]   (points as columns)

The principal axes are eigenvectors of S = XX^T:

XX^T = S = U · diag(λ₁, …, λ_d) · U^T
21
PCA
We can compute the principal components by SVD of X:
X = U Σ V^T

XX^T = (U Σ V^T)(U Σ V^T)^T = U Σ V^T V Σ^T U^T = U Σ² U^T
Thus, the left singular vectors of X are the principal
components! We sort them by the size of the singular
values of X.
22
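A quick NumPy sketch: synthetic 2D points stretched along the (1, 1) direction, so the first left singular vector of the centered data should come out close to that direction (the data and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic 2D data, stretched by 3 along one axis and rotated by 45 degrees,
# so the dominant direction is (1, 1)/sqrt(2).
pts = rng.standard_normal((2, 200))
stretch = np.diag([3.0, 0.3])
c, s_ = np.cos(np.pi / 4), np.sin(np.pi / 4)
Rot = np.array([[c, -s_], [s_, c]])
X = Rot @ stretch @ pts                  # data points as columns of X

X = X - X.mean(axis=1, keepdims=True)    # move the center of mass to the origin

U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc = U[:, 0] * np.sign(U[0, 0])          # first principal axis, sign-normalized
print(pc)                                # close to (0.707, 0.707)
```

Since NumPy returns singular values in decreasing order, the columns of U are already sorted by importance; no eigen solve of XX^T is needed.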
Shape matching
We have two objects in correspondence
Want to find the rigid transformation that aligns
them
23
Shape matching
When the objects are aligned, the lengths of the
connecting lines are small.
24
Shape matching – formalization
Align two point sets
P = {p₁, …, pₙ} and Q = {q₁, …, qₙ}.
Find a translation vector t and rotation matrix R
so that:
Σᵢ₌₁ⁿ ‖pᵢ − (R qᵢ + t)‖²
is minimized
25
Shape matching – solution
Turns out we can solve the translation and rotation
separately.
Theorem: if (R, t) is the optimal transformation, then the
points {pi} and {Rqi + t} have the same centers of mass.
p̄ = (1/n) Σᵢ₌₁ⁿ pᵢ,   q̄ = (1/n) Σᵢ₌₁ⁿ qᵢ

p̄ = (1/n) Σᵢ₌₁ⁿ (R qᵢ + t) = R ( (1/n) Σᵢ₌₁ⁿ qᵢ ) + t = R q̄ + t

⇒ t = p̄ − R q̄
26
Finding the rotation R
To find the optimal R, we bring the centroids of both point
sets to the origin:
p’ᵢ = pᵢ − p̄,   q’ᵢ = qᵢ − q̄
We want to find R that minimizes
Σᵢ₌₁ⁿ ‖p’ᵢ − R q’ᵢ‖²
27
Summary of rigid alignment:
Translate the input points to the centroids:
p’ᵢ = pᵢ − p̄,   q’ᵢ = qᵢ − q̄
Compute the “covariance matrix”
H = Σᵢ₌₁ⁿ q’ᵢ p’ᵢ^T
Compute the SVD of H: H = U Σ V^T
The optimal rotation is R = V U^T
The translation vector is
t = p̄ − R q̄
28
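The whole procedure above can be sketched in NumPy as follows (the reflection guard for det(R) < 0 is standard practice but not shown on the slides; the test data is made up):

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t minimizing sum ||p_i - (R q_i + t)||^2.

    P, Q: (d, n) arrays of corresponding points (points as columns).
    """
    p_bar = P.mean(axis=1, keepdims=True)
    q_bar = Q.mean(axis=1, keepdims=True)
    Pc, Qc = P - p_bar, Q - q_bar          # centered point sets

    H = Qc @ Pc.T                          # "covariance" matrix, sum q'_i p'_i^T
    U, s, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                         # optimal rotation

    # Guard against a reflection (det = -1): flip the last axis.
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T

    t = p_bar - R @ q_bar                  # optimal translation
    return R, t

# Sanity check: recover a known rotation + translation.
rng = np.random.default_rng(4)
Q = rng.standard_normal((3, 10))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([[1.0], [2.0], [3.0]])
P = R_true @ Q + t_true

R, t = rigid_align(P, Q)
print(np.allclose(R, R_true))   # True
print(np.allclose(t, t_true))   # True
```

With noise-free correspondences the recovery is exact up to floating point; with noisy data the same code returns the least-squares optimum.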
Complexity
Numerical SVD is an expensive operation
We always need to pay attention to the
dimensions of the matrix we’re applying SVD to.
29
See you next time
Finding the rotation R
Σᵢ₌₁ⁿ ‖p’ᵢ − R q’ᵢ‖² = Σᵢ₌₁ⁿ (p’ᵢ − R q’ᵢ)^T (p’ᵢ − R q’ᵢ)
= Σᵢ₌₁ⁿ ( p’ᵢ^T p’ᵢ − p’ᵢ^T R q’ᵢ − q’ᵢ^T R^T p’ᵢ + q’ᵢ^T R^T R q’ᵢ )

Since R^T R = I, the last term is q’ᵢ^T q’ᵢ. The terms p’ᵢ^T p’ᵢ and q’ᵢ^T q’ᵢ do not depend on R, so we can ignore them in the minimization.
31
Finding the rotation R
min_R Σᵢ₌₁ⁿ ( −p’ᵢ^T R q’ᵢ − q’ᵢ^T R^T p’ᵢ )  ⇔  max_R Σᵢ₌₁ⁿ ( p’ᵢ^T R q’ᵢ + q’ᵢ^T R^T p’ᵢ )

q’ᵢ^T R^T p’ᵢ is a scalar, so it equals its own transpose: q’ᵢ^T R^T p’ᵢ = (q’ᵢ^T R^T p’ᵢ)^T = p’ᵢ^T R q’ᵢ

⇒ max_R Σᵢ₌₁ⁿ 2 p’ᵢ^T R q’ᵢ  ⇔  max_R Σᵢ₌₁ⁿ p’ᵢ^T R q’ᵢ
32
Finding the rotation R
Σᵢ₌₁ⁿ p’ᵢ^T R q’ᵢ = Σᵢ₌₁ⁿ Trace( R q’ᵢ p’ᵢ^T ) = Trace( R Σᵢ₌₁ⁿ q’ᵢ p’ᵢ^T ) = Trace( R H )

where H = Σᵢ₌₁ⁿ q’ᵢ p’ᵢ^T   (each q’ᵢ p’ᵢ^T is 3×1 times 1×3 = 3×3)

Trace(A) = Σᵢ Aᵢᵢ
33
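The sum-to-trace identity above is easy to verify numerically; a small sketch with made-up centered points and an arbitrary rotation:

```python
import numpy as np

rng = np.random.default_rng(5)
P = rng.standard_normal((3, 6))    # made-up centered points p'_i as columns
Q = rng.standard_normal((3, 6))    # made-up centered points q'_i as columns
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])    # some rotation about the z axis

lhs = sum(P[:, i] @ R @ Q[:, i] for i in range(6))       # sum of scalars p'_i^T R q'_i
H = sum(np.outer(Q[:, i], P[:, i]) for i in range(6))    # H = sum q'_i p'_i^T

print(np.allclose(lhs, np.trace(R @ H)))   # True
```

The outer-product sum is also just `Q @ P.T`, which is how the covariance matrix is built in practice.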
Finding the rotation R
So, we want to find R that maximizes Trace(RH)
Theorem: if M is symmetric positive definite (all
eigenvalues of M are positive) and B is any orthogonal
matrix then
Trace(M) ≥ Trace(BM)
So, let’s find R so that RH is symmetric positive definite.
Then we know for sure that Trace(RH) is maximal.
34
Finding the rotation R
This is easy! Compute the SVD of H:
H = U Σ V^T
Define R = V U^T
Check RH:
RH = V U^T U Σ V^T = V Σ V^T
This is a symmetric matrix; its eigenvalues are σᵢ ≥ 0
So RH is positive (semi-)definite!
35
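A quick numerical check that R = V U^T indeed makes RH symmetric with eigenvalues σᵢ ≥ 0 (a random H chosen for illustration):

```python
import numpy as np

H = np.random.default_rng(6).standard_normal((3, 3))   # a random example H
U, s, Vt = np.linalg.svd(H)
R = Vt.T @ U.T            # the claimed optimal R (may be a reflection here)

RH = R @ H                # should equal V @ diag(s) @ V.T
print(np.allclose(RH, RH.T))                                     # symmetric
print(np.allclose(np.sort(np.linalg.eigvalsh(RH)), np.sort(s)))  # eigenvalues = s_i
```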