Introduction to Orthogonal Matching Pursuit

Koredianto Usman

Telkom University
Faculty of Electrical Engineering
Indonesia

August 30, 2017

  This tutorial is a continuation of our previous tutorial on Matching Pursuit (MP).

1 Introduction

Consider the following situation. Given

x = \begin{bmatrix} -1.2 \\ 1 \\ 0 \end{bmatrix} \quad \text{and} \quad A = \begin{bmatrix} -0.707 & 0.8 & 0 \\ 0.707 & 0.6 & -1 \end{bmatrix},

calculate y = A \cdot x! Well, this is easy. Simply

y = A \cdot x = \begin{bmatrix} -0.707 & 0.8 & 0 \\ 0.707 & 0.6 & -1 \end{bmatrix} \cdot \begin{bmatrix} -1.2 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix}.

Now the difficult part. Given

y = \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} \quad \text{and} \quad A = \begin{bmatrix} -0.707 & 0.8 & 0 \\ 0.707 & 0.6 & -1 \end{bmatrix},

find the original x (or a vector as close as possible to it)! In Compressive Sensing terminology, finding y from x and A is called compression, while finding the original x from y and A is called reconstruction. It is indeed a problem, since reconstruction is not an easy task.

As for further terminology, x is called the original signal or original vector, A is called the compression matrix or sensing matrix, and y is called the compressed signal or compressed vector.
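As a quick numerical check of the compression step, here is a minimal NumPy sketch (the tutorial itself only mentions Matlab; the code and variable names here are ours):

    import numpy as np

    # Sensing matrix A (2x3) and original sparse vector x (3,)
    A = np.array([[-0.707, 0.8,  0.0],
                  [ 0.707, 0.6, -1.0]])
    x = np.array([-1.2, 1.0, 0.0])

    # Compression: y = A . x
    y = A @ x
    print(y)  # approximately [1.65, -0.25]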

2 Basic Concept

As already discussed in the MP tutorial, we need to view the sensing matrix A as a collection of column vectors. That is,

A = \begin{bmatrix} -0.707 & 0.8 & 0 \\ 0.707 & 0.6 & -1 \end{bmatrix} = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix},

where

b_1 = \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix}; \quad b_2 = \begin{bmatrix} 0.8 \\ 0.6 \end{bmatrix}; \quad b_3 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}.

These columns are called the basis vectors or atoms.

Now, if we let

x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix},

then

y = A \cdot x = x_1 \cdot \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix} + x_2 \cdot \begin{bmatrix} 0.8 \\ 0.6 \end{bmatrix} + x_3 \cdot \begin{bmatrix} 0 \\ -1 \end{bmatrix}.

The last equation shows us that y is nothing but a linear combination of the atoms, using the coefficients given in x. We know that actually x_1 = -1.2, x_2 = 1, and x_3 = 0. In other words, looking at the last equation, atom b_1 contributes -1.2 to produce y, atom b_2 contributes 1, and finally atom b_3 contributes 0.

OMP works similarly to MP: it finds which atom contributes the most to y, then which one is next, and so on. This process, as we understand by now, needs N iterations, where N is the number of atoms in A. In our example, N is 3.
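The linear-combination view is easy to verify numerically; a small sketch, continuing the NumPy session above:

    # y is the sum of the atoms weighted by the entries of x
    b1, b2, b3 = A[:, 0], A[:, 1], A[:, 2]
    y_check = -1.2 * b1 + 1.0 * b2 + 0.0 * b3
    print(np.allclose(y_check, A @ x))  # True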

3 Calculating Contribution

There are three atoms, which are

b_1 = \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix}; \quad b_2 = \begin{bmatrix} 0.8 \\ 0.6 \end{bmatrix}; \quad b_3 = \begin{bmatrix} 0 \\ -1 \end{bmatrix},

and

y = \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix}.

Therefore the contribution of each atom to y is

\langle b_1, y \rangle = \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix} \cdot \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} = -0.707 \cdot 1.65 + 0.707 \cdot (-0.25) = -1.34

and

\langle b_2, y \rangle = \begin{bmatrix} 0.8 \\ 0.6 \end{bmatrix} \cdot \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} = 1.17

and finally

\langle b_3, y \rangle = \begin{bmatrix} 0 \\ -1 \end{bmatrix} \cdot \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} = 0.25.

Obviously, atom b_1 gives the highest contribution, with the value of -1.34 (neglect the negative sign, or just use the absolute value). So in this first iteration, we select b_1 as the selected basis, and the coefficient of that basis is -1.34.

Of course, the dot products of all basis vectors can be calculated in a single step, w, which is

w = A^T \cdot y = \begin{bmatrix} -0.707 & 0.707 \\ 0.8 & 0.6 \\ 0 & -1 \end{bmatrix} \cdot \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} = \begin{bmatrix} -1.34 \\ 1.17 \\ 0.25 \end{bmatrix}.

Figure 1 shows the illustration of the basis vectors and y.
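In NumPy this single-step computation is one line (again a sketch, continuing the session above):

    # Contribution of every atom at once, then pick the largest in magnitude
    w = A.T @ y
    k = np.argmax(np.abs(w))
    print(w)        # approximately [-1.34, 1.17, 0.25]
    print(k, w[k])  # 0, -1.34: atom b1 is selected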

4 Calculating Residue

[Figure 1: Illustration of the basis vectors and y]

Now the first basis has been selected, which is b_1 = \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix}, with \lambda_1 = -1.34 as its contribution coefficient. If we take \lambda_1 \cdot b_1 away from y, the residue would be

r = y - \lambda_1 \cdot b_1 = \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} - (-1.34) \cdot \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix} = \begin{bmatrix} 0.7 \\ 0.7 \end{bmatrix}.

What does this residue mean to us? Well, let us see again the geometry as shown in Figure 2. As we can see, the residue is perpendicular to the contribution of the first selected basis.
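A quick numerical check of both the residue and its perpendicularity (continuing the sketch above):

    # Residue after removing the contribution of the selected atom
    lam1 = w[k]  # approximately -1.34
    r = y - lam1 * A[:, k]
    print(r)           # approximately [0.7, 0.7]
    print(A[:, k] @ r) # approximately 0: r is (almost) orthogonal to b1;
                       # it is not exactly 0 because 0.707 is a rounded 1/sqrt(2)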

5 Repeat the iteration

After the first iteration, we get the selected basis b_1. We collect this selected basis into a new matrix A_new. That is,

A_new = \begin{bmatrix} b_1 \end{bmatrix} = \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix}.

[Figure 2: Interpretation of the residue. The residue r is perpendicular to \lambda_1 \cdot b_1, the contribution of the selected basis to y.]

The contribution coefficient, which we rewrite as x_rec, is

x_rec = \begin{bmatrix} -1.34 \\ 0 \\ 0 \end{bmatrix}.

The value -1.34 is at the first element because it is the contribution from the first basis b_1. The residue is

r = \begin{bmatrix} 0.7 \\ 0.7 \end{bmatrix}.

Now we have to select, from the remaining basis vectors b_2 and b_3, the one that has the highest contribution to r. In a single step,

w = \begin{bmatrix} b_2 & b_3 \end{bmatrix}^T \cdot r = \begin{bmatrix} 0.8 & 0.6 \\ 0 & -1 \end{bmatrix} \cdot \begin{bmatrix} 0.7 \\ 0.7 \end{bmatrix} = \begin{bmatrix} 0.98 \\ -0.7 \end{bmatrix}.

So here b_2 contributes more, which is 0.98. We therefore add b_2 into A_new, so that

A_new = \begin{bmatrix} b_1 & b_2 \end{bmatrix} = \begin{bmatrix} -0.707 & 0.8 \\ 0.707 & 0.6 \end{bmatrix}.

Now, this step is different from MP. Here, we have to calculate the contribution of A_new to y (and not the contribution of b_2 to r!).

To solve this contribution problem, we have to solve the Least Squares Problem, which can be formulated as follows: which \lambda_1 and \lambda_2 will fulfill

\lambda_1 \cdot \begin{bmatrix} -0.707 \\ 0.707 \end{bmatrix} + \lambda_2 \cdot \begin{bmatrix} 0.8 \\ 0.6 \end{bmatrix} \quad \text{as close as possible to} \quad y = \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix}?

In mathematical terms, this formulation can be written as

\min_{\lambda} \| A_{new} \cdot \lambda - y \|_2.

In our case,

\min_{\lambda} \left\| \begin{bmatrix} -0.707 & 0.8 \\ 0.707 & 0.6 \end{bmatrix} \cdot \begin{bmatrix} \lambda_1 \\ \lambda_2 \end{bmatrix} - \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} \right\|_2.

The reader may be familiar with the fact that \min_{\lambda} \| A_{new} \cdot \lambda - y \|_2 is solved by

\lambda = A_{new}^{+} \cdot y,

where A_{new}^{+} is the pseudo-inverse of A_{new}, which is

A_{new}^{+} = (A_{new}^T \cdot A_{new})^{-1} \cdot A_{new}^T.

Again, in our case,

A_{new}^{+} = \begin{bmatrix} -0.707 & 0.8 \\ 0.707 & 0.6 \end{bmatrix}^{+} = \begin{bmatrix} -0.6062 & 0.8082 \\ 0.7143 & 0.7143 \end{bmatrix}.

Calculating the pseudo-inverse is easy in Matlab: just use the pinv() command.
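The same computation in NumPy, both via the explicit formula and via the library routine, which is the NumPy counterpart of Matlab's pinv (a self-contained sketch):

    import numpy as np

    # Pseudo-inverse of A_new, two equivalent ways
    A_new = np.array([[-0.707, 0.8],
                      [ 0.707, 0.6]])
    P_manual = np.linalg.inv(A_new.T @ A_new) @ A_new.T
    P_lib = np.linalg.pinv(A_new)
    print(np.allclose(P_manual, P_lib))  # True
    print(P_lib)  # approximately [[-0.6062, 0.8082], [0.7143, 0.7143]]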

After calculating A_{new}^{+}, we get

\lambda = A_{new}^{+} \cdot y = \begin{bmatrix} -0.6062 & 0.8082 \\ 0.7143 & 0.7143 \end{bmatrix} \cdot \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} = \begin{bmatrix} -1.2 \\ 1 \end{bmatrix}.

As we get \lambda, we update the coefficient vector x_rec to become

x_rec = \begin{bmatrix} -1.2 \\ 1 \\ 0 \end{bmatrix}.

Here we put the entries of \lambda in the first and second positions of x_rec because they correspond to the first and second basis vectors that have been selected.

After we get A_new and \lambda, we can now update the residue:

r_new = y - A_{new} \cdot \lambda = \begin{bmatrix} 1.65 \\ -0.25 \end{bmatrix} - \begin{bmatrix} -0.707 & 0.8 \\ 0.707 & 0.6 \end{bmatrix} \cdot \begin{bmatrix} -1.2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

Well, our residue is already zero. So we can stop here, or we can continue to the next step. (If we stop here, we save some computation work.)

6 Final iteration

This step is not necessary, as the residue has already vanished. (Many OMP software implementations require a parameter, the signal sparsity K. This is the clue for the software that it will iterate K times. After that it will stop, no matter how much residue is left.)

Our reconstructed signal is therefore

x_rec = \begin{bmatrix} -1.2 \\ 1 \\ 0 \end{bmatrix},

which is the same as the original signal.

7 Other example

Now we proceed with another example. Let

x = \begin{bmatrix} 0 \\ 3 \\ 1 \\ 2 \end{bmatrix} \quad \text{and} \quad A = \begin{bmatrix} -0.8 & 0.3 & 1 & 0.4 \\ -0.2 & 0.4 & -0.3 & -0.4 \\ 0.2 & 1 & -0.1 & 0.8 \end{bmatrix}.

Having x and A, y is calculated as

y = A \cdot x = \begin{bmatrix} -0.8 & 0.3 & 1 & 0.4 \\ -0.2 & 0.4 & -0.3 & -0.4 \\ 0.2 & 1 & -0.1 & 0.8 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 3 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2.7 \\ 0.1 \\ 4.5 \end{bmatrix}.

Now, given A and y, please find x using OMP!

Answer:

1. We have 4 basis vectors:

b_1 = \begin{bmatrix} -0.8 \\ -0.2 \\ 0.2 \end{bmatrix}; \quad b_2 = \begin{bmatrix} 0.3 \\ 0.4 \\ 1 \end{bmatrix}; \quad b_3 = \begin{bmatrix} 1 \\ -0.3 \\ -0.1 \end{bmatrix}; \quad b_4 = \begin{bmatrix} 0.4 \\ -0.4 \\ 0.8 \end{bmatrix}.

2. Since the basis vectors do not have unit length, we need to normalize them all:

\hat{b}_1 = b_1 / \|b_1\| = \begin{bmatrix} -0.8 \\ -0.2 \\ 0.2 \end{bmatrix} / \sqrt{(-0.8)^2 + (-0.2)^2 + 0.2^2} = \begin{bmatrix} -0.9428 \\ -0.2357 \\ 0.2357 \end{bmatrix}

\hat{b}_2 = b_2 / \|b_2\| = \begin{bmatrix} 0.3 \\ 0.4 \\ 1 \end{bmatrix} / \sqrt{0.3^2 + 0.4^2 + 1^2} = \begin{bmatrix} 0.268 \\ 0.3578 \\ 0.894 \end{bmatrix}

\hat{b}_3 = b_3 / \|b_3\| = \begin{bmatrix} 0.9535 \\ -0.286 \\ -0.0953 \end{bmatrix}

\hat{b}_4 = b_4 / \|b_4\| = \begin{bmatrix} 0.4082 \\ -0.4082 \\ 0.8165 \end{bmatrix}

3. The contribution of these normalized basis vectors can be calculated as

w = \hat{A}^T \cdot y = \begin{bmatrix} -0.9428 & -0.2357 & 0.2357 \\ 0.268 & 0.3578 & 0.894 \\ 0.9535 & -0.286 & -0.0953 \\ 0.4082 & -0.4082 & 0.8165 \end{bmatrix} \cdot \begin{bmatrix} 2.7 \\ 0.1 \\ 4.5 \end{bmatrix} = \begin{bmatrix} -1.5085 \\ 4.7852 \\ 2.1167 \\ 4.7357 \end{bmatrix}.

4. The second normalized basis \hat{b}_2 gives the highest contribution. Therefore we take the second (unnormalized) basis b_2 and put it into A_new:

A_new = \begin{bmatrix} b_2 \end{bmatrix} = \begin{bmatrix} 0.3 \\ 0.4 \\ 1 \end{bmatrix}.

5. Calculate x_rec from the solution of the Least Squares Problem:

L_p = A_{new}^{+} \cdot y = 4.28.

Since this L_p is the coefficient for basis 2, we write

x_rec = \begin{bmatrix} 0 \\ 4.28 \\ 0 \\ 0 \end{bmatrix}.

6. Next we calculate the residue:

r = y - A_{new} \cdot L_p = \begin{bmatrix} 2.7 \\ 0.1 \\ 4.5 \end{bmatrix} - \begin{bmatrix} 0.3 \\ 0.4 \\ 1 \end{bmatrix} \cdot 4.28 = \begin{bmatrix} 1.416 \\ -1.612 \\ 0.22 \end{bmatrix}.

7. Now we repeat step 2. We calculate which of the basis vectors \hat{b}_1, \hat{b}_3, and \hat{b}_4 gives the highest contribution to r:

w = \begin{bmatrix} -0.9428 & -0.2357 & 0.2357 \\ 0.9535 & -0.286 & -0.0953 \\ 0.4082 & -0.4082 & 0.8165 \end{bmatrix} \cdot \begin{bmatrix} 1.416 \\ -1.612 \\ 0.22 \end{bmatrix} = \begin{bmatrix} -0.9032 \\ 1.7902 \\ 1.4158 \end{bmatrix}.

The second entry, which corresponds to b_3, gives the highest contribution, 1.7902.

8. Add the selected b_3 (not \hat{b}_3!) into A_new:

A_new = \begin{bmatrix} b_2 & b_3 \end{bmatrix} = \begin{bmatrix} 0.3 & 1 \\ 0.4 & -0.3 \\ 1 & -0.1 \end{bmatrix}.

9. Solve the Least Squares Problem for L_p:

L_p = A_{new}^{+} \cdot y = \begin{bmatrix} 4.1702 \\ 1.7149 \end{bmatrix}.

We know that this L_p is for b_2 and b_3; therefore we write

x_rec = \begin{bmatrix} 0 \\ 4.1702 \\ 1.7149 \\ 0 \end{bmatrix}.

10. Next we calculate the residue:

r = y - A_{new} \cdot L_p = \begin{bmatrix} 2.7 \\ 0.1 \\ 4.5 \end{bmatrix} - \begin{bmatrix} 0.3 & 1 \\ 0.4 & -0.3 \\ 1 & -0.1 \end{bmatrix} \cdot \begin{bmatrix} 4.1702 \\ 1.7149 \end{bmatrix} = \begin{bmatrix} -0.266 \\ -1.0536 \\ 0.5012 \end{bmatrix}.

11. Now we repeat step 2 for the final iteration.

12. Calculate the highest contribution from either \hat{b}_1 or \hat{b}_4:

w = \begin{bmatrix} \hat{b}_1 & \hat{b}_4 \end{bmatrix}^T \cdot r = \begin{bmatrix} -0.9428 & -0.2357 & 0.2357 \\ 0.4082 & -0.4082 & 0.8165 \end{bmatrix} \cdot \begin{bmatrix} -0.266 \\ -1.0536 \\ 0.5012 \end{bmatrix} = \begin{bmatrix} 0.6172 \\ 0.7308 \end{bmatrix}.

The second entry, which corresponds to b_4, gives the highest contribution, 0.7308.

13. Then collect this basis into the already available A_new:

A_new = \begin{bmatrix} b_2 & b_3 & b_4 \end{bmatrix} = \begin{bmatrix} 0.3 & 1 & 0.4 \\ 0.4 & -0.3 & -0.4 \\ 1 & -0.1 & 0.8 \end{bmatrix}.

14. Solve the Least Squares Problem for L_p:

L_p = A_{new}^{+} \cdot y = \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix}.

We know that this L_p is for b_2, b_3, and b_4; therefore we write

x_rec = \begin{bmatrix} 0 \\ 3 \\ 1 \\ 2 \end{bmatrix}.

15. Next we calculate the residue:

r = y - A_{new} \cdot L_p = \begin{bmatrix} 2.7 \\ 0.1 \\ 4.5 \end{bmatrix} - \begin{bmatrix} 0.3 & 1 & 0.4 \\ 0.4 & -0.3 & -0.4 \\ 1 & -0.1 & 0.8 \end{bmatrix} \cdot \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.

16. Our iteration can be stopped here, as the residue is already zero. The reconstructed x in this case is

x_rec = \begin{bmatrix} 0 \\ 3 \\ 1 \\ 2 \end{bmatrix},

which is identical to the original x (a numerical check of this final step follows below).
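The final least-squares solve and the vanishing residue are easy to verify numerically (a self-contained sketch with our own variable names):

    import numpy as np

    A = np.array([[-0.8, 0.3,  1.0,  0.4],
                  [-0.2, 0.4, -0.3, -0.4],
                  [ 0.2, 1.0, -0.1,  0.8]])
    y = np.array([2.7, 0.1, 4.5])

    A_new = A[:, [1, 2, 3]]               # selected atoms b2, b3, b4
    Lp = np.linalg.pinv(A_new) @ y
    print(Lp)                             # approximately [3, 1, 2]
    print(np.allclose(y - A_new @ Lp, 0)) # True: the residue vanishes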

8 Final notes

As we saw in the illustrated process above, we may note the following facts:

• The highest contribution in OMP is calculated from the normalized basis vectors, not from the original unnormalized basis vectors.
• If all basis vectors in the measurement or reconstruction matrix A have already been normalized, then the calculation can proceed without the normalization step.
• The contribution of each basis vector is computed as the dot product of the residue and the normalized basis vector.
• In the case of MP, the reconstruction coefficient x_rec is calculated from the dot product of basis and residue, while in OMP, the coefficient x_rec is calculated from the Least Squares solution of A_new to y. The solution of this Least Squares Problem is L_p, and L_p directly indicates x_rec with the entries in the proper positions. A_new itself is collected from the already selected unnormalized basis vectors. This process needs time; therefore OMP is slower than MP.
• The residue r is calculated from the original y, together with A_new and L_p.
• The iteration continues as many times as the number of rows in A (i.e., M). Or, if the sparsity of the signal k is known, then the iteration repeats k times. If k < M, then knowing k makes the computation faster. However, if k is unknown, the iteration should run up to M.

9 Mathematical formal algorithm for OMP

Now, after we understand the previous process, we can state the following steps as the OMP algorithm.

OMP algorithm:

1. Input: sensing matrix A, compressed vector y, and (optionally) the sparsity K.
2. Initialize the residue r = y, the selected-index set S = {}, and x_rec = 0.
3. Normalize the columns of A to obtain the atoms \hat{b}_i.
4. Find the atom \hat{b}_i (i not in S) with the largest |\langle \hat{b}_i, r \rangle| and add i to S.
5. Form A_new from the original (unnormalized) columns of A indexed by S.
6. Solve the Least Squares Problem L_p = A_{new}^{+} \cdot y.
7. Update the residue r = y - A_{new} \cdot L_p.
8. If r = 0, or the number of iterations has reached K (or M), stop; otherwise go to step 4.
9. Output: x_rec, with the entries of L_p placed at the positions indexed by S.
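Below is a compact NumPy sketch of these steps (our own code and naming, not part of the original tutorial); it reproduces the second worked example:

    import numpy as np

    def omp(A, y, K=None, tol=1e-10):
        """Sketch of OMP as described above. A: (M, N) sensing matrix,
        y: (M,) compressed vector, K: sparsity (defaults to M)."""
        M, N = A.shape
        K = M if K is None else K
        norms = np.linalg.norm(A, axis=0)   # for normalized contributions
        r = y.copy()
        S = []                              # indices of selected atoms
        x_rec = np.zeros(N)
        for _ in range(K):
            w = (A / norms).T @ r           # contributions of normalized atoms
            w[S] = 0                        # do not reselect an atom
            S.append(int(np.argmax(np.abs(w))))
            A_new = A[:, S]                 # unnormalized selected atoms
            Lp = np.linalg.pinv(A_new) @ y  # least-squares coefficients
            r = y - A_new @ Lp              # residue update
            if np.linalg.norm(r) < tol:
                break
        x_rec[S] = Lp
        return x_rec

    # Reconstruct the second example: expect [0, 3, 1, 2]
    A = np.array([[-0.8, 0.3,  1.0,  0.4],
                  [-0.2, 0.4, -0.3, -0.4],
                  [ 0.2, 1.0, -0.1,  0.8]])
    y = np.array([2.7, 0.1, 4.5])
    print(omp(A, y))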

10 Weakness of OMP

OMP is fast, very fast compared to convex optimization. This is an advantage of greedy algorithms. But OMP suffers when the matrix A has high coherency. What is coherency in a matrix? Coherency in a matrix A simply indicates the similarity between two columns of A. Look at the following two matrices:

A_1 = \begin{bmatrix} 0.6 & 0.8 & 1 \\ 0.8 & 0.6 & 0 \end{bmatrix} \quad \text{and} \quad A_2 = \begin{bmatrix} 0.6 & 0.61 & 1 \\ 0.8 & 0.79 & 0 \end{bmatrix}.

Matrix A_2 has a very high coherency, because its second column is very similar to its first column. Matrix A_1 has lower coherency, because its second column is not very similar to the first and third columns. The coherency value, denoted by \mu, is defined as

\mu = \max_{i,j;\, i \neq j} |\langle A(:,i), A(:,j) \rangle|.

The value of \mu is between 0 and 1 (for normalized columns). If \mu is very high, then OMP usually gives a wrong reconstruction result.
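A small sketch for computing \mu of the two example matrices (the function name coherence is ours; columns are normalized first, which here they essentially already are):

    import numpy as np

    def coherence(A):
        # Normalize columns, then take the largest off-diagonal |inner product|
        An = A / np.linalg.norm(A, axis=0)
        G = np.abs(An.T @ An)
        np.fill_diagonal(G, 0)
        return G.max()

    A1 = np.array([[0.6, 0.8,  1.0], [0.8, 0.6,  0.0]])
    A2 = np.array([[0.6, 0.61, 1.0], [0.8, 0.79, 0.0]])
    print(coherence(A1))  # 0.96
    print(coherence(A2))  # approximately 0.9999: nearly identical columns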

Try the following example:

x = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}, \quad A = A_2 = \begin{bmatrix} 0.6 & 0.61 & 1 \\ 0.8 & 0.79 & 0 \end{bmatrix}.

Find y. Then, given A and y, use OMP to find x back! Can you get the original x?

11 Cite this document

If you find this tutorial helpful, please cite it in your report as:

Usman, Koredianto (2017), Introduction to Orthogonal Matching Pursuit, Telkom University. Online: http://korediantousman.staff.telkomuniversity.ac.id, accessed: your access time.
