between patches, y_p is a vector containing the DEM values in the overlap region. Adding the regularization term that minimizes the discrepancy between overlapping patches to Eq. (4), the final formulation can be expressed as
\min_{\alpha} \; \sum_{k=1}^{N} w_k \left\| O L M_k D \alpha - y_k \right\|_2^2 + \beta \left\| P O L M D \alpha - y_p \right\|_2^2 + \lambda \left\| \alpha \right\|_1    (6)
where the parameter β controls the influence of the patch overlap factor.
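To make the role of each term in Eq. (6) concrete, the following minimal sketch (illustrative only, with hypothetical operator matrices A_k standing in for the composed O L M_k; it is not the authors' implementation) evaluates the objective for a candidate sparse code.

import numpy as np

def fusion_objective(alpha, D, sources, P, y_p, beta, lam):
    """Evaluate an Eq. (6)-style objective for a candidate sparse code.

    alpha   : sparse coefficient vector over the dictionary atoms
    D       : dictionary (atoms as columns)
    sources : list of (w_k, A_k, y_k) tuples, where A_k is a matrix standing
              for the composed operator O L M_k of source DEM k (an assumed
              materialisation) and y_k the observed DEM values
    P       : matrix extracting the overlap region from a reconstructed patch
    y_p     : DEM values previously reconstructed in the overlap region
    beta    : overlap-consistency weight
    lam     : sparsity weight
    """
    x = D @ alpha                         # reconstructed DEM patch
    data_term = sum(w_k * np.sum((A_k @ x - y_k) ** 2)
                    for (w_k, A_k, y_k) in sources)
    overlap_term = beta * np.sum((P @ x - y_p) ** 2)
    sparsity_term = lam * np.sum(np.abs(alpha))
    return data_term + overlap_term + sparsity_term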
3.2 Dictionary construction
Learning an over-complete dictionary capable of representing the classified DEM patches is of great importance. In this paper, the raw patches are randomly sampled, similar to the approach used in Yang et al. (2008). The sampled DEM patches are then classified into several groups based on their feature vectors. Since K-SVD is one of the most popular dictionary learning algorithms, the sub-dictionaries are trained with K-SVD. As the structures of the DEM patches in each cluster are similar, the sub-dictionary learning scheme yields a more accurate structural description of the input DEM patches. The details of the K-SVD-based sub-dictionary learning and combination are as follows:
Step 1: Sub-dictionaries S_1, S_2, ..., S_n are learned, one for each group of DEM patches.
Step 2: The sub-dictionary of each cluster is trained with the K-SVD algorithm, and the sub-dictionaries are then combined into a single dictionary for fusion:
D = [S_1, S_2, \ldots, S_n]    (7)

where D is the combined dictionary and S_1, S_2, ..., S_n are the trained sub-dictionaries.
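As an illustration of this two-step scheme (not the authors' code), the sketch below trains one sub-dictionary per patch cluster and stacks them as in Eq. (7). scikit-learn's MiniBatchDictionaryLearning is used here as a convenient stand-in for K-SVD, and the grouping labels are assumed to come from the feature-based classification step.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def build_combined_dictionary(patches, labels, atoms_per_group=64):
    """Train one sub-dictionary per patch cluster and stack them.

    patches : (n_patches, patch_dim) array of vectorised DEM patches
    labels  : (n_patches,) cluster index per patch from the feature-based
              classification step
    Returns a (patch_dim, total_atoms) dictionary with atoms as columns.
    """
    sub_dictionaries = []
    for group in np.unique(labels):
        group_patches = patches[labels == group]
        # Stand-in learner; the paper trains this step with K-SVD.
        learner = MiniBatchDictionaryLearning(n_components=atoms_per_group,
                                              transform_algorithm='omp')
        learner.fit(group_patches)
        # components_ stores atoms as rows; transpose so atoms are columns.
        sub_dictionaries.append(learner.components_.T)
    # Eq. (7): D = [S_1, S_2, ..., S_n]
    return np.hstack(sub_dictionaries)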
3.3 Adaptive-weight determination for DEM fusion
The fusion is completed with weight maps that reflect the estimated relative accuracy of the source DEMs at each grid point. First, geometric registration is performed for the datasets. The transformation matrix M_k can be obtained after registration, and the cropping region can also be derived from the coordinates. A data-driven strategy is used to derive the weights from geomorphological characteristics, similar to the approach used in Boufounos et al. (2011). Based on slope and entropy, the accuracy maps of both input DEMs at each overlapping point can be determined; finally, the reciprocal values are used as weights for the fusion (Papasaika et al., 2011).
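A minimal sketch of this weighting idea, assuming regularly gridded DEM arrays and a simple linear slope/entropy error proxy (the exact functional form is an assumption, not taken from the paper), is given below.

import numpy as np
from scipy.ndimage import generic_filter

def local_entropy(dem, window=5, bins=16):
    """Shannon entropy of DEM values in a sliding window."""
    def _entropy(values):
        hist, _ = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return generic_filter(dem, _entropy, size=window)

def fusion_weights(dem, cellsize, a_slope=1.0, a_entropy=1.0, eps=1e-6):
    """Per-cell fusion weight as the reciprocal of a slope/entropy error proxy."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    entropy = local_entropy(dem)
    # Hypothetical error proxy: steeper, more complex terrain -> larger error.
    predicted_error = eps + a_slope * slope + a_entropy * entropy
    return 1.0 / predicted_error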
3.4 Fusion of DEMs
Since optimization problems of this form constitute the main computational kernel of compressed sensing applications, a wide range of algorithms exists for their solution. Because of its simplicity and computational efficiency, Orthogonal Matching Pursuit (OMP) (Mallat, 1998) is adopted. In the experiments, the overlap parameter β is set between 0.6 and 1.2, and the number of non-zero atoms is set between 7 and 15. The minimum patch size is 3×3, and the patch size should not exceed 9×9.
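For illustration, a DEM patch can be sparse-coded over the combined dictionary with an off-the-shelf OMP solver as sketched below; scikit-learn's OrthogonalMatchingPursuit is used as a substitute for a custom implementation, with the number of non-zero atoms taken from the range quoted above.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_code_patch(D, y, n_nonzero=10):
    """Find a sparse code alpha such that D @ alpha approximates patch y.

    D : (patch_dim, n_atoms) combined dictionary, atoms as columns
    y : (patch_dim,) vectorised DEM patch observation
    n_nonzero : number of non-zero atoms (7-15 in the experiments)
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, y)
    alpha = omp.coef_
    return alpha, D @ alpha   # sparse code and reconstructed patch

# Toy usage: reconstruct a 3x3 patch (the minimum patch size).
rng = np.random.default_rng(0)
D = rng.standard_normal((9, 128))
D /= np.linalg.norm(D, axis=0)        # normalise dictionary atoms
y = rng.standard_normal(9)
alpha, y_hat = sparse_code_patch(D, y, n_nonzero=7)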
4. RESULTS AND DISCUSSION
4.1 Landing site mapping using CE-3 descent images
CE-3 began to descend from lunar orbit at an altitude of around 15 km, and when it was about 2 km above the lunar surface, the descent camera started to take images. During the descending, hovering, obstacle-avoidance and landing phases, the descent camera acquired a total of 4,672 images, with a resolution higher than 1 m within an area of 1 km × 1 km and as high as 0.1 m within a range of 50 m from the landing point. The main technical parameters of the CE-3 descent camera are listed in Table 1. One hundred and eighty images were selected at equal time intervals and incorporated in a self-calibration free-network bundle adjustment, and the initial trajectory, including the positions and attitudes of the camera, was recovered. Then, 26 ground control points (GCPs) were selected from the rectified Chang'e-2 DEM and digital orthophoto map (DOM) for absolute orientation. The root mean square errors (RMSEs) of these GCPs are 0.724 m, 0.717 m and 0.602 m in the three directions. The RMSEs of 18 check points are also less than 1 m. Figure 1 shows the DEM and DOM generated from 80 descent images with a resolution of 0.075 m. The dots in the maps represent the lander position. The maps cover an area of 97 m × 115 m. Using more descent images acquired at higher altitudes, topographic products of larger coverage were also generated.
Image size: 1024×1024 pixels
Actual imaging distance: 4 m~2000 m
Focal length: 8.3 mm
Pixel size: 6.7 μm

Table 1. Technical parameters of the CE-3 descent camera
Figure 1. DEM (right) and DOM (left) generated from the descent images

Stereo base: 27 cm
Focal length: 1189 pixels
Image size: 1024×1024 pixels
Field of view: 46.4°×46.4°

Table 2. Parameters of the Yutu Navcam camera
Figure 2. DEM generated from Navcam images at site D
4.2 Mapping with Navcam images