An approach for generating oblique surface models for the extraction of geoinformation is presented in Wieden et al. (2013). Following this approach, the oriented oblique images are pre-processed into tilted DSMs. Semi-Global Matching (SGM), which was initially designed for the creation of 2.5D data from High Resolution Stereo Camera images, is applied to the oblique data as well. The result is a set of 2.5D surface models of the same scene from different cardinal directions (see Table A1). In combination, all pixels from the surface models are transformed into genuine 3D points and merged into a single point cloud (see Figure 1). Since the synthetic building is the only elevated object within the synthetic scene, the surface models are comparable to a normalized surface model (nDSM), which could be generated by dedicated algorithms (Mayer, 2004; Piltz et al., 2016). That means the relative height of the building is contained in the absolute height values of the normalized surface models and the 3D point cloud. For the real-world scene, such an nDSM algorithm is already applied. Besides a ground/no-ground differentiation, a polygon is also generated, which roughly describes the building outline. Both approaches only take the area inside such a polygon into account.
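Restricting the processing to the area inside such an outline polygon amounts to a point-in-polygon test per 2D position. A minimal pure-Python ray-casting sketch (the polygon and test points are made up for illustration, not taken from the datasets):

```python
def inside_polygon(x, y, polygon):
    """Ray-casting test: count crossings of a horizontal ray from (x, y)
    with the polygon edges; an odd count means the point lies inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the ray's y level?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical building outline (a unit square)
outline = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(inside_polygon(0.5, 0.5, outline))  # True
print(inside_polygon(1.5, 0.5, outline))  # False
```

In practice a geometry library would be used for robustness (points exactly on edges, self-intersections), but the filtering step itself is no more than this test applied to every point's horizontal position.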
Figure 1. Colored synthetic point cloud (left) and colored point cloud of a building from the Heligoland dataset (right)
3. METHODOLOGY
The presented point cloud based approach has already been used for reconstructing roof structures with the help of a priori knowledge from cadastral maps (Dahlke et al., 2015). This published approach lacks the capability of reconstructing vertical structures like façades or vertical roof structures. This drawback is overcome by an improvement which is presented in the next section. The image processing based approach has milestones similar to those of the point cloud based one. Both start by defining planes which describe the facades and the roof of the building. Secondly, the topology is recovered by defining edges between the planes. Finally, coordinates for the nodes are calculated by intersecting at least three planes.
3.1 Point Cloud Based Approach
The workflow for building reconstruction from 3D point clouds is separated into three steps: identification of cells defining locally coplanar regions, merging of locally coplanar regions into so-called building planes, and generation of the geometrical model by intersection of the building planes. The idea is to split the 3D point cloud into small spatial areas of a defined size, called cells. For each cell of the grid a linear regression after Pearson (1901) is performed. The regression window of each cell is defined by its 3x3x3 neighborhood, and the local regression plane is obtained from the points in this neighborhood. The point cloud is given in Eq. (1); its matrix of covariances is the 3x3 matrix given in Eq. (2).

P = \{p_1, p_2, \ldots, p_n\}, \quad p_i = (x_i, y_i, z_i) \in \mathbb{R}^3 \qquad (1)

\mathrm{cov}(P) :=
\begin{pmatrix}
\mathrm{cov}(x,x) & \mathrm{cov}(x,y) & \mathrm{cov}(x,z) \\
\mathrm{cov}(y,x) & \mathrm{cov}(y,y) & \mathrm{cov}(y,z) \\
\mathrm{cov}(z,x) & \mathrm{cov}(z,y) & \mathrm{cov}(z,z)
\end{pmatrix} \qquad (2)

where:
cov(P) = matrix of covariances
cov = covariance
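The matrix of Eq. (2) can be computed directly with NumPy; a minimal sketch, with made-up point coordinates standing in for one cell's regression window:

```python
import numpy as np

# Hypothetical 3D points of one regression window (n x 3 array)
points = np.array([
    [0.0, 0.0, 0.10],
    [1.0, 0.0, 0.00],
    [0.0, 1.0, 0.05],
    [1.0, 1.0, 0.12],
    [0.5, 0.5, 0.03],
])

# np.cov expects variables in rows and observations in columns,
# hence the transpose; the result is the 3x3 matrix of Eq. (2).
C = np.cov(points.T)
print(C.shape)  # (3, 3)
```

The covariance matrix is symmetric by construction, which is what makes the eigendecomposition in the next step well behaved.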
Three eigenvalues and eigenvectors are calculated for the covariance matrix. The eigenvector corresponding to the smallest eigenvalue is the normal vector n of the regression plane. This plane passes through the center of gravity of the points from the regression window of the considered cell. The square roots of the eigenvalues define the standard deviations in the directions of their corresponding eigenvectors. The ratio between the smallest standard deviation, in the direction of the normal vector of the regression plane, and the sum of the other two standard deviations is here defined as the degree of coplanarity:
c := \frac{\sigma_n}{\sigma_1 + \sigma_2} \qquad (3)

where:
c = degree of coplanarity
\sigma_n = standard deviation in the direction of the normal vector
\sigma_1, \sigma_2 = standard deviations in the directions of the two other eigenvectors
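The per-cell test can be sketched with NumPy's symmetric eigendecomposition; the function name, the sample data, and the implied threshold below are illustrative, not part of the published method:

```python
import numpy as np

def coplanarity(points):
    """Return the regression-plane normal and the degree of coplanarity
    c = sigma_n / (sigma_1 + sigma_2) for an (n x 3) array of 3D points."""
    C = np.cov(points.T)                  # 3x3 covariance matrix, Eq. (2)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                # eigenvector of smallest eigenvalue
    sigma = np.sqrt(eigvals)              # std. deviations along eigenvectors
    c = sigma[0] / (sigma[1] + sigma[2])  # Eq. (3)
    return normal, c

# Nearly coplanar sample points: spread in x and y, small noise in z
rng = np.random.default_rng(0)
pts = rng.random((30, 3))
pts[:, 2] = 0.01 * rng.standard_normal(30)

normal, c = coplanarity(pts)
print(f"degree of coplanarity: {c:.3f}")
```

For this nearly flat point set the normal comes out close to the z-axis and c is small, so the cell would pass a coplanarity threshold.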
Cells with a degree of coplanarity lower than a certain threshold are denoted as coplanar cells. Coplanar cells are then merged into groups defining the so-called building planes. A plane cell is added to a group of plane cells if it has a neighboring cell belonging to this group and if its normal vector is similar to the normal vector of the group. The difference of the normal vectors is measured by calculating the angle between them. The normal of the building plane is defined as the mean of all normal vectors of a group of plane cells. The result of the merge operation is shown in Figure 2. Finally, a geometrical model of the whole building is generated. There are two different kinds of intersection points
which can be detected: 3-nodes, where three planes form a node, and 4-nodes, where four planes form one. To detect 3-nodes, all 3-combinations of the set of planes are computed, and for each combination an intersection point is calculated. The intersection point is valid and denoted as a 3-node if it lies close to each plane of its triple combination. If two valid intersection points lie very close to each other, they are merged into a 4-node. The reconstructed building is shown in Figure 2.
Figure 2. Merged groups (top) and reconstructed result (bottom)
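The intersection point of three planes can be computed by solving a 3x3 linear system. A minimal sketch, assuming each plane is given in Hesse-like form n . x = d with normal n and offset d (the example plane parameters are hypothetical):

```python
import numpy as np

def intersect_three_planes(normals, offsets):
    """Solve n_i . x = d_i, i = 1..3, for the common point of three planes.
    Raises numpy.linalg.LinAlgError if the planes are (near-)parallel."""
    N = np.asarray(normals, dtype=float)  # 3x3 matrix: one plane normal per row
    d = np.asarray(offsets, dtype=float)  # the three plane offsets
    return np.linalg.solve(N, d)

# Hypothetical example: the planes x = 1, y = 2, z = 3 meet at (1, 2, 3)
node = intersect_three_planes(
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    [1, 2, 3],
)
print(node)  # [1. 2. 3.]
```

The validity check described above would then measure the distance of this point to each of the three planes and accept it as a 3-node only if all distances stay below a tolerance.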
This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XLI-B3-599-2016
3.2 Image Processing Based Approach