Figure 2. The five components of the processing flow
2.1 Viewpoint decision for point-based rendering
Viewpoints for point-based rendering are selected from the point-cloud data in two steps. In the first step, an ortho binary image is generated from the point cloud to represent a rough floor surface as a viewpoint candidate area. In the next step, the orthoimage is eroded with morphological processing to generate a viewpoint candidate network. Intersections on the network are selected as the viewpoints for point-based rendering.
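The following Python sketch illustrates the two steps. The grid cell size, the floor height band, and the use of skeletonization as the thinning operation are illustrative assumptions rather than parameters of the proposed method.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def viewpoint_candidates(points, cell=0.25, floor_band=0.3):
    """Select viewpoint candidates from an (N, 3) point cloud (sketch).

    cell (grid size in meters) and floor_band (height tolerance for
    floor points) are illustrative values, not values from this study.
    """
    # Step 1: ortho binary image representing the rough floor surface.
    floor = points[points[:, 2] < points[:, 2].min() + floor_band]
    ij = np.floor((floor[:, :2] - floor[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True

    # Step 2: erode the orthoimage and thin it to a candidate network.
    eroded = ndimage.binary_erosion(grid, iterations=2)
    network = skeletonize(eroded)

    # Intersections: skeleton pixels with three or more skeleton
    # neighbors in their 8-neighborhood become the viewpoints.
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbors = ndimage.convolve(network.astype(int), kernel, mode='constant')
    return np.argwhere(network & (neighbors >= 3))
```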
2.2 Point-based rendering
Point-cloud visualization has two issues. The first is the near-far problem caused by distance differences between the viewpoint
and the scanned points. The second is the transparency effect caused by rendering hidden points among near-side points.
These effects degrade the quality of a point-cloud visualization. Splat-based ray tracing (Linsen et al., 2007) is a methodology that generates a photorealistic curved surface in a panoramic view using normal vectors from point-cloud data. However, the long processing time required for surface generation in the 3D work space is a problem. Furthermore, the curved-surface description is inefficient for representing urban and natural objects as Geographical Information System data. Thus, we have applied a point-based rendering application with a simpler filtering algorithm (Nakagawa, 2013) to generate panoramic range images from a random point cloud. The processing flow of point-based rendering is described in Figure 3.
Figure 3. Point-based rendering

First, the point cloud is projected from 3D space to panorama
space. This transformation simplifies viewpoint translation, filtering, and point-cloud browsing. The panorama space can be
represented by a spherical, hemispherical, cylindrical, or cubic model. Here, the cylindrical model is described for wall
modeling. The measured point data are projected onto a cylindrical surface, and can be represented as range data. The
range data can preserve measured point data, such as depth and X, Y, and Z values, as well as processed data, in the panorama space in a
multilayer style. Azimuth angles and relative heights from the viewpoint to the measured points can be calculated using 3D
vectors generated from the view position and the measured points. When azimuth angles and relative heights are converted
to column counts and row counts in the range data with adequate spatial angle resolution, a cylindrical panorama image
can be generated from the point cloud.
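The projection can be illustrated with the following Python sketch; the angular resolution, the height range of the cylinder, and the four-layer (depth, X, Y, Z) layout are assumptions made for the example.

```python
import numpy as np

def cylindrical_range_image(points, viewpoint, cols=3600, rows=1200,
                            height_range=(-10.0, 10.0)):
    """Project an (N, 3) point cloud onto a cylindrical panorama (sketch).

    cols and rows fix the spatial angle resolution; height_range bounds
    the relative heights mapped to rows. All values are illustrative.
    """
    v = points - viewpoint                    # 3D vectors viewpoint -> points
    azimuth = np.arctan2(v[:, 1], v[:, 0])    # azimuth angle per point
    col = ((azimuth + np.pi) / (2.0 * np.pi) * cols).astype(int) % cols
    z0, z1 = height_range                     # relative heights -> rows
    row = ((v[:, 2] - z0) / (z1 - z0) * rows).astype(int)
    ok = (row >= 0) & (row < rows)

    depth = np.linalg.norm(v, axis=1)
    image = np.full((rows, cols, 4), np.nan)  # multilayer: depth, X, Y, Z
    # Keep the nearest point per pixel by writing far points first.
    order = np.argsort(depth[ok])[::-1]
    r, c = row[ok][order], col[ok][order]
    image[r, c, 0] = depth[ok][order]
    image[r, c, 1:] = points[ok][order]
    return image
```

Writing far points before near points lets the nearest point win at each pixel, which corresponds to the depth filtering described next.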
Second, the generated range image is filtered to fill in missing points in the rendered result using the distance values between the viewpoint and the objects. Two types of filtering are performed in the point-based rendering. The first is depth filtering with overwriting of occluded points. The second is the generation of new points in the no-data spaces of the range image. New points are generated with the point-tracking filter developed in this study.
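For illustration only, a generic neighborhood-based fill of no-data pixels is sketched below; it is a simple stand-in and does not reproduce the point-tracking filter. The window size is an assumed parameter.

```python
import numpy as np

def fill_no_data(range_image, window=1):
    """Fill no-data pixels of a range image from nearby depths (sketch).

    Generic stand-in, not the point-tracking filter: each no-data pixel
    takes the nearest (minimum) depth found in its (2 * window + 1)^2
    neighborhood, if any neighbor holds data.
    """
    depth = range_image[:, :, 0]
    filled = depth.copy()
    rows, cols = depth.shape
    for r, c in zip(*np.where(np.isnan(depth))):
        r0, r1 = max(r - window, 0), min(r + window + 1, rows)
        c0, c1 = max(c - window, 0), min(c + window + 1, cols)
        patch = depth[r0:r1, c0:c1]
        if np.any(~np.isnan(patch)):
            filled[r, c] = np.nanmin(patch)
    return filled
```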
Moreover, a normal vector for each point is estimated in the range image. Normal vector estimation is often applied to extract features in point-cloud processing. Generally, three
points are selected in the point cloud to generate a triangle patch for normal vector estimation. Mesh generation is the basic
preprocessing step in this procedure. In 2D image processing, the Delaunay division is a popular algorithm. It can also be
applied to 3D point-cloud processing with millions of points (Chevallier et al., 2011). However, with the Delaunay division, it is hard to generate triangle patches for more than hundreds of millions of points without a high-speed computing environment (Fabio, 2003; Böhm et al., 2006). Thus, we focused on our point-based rendering, which restricts the visible point-cloud data to a 2D image. Close-point detection and topology assignment can then be processed as 2D image operations, as shown in the lower-right image in Figure 2.

The processing flow of normal vector estimation is described
below. First, a point and its neighbors in the range image are selected. Second, triangulation is applied to these points as
vertices to generate faces. Then, the normal vector of each triangle is estimated using the 3D coordinate values of its vertices. In this research, the average of these triangle normal vectors is used as the normal vector of a point, because the point cloud was taken with a laser scanner, which has difficulty measuring edges and corners clearly. These procedures are iterated to estimate the normal vectors of all points.
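One iteration of this flow can be sketched as follows, assuming the four-layer range image from the projection sketch above and an interior pixel; the triangulation over the 4-neighborhood simplifies the actual procedure.

```python
import numpy as np

def point_normal(range_image, r, c):
    """Estimate the normal of the point at pixel (r, c) (sketch).

    Assumes the (depth, X, Y, Z) layer layout used above and an
    interior pixel; triangles are formed from the center point and
    consecutive 4-neighbors, and their normals are averaged.
    """
    def xyz(rr, cc):
        p = range_image[rr, cc, 1:4]
        return None if np.any(np.isnan(p)) else p

    center = xyz(r, c)
    if center is None:
        return None
    # 4-neighbors in clockwise order; consecutive pairs close triangles.
    ring = [xyz(r - 1, c), xyz(r, c + 1), xyz(r + 1, c), xyz(r, c - 1)]
    normals = []
    for a, b in zip(ring, ring[1:] + ring[:1]):
        if a is None or b is None:
            continue
        n = np.cross(a - center, b - center)   # triangle normal
        length = np.linalg.norm(n)
        if length > 0:
            normals.append(n / length)
    if not normals:
        return None
    n = np.sum(normals, axis=0)                # average direction
    length = np.linalg.norm(n)
    return n / length if length > 0 else None
```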
2.3 Normal vector clustering for surface estimation