the combined DSM the bridge is flat and the boulder has the correct dimensions.
Figure 10: Section of the Combined DSM

Figure 11 shows the differences between the TLS DSM and the UAV DSM.
Figure 11: DSM Differences

In most parts the differences are smaller than 0.5 metres. The larger differences stem mostly from overhanging boulders, where either the UAV or the laser scanner could not acquire any data, so that these regions had to be interpolated. In the middle of Figure 11 a seam is visible which marks a larger difference. It originates from the mosaicking that was carried out to combine the eastern and western parts of the UAV model. To the right of this seam there are further larger differences, located in the area of the small river. Water surfaces are difficult to match in the UAV images and also produce higher noise in the laser scans.
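As an illustration of this comparison step, the following sketch computes such a difference raster for two co-registered DSM grids held as NumPy arrays; the array names and the nodata value are assumptions for the example, not part of the processing chain described above.

import numpy as np

NODATA = -9999.0  # assumed nodata marker; adjust to the actual rasters

def dsm_difference(dsm_tls, dsm_uav, nodata=NODATA):
    """Per-cell difference (TLS minus UAV) of two co-registered DSM
    grids with identical shape and resolution."""
    valid = (dsm_tls != nodata) & (dsm_uav != nodata)
    diff = np.full(dsm_tls.shape, np.nan)
    diff[valid] = dsm_tls[valid] - dsm_uav[valid]
    return diff

# Example: share of the overlapping cells that differ by less than 0.5 m
# diff = dsm_difference(dsm_tls, dsm_uav)
# vals = diff[~np.isnan(diff)]
# print(f"{np.mean(np.abs(vals) < 0.5) * 100:.1f} % of valid cells < 0.5 m")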
3. MODELING OF STONE BOULDERS
The goal of this step was to provide detailed models of four large boulders that are of archaeological relevance. Under these boulders, excavations have been and will be carried out by archaeologists from the University of Zurich. The models can therefore be used, for example, to document the current state, for visualisations, or as a basis for identifying new excavation spots.
3.1 Terrestrial Laser Scanning
To generate detailed boulder models, on the one hand a TLS Imager 5006i (Zoller + Fröhlich) was used. According to the manufacturer, the scan range is limited to 79 metres and must be at least 40 cm. The noise level depends strongly on the reflectance of the scanned material; in our application we expect maximum standard deviations of a few millimetres. An industrial camera is mounted on top of the scanner and can be used to colour the scans.
To estimate the relative and absolute pose of each laser scan, white wooden spheres were used as tie and control points. At least three control points per boulder were measured using a referenced total station or differential GNSS. To determine the relative pose of two scans via a Helmert transformation with six parameters (three translations, three rotations), at least three common tie points are required. An example of a scan configuration is shown in Figure 12; on the right side of the image, the laptop used to control the scanner and two tie-point spheres are visible.
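The six-parameter rigid-body transformation between two scans can be estimated from three or more corresponding sphere centres in a least-squares sense, for instance with the standard SVD-based solution sketched below; this is only an illustrative implementation, not the software used in the project.

import numpy as np

def rigid_transform(src, dst):
    """Least-squares 6-parameter (rigid-body) transformation mapping
    src points onto dst points (both Nx3, N >= 3, same ordering)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                 # rotation
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src                          # translation
    return R, t

# Usage with three or more matched sphere centres from two scans:
# R, t = rigid_transform(centres_scan2, centres_scan1)
# scan2_in_scan1 = (R @ scan2_points.T).T + t

With more than three common tie points the solution is over-determined, and the residuals provide a check on the sphere measurements.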
Figure 12: Scanning of the Boulders

The scans are pre-processed in the Z+F LaserControl software (Zoller + Fröhlich). To transform the scans into the reference system, the scanned spheres have to be modelled so that the centre of each sphere can be used as a tie point; this step can be accomplished semi-automatically. All tie points are used for the relative orientation of the scans, and with the aid of the total station and GNSS measurements the absolute orientation is performed. Based on the previously performed camera calibration, the colouring of the scans can be carried out automatically. The result is coloured point clouds containing on average 50 million points, which are further processed as described in Chapter 3.3.
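Modelling a scanned sphere essentially means estimating its centre (and radius) from the points measured on its surface, which can be posed as a linear least-squares problem; the sketch below illustrates the idea and is not the Z+F LaserControl implementation.

import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.
    points: Nx3 array of scan points on the sphere surface (N >= 4).
    Returns (centre, radius)."""
    P = np.asarray(points, float)
    A = np.c_[2.0 * P, np.ones(len(P))]    # unknowns: centre (3) + offset
    f = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# The fitted centre of each sphere then serves as a tie or control point.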
3.2 Close Range Photogrammetry
Two professional digital single-lens reflex (DSLR) cameras were used to obtain images of three stone boulders. The Nikon D3x is a full-frame DSLR camera with a 24.5 megapixel CMOS sensor, whereas the Nikon D2x also has a CMOS sensor, but in APS-C format with a resolution of 12.4 megapixels. Together with both cameras, a set of three fixed focal length lenses was used: 18 mm, 24 mm and 35 mm. Before the image acquisition, each camera-lens combination, focused at infinity, was calibrated using the Australis package (Photometrix), which consists of portable targets and calibration software. The calibration was performed in the field and takes only 20-30 minutes, including taking the photographs of
the portable targets (these are retro-reflective and require the use of a flash, see Figure 13), uploading the images to a laptop and running the calibration automatically. The uncertainty of the calibration did not exceed 0.18 pixels.
Figure 13: An In-Field Calibration Field with Australis Retro-Reflective Calibration Targets
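The calibration determines the interior orientation and lens distortion of each camera-lens combination. As an illustration of how such distortion parameters are applied, the following sketch uses the common radial plus decentring (Brown) model; the parameter names are placeholders, and the sign conventions of the actual calibration report may differ.

import numpy as np

def correct_distortion(x, y, params):
    """Apply radial (K1..K3) and decentring (P1, P2) corrections to image
    coordinates (x, y) given relative to the principal point.
    params is a dict with illustrative keys taken from a calibration report."""
    K1, K2, K3 = params["K1"], params["K2"], params["K3"]
    P1, P2 = params["P1"], params["P2"]
    r2 = x * x + y * y
    dr = K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3             # radial term
    dx = x * dr + P1 * (r2 + 2 * x * x) + 2 * P2 * x * y   # decentring terms
    dy = y * dr + P2 * (r2 + 2 * y * y) + 2 * P1 * x * y
    return x + dx, y + dy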
A set of adhesive retro-reflective targets (Figure 14), attached to the boulders at approximately even spacing, was used for georeferencing. Their coordinates were determined with a total station, which has a special mode for measuring targets made of retro-reflective foil. The total station was either positioned on a fixed point measured beforehand with GNSS or set up in free-station mode.
Figure 14: Retro-Reflective Targets (Left: for Georeferencing, Right: for Calibration)
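Deriving target coordinates from the total-station observations follows the usual polar-to-Cartesian relations; the minimal sketch below assumes an already oriented instrument, angles in radians and a station position that already includes the instrument height.

import math

def polar_to_xyz(station, hz, zenith, slope_dist):
    """Convert a polar observation into coordinates in the station's
    reference frame. station: (E, N, H) of the instrument; hz: oriented
    horizontal direction; zenith: zenith angle; slope_dist: slope distance."""
    e0, n0, h0 = station
    horiz = slope_dist * math.sin(zenith)   # horizontal distance
    de = horiz * math.sin(hz)               # easting offset
    dn = horiz * math.cos(hz)               # northing offset
    dh = slope_dist * math.cos(zenith)      # height offset
    return e0 + de, n0 + dn, h0 + dh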
The images were acquired from a distance at which an accuracy of at least 2 cm could still be achieved at the most distant object points, resulting in an average pixel size of approximately 3 mm. Moreover, the small distance to the object was also motivated by the need to fill the whole image frame with the object in order to maintain a proper geometry of the photogrammetric network. Because of the poor illumination, the lower parts of the stone boulders had to be photographed with flash, which caused the retro-reflective targets to gleam and reduced the measurement accuracy.
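The quoted object pixel size follows from the pinhole relation GSD = pixel pitch x distance / focal length; the numbers in the example below are only indicative, with the pixel pitch approximated from the stated sensor resolution and format.

def gsd(pixel_pitch_mm, focal_length_mm, distance_m):
    """Object-space pixel size (in metres) for a pinhole camera."""
    return pixel_pitch_mm / focal_length_mm * distance_m

# Indicative example: a full-frame 24.5 MP sensor has a pixel pitch of
# roughly 0.006 mm; with a 35 mm lens at about 17 m distance this gives
# an object pixel size on the order of 3 mm.
# print(gsd(0.006, 35.0, 17.0))   # ~0.003 m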
The initial processing steps included sorting the large number of images and minor radiometric corrections. Further steps were carried out in the Photomodeler Scanner software. The images were oriented semi-automatically: the coarse relative orientation was done manually, and with these initial exterior orientation parameters it was then possible to extract tie points automatically. The final RMSE of the bundle adjustment was 1.2 pixels. Once the image block had been oriented, image pairs could be densely matched to generate point clouds of the boulders, each containing roughly 15 million points. The stone surface has good texture, and the Photomodeler dense matching performed reliably. The coloured point clouds were then processed further, as described in Chapter 3.3. An example point cloud, together with the camera stations (screenshot from the Photomodeler Scanner software), can be seen in Figure 15.
Figure 15: Example of the Point Cloud, Together with Camera Positions
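The RMSE quoted for the bundle adjustment is a root mean square of the image-space reprojection residuals; a minimal sketch of how such a value is computed from measured and back-projected image coordinates (array names are illustrative):

import numpy as np

def reprojection_rmse(observed_px, reprojected_px):
    """RMSE (in pixels) between measured image points and the points
    back-projected from the adjusted object coordinates; both Nx2 arrays."""
    residuals = np.asarray(observed_px, float) - np.asarray(reprojected_px, float)
    return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))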
3.3 Point Cloud Processing