comprises 180 images and is acquired using an UltraCam Osprey I multi-camera system: one large-frame nadir camera with six oblique-looking cameras along the four cardinal directions. The area is flown with an overlap of 75/65% along/across track and a nadir GSD of about 12 cm. Both datasets include ground truth information in the form of GPS-surveyed ground control points (GCPs) and check points (CPs), acquired with a mean accuracy of 8 cm (Trento) and 5 cm (Graz). Furthermore, the exterior orientation parameters are available as GNSS-IMU observations. More details about the
camera systems and the datasets are listed in Table 1. Additionally, a thermal flight is performed over the city of Graz
adopting a TABI-1800 thermal sensor. The delivered orthophoto features a spatial resolution of 60 cm and is
processed using a LiDAR DSM available for the area. Finally, land cadastre maps in vector format covering the whole city of Trento are provided as shapefiles (nominal scale 1:2,000). The latter contain, among other information, the building footprints, each linked with a unique land parcel ID.

2.3 Non-spatial data
Several pieces of non-spatial information are provided by public authorities and private organizations for the city of Trento.
These include:
- energy performance certificates of a few buildings, including both residential houses and public utilities. They mainly list energy consumption, carbon dioxide emissions and energy efficiency indexes;
- sources of artificial night-time light (single spots, like streetlights), including spatial distribution and light emission information;
- data from the register of buildings, including information about each property unit (e.g. owner, number of floors, number of rooms, building surfaces, property category, etc.).
3. METHODOLOGY
3.1 3D geometry capture
Given the necessity to cover large areas, airborne data represents the main input for 3D geometry capture in urban
scenarios. Several 3D modelling methods have been recently developed in order to reconstruct the geometry of urban
environments using aerial imagery (Kluckner and Bischof, 2010; Nex and Remondino, 2012; Bulatov et al., 2014).
Nowadays the base information is derived from dense point clouds extracted with state-of-the-art multi-stereo matching
algorithms (Haala, 2013; Remondino et al., 2014). When traditional nadir-looking cameras are adopted, the workflow is
mainly straightforward: starting from the aerial triangulation (AT) results, many commercial matching tools can generate
DSM raster and cloud representations at a GSD-level resolution. When it comes to the processing of oblique imagery
for 3D information extraction, the task becomes non-trivial. Both image orientation and dense matching are aggravated by several challenges, e.g. greater illumination changes, multiple occlusions, large scale variations and high sensitivity to noise (Wiedemann and More, 2012; Rupnik et al., 2013; Gerke et al., 2016). Furthermore, from a geometric point of view, the traditional 2.5D processing for DSM production from nadir images should be replaced by a more compelling modelling in “real” 3D space (Haala and Rothermel, 2015). This inevitably requires the development of ad-hoc filtering and meshing
algorithms for the derived 3D points. These two approaches are both employed within the SENECA project (Figure 2), using state-of-the-art software solutions, either commercial or open-source, i.e. Pix4D (www.pix4d.com), SURE (www.nframes.com) and MicMac (http://logiciels.ign.fr). The nadir imagery collected over Trento is processed using the traditional pipeline, consisting of image
triangulation with self-calibration followed by dense image matching and DSM generation. The output is a 2.5D point cloud resampled with a uniform spatial point distribution in the XY-plane. On the other hand, the Graz oblique dataset is first oriented by testing several strategies (Rupnik et al., 2015). Then 3D multi-view matching and filtering steps are applied (Rupnik et al., 2014). In this case, an unstructured point cloud with a spatially heterogeneous point distribution is generated.
Figure 2. Adopted workflow for producing 3D geometries.
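The resampling to a uniform XY point distribution mentioned above can be sketched as follows. This is an illustrative grid-binning approach with an assumed cell size, not the actual tool chain used in the project: points are binned into XY cells of roughly GSD size and each cell is replaced by a single point at the cell centre, with the mean height of its members.

```python
# Minimal sketch (illustrative, not the project's implementation) of
# resampling a 2.5D point cloud to a uniform XY distribution by grid binning.
from collections import defaultdict

def resample_2_5d(points, cell=0.12):  # cell size ~ nadir GSD in metres (assumption)
    bins = defaultdict(list)
    for x, y, z in points:
        bins[(int(x // cell), int(y // cell))].append(z)
    # One point per occupied cell, at the cell centre, with averaged height.
    return [((i + 0.5) * cell, (j + 0.5) * cell, sum(zs) / len(zs))
            for (i, j), zs in sorted(bins.items())]

cloud = [(0.01, 0.02, 10.0), (0.05, 0.03, 10.4), (0.30, 0.01, 12.0)]
resampled = resample_2_5d(cloud)
```

In a real pipeline the binning would typically keep additional statistics per cell (e.g. highest point for a DSM) rather than only the mean height.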
3.2 3D geometry modelling
A few years ago, photogrammetry was coupled with manual or semi-automated methods in order to generate 3D building models (Gruen and Wang, 1998). Automation started with the first dense LiDAR-based point clouds (Suveg and Vosselman, 2002). Nowadays, it is common to produce 3D city models (large polygonal shapes) or single 3D buildings from 2.5D or 3D point-based representations (Lafarge and Mallet, 2012). A geometrical simplification is then an essential step to reduce data complexity and allow for an efficient management and real-time rendering of the final output in web-based platforms.
Several 3D reconstruction methods have been developed (Haala and Kada, 2010; Musialski et al., 2013). From an algorithmic point of view and focusing on building modelling, methods can be traced to two main categories:
- reconstruction through template-fitting: building models are generated “from scratch” in order to best fit the given point clouds (Kada, 2009; Sampath and Shan, 2010). Generally, these methods first apply an interactive pre-segmentation of the points, then try to detect buildings by searching for selected types of roof shapes (e.g. flat, gable, hip roofs, etc.). Building footprints, when available, can be adopted to support the process, after a non-trivial step of splitting and/or merging. The output of these model-based methods can feature different Levels of Detail (LODs), according to the OGC standard CityGML (Gröger and Plümer, 2012). Generally, they provide distinct roof structures, but are restricted to planar vertical facades (LOD2-compliant building models). These approaches, although providing impressive and convincing 3D models, lack generality and build upon strong building priors on symmetry and roof typology.
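The primitive-fitting core of such model-based methods can be illustrated by a least-squares plane fit to a pre-segmented set of roof points. This is an illustrative sketch only, not the cited implementations, which combine many such fits with roof-template hypotheses:

```python
# Minimal sketch: fit a plane z = a*x + b*y + c to roof points by least
# squares, the elementary step behind model-based roof reconstruction.
def fit_plane(points):
    # Normal equations A^T A p = A^T z for p = (a, b, c),
    # stored as an augmented 3x4 matrix M = [A^T A | A^T z].
    M = [[0.0] * 4 for _ in range(3)]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            M[i][3] += row[i] * z
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return tuple(M[i][3] / M[i][i] for i in range(3))

# Points sampled exactly from the roof plane z = 0.5*x + 10:
a, b, c = fit_plane([(0, 0, 10.0), (2, 0, 11.0), (0, 3, 10.0), (2, 3, 11.0)])
```

A robust variant (e.g. RANSAC around this fit) would be needed on noisy, occluded point clouds.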
This contribution has been peer-reviewed. doi:10.5194/isprs-archives-XLII-1-W1-527-2017
- reconstruction through DSM or 3D mesh simplification: this class of algorithms leans on the concept that buildings are “contained” in a detailed 3D mesh or 2.5D DSM and seeks to simplify the meshed raw data until it meets ad-hoc geometric and semantic criteria. Different mesh simplification approaches have been introduced, e.g. based on dual contouring (Zhou and Neumann, 2010). These data-driven reconstruction methods are able to model roof shapes of arbitrary complexity and to extract building models in accordance with the desired abstraction level. Although closer to the raw data and more general, these approaches may lose semantic information and are often designed to deal with a specific type of input data.
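A minimal illustration of this simplification idea is vertex clustering on a regular grid; the cited methods use far more elaborate criteria (dual contouring, quadric error metrics), so the following is only a sketch of the principle:

```python
# Illustrative sketch (not the project's pipeline): mesh simplification by
# vertex clustering. Vertices are snapped to a regular 3D grid, merged per
# cell, and faces collapsed by the merging are dropped.
def simplify_vertex_clustering(vertices, faces, cell=1.0):
    cluster_of = {}   # original vertex index -> cluster index
    clusters = {}     # grid key -> (cluster index, representative vertex)
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in clusters:
            clusters[key] = (len(clusters),
                             (key[0] * cell, key[1] * cell, key[2] * cell))
        cluster_of[i] = clusters[key][0]
    new_vertices = [v for _, v in sorted(clusters.values())]
    new_faces = []
    for a, b, c in faces:
        fa, fb, fc = cluster_of[a], cluster_of[b], cluster_of[c]
        if len({fa, fb, fc}) == 3:  # drop degenerate (collapsed) triangles
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces
```

The cell size directly controls the abstraction level: larger cells merge more vertices and remove more faces.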
Figure 3. Workflow adopted for modelling the building geometries (partially based on TU Delft - www.tudelft.nl).
Both aforementioned approaches are employed within the SENECA project (Figure 3).
The DSM cloud extracted from nadir imagery over Trento is used as input for the generation of LOD2-compliant building models. This is carried out by fitting roof primitives on the pre-segmented data using the tridicon/Hexagon suite of tools (www.tridicon.de). Furthermore, LOD1 polyhedral models are reconstructed by extruding available cadastral footprints. Finally, ARCHICAD (www.graphisoft.com), supported by data collected from the register of buildings, is adopted to generate architectural 3D models (LOD3-compliant) of a few buildings of interest. For the Graz case study, a detailed 3D mesh is first generated from the filtered 3D point cloud, in order to preserve details visible on oblique imagery and reconstructed on the facades. Then, the mesh is simplified by reducing its number of faces, in order to allow efficient uploading and rendering.
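The LOD1 extrusion of cadastral footprints mentioned above can be sketched as follows, assuming simple polygonal footprints and a single flat-roof height per building (an illustration of the principle, not the project's implementation):

```python
# Illustrative LOD1 extrusion: a 2D footprint polygon plus one building
# height yields a prismatic block model as (vertices, faces).
def extrude_footprint(footprint, height):
    n = len(footprint)
    base = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = base + top
    faces = [tuple(range(n))[::-1],   # floor, reversed for a downward normal
             tuple(range(n, 2 * n))]  # flat roof
    for i in range(n):                # one vertical wall quad per edge
        j = (i + 1) % n
        faces.append((i, j, n + j, n + i))
    return vertices, faces
```

In practice the building height would come from the DSM (e.g. a percentile of the heights inside the footprint) and footprints would first be cleaned and split/merged as noted in Section 3.2.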
3.3 3D geometry enrichment and management
The extension of traditional database management systems in order to handle huge amounts of spatial data has seen increasing research work in recent years (Gál et al., 2009; Lewis et al., 2012; Agugiaro, 2014; di Staso et al., 2015). Focusing on 3D city modelling (Biljecki et al., 2015), the reconstructed geometry can be enriched with many pieces of non-spatial information, allowing for both visualization-based usages (e.g. visibility analysis, urban planning, 3D cadastre, emergency response, etc.) and non-visualization usages, i.e. when the visualization of the 3D models is not required and the spatial operations (such as the estimation of the solar irradiation or of the energy demand) are performed directly on the stored data. This relies on the creation of a scalable system designed to store, manipulate, analyse, manage and visualize different types of spatial and non-spatial data, and the connections among them.
Within the SENECA project, a service platform accessible from a web-based client was developed in order to manage both “data containers” (i.e. building models) as well as associated information (i.e. non-spatial data collected within the project activities). The architecture of the platform is shown in Figure 4 and features a multi-layer structure.
Figure 4. Architecture adopted for the management of 3D geometries.
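The association between geometries and non-spatial records via unique identifiers, as handled by the data layers of this architecture, can be illustrated with a minimal sketch. The field names below are hypothetical, not the actual SENECA database schema:

```python
# Illustrative join of building geometries with register-of-buildings
# records via a shared parcel ID (hypothetical record structure).
def join_by_parcel_id(geometries, records):
    by_id = {r["parcel_id"]: r for r in records}
    return [{**g, "attributes": by_id.get(g["parcel_id"])} for g in geometries]

buildings = [{"parcel_id": "TN-0001", "lod": 1,
              "footprint": [(0, 0), (10, 0), (10, 8), (0, 8)]}]
register = [{"parcel_id": "TN-0001", "floors": 3, "category": "residential"}]
linked = join_by_parcel_id(buildings, register)
```

In the actual platform this link is resolved by the database (PostgreSQL/PostGIS) rather than in application code, but the keying principle is the same.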
Geospatial data are first adjusted within the “Raw data management layer”, developed in NodeJs (www.nodejs.org), in order to make them compatible with the platform and allow for an efficient management of information along the entire chain. Both spatial and non-spatial data are then stored within the platform database (DB), i.e. PostgreSQL/PostGIS, accessible through the “Data access layer”. Geometries reconstructed at different levels of detail populate the DB and are associated with their corresponding non-spatial pieces of information via unique identifiers. User permissions and filters are managed through the “Business logic layer”, developed in NodeJs. It handles different types of users and, correspondingly, different types of permissions associated with them: this allows handling data visibility and manipulation, and limiting accessibility to sensitive and confidential information. The “Service layer” deals with data exchange with the other layers through a RESTful architecture. Additionally, it provides a public API for limited data exposure and data import, in order to allow for a potential platform enrichment via integration with third-party software. Finally, the “Presentation layer” provides the web application
that makes the data available to the end-users. It is developed in AngularJS (www.angularjs.org), an open-source framework with a Model-View-ViewModel (MVVM) architecture. In order to exploit the potential of 3D data and increase the user experience and the data representation quality, an in-house customized version of the open-source NASA World Wind virtual globe (www.webworldwind.org) was adopted. It allows for geometry viewing, navigation, texture projection and map layer rendering.
4. RESULTS