Michal Hodoň · Gerald Eichler · Christian Erfurth · Günter Fahrnberger (Eds.)

Communications in Computer and Information Science 863

Innovations for Community Services
18th International Conference, I4CS 2018, Žilina, Slovakia, June 18–20, 2018, Proceedings
Commenced Publication in 2007 Founding and Former Series Editors: Alfredo Cuzzocrea, Xiaoyong Du, Orhun Kara, Ting Liu, Dominik Ślęzak, and Xiaokang Yang
Editorial Board
Simone Diniz Junqueira Barbosa Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil
Phoebe Chen La Trobe University, Melbourne, Australia
Joaquim Filipe Polytechnic Institute of Setúbal, Setúbal, Portugal
Igor Kotenko St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia
Krishna M. Sivalingam Indian Institute of Technology Madras, Chennai, India
Takashi Washio Osaka University, Osaka, Japan
Junsong Yuan University at Buffalo, The State University of New York, Buffalo, USA
Lizhu Zhou Tsinghua University, Beijing, China

More information about this series at http://www.springer.com/series/7899
Michal Hodoň · Gerald Eichler · Christian Erfurth · Günter Fahrnberger (Eds.)
Innovations for Community Services
18th International Conference, I4CS 2018
Žilina, Slovakia, June 18–20, 2018, Proceedings
ISSN 1865-0929
ISSN 1865-0937 (electronic) Communications in Computer and Information Science
ISBN 978-3-319-93407-5
ISBN 978-3-319-93408-2 (eBook)
https://doi.org/10.1007/978-3-319-93408-2
Library of Congress Control Number: Applied for
© Springer International Publishing AG, part of Springer Nature 2018
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, express or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by the registered company Springer International Publishing AG, part of Springer Nature.
Foreword
The International Conference on Innovations for Community Services (I4CS) celebrated its 18th edition in 2018. It emerged as the Workshop on Innovative Internet Community Systems (I2CS) in 2001, founded by Herwig Unger and Thomas Böhme, and continued its success story under its revised name I4CS in 2014. We are proud to have reached again the original number of scientific presentations, combined with a great social conference program.
The selection of conference locations reflects the conference concept: Our members of the Technical Program Committee (TPC) can offer suitable locations. In 2018, the Steering Committee had the honor of handing the organization responsibility over to Michal Hodoň and, therefore, of determining a Slovakian venue for the first time in the history of the conference. The University of Žilina was a remarkable place for offering a perfect climate to make the motto “Relaxation Teams Communities” happen.
I2CS published its proceedings in Springer's Lecture Notes in Computer Science (LNCS) series until 2005, followed by the Gesellschaft für Informatik (GI) and the Verein Deutscher Ingenieure (VDI). I4CS commenced with the Institute of Electrical and Electronics Engineers (IEEE) before switching back to Springer's Communications in Computer and Information Science (CCIS) in 2016. With 1,473 chapter downloads from SpringerLink for CCIS Vol. 717, which published the I4CS proceedings of 2017, we envisage an increasing result. I4CS has maintained its reputation as a high-class C-conference at the CORE conference portal http://portal.core.edu.au/conf-ranks/?search=I4CS&by=all.
The proceedings of I4CS 2018 comprise five parts that cover the selection of 14 full and three short papers out of 38 submissions. Interdisciplinary thinking is a key success factor for any community. Hence, the proceedings of I4CS 2018 span a range of topics, bundled into three areas: “Technology,” “Applications,” and “Socialization.”
Technology: Distributed Architectures and Frameworks
- Data architectures and models for community services
- Innovation management and management of community systems
- Community self-organization in ad-hoc environments
- Search, information retrieval, and distributed ontologies
- Common data models and big data analytics
Applications: Communities on the Move
- Social networks and open collaboration
- User-generated content for business and social life
- Recommender solutions and context awareness
- Augmented reality and location-based activities
- Intelligent transportation systems and logistic services
Socialization: Ambient Work and Living
- eHealth challenges and ambient-assisted living
- Intelligent transport systems and connected vehicles
- Smart energy and home control
- Digitalization and cyber-physical systems
- Security, identity, and privacy protection
Many thanks to the 19 members of the TPC, representing 12 countries, for their valuable reviews, especially to the chair, Christian Erfurth, and, secondly, to the publication chair, Günter Fahrnberger, who fostered a fruitful cooperation with Springer.
The 19th I4CS will be organized by the Ostfalia University of Applied Sciences and will take place in Wolfsburg/Germany in June 2019. Please check the permanent conference URL http://www.i4cs-conference.org/ for more details. Applications of prospective TPC members and potential conference hosts are welcome at [email protected].
April 2018 Gerald Eichler
Preface
Žilina, the natural center of northwestern Slovakia, ranks among the largest and most important cities in the country. It is located in the valley of the Váh River, surrounded by the beautiful mountain ranges of Malá Fatra, Strážovské vrchy, Súľovské vrchy, Javorníky, and Kysucká vrchovina. The National Park of Malá Fatra comprises famous gorges, rock peaks, and an attractive ridge tour. The main subject of protection is the territory with its varied geological history and dissected relief forms, rare and precious biocenoses, flora and fauna, and the exceptional value of the forest and mountain compounds with precious dwarf pinewoods and predators such as the wolf, lynx, and bear.
Žilina is a center of significant political, cultural, sport, and public health-care institutions. Its economic potential can be proven by the fact that Žilina has the second highest number of traders per thousand inhabitants. As for the number of joint stock companies and limited companies, Žilina holds third position in Slovakia. Nowadays, the city of Žilina represents a dynamic development accelerated by KIA Motors Slovakia investments. However, the city is not only a center of car production; together with the Upper Váh River Region (Horné Považie) it is an interesting tourist destination. The city of Žilina is a center of theaters, museums, galleries, parks, and sports facilities. Its historical center is crossed by one of the longest and most beautiful pedestrian zones in Slovakia.
The University of Žilina was founded in 1953 by separating from the Czech Technical University in Prague, followed by its renaming to the University of Transport and Communications. Later, in 1996, after broadening its fields of interest and other organizational changes, it was renamed as the University of Žilina. In its over 60 years of successful existence, it has become the alma mater for more than 70,000 graduates, highly skilled professionals mostly specializing in transport and technical fields as well as in management, marketing, or humanities. The quality and readiness of the graduates for the needs of practice is proved by the long-term high interest in hiring them by employers that cooperate with the university in the recruitment process.
A stopover in the Malá Fatra Mountains offers unforgettable experiences enhanced through the selected venue of the Village Resort Hanuliak as a unique wellness resort located in the beautiful environment of the Malá Fatra National Park. The picturesque village of Belá is located only 20 km away from the city of Žilina.
We hope that all attendees enjoy the fruitful, friendly, and relaxed atmosphere during the conference. We trust they will gather professional experiences and be happy to come back in the future.

April 2018
Michal Hodoň

Organization

Program Committee
Marwane Ayaida – University of Reims Champagne-Ardenne, France
Gilbert Babin – HEC Montréal, Canada
Gerald Eichler – Deutsche Telekom AG, Germany
Christian Erfurth – Jena University of Applied Sciences, Germany
Günter Fahrnberger – University of Hagen, Germany
Hacène Fouchal – University of Reims Champagne-Ardenne, France
Sapna Gopinathan – Coimbatore Institute of Technology, India
Michal Hodoň – University of Žilina, Slovakia
Peter Kropf – University of Neuchâtel, Switzerland
Ulrike Lechner – Bundeswehr University Munich, Germany
Karl-Heinz Lüke – Ostfalia University of Applied Sciences, Germany
Phayung Meesad – King Mongkut's University of Technology North Bangkok, Thailand
Raja Natarajan – Tata Institute of Fundamental Research, India
Frank Phillipson – TNO, The Netherlands
Srinivan Ramaswamy – ABB, USA
Joerg Roth – Nuremberg Institute of Technology, Germany
Maleerat Sodanil – King Mongkut's University of Technology North Bangkok, Thailand
Leendert W. M. Wienhofen – City of Trondheim, Norway
Ouadoudi Zytoune – Ibn Tofail University, Morocco
Contents

Sebastian Apel, Florian Hertrampf, and Steffen Späthe
Jérémie Bosom, Anna Scius-Bertrand, Haï Tran, and Marc Bui
Geoffrey Wilhelm, Hacène Fouchal, Kevin Thomas, and Marwane Ayaida
Michal Kvet and Karol Matiasko
Emilien Bourdy, Kandaraj Piamrat, Michel Herbin, and Hacène Fouchal
Veronika Olešnaníková, Ondrej Karpiš, Lukáš Čechovič, and Judith Molka-Danielsen
Julian Knoll, Rainer Groß, Axel Schwanke, Bernhard Rinn, and Martin Schreyer
Andreas Lommatzsch
Inès Dinant, Thomas Vilarinho, Ilias O. Pappas, Simone Mora, Jacqueline Floch, Manuel Oliveira, and Letizia Jaccheri
Karl-Heinz Lüke, Johannes Walther, and Daniel Wäldchen
Marcus Wolf, Arlett Semm, and Christian Erfurth
Sven von Hollen and Benjamin Reeh
Janusz Furtak, Zbigniew Zieliński, and Jan Chudzikiewicz
Jakub Hrabovsky, Pavel Segec, Marek Moravcik, and Jozef Papan
Róbert Žalman, Michal Chovanec, Martin Revák, and Ján Kapitulík
Architectures and Management

Microservice Architecture Within In-House Infrastructures for Enterprise Integration and Measurement: An Experience Report

Sebastian Apel, Florian Hertrampf, and Steffen Späthe

Friedrich Schiller University Jena, 07743 Jena, Germany
{sebastian.apel,florian.hertrampf,steffen.spaethe}@uni-jena.de
Abstract. The project WINNER aims to integrate and coordinate electromobility used through carsharing, the energy consumption of tenant households and the local production of electricity, e.g., by integrating photovoltaic systems into a smart local energy grid. While the various components correspond to the currently available standards, the integration has to be realised via a data processing and storage platform, the WINNER DataLab. The goal of this platform is to provide forecasts and optimisation plans to operate the WINNER setup as efficiently as possible. Each data processing component is encapsulated as a single service. We decided to use a microservice architecture and, further, an execution environment of container instances within a cloud infrastructure. This paper outlines the realisation as well as a report of our experiences while realising this project-related microservice architecture. These experiences focus on development complexity, modifiability, testability, maintainability and scalability as well as dependencies and related lessons learned. Finally, we state that the practical application of setups like this helps to concentrate on business models. It supports decoupling, helps development to focus on the essential things and increases efficiency in operation, not least through good opportunities for scaling. However, the complexity has to be mastered, which currently requires clean planning, experience and a coordinated development process.

Keywords: Smart grid · Internet of things · Microservice architecture · Experience report · Measurement infrastructure
1 Introduction
Imagine a modern urban area with tenant households. Photovoltaic systems produce electricity; each flat knows when electricity is available. So, scheduling of consumers is possible, as is the management of electric cars that are charged when electricity is available. In contrast, electricity is offered when the car will not be used soon. The implementation of such smart grids, especially their network of actors and sensors, known as the Internet of Things (IoT), is a complex task [?]. IoT is stated as the next big step in internet technology [?], and there is a large number of different and heterogeneous devices to handle [?]. One way to address this integration and measurement task are microservice architectures and cloud computing infrastructures.

Our research project "Wohnungswirtschaftlich integrierte netzneutrale Elektromobilität in Quartier und Region" (WINNER) [?] aims to integrate and coordinate electromobility used through carsharing, the energy consumption of tenant households and the local production of electricity, e.g., by integrating photovoltaic systems into a smart local energy grid. Our primary goal in this project is to avoid the injection of electrical power into higher grid levels, which means Level 7 for local distribution and above, referring to [?]. While the resulting installation uses currently available components, our focus within this paper is on creating an integration and measurement platform to gather, analyse and provide information from the installation. The objective of this platform is to provide forecasts and optimisation plans to operate the test setup as efficiently as possible. Due to the various endpoints and our agile process of implementing the overall system, we want to focus on the infrastructure, architecture and development aspects of realising systems like this.
The so-called WINNER DataLab (WDL) is the integration and measurement platform for all project-related sources and sinks, e.g., devices within the installation as well as external services. As evaluated within [?], the architectural backbone technology we want to use is Apache Camel [?]. This backbone allows the integration of various systems and the coordination of the resulting data flows. Further, each data processing component is encapsulated as a single small service, wired together by using representational state transfer (REST) and messaging services. This setup implies the use of a microservice architecture and, further, some kind of execution environment, e.g., isolated and independent container instances within a cloud infrastructure. However, there is a project-related requirement which states that the whole setup has to run on in-house infrastructure. This requirement is motivated by security issues: sensitive data has to be kept within quarters near the tenant households. So, the realised setup has to be deployable on locally executed hardware, and data should not leave the area.

The following paper outlines the efforts in planning, development, deployment and operation of the WDL. Based on the necessity of in-house operation, this concerns, in particular, the components for a microservice architecture as well as the services for operating a compact cloud computing infrastructure and tools to support development and deployment. This applies to a small-sized setup and does not relate to large infrastructures provided by, e.g., Amazon, Google or Microsoft. The focus of this experience report is on the components required to realise a fully working setup. Thus, the evaluation outlines our experiences regarding development complexity, modifiability, testability, maintainability and scalability as well as dependencies and related lessons learned.
The remaining paper is organised as follows. In Sect. 2 we go into details about related work on microservices, measurement infrastructures and architectures for the IoT. Section 3 deals with details of the project, how the different components interact, as well as the components required to manage measurement and data flow. The following Sect. 4 goes into details about the components required to manage a microservice infrastructure and how they interact. Finally, Sect. 5 discusses our experiences regarding the already mentioned areas, like complexity, modifiability and scalability.
2 Related Work
Smart cities, the IoT and cloud computing are ongoing research and industry efforts. These efforts aim at development and deployment standards as well as best practices for designing systems and platforms. More generally, IoT architectures are discussed in [?]. The authors point out that a flexible layered architecture is necessary to connect many devices. As examples, classical three- and five-layer architectures are mentioned, e.g., as used within [?], as well as middleware- or SOA-based architectures. Further, [?] presents another overview of the activities done in Europe towards the definition of a framework "for different applications and eventually enable reuse of the existing work across the domains". Besides this discussion about architectures, [?] demonstrates the integration of IoT devices within a service platform which uses the microservice architecture, which can be understood as a specific approach to service-oriented architecture (SOA) [?, p. 9]. However, thinking about microservices requires regarding principles and tenets [?], like fine-grained interfaces, business-driven development, cloud-native design, lightweight communication and decentralisation.
In addition to IoT platforms, measurement systems for the IoT can also be considered. They also have to integrate different end systems. Further, they have to record different measured values and provide interfaces for analyses and calculations. For this, approaches like SOA or event-driven architecture (EDA) can be taken up, as demonstrated in [?]. This approach uses SOA and EDA in combination with an enterprise service bus (ESB). The use of the microservice architecture can be seen in [?], with loosely coupled components realising enterprise application integration (EAI) [?, p. 3]. They describe a reference architecture using microservices for measurement systems, which connects the required data adapters as well as calculation and storage services, one more time through an ESB.
In summary, the shown references deal in particular with the design of IoT and measurement infrastructures. They use SOA, EDA or microservices, combine them with an ESB for the decoupled exchange of events or connect the various components directly to each other. The approaches appear domain-specific and list the necessary components; less attention is paid to the operation, the practices and development of the platforms.
Fig. 1. Overview of all components of the demonstrator in the WINNER project.
3 WINNER Setup
WINNER aims to integrate and coordinate electromobility used through carsharing, the energy consumption of tenant households and the local production of electricity, e.g., by integrating photovoltaic systems into a smart local energy grid. Thus, there are different components, as currently available, for the acquisition of measured values. Figure 1 shows all related components within this setup. This architectural overview is derived and discussed in [?]. These components are divided into six parts. The first part is related to the photovoltaic system. It contains several photovoltaic panels, their inverters for connecting to the power grid and a controller for managing the system. Also, measuring points are provided for recording the amount of electricity generated, in particular a meter and a clamp. The second part is related to the tenant households. These also include meters and clamps for each household, as well as a meter and clamp for installations used by all residents. The third part is about mobility. In addition to the meter and clamp, the charging infrastructure and car space management are required here. The fourth part refers to connectivity. This applies in particular to the acquisition of measured values from meters and clamps and the provision of data to processing components. The fifth part is related to external services. The project took into account weather services, carsharing services and electricity exchange services, e.g., provided by the European Energy Exchange (EEX). Finally, all five parts have to be integrated for further analyses and calculations within the already mentioned WDL as the sixth part of this setup. This sixth part also contains the Energy Management component, which uses the WDL to access the data flows and databases via events and interfaces to carry out further analyses, such as the preparation of forecasts and optimisation plans.

Fig. 2. Generalized representation of the involved adapters and services for the WDL.

Based on the components in the demonstrator mentioned above, an architecture for gathering, processing and analysing various data streams can be designed. This data stream processing platform for enterprise application integration, the WDL, is visualised in its Level 1 component view in Fig. 2. The left-hand side of the illustration shows the expected inputs of the demonstrator and the external services, which provide in particular the various measured values from installations such as meters and clamps, as well as information on carsharing, weather and electricity exchange. The adapters in the WDL generate events on different data streams, which are made available to other services via a message service. The right-hand side of the figure is dominated by advanced services that gather and process events from the various message queues. This gathering applies, for example, to services that continuously persist events in databases for time series or master data, as well as services that generate events for the demonstrator by continuously evaluating the data streams. Also, there are services for accessing persistent data in the databases that are not specified in detail. These can be used for mapping, enrichment and evaluation, for example. The structure is comparable to the reference architecture found in [?].
4 Microservice Infrastructure Setup
While up to this point the microservice architecture and the components are comparable to the publications mentioned in Sect. 2, the question arises how such a setup can be realised, orchestrated and operated in an in-house infrastructure.
In-house, in this case, means an underlying Infrastructure as a Service (IaaS), which offers the availability of hardware and associated software. This IaaS covers servers, storage and networks, as well as operating systems, virtualisation technology and file systems [?], and provides a number of virtual machines. The necessity for in-house operation lies in the continuous but secure processing of sensitive tenant household data to make the necessary decisions for the quarters.
Within our use case, we decided to use a cluster of Docker engines. In this cluster, the components required for our microservice architecture are operated in individual containers, meaning that a container always corresponds to precisely one service: services that integrate various components, analyse data and provide plans for optimised usage of energy. While this is state of the art, the question of how to bring this collection of containers into execution remains, especially when thinking about tenets and principles.
First of all, we need a management and orchestration toolset, since the administration effort grows with the number of microservices. We used Docker Compose [?] to orchestrate multiple containers and configure the basic setup. Further, we have used a web tool called Portainer [?] to monitor and operate our containers. The primary purposes of this application are health checks and the possibility to stop or restart services. The service and the communication between the service and the Docker engines must be adequately secured and protected against external access. In this case, we have set up a communication layer based on internal network communication as well as transport layer security (TLS). One instance within the cluster is sufficient to manage it.
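The one-container-per-service rule could be captured in a Docker Compose file roughly as follows. This is a hedged sketch only: all image names, service names, ports and the network are invented for illustration and are not the project's actual configuration.

```yaml
# Hypothetical excerpt: one container per microservice, plus Portainer
# for monitoring. Image names, ports and the network are placeholders.
version: "3"
services:
  discovery:
    image: registry.example.org/wdl/discovery:latest
    networks: [wdl]
  meter-adapter:
    image: registry.example.org/wdl/meter-adapter:latest
    depends_on: [discovery]
    networks: [wdl]
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"
    volumes:
      # Portainer manages the engine through the local Docker socket
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  wdl: {}
```

Keeping every service in its own entry makes scaling and replacement of individual services a matter of editing one block rather than touching the whole deployment.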
The next topic is the management of container-related events, e.g., log messages. For this purpose, we have used an ELK stack [?]. Using Logstash, the various messages of the containers are recorded, forwarded to the Elasticsearch database and finally viewed within Kibana. A Logstash instance is recommended for each cluster node, especially when processing file-based logs. Elasticsearch as the database and Kibana for viewing are only necessary once. With increasing event traffic, the database may be scaled. Additionally, collecting metrics about CPU, RAM and storage usage as well as incoming requests and method invocations is advisable. For this application case, we used Stagemonitor [?] for Java-based services, which is executed within the Java VM of each service, collects the desired information and also pushes it to the Elasticsearch database.
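The log path described above might be configured in a Logstash pipeline roughly like the following; the choice of the gelf input, the host name and the ports are assumptions for illustration, not the project's actual pipeline.

```
# Hypothetical Logstash pipeline: receive container logs (e.g., forwarded
# via Docker's gelf logging driver) and index them into Elasticsearch.
# Host name and ports are placeholders.
input {
  gelf { port => 12201 }
}
filter {
  # Tag each event with the cluster node that produced it
  mutate { add_field => { "cluster_node" => "node-1" } }
}
output {
  elasticsearch { hosts => ["elasticsearch.local:9200"] }
}
```

With one such pipeline per cluster node, all containers on a node share a single log forwarder, while Elasticsearch and Kibana remain central.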
A service discovery component realises the connecting of services and containers. Eureka from Spring Cloud Netflix [?] is used for this purpose. This service provides a directory in which each service can register, enables services to find other services, performs health checks, and frees the various services from depending on specific configurations. One instance per cluster is the minimum; multiple instances can be used mainly to separate environments, e.g., staging and production systems. A service that wants to use other services sends a request to the discovery service and must then decide which instance from the set of available services should be requested. This task can be realised with the help of a client-side load balancer such as Ribbon in Spring applications [?] or Resilient.js [?] in the case of Node.js. An alternative would be to rely on the service discovery strategies of the execution environment. For example, Kubernetes [?] provides a grouping of services based on a label selector to create a virtual and addressable endpoint. This endpoint has its own IP address and can be used by services that want to talk to a service from the group. For the requesting service, it is not clear which service from the group will ultimately process the request.
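For a Spring-based service, registration with Eureka is typically a matter of configuration. The following application.yml fragment is a generic sketch; the service name and the discovery host are placeholders, not WDL values.

```yaml
# Hypothetical Spring Boot application.yml for a service registering
# with a Eureka server. Names and the host are placeholders.
spring:
  application:
    name: some-service        # the ID other services discover it by
eureka:
  client:
    serviceUrl:
      defaultZone: http://discovery.local:8761/eureka/
  instance:
    preferIpAddress: true     # useful inside container networks
```

Because the service only knows the discovery endpoint, no other service addresses have to be baked into its configuration.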
In addition to service discovery, a gateway is required to provide service interfaces to external and frontend clients. In our case, we use Zuul [?] as a gateway and thus offer external access to HTTP-based service interfaces. This gateway uses the service discovery component to coordinate communication between clients and microservice instances. One instance per cluster is the minimum.
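A Zuul route exposing a registered service through the gateway can be declared in configuration; the path and service ID below are hypothetical examples, resolved against the IDs registered in Eureka.

```yaml
# Hypothetical Zuul gateway route: requests to /api/measurements/** are
# forwarded to whichever instance of "some-service" Eureka reports.
zuul:
  routes:
    measurements:
      path: /api/measurements/**
      serviceId: some-service
```

The gateway thus remains the only component with a public address; all service instances stay on the internal cluster network.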
For the configuration of the individual services, it is necessary to schedule a central configuration service. The task of this service is to return a set of key-value pairs, broken down by application, profile and label, which reflect the configuration of the service. Due to the use of Spring Cloud components like Eureka and Zuul, the Cloud Configuration tool [?] from the same toolbox was used in our application. Only one instance overall is required, because the service distinguishes between applications, profiles and labels.
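A config client locates the central configuration service and identifies itself by application name, profile and label, e.g., via a bootstrap.yml like the following sketch; the URI and all names are placeholders.

```yaml
# Hypothetical bootstrap.yml of a Spring Cloud Config client. The server
# returns the key-value pairs matching application, profile and label.
spring:
  application:
    name: some-service          # application
  cloud:
    config:
      uri: http://config.local:8888
      profile: production       # profile, e.g., staging vs. production
      label: master             # label, e.g., a branch in the config repo
```

Separating profile and label is what lets a single configuration service serve both the staging and the production environment.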
Services that publish user-specific interfaces need to be secured. The use of OAuth2 with access tokens, verified against the OAuth2 service by the respective services, is suitable for this purpose. Alternatively, the use of JWT is possible. The combination of JWT and OAuth2 is also possible and avoids the communication between service and OAuth2 server to check the tokens.
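With the Spring Security OAuth2 support available at the time, local JWT verification can be enabled by pointing the resource server at the authorization server's key endpoint, so that tokens are checked by signature rather than by a remote call per request. The property layout and the key URI below are assumptions for illustration.

```yaml
# Hypothetical resource-server configuration (Spring Security OAuth2):
# the service fetches the signing key once and verifies JWTs locally.
# The authorization-server URI is a placeholder.
security:
  oauth2:
    resource:
      jwt:
        key-uri: http://auth.local:9999/oauth/token_key
```

This reflects the trade-off named above: signed JWTs remove the per-request token check against the OAuth2 server, at the cost of slower token revocation.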
Further tools are recommended to support the service development. This applies, for example, to a source code management environment, which serves for versioning, coordination of changes and control of workflows within the development team. In our case, GitLab is used for this purpose, offering repositories, simple management of documents, issue tracking, as well as the integration of additional tools for communication and process support.
Furthermore, the usage of a continuous integration platform makes sense. The aim is the continuous merging of new developments: events in the repositories lead to build and subsequent test processes to identify possible errors at an early stage after integration. In our tested setup, we chose the continuous integration platform integrated into GitLab and deployed a pool of GitLab CI runners, which schedule the build and test jobs created by events from the repositories. The topic of continuous delivery is directly linked to the build and test phase: if the preceding process is successful, executable containers are created and published in a private registry. However, until now, we have not made use of the subsequent step of continuous deployment.
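Such a pipeline might be declared in a .gitlab-ci.yml roughly as follows; the stage names, the Maven-based build and the registry URL are assumptions for illustration, not the project's actual configuration.

```yaml
# Hypothetical .gitlab-ci.yml: build and test on every push; on master,
# package the service as a container and push it to a private registry.
stages: [build, test, release]

build:
  stage: build
  script: [mvn -B compile]

test:
  stage: test
  script: [mvn -B verify]

release:
  stage: release
  only: [master]
  script:
    - docker build -t registry.example.org/wdl/some-service:$CI_COMMIT_SHA .
    - docker push registry.example.org/wdl/some-service:$CI_COMMIT_SHA
```

Stopping the pipeline at the registry push matches the setup described above: delivery is automated, while deployment remains a manual step.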
Figure
visualises the outlined components used within initial sequences for
discover and configuration (1) as well as an API call for “SomeService”. The first sequence covers registration of this new service with the discovery service as
10 S. Apel et al.
Load Register and Heartbeat 1 Token Check 2.2.2 1.2 Client-side Load Balancing
Discovery Client
2.2.1 1.1 Discovery Server Eureka OAuth2 Server Load and Change SomeServiceFunction Model 2.2.3 OAuth2 Client Discover OAuth2 and Message Queue Instance 2.1 Discover SomeService InstanceSomeService
Database Server Broadcast 2.2.4 Call SomeService 2.2 Client-side Load Balancing 2 Gateway Server Change StateClient
Call SomeService Zuul Message Queue with Token Fig. 3.Example on communication flow in case of start up and calling a service API.
well as the initial discovery of the configuration service and the subsequent loading of the related configuration. The second sequence covers an exemplary API call. It starts with a request at the gateway. The gateway must then discover the actual service instance and forward the received request. The "SomeService" instance itself has to discover an endpoint to validate the token (especially if self-contained tokens such as JWTs are not used) as well as other required services, in this example a message service. The instance finally validates the token and handles its business logic "SomeServiceFunction", which might result in a database call. Finally, the instance broadcasts some information through the already discovered message queue. The resulting response is then transported back to the client.
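The gateway-side part of this sequence, discovering an instance and then forwarding the call together with the token, can be illustrated with a minimal, language-agnostic sketch in Python. The registry contents, addresses and service names are invented for illustration and do not correspond to our actual deployment:

```python
import random

# A toy in-memory registry, standing in for the Eureka discovery server.
registry = {
    "someservice": ["10.0.0.11:8080", "10.0.0.12:8080"],
    "oauth2":      ["10.0.0.21:9000"],
}

def discover(service_name):
    """Return the known instances of a service, as a discovery client would."""
    return registry.get(service_name, [])

def choose_instance(instances):
    """Client-side load balancing: pick one instance, here simply at random."""
    return random.choice(instances)

def call_someservice(token):
    """Sketch of the gateway flow: discover, balance, forward with the token."""
    instances = discover("someservice")
    if not instances:
        raise RuntimeError("no instance of someservice registered")
    instance = choose_instance(instances)
    # A real gateway would now forward the HTTP request to `instance`;
    # the service would then validate `token` against the OAuth2 endpoint.
    return f"forwarded to {instance} with token {token}"

result = call_someservice("abc123")
```

In the real setup, Zuul and the Eureka client perform these steps transparently; the sketch only makes the discover-then-balance order of the sequence explicit.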
Fig. 4. Visualization of the in-house infrastructure for operating the services (monitoring, staging and productive container engines hosting service, discovery, gateway, messaging, logging and authorization containers, plus support systems for source repository, issue tracking, continuous integration and cloud visualization).
Microservice Architecture Within In-House Infrastructures
11
Figure 4 combines the outlined components within a deployment plan as we would implement it. We have currently distributed these containers and our encapsulated microservices manually across the already mentioned and available IaaS infrastructure. This setup contains a staging and a production environment, especially to test new builds and to prevent side effects, e.g., registering untested service names within the namespace of the productive system. Some components are deployed only once because they do not need to be replicated.
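Such an environment could be described, for instance, with a Compose-style container definition. The following fragment is a hypothetical sketch; the image names, ports and the selection of services are illustrative only:

```yaml
# Staging environment; the productive environment differs only in
# image tags and published ports.
services:
  discovery:
    image: registry.example.org/discovery:latest
    ports: ["8761:8761"]
  gateway:
    image: registry.example.org/gateway:latest
    depends_on: [discovery]
    ports: ["8080:8080"]
  someservice:
    image: registry.example.org/someservice:latest
    depends_on: [discovery]   # registers itself with the discovery service
```

Keeping one such definition per environment makes the manual distribution reproducible until a continuous deployment step is introduced.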
5 Evaluation of Experiences

Our microservice architecture is partly realised as described in the preceding section. Within this evaluation, we want to focus on our experiences and the lessons we have learned when realising such an architecture setup. The outline covers development complexity, modifiability, testability, maintainability and scalability, as well as dependencies, development skills and related lessons learned. An overview of these experiences is given in Table 1 and presented in detail below.
The realisation currently contains 17 service containers: six integration services, e.g., for interfaces such as carsharing, smart meters, weather and energy prices, and eleven infrastructure services, e.g., a time-series database, a NoSQL database, messaging, discovery, gateway, cloud configuration and an Elasticsearch stack with Kibana for logging. More integration- and analysis-related services will be added in the future.
The first outlines are related to development complexity. Developing and deploying integration services as mentioned above helps to focus on small and fine-grained tasks, so the complexity per service task is lowered and the resulting services are quite easy to handle. However, running these integration services on a full microservice infrastructure, i.e., achieving "isolated state, distribution, elasticity, automated management and loose coupling", raises additional concerns. These concerns and their complexities are mainly related to configuration, discovery, gateway and logging services. Developers should be trained to gain knowledge about tools and best practices for loosely coupled services; otherwise, at least a provided tool stack which encapsulates these topics has to be used. Remarkably, these topics have to be considered within monoliths as well, but they have much more impact within a highly decoupled and distributed microservice architecture. The complexity also increases for small teams. While in theory teams specialise in the services they are given responsibility for, small teams work on the entire service composition and must therefore develop and maintain all services. As we have observed, this may cause developers to get lost in the many fine-grained service implementations. This should be taken into account and, if necessary, compensated by clear processes and issue management.
Table 1. Comparison of advantages and disadvantages of our experiences.

Complexity
  Advantages: focus on small and fine-grained tasks; complexity per service task is lowered.
  Disadvantages: isolation may require additional dependencies; few teams responsible for all services within a distributed architecture may cause higher complexity.
Modifiability
  Advantages: services focus on a bounded context; features change independently.
  Disadvantages: infrastructure and backbone decisions may have large impacts.
Testability
  Advantages: unit, acceptance and integration tests are application-specific and clean.
  Disadvantages: management of the isolated and separated networks; frequent switching of the execution environment.
Maintainability
  Advantages: significantly simplified for developers and operators, especially in case of clean DevOps.
  Disadvantages: requires extended tools to monitor and visualise communication flows for bug tracking.
Scalability
  Advantages: replicas can be efficiently initialised; works out of the box.
  Disadvantages: caching strategies may require additional handling; partitioning of data streams, their configuration and the distribution of analysis tasks among instances; handling analytical state distribution to new instances in case of failures.
Dependencies
  Advantages: each service can maintain its own dependency set.
  Disadvantages: additional infrastructure components are required; the dependency quantity can quickly become large.

Modifiability is our second topic to review. If services are modelled around business concepts, publish well-formed APIs and the development processes are highly automated, changes within services seem to be trivial and are mostly easy to do.
Thus, modifiability is quite good in the business use case. Apart from that, developers should be careful when making infrastructure and backbone decisions as well as modifications. Changing critical components, e.g., those which help to register and discover services, leads to modifications of all services. So, the system setup has to be considered carefully.
Testability is the third topic of our experience report. The amount of use cases and tasks a microservice is used for should be well-defined and delimited. Thus, testing can be done cleanly regarding white- and black-box tests, e.g., testing the model itself as well as the published APIs, primarily if they are described within unit tests. Additionally, we noticed challenges and understanding issues while working with microservices regarding communication networks, visibility and API accessibility. For example, a developer uses a dedicated environment while developing, boots the required infrastructure services as documented and starts working on the current feature. As suggested, each runtime test should be done by compiling the source, building the container and deploying it to the test environment. This building takes time, much more time than simply pressing run within the integrated development environment (IDE). As we noticed, some developers like to preserve the possibility to merely run and test applications, and move them into containerised environments only after they know they are running. We do not want to judge which development style is better; we want to note that developers should keep the isolated and separated networks in mind, and that they have to manage the changes required to connect different execution environments.
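The kind of delimited, application-specific black-box test this paragraph refers to can be sketched against a small, hypothetical service function; the handler, its parameters and the tariff are invented for this illustration:

```python
import unittest

def price_endpoint(params):
    """Hypothetical handler of a published API: returns a price quote."""
    if "kwh" not in params:
        return {"status": 400, "error": "missing parameter: kwh"}
    kwh = float(params["kwh"])
    return {"status": 200, "price": round(kwh * 0.30, 2)}  # flat tariff

class PriceEndpointTest(unittest.TestCase):
    """Black-box tests of the published contract, independent of the runtime environment."""

    def test_valid_request(self):
        response = price_endpoint({"kwh": "10"})
        self.assertEqual(response["status"], 200)
        self.assertAlmostEqual(response["price"], 3.0)

    def test_missing_parameter(self):
        self.assertEqual(price_endpoint({})["status"], 400)

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PriceEndpointTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because such tests only exercise the published contract, they run identically inside the IDE and inside the containerised test environment, which mitigates the environment-switching issue described above.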
The next outlines are related to maintainability. The maintenance of individual services is significantly simplified. Administrators can easily monitor the services through a wide variety of tools and event management systems; for example, errors can be tracked per individual service. Additional tools may be required to trace errors across a transaction or processing chain.
Scalability, our next topic, can be realised well, particularly via horizontal scaling. Developed services that are available as containers can be replicated, and the resulting replicas can be efficiently initialised if they can obtain the necessary configuration through a service. Thus, after merely starting a replica, everything should work out of the box. However, if scaling is used and the services work with caching strategies, precautions must be taken. For example, as long as a service instance is available and not overloaded, it may be useful to ensure that requests of the same context are processed by the same service instance when distributing the load; requests from the same user and related queries to third-party services could then be cached locally more efficiently. Alternatively, if a caching infrastructure is used, e.g., a Redis database, it is necessary to consider how to clean up the cache. Besides, especially in services for data stream analysis, challenges arise in scaling, for example, when the data stream is partitioned. In this case, individual service instances could take over a subset of the partitions, and one has to think about how the set of partitions to be handled is communicated to a single service instance, especially if instances of the same service share the same configuration. Further challenges arise when time windows are considered: it has to be decided whether gaps in the analysis are justifiable. If they are not, the intermediate results for the observation interval should be persisted, so that other instances can use them as a basis. Finally, message services have to be considered during scaling.
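The "same context to the same instance" idea mentioned above can be sketched with simple hash-based routing. This is a generic illustration, not the actual algorithm of any load balancer we use, and the instance names are invented:

```python
import hashlib

instances = ["someservice-1", "someservice-2", "someservice-3"]  # invented names

def route(user_id, instances):
    """Map all requests of one user to the same instance, so that data
    cached locally on that instance remains effective for this user."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return instances[int(digest, 16) % len(instances)]

# Every request of "alice" lands on the same replica ...
assert route("alice", instances) == route("alice", instances)
# ... while different users are still spread across the replicas.
targets = {route(user, instances) for user in ("alice", "bob", "carol", "dave")}
```

Note that this naive modulo scheme reassigns most users when the number of instances changes, which is precisely why consistent hashing is commonly used in practice and why the cache clean-up concern above remains relevant.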