D2.1.2 Data Acquisition Specification 2

1 Introduction

This deliverable is the second version of the GAMBAS data acquisition specification. Like the first version, it describes the adaptive data acquisition framework as part of the WP2 specification. The description covers the system architecture of the framework, including the component system for developing context recognition applications and the activation system for enabling automatic, state-based activation of different configurations. The document also provides insight into the design rationale for the system and details how specific objectives will be achieved, including the motivation behind the component-based approach for context recognition, the chosen component model, and energy-efficient techniques for performing context recognition on resource-constrained mobile devices. Furthermore, it explains the rationale behind the state machine abstraction for the activation system and how the energy-efficient techniques of the component system are utilized by the activation system. In the following, we first give details about the purpose, scope, updates, innovations and structure of the document before describing the framework in detail.

1.1 Purpose

This document describes the architectural design to meet the requirements and use cases defined in deliverables D1.1 [GCRS12] and D1.2 [GCUC12], respectively. Thereby, it represents a main deliverable of WP2, as it contains the internal specification of the adaptive data acquisition framework introduced in D1.3.1 [GCAD12] as well as of the data acquisition components that will make up the framework.

As the second of the two versions, this document also provides details on the integration of the framework with the other architectural components of the GAMBAS middleware. Like the first version, this document is primarily intended for the GAMBAS stakeholders as well as the GAMBAS research and development teams of the consortium. However, as a public document, this WP2 specification will also be freely available to other interested parties.

1.2 Scope

This deliverable is the second of two versions of the WP2 specification on the adaptive data acquisition framework. It extends the first deliverable, which was produced based on the project details defined in the requirements specification (D1.1), the use case specification (D1.2) and the high-level architectural overview of the GAMBAS middleware (D1.3.1). Therefore, in order to fully understand the rationale for the various design decisions made for the component system as well as the activation system described in this deliverable, readers are strongly encouraged to consult these documents as well.

1.3 Updates

This version of the deliverable extends the first version by providing details about the external interfaces of the Data Acquisition Framework (WP2) with other GAMBAS system components, namely the Privacy Preserving Framework (WP3) and the Semantic Data Storage (WP4). The descriptions of these external interfaces are accompanied by example applications that have been developed during the second year of the project. The details are provided in Section 4 of this document.

Moreover, this version of the deliverable also includes a subsection on innovations. This section (1.4) briefly outlines the novel research work carried out during the first year of the project.

1.4 Innovations

The GAMBAS data acquisition framework is responsible for providing a platform for the development of context recognition applications that acquire data from physical and virtual sensing sources, extract features from the acquired data and deduce meaningful context information from it. In GAMBAS, we employ a component-based abstraction for the development of these applications. The component-based abstraction not only provides a generic way of developing these applications and enhances the reusability of already implemented code, but also provides the means to analyze an application's structure so that energy efficiency techniques can be applied to it.

The two main innovations of the data acquisition framework in GAMBAS therefore are (1) the use of a component-based abstraction for achieving generic context recognition and (2) the use of this abstraction to achieve energy-efficient execution of context recognition applications.

1.4.1 Generic Context Recognition

The component-based abstraction models every context recognition functionality as an independent piece of code called a component. A component is an atomic building block that encapsulates specific recognition logic. It has input and output ports that allow it to communicate with other components, as well as parameterization support through which it can be configured according to developer-specific requirements. One key advantage of the component abstraction is that the same components can be used to recognize different types of context, thus enabling generic context recognition. Using the component-based abstraction, the GAMBAS data acquisition framework provides context recognition at two levels.

The lower level is the component system, which is used to develop the components required for performing context recognition. A set of components linked together to perform a specific context recognition task is called a configuration. Developers can create configurations either by creating new components or by reusing existing ones, linking them according to the requirements of their applications.

The higher level of context recognition provided by the data acquisition framework consists of the activation system. The activation system uses a state machine abstraction to model the set of activities a user may be involved in during the course of a day. These activities are represented by a set of states, and the change of a state is represented by a set of transitions. The states are realized as sets of configurations (using the same component abstraction described for the component system) and the transitions are realized as if-else rules.

The use of the component model as a fundamental abstraction in the two levels of the data acquisition framework helps in providing context recognition in a generic manner. Further details on the component system and the activation system are provided in Section 2 of this document.

1.4.2 Energy Efficient Recognition

As mentioned above, the GAMBAS data acquisition framework uses component-based and state machine-based abstractions for modeling GAMBAS applications. The component-based abstraction allows applications to be composed of independent pieces of code logic, glued together to perform specific context recognition tasks. The state machine abstraction additionally allows a single application to detect different context characteristics at different points in time.

In both of the above-mentioned cases, i.e., when multiple independent applications are executed simultaneously to detect different context characteristics, or when a single application using the state machine abstraction detects different context characteristics at different points in time, the energy-efficient execution of these applications poses a challenge.

In GAMBAS, a novel energy-efficient technique called configuration folding has been developed for the data acquisition framework; it will be integrated into the component system and the activation system during the course of the project. Details on this technique are provided in the subsequent sections of this document. In the following, we give a brief description of this innovation to convey the general idea.

The configuration folding technique removes redundancies between different context recognition applications running simultaneously by analyzing their structures and deriving a single configuration that is valid for all of the applications. The resulting configuration is then instantiated by the runtime system and used to perform context recognition. Experiments with sample applications have shown that up to 48% of energy can be saved when configuration folding is applied.
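To illustrate the basic idea (not the actual folding algorithm, which is described later), the following Java sketch merges two configurations by keeping a single instance of every component that both request with an identical type and parameterization. All class, component and parameter names are invented for this example.

```java
import java.util.*;

// Illustrative sketch of configuration folding: components with the same
// type and parameterization are shared between configurations instead of
// being instantiated twice.
public class FoldingSketch {

    // A component is identified here by its type and its parameter values.
    static final class Component {
        final String type;
        final Map<String, Object> params;
        Component(String type, Map<String, Object> params) {
            this.type = type;
            this.params = params;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Component)) return false;
            Component c = (Component) o;
            return type.equals(c.type) && params.equals(c.params);
        }
        @Override public int hashCode() { return Objects.hash(type, params); }
    }

    // Folding two configurations: the union keeps one instance of every
    // component that both configurations request identically.
    static Set<Component> fold(List<Component> a, List<Component> b) {
        Set<Component> folded = new LinkedHashSet<>(a);
        folded.addAll(b);
        return folded;
    }

    public static void main(String[] args) {
        // Two applications that share the same sampler and FFT front end.
        List<Component> speechApp = List.of(
            new Component("AudioSampler", Map.of("rate", 8000)),
            new Component("FFT", Map.of("size", 512)),
            new Component("SpeechClassifier", Map.of()));
        List<Component> musicApp = List.of(
            new Component("AudioSampler", Map.of("rate", 8000)),
            new Component("FFT", Map.of("size", 512)),
            new Component("MusicClassifier", Map.of()));

        // Six requested components fold into four: sampler and FFT are shared.
        System.out.println(fold(speechApp, musicApp).size()); // prints 4
    }
}
```

The energy saving comes from executing the shared sampling and preprocessing components only once for all applications.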

One limiting factor for configuration folding arises when different context recognition applications share redundancies that are parameterized differently, which requires the use of transformations in the final folded configuration. As described in the relevant sections, the use of transformations prohibits the further removal of redundancies between the applications, causing suboptimal energy savings. To deal with such cases, we are investigating possibilities for enhancing energy savings when transformations are used; a brief discussion is given in Section 2.3 of this document.

1.5 Structure

The structure of the remainder of this document is as follows. In Section 2, we give a high level description of the data acquisition framework followed by the design rationale and various building blocks of the component system and the activation system. In Section 3, we describe the context recognition components and intent recognition components to be developed using the component system. Besides providing a test case for the framework, these components are also required for creating the prototype applications of the GAMBAS project. In Section 4, we discuss the integration of the framework with other work packages such as WP3 and WP4. In Section 5, we perform a requirement coverage analysis for the requirements related to the data acquisition framework and thereafter, we conclude the document.

2 Data Acquisition Framework

The data acquisition framework (DQF) is one of the fundamental building blocks of the GAMBAS middleware. Conceptually, the DQF is responsible for context recognition on personal mobile devices such as smartphones, PDAs and laptops, and it supports various platforms including Android, Windows and Linux. The DQF is a multi-stage system: on the one hand, it allows developing reusable context recognition applications; on the other hand, it automatically enables the applications that are relevant at a particular time. Specifically, the DQF consists of a component system and an activation system, as shown in Figure 1.

Figure 1 – Data Acquisition Framework Overview

The component system uses a component abstraction to enable the composition of different context recognition stacks that are executed continuously. A context recognition stack, or simply a configuration, refers to a set of sampling, preprocessing and classification components wired together to detect a specific context. Examples of such contexts include the physical activity of a person, the location of a person, etc. These configurations can be used to detect context for a multitude of purposes and have applications in areas such as smart home environments, assisted living for the elderly, proactive route planning, budget shopping, etc.

The activation system uses a state machine abstraction to determine the point in time when a certain configuration or a set of configurations will be enabled. The activation system enables the required configurations automatically based on the conditions associated with the state transitions. An example of a simple (coarsely granular) state machine associated with an employee could consist of the two states “Working” and “Relaxing”. The state “Working” may consist of the configurations “Meeting”, “Cafeteria”, etc., and the state “Relaxing” may consist of the configurations “Living Room” and “Gardening”. Based on the transition values, the activation system will disable the configurations associated with one state and enable the ones associated with the other. The state machine can also have finer-grained states representing stages specific to a single task, e.g. a state can represent sampling an accelerometer at a lower or a higher rate. In such a case, a state change occurs when the device needs to switch between the two sampling rates.

2.1 Component System

In the DQF, the recognition of the user’s context and activity is done using a component-based approach. This approach promotes reusability and rapid prototyping. It also gives us the ability to analyze application structures in order to optimize their execution in an energy-efficient manner. In the component system, each application consists of two parts: the part containing the recognition logic and the part containing the application logic. The part that contains the recognition logic consists of sampling, preprocessing and classification components that are connected in a specific manner, as shown in Figure 2. The part that contains the remaining application logic can be structured arbitrarily. Upon startup, a context recognition application passes the required configuration to the component system, which then instantiates the components and executes the configuration. Upon closing, the configuration is removed by the component system, which eventually releases the components that are no longer required. The component system supports various platforms such as J2SE and Android. Using an Eclipse-based tool, application developers can visually create configurations by selecting and parameterizing components and by wiring them together.

Figure 2 – Component System Overview

2.1.1 Component Model

To structure the recognition logic, our component system realizes a lightweight component model which introduces three abstractions. First, components represent different operations at a developer-defined level of granularity. Second, connectors represent both the data and the control flow between individual components. Third, configurations define a particular composition of components that recognizes one or more context characteristics.

2.1.1.1 Components

Components are atomic and reusable building blocks that constitute the recognition logic. Similar to other systems such as J2EE or OSGi, components can be defined at arbitrary levels of granularity. Yet, to maximize reuse, they are typically defined at the granularity of individual operations, as the following example illustrates.

Figure 3 – Speech Detection Configuration Example

As depicted in Figure 3, the recognition logic of a speech recognition application consists of a number of components which can be divided into three levels. At the lowest level, the sampling components are used to gather raw data from an audio sensor. On top of sampling components, a set of preprocessing components takes care of various transformations, noise removal and feature extraction. Finally, the extracted features are fed into (a hierarchy of) classifier components that detect the desired characteristics. Depending on the purpose and extent of the application logic, it is usually possible to further subdivide the layers into smaller operators. Although our component system does not enforce a particular granularity, such operators should usually be implemented as individual components to maximize component reuse.

2.1.1.1.1 Parameters

Parameterization increases the reusability of a component implementation across different applications. The component system allows components to support a developer-defined set of parameters, which components expose to adapt their internal behavior. As shown in Figure 3, at the sampling layer these parameters might be used to express different sampling rates, sampling depths, frame sizes and duty cycles. At the preprocessing layer, they might be used to configure different filters or the precision of a transformation. In our component system, these parameters are not exposed to other components; instead, they can be accessed and manipulated by the component system itself.
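A parameterizable component might look as follows. This is a hypothetical sketch: the parameter names (`rate`, `frameSize`) and the `setParameter` method are invented for illustration and are not part of the actual component system API.

```java
import java.util.*;

// Sketch of a parameterizable sampling component. Parameters are exposed
// to the component system only, not to other components.
public class ParamSketch {
    static class SamplerComponent {
        private final Map<String, Integer> params = new HashMap<>();
        SamplerComponent() {              // developer-defined defaults
            params.put("rate", 8000);     // sampling rate in Hz
            params.put("frameSize", 256); // samples per frame
        }
        // Called by the component system to adapt the internal behavior.
        void setParameter(String name, int value) {
            if (!params.containsKey(name))
                throw new IllegalArgumentException("unknown parameter: " + name);
            params.put(name, value);
        }
        int getParameter(String name) { return params.get(name); }
    }

    public static void main(String[] args) {
        SamplerComponent sampler = new SamplerComponent();
        sampler.setParameter("rate", 16000); // reparameterize for higher fidelity
        System.out.println(sampler.getParameter("rate")); // prints 16000
    }
}
```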

2.1.1.1.2 Ports

In order to support application-independent composition, each component may declare a number of strongly typed input and output ports. Input ports are used to access results from other components; output ports are used to transfer computed results to other components. Ports thus enable components to interact with each other in a controlled manner. The developer can add multiple input and output ports of different types without worrying about their ordering or their memory allocation and de-allocation. The internal buffer management for the ports is transparent to the developer and handled by the component system itself.
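The notion of strongly typed ports with system-managed buffers can be sketched as follows; the class and method names are illustrative assumptions, not the real API. The generic type parameter gives the compile-time type safety, and the buffer behind an input port is hidden from the component developer.

```java
import java.util.*;

// Sketch of strongly typed input and output ports with transparent buffering.
public class PortSketch {
    // An input port buffers delivered values until the component reads them.
    static class InputPort<T> {
        private final Deque<T> buffer = new ArrayDeque<>();
        void deliver(T value) { buffer.addLast(value); } // called by the system
        T read() { return buffer.pollFirst(); }          // called by the component
    }
    // An output port pushes values to all connected input ports.
    static class OutputPort<T> {
        private final List<InputPort<T>> targets = new ArrayList<>();
        void connect(InputPort<T> in) { targets.add(in); } // wired by the system
        void write(T value) { for (InputPort<T> t : targets) t.deliver(value); }
    }

    public static void main(String[] args) {
        OutputPort<double[]> fftOut = new OutputPort<>();
        InputPort<double[]> entropyIn = new InputPort<>();
        fftOut.connect(entropyIn);                   // type-checked at compile time
        fftOut.write(new double[] {0.1, 0.9});       // one frame of FFT output
        System.out.println(entropyIn.read().length); // prints 2
    }
}
```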

2.1.1.2 Connectors

In order to be reusable, components are isolated from each other by means of ports. However, the recognition of a context feature often requires the combination of multiple components in a specific way. Connectors express such combinations by determining how the typed input and output ports of different components are connected with each other. In order to minimize the overhead of the component abstraction, connectors are implemented using an observer pattern [GAMM1] in which the output ports act as subjects and the input ports act as observers. This enables the modeling of 1:n relationships between components, which is required to avoid duplicate computations. To avoid strong coupling between components, input ports do not register themselves at the output ports; instead, the component system takes care of managing all required connections. An example of connectors can be seen in Figure 3, where the output port of the fast Fourier transform component is connected to the input ports of the bandwidth, spectral roll-off and spectral entropy components.
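A minimal sketch of this observer-based wiring, with invented names: the component system (not the components) keeps the subscriptions, and one published FFT frame fans out to several feature components without being recomputed.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of observer-based connectors: output ports act as subjects,
// input ports as observers, and the system manages all registrations.
public class ConnectorSketch {
    static class Connector {
        // The component system keeps the subscriptions centrally, so that
        // components stay decoupled from each other.
        private final Map<String, List<Consumer<double[]>>> wiring = new HashMap<>();
        void connect(String outputPort, Consumer<double[]> inputPort) {
            wiring.computeIfAbsent(outputPort, k -> new ArrayList<>()).add(inputPort);
        }
        void publish(String outputPort, double[] frame) {
            // 1:n fan-out: one computed frame reaches every observer.
            for (Consumer<double[]> in : wiring.getOrDefault(outputPort, List.of()))
                in.accept(frame);
        }
    }

    public static void main(String[] args) {
        Connector system = new Connector();
        List<String> received = new ArrayList<>();
        // Three feature components observe the single FFT output port.
        system.connect("fft.out", f -> received.add("bandwidth"));
        system.connect("fft.out", f -> received.add("rolloff"));
        system.connect("fft.out", f -> received.add("entropy"));
        system.publish("fft.out", new double[] {0.2, 0.5, 0.3}); // one FFT frame
        System.out.println(received); // prints [bandwidth, rolloff, entropy]
    }
}
```

The design choice mirrors the text: because the 1:n relationship is handled by the connector, the fast Fourier transform runs once per frame no matter how many feature extractors consume its output.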

2.1.1.3 Configurations

Each context recognition application must explicitly list all required components together with their connectors in a configuration. While this approach slightly increases the development effort, it also increases the potential reuse of components that can be applied to data coming from different sources. As an example of such a component, consider a fast Fourier transform (FFT) that converts a signal from the time domain into the frequency domain. Clearly, such a component can be applied to various types of signals such as acceleration measurements or audio signals. Thus, by explicitly modeling the wiring of components as part of a configuration, it is possible to reuse such a component in different application contexts. In addition to listing components together with their connectors, the support for parameterizable components also requires the developer to explicitly specify the complete set of parameter values that shall be used by each component. As a result, every configuration consists of a parameterization as well as the associated connectors. An example of a speech recognition configuration is shown in Figure 3.
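The explicit nature of a configuration can be sketched as a plain data structure; the component identifiers, types, port names and parameters below are invented to loosely mirror the speech example, not taken from the actual toolkit.

```java
import java.util.*;

// Sketch of an explicit configuration: a complete list of components with
// their parameterizations, plus the connectors that wire their ports.
public class ConfigSketch {
    record ComponentDecl(String id, String type, Map<String, Object> params) {}
    record ConnectorDecl(String fromPort, String toPort) {}

    // The components with their full parameterization.
    static final List<ComponentDecl> components = List.of(
        new ComponentDecl("mic", "AudioSampler", Map.of("rate", 8000)),
        new ComponentDecl("fft", "FFT", Map.of("size", 512)),
        new ComponentDecl("speech", "SpeechClassifier", Map.of()));

    // The connectors that define the data flow between the ports.
    static final List<ConnectorDecl> connectors = List.of(
        new ConnectorDecl("mic.out", "fft.in"),
        new ConnectorDecl("fft.out", "speech.in"));

    public static void main(String[] args) {
        // A configuration is just this explicit wiring plan; the runtime
        // system instantiates and executes it.
        System.out.println(components.size() + " components, "
            + connectors.size() + " connectors");
    }
}
```

Because the FFT component appears here only by type and parameters, the same implementation could equally be declared in an accelerometer configuration, which is exactly the reuse argument made above.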

2.1.2 Runtime System

The main task of the runtime system of the component system is to perform context recognition in an energy-efficient manner. This includes loading the configurations specified by the context recognition applications, instantiating the components with the right parameterizations and connecting them in the manner specified by the application. In addition, the runtime system applies energy efficiency optimizations such as configuration folding to the set of executed configurations.

2.1.2.1 System Structure

As shown in Figure 4, the main elements of the runtime system of the component system are the configuration store, the configuration folding algorithm [IQBA12] and the applications. The configuration store is used to cache the configurations associated with applications that are active. It is also used to store their folded configuration. The configuration folding algorithm provides energy-efficient execution of context recognition applications, provided that more than one application is executed simultaneously. The entity responsible for managing the runtime system is called the component manager. The component manager will be implemented as an Android service (recognition service) and must be installed on the device separately. When application developers create an application, they must provide code to bind the application to this recognition service; when the application is finally deployed on the device and executed, the recognition service is activated, instantiates the configuration associated with the application, and executes it.

Figure 4 – Component System Structure

2.1.2.2 Configuration Execution

As specified in the previous section, the entity in the component system responsible for the execution of configurations is the component manager. The component manager controls the execution of the componentized recognition logic of all running applications. To manipulate the components executed at any point in time, the component manager provides an API that enables developers to add and remove configurations at runtime. When a new configuration is added, the component manager first stores the configuration internally. Then it initiates a reconfiguration of the running recognition logic that reflects the modified set of required configurations. To reduce the resulting energy consumption, this reconfiguration applies configuration folding to the modified set of configurations.
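The add/remove behavior described above can be sketched as follows; the method names and the reconfiguration logic are simplified assumptions rather than the real component manager API. Each change to the set of active configurations triggers a recomputation of the components that actually need to run.

```java
import java.util.*;

// Sketch of a component manager that adds and removes configurations at
// runtime and reconfigures the running recognition logic after each change.
public class ManagerSketch {
    static class ComponentManager {
        private final Map<String, List<String>> active = new LinkedHashMap<>();
        private Set<String> running = new LinkedHashSet<>();

        synchronized void addConfiguration(String id, List<String> components) {
            active.put(id, components); // store internally, then reconfigure
            reconfigure();
        }
        synchronized void removeConfiguration(String id) {
            active.remove(id);
            reconfigure();
        }
        // Recompute the set of components needed by the remaining
        // configurations; shared components are instantiated only once.
        private void reconfigure() {
            Set<String> needed = new LinkedHashSet<>();
            for (List<String> c : active.values()) needed.addAll(c);
            running = needed;
        }
        int runningComponents() { return running.size(); }
    }

    public static void main(String[] args) {
        ComponentManager mgr = new ComponentManager();
        mgr.addConfiguration("speech", List.of("mic", "fft", "speechClf"));
        mgr.addConfiguration("music", List.of("mic", "fft", "musicClf"));
        System.out.println(mgr.runningComponents()); // prints 4 (mic, fft shared)
        mgr.removeConfiguration("speech");
        System.out.println(mgr.runningComponents()); // prints 3
    }
}
```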

2.1.2.3 Platform Support

The core abstractions of the component system as well as the component manager are implemented in Java 1.5. In order to support multiple platforms, different wrappers have been implemented that simplify the usage of the component system on platforms including Windows, Linux and Android.

2.1.2.4 Tool Support

In addition to the platform support, the component system provides offline tools to support rapid prototyping. These tools include a visual editor, which is used for creating and updating configurations for context recognition applications. The visual editor provides a user-friendly interface that allows developers to drag, drop, parameterize and wire existing components to create new configurations or update existing ones. The visual editor is implemented as a plug-in for the Eclipse IDE (Version 3.7 and above).

Figure 5 – Component System Tool Support

In addition to the visual editor, the component system provides a large set of sampling, preprocessing and classification components as part of the component toolkit. At the sampling level, the toolkit provides components that access the sensors available on most personal mobile devices. This includes physical sensors such as accelerometers, microphones, magnetometers and GPS, as well as Wi-Fi and Bluetooth scanning. In addition, we provide access to virtual sensors, for instance, personal calendars. For preprocessing, the toolkit contains various components for signal processing and statistical analysis. These range from simple components that compute averages, percentiles, variances, entropies, etc. over data frames to more complex components such as finite impulse response filters, fast Fourier transforms and gates. Furthermore, the toolkit contains a number of specialized feature extraction components that compute features for different types of sensors, such as the spectral roll-off, spectral entropy and zero-crossing rate used in audio recognition applications [LU09], or Wi-Fi fingerprints that can be used for indoor localization.

At the classification layer, the toolkit contains a number of trained classifiers that we created as part of the audio and motion recognition applications. Furthermore, there are a number of platform-specific components that forward context to an application, which enables the development of platform-independent classifiers. On Android, for example, a developer can attach the output of a classifier to a broadcast component that sends results to interested applications using broadcast intents.

We have also developed a number of components that are useful for application development and performance evaluation. These include components that record raw data streams coming from sensors as well as pseudo sensors that generate readings from pre-recorded data streams. Together, these components can greatly simplify the application development process on mobile devices, as they enable the emulation of sensors that might not be available on a development machine. The tool support for the component system is depicted in Figure 5.

2.2 Activation System

To fully understand the context of an entity, usually more than one context characteristic is required. As an example, to know whether a person is working in his office, context characteristics such as his location, his pattern of movement, the types of meetings and the classification of ambient sounds are required. As described earlier, such context characteristics can be detected using the component system by developing configurations with the appropriate components, parameterizations and connections. Furthermore, in order to fully identify a particular context, more than one configuration may be needed at a particular time. In real life, however, the context of an entity does not remain static, and over time it requires the detection of different or new context characteristics.

Figure 6 – Activation System Overview

Moreover, the context of an entity depends on the task that the entity is involved in; in other words, to know the context of an entity, it is essential to know that task. Furthermore, these tasks usually follow a fixed pattern. The tasks of a working person, for example, typically consist of waking up in the morning, dressing according to the weather, traveling to the workplace, sitting in the office, holding meetings and discussions, going for lunch and coffee breaks, working on a computer, going shopping, going home, relaxing, having dinner, sleeping, etc. The example shows that the routine of an average working person is, at least partially, quite predictable. The individual tasks in this example can be broken down further into smaller tasks, e.g. when a person travels to his workplace, he either walks, drives his own car or takes public transport.

Given the presence of such regular patterns of reoccurring tasks, the goal of the activation system is to exploit the knowledge about their existence in order to minimize the amount of sampling and processing that is needed to detect the user’s context. To do this, the activation system enables the developer to model individual tasks as a set of states that occur sequentially. For each of the states, the developer may specify a set of configurations that describe the context that shall be recognized. In addition, the developer specifies a set of transitions between the states that define possible sequences. Using this model, the activation system as shown in Figure 6 takes care of executing the right configurations at the right time. In the following, we describe this basic idea in more detail.
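The basic activation idea can be sketched as follows. The state and configuration names follow the employee example from earlier, while the class structure is an invented simplification: each state carries the set of configurations to run, and a transition swaps the enabled set.

```java
import java.util.*;

// Sketch of the activation idea: states carry configurations, and a
// transition disables the old state's configurations and enables the new ones.
public class ActivationSketch {
    static class State {
        final String name;
        final List<String> configurations;
        State(String name, List<String> configurations) {
            this.name = name;
            this.configurations = configurations;
        }
    }

    static class ActivationSystem {
        State current;
        final Set<String> enabled = new LinkedHashSet<>();
        ActivationSystem(State initial) {
            current = initial;
            enabled.addAll(initial.configurations);
        }
        void transition(State next) {
            enabled.removeAll(current.configurations); // disable old configurations
            enabled.addAll(next.configurations);       // enable new configurations
            current = next;
        }
    }

    public static void main(String[] args) {
        State working = new State("Working", List.of("Meeting", "Cafeteria"));
        State relaxing = new State("Relaxing", List.of("LivingRoom", "Gardening"));
        ActivationSystem sys = new ActivationSystem(working);
        sys.transition(relaxing); // e.g. the user left the office
        System.out.println(sys.enabled); // prints [LivingRoom, Gardening]
    }
}
```

Only the configurations of the current state are executed, which is what limits sampling and processing to what the predicted task actually needs.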

2.2.1 Activation Model

In the GAMBAS DQF, the modeling of the routines of a user's tasks is handled by the activation system, which uses a state machine abstraction for this purpose. Specifically, the activation system enables the automatic, state-based activation of the different configurations associated with developer-defined tasks. Hence, in our activation system, the entity's context is modeled as a state with different configurations associated with it, irrespective of its granularity. The transitions between states are modeled using a rule-based approach: e.g., if for a state “Working” the associated configurations produce negative results (e.g. the user is no longer present in his/her office) or results below a certain threshold, the activation system uses this to trigger an associated state transition.

2.2.1.1 States

A state refers to a particular decision point during the execution of a larger task. It entails a set of configurations that individually detect different context characteristics but collectively identify one of the possible decisions taken by the user.

Thereby, states may be used at different levels of granularity. An example of a coarse-grained state is shown in Figure 7(a). In this example, a “Working” state may encompass configurations that detect whether the person is in a meeting, working in his office or having lunch at the canteen. An example of a fine-grained use of states is shown in Figure 7(b). Here, the state “Fast Sampling” may be used in conjunction with a “Slow Sampling” state in order to control the precision of a certain set of configurations, such as a movement detector and a sound classifier.

States are created by using the configurations saved in the activation system or by creating new ones from scratch, and by labeling them with a particular state name. After the states have been created, the transitions between them are defined as described in the following section.

Figure 7 – Examples of Activation System States

2.2.1.2 Transitions

Transitions are defined by conditional changes in the configurations associated with a state. When the changes indicate that a certain condition holds, the activation system disables the current state and its associated configurations and enables those associated with the new state. This is done by modeling the transitions using a rule-based approach: each transition is represented by an abstract syntax tree in which conditions or thresholds for each configuration are evaluated. Depending on the evaluation of the abstract syntax tree, the activation system decides whether a state change has occurred.

Figure 8 – Example for Rule‐based Transitions between States

Figure 8(a) shows two example states. State 1 has two configurations, Configuration A and Configuration B. State 2 also has two configurations, Configuration C and Configuration D. The transition from State 1 to State 2 is labeled Transition 1→2 and the transition from State 2 to State 1 is labeled Transition 2→1. The abstract syntax trees for Transition 1→2 and Transition 2→1 are shown in Figure 8(b) and Figure 8(c), respectively. Assuming that State 1 is currently the active state, the activation system continuously evaluates the abstract syntax tree for Transition 1→2, and when the outcome of the tree, here represented by an AND operator, is true, it disables Configuration A and Configuration B and enables Configuration C and Configuration D. Similarly, when State 2 is the current state, the activation system evaluates the abstract syntax tree associated with Transition 2→1.

2.2.2 Runtime System

The main task of the runtime system is to load the state machine pertaining to a user and, as indicated by the application logic, to instantiate the configurations associated with its states, identify the current state, instantiate the rules for the different transitions and evaluate the abstract syntax trees associated with them. Furthermore, the activation system executes the state machines in an energy-efficient manner by applying configuration folding among all configurations across all the different states. The outcome is a single folded configuration. Clearly, it is possible that in such a folded configuration different configurations share the same graph structure, at least up to a certain level. Therefore, the activation system provides appropriate logic for evaluating the transitions between the states. Further details on how this can be achieved are described in the following sections.
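To illustrate why folding all configurations of a state machine into a single folded configuration saves resources, the following sketch merges configurations, represented here as simple chains of component names from the sampler upwards, into a shared tree so that common components are instantiated only once. The chain representation and all names are simplifying assumptions for this sketch; the actual folding algorithm operates on full component graphs with parameterizations as described in [IQBA12].

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of configuration folding: configurations that share a
// common prefix of components are merged into one tree, so the shared
// components (e.g. a sampler and its preprocessing) run only once.
public class FoldingSketch {

    static class Node {
        final String component;
        final Map<String, Node> children = new LinkedHashMap<String, Node>();
        Node(String component) { this.component = component; }
    }

    // Fold a set of component chains into a single shared tree.
    static Node fold(List<List<String>> configurations) {
        Node root = new Node("root");
        for (List<String> chain : configurations) {
            Node current = root;
            for (String component : chain) {
                Node child = current.children.get(component);
                if (child == null) {
                    child = new Node(component);
                    current.children.put(component, child);
                }
                current = child;
            }
        }
        return root;
    }

    // Count the components actually instantiated in the folded tree.
    static int size(Node node) {
        int count = 0;
        for (Node child : node.children.values()) count += 1 + size(child);
        return count;
    }

    public static void main(String[] args) {
        // Two hypothetical configurations sharing a sampler and a preprocessor.
        List<String> walkDetector = Arrays.asList("accelerometer", "fft", "walkClassifier");
        List<String> runDetector  = Arrays.asList("accelerometer", "fft", "runClassifier");
        Node folded = fold(Arrays.asList(walkDetector, runDetector));
        // Six components unfolded, four after folding the shared prefix.
        System.out.println(size(folded)); // prints 4
    }
}
```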

2.2.2.1 System Structure

The main structural elements of the activation system are shown in Figure 9. These are the state machine store, the configuration folding algorithm, the rule engine and the state machine manager. The state machine store is used to cache the state machines associated with the applications. The configuration folding algorithm is used to derive an energy-efficient configuration for the entire state machine. To do this, the activation system applies configuration folding to the configurations of the currently executed state machines. The transitions between the states are modeled as if-else rules and are managed by the rule engine. Once the folded configuration of the state machine and the if-else rules for the state transitions are available, the state machine manager attaches the rules to the folded configuration, instantiates it and executes it. Similar to the component system, when the application logic indicates that no further context information is needed, the activation system stops executing the state machine.

Figure 9 – Activation System Structure

2.2.2.2 Configuration Mapping

In this section we describe how different configurations related to different states are folded and how the rule engine applies rules representing transitions between the states. To understand the mappings consider an example of a state machine with two states as shown in Figure 10(a). Each state has two configurations attached to it. When the activation system loads the state machine it applies the configuration folding algorithm on all configurations associated with both states and the result is shown in Figure 10(b).

Figure 10 – Example Mapping of State Machines to Components

Let’s assume the following rules for the two transitions:

- Transition 12: IF result of Config. A OR result of Config. B EQUALS FALSE then State 2
- Transition 21: IF result of Config. C OR result of Config. D EQUALS FALSE then State 1

Their abstract syntax trees and their integration with folding are shown in Figure 11.

Figure 11 – Example for Active Graph Structures in Folded Configuration

Let’s assume that the current state of the state machine is State 1. In this case, the configurations to be evaluated are Configuration A and Configuration B. The entire state machine has already been folded. As a result, the required graph structures for Configuration A and B are present in two different graphs. Moreover, these graph structures are also shared with configurations from other states. Therefore, in order to evaluate only the relevant configurations, the activation system enables the graph structures needed to compute Configuration A and Configuration B, as shown in Figure 11(a), and disables the rest. When the rule for Transition 12 evaluates to true, the activation system deactivates the graph structures for Configuration A and B, activates the ones for Configuration C and D, and starts evaluating the rule for Transition 21.
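The bookkeeping behind this switching can be sketched as a minimal state machine that tracks which configurations belong to the currently active state and are therefore enabled in the folded graph. All names below are hypothetical illustrations, not the actual runtime interfaces.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of state-dependent activation: only the graph structures belonging
// to the active state's configurations are enabled for evaluation.
public class ActivationSketch {

    final Map<String, Set<String>> stateConfigs = new HashMap<String, Set<String>>();
    String activeState;

    void addState(String state, String... configs) {
        stateConfigs.put(state, new HashSet<String>(Arrays.asList(configs)));
    }

    // The configurations whose graph structures are currently enabled.
    Set<String> enabledConfigurations() {
        return stateConfigs.get(activeState);
    }

    // Called when a transition rule fires: the old state's structures are
    // implicitly disabled, the new state's structures enabled.
    void transitionTo(String state) {
        activeState = state;
    }

    public static void main(String[] args) {
        ActivationSketch machine = new ActivationSketch();
        machine.addState("State 1", "A", "B");
        machine.addState("State 2", "C", "D");
        machine.activeState = "State 1";
        System.out.println(machine.enabledConfigurations()); // A and B only
        machine.transitionTo("State 2");                     // rule for Transition 12 fired
        System.out.println(machine.enabledConfigurations()); // C and D only
    }
}
```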

2.2.2.3 Platform Support

The core abstractions of the activation system will be implemented in Java 1.5. In order to support multiple platforms, different wrappers will be implemented that will simplify the usage of the activation system on platforms including Windows, Linux and Android.

2.2.2.4 Tool Support

In addition to the platform support, the activation system will provide offline tools to support rapid prototyping. These tools will include a visual editor for creating and updating configurations for context recognition applications. The visual editor will provide a user-friendly interface that allows developers to drag, drop, parameterize and wire existing configurations to create new state machines or update existing ones. The visual editor will be implemented as a plug-in for the Eclipse IDE.

Figure 12 – Activation System Tool Support

In addition to the visual editor, the activation system will provide a set of configurations as part of the configuration toolkit for detecting different contexts such as location, speech, motion, etc. With the availability of such a toolkit, the developer will not have to create configurations from scratch, train classifiers for them and test them, thus saving a lot of development effort. The tool support for the activation system is depicted in Figure 12.

2.3 Optimized Configuration Folding

The energy savings achieved by the configuration folding algorithm for the component system and the activation system described earlier are limited when the components to be folded have different parameterizations. The difference in parameterization requires the use of transformation components as described in [IQBA12]. Figure 13 highlights this limitation: the two configurations have the same component A, but these components differ in their parameterization. This difference is indicated as A and A’. The resulting folded configuration will contain a transformation component to perform the folding for A’, but this prevents further folding of the other components, as shown in Figure 13.

Figure 13 – Folded Configuration with Transformation Component

As can be seen in Figure 13, further folding of components would be beneficial, as both configurations have the same components at the higher levels. This can only be achieved if the transformation introduced for component A’ can be applied later, in other words, at a higher level of the folded configuration, as shown in Figure 14.

Figure 14 – Folded Configuration with Delayed Transformation

In order to delay the use of transformation components, it is necessary to identify the characteristics of the components at the levels above the component for which the transformations are used. As an example, in order to achieve the folded configuration shown in Figure 14, it would be necessary that components B and C have a certain characteristic which ensures that delaying the use of the transformation component does not change the results of the configurations.
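Intuitively, the required characteristic is that the components above A’ commute with the transformation, so that applying the transformation after them yields the same result as applying it before. As a hedged illustration only (this pair is not taken from the GAMBAS component set), assume the transformation is a linear scaling and the intermediate component computes a mean:

```java
// Sketch of the characteristic that permits delayed transformation: if a
// component B commutes with a transformation T, i.e. B(T(x)) == T(B(x)),
// the transformation component can be moved above B in the folded graph.
// Mean and linear scaling are an illustrative, assumed pair of components.
public class DelayedTransformation {

    // Hypothetical intermediate component B: arithmetic mean of the samples.
    static double mean(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += s;
        return sum / samples.length;
    }

    // Hypothetical transformation component: linear scaling by a factor.
    static double[] scale(double[] samples, double factor) {
        double[] out = new double[samples.length];
        for (int i = 0; i < samples.length; i++) out[i] = samples[i] * factor;
        return out;
    }

    public static void main(String[] args) {
        double[] samples = {1.0, 2.0, 3.0, 4.0};
        double factor = 2.5;
        // Transform before the mean (A' branch) vs. delay it above the mean.
        double early = mean(scale(samples, factor));
        double late  = mean(samples) * factor;
        System.out.println(early == late); // prints "true": scaling commutes with mean
    }
}
```

A component that does not commute with the transformation (for instance, a thresholding classifier) would not permit this delay, which is exactly the characteristic the folding algorithm would have to check.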

3 Context Recognition Components

The context recognition components are the basic building blocks of a context recognition application. The component toolkit provided with the component system consists of a large number of sampling, preprocessing and classification components. These components can be used to create new applications. Moreover, with the toolkit support, developers can implement their own components with little effort. For the scope of this project and the test applications to be developed, we mostly focus on physical activity recognition, location recognition and audio recognition components, as shown in Figure 15. During the second iteration of the implementation, location prediction and duration components will also be developed. These components will be able to predict the next location of a user as well as the intended duration of stay at a particular location. With the help of such prediction components, applications built on top of the GAMBAS middleware will be able to provide users with timely and relevant services. Moreover, service providers will be able to better plan their businesses and service models.

Figure 15 – Context Recognition Components Overview

3.1 Activity Recognition Components

Based on the scenarios defined in [GCUC12], the activity recognition components will mostly recognize travel-related information. Different aspects of this information are relevant for different kinds of users: the travel-related information of a single user differs from that of a group of users, and the travel-related information of a group of users differs from that required by transport regulating authorities and transport providers. Using the relevant information, single users or groups of users can plan their trips efficiently and transport providers can provide better services. An example of travel-related information for a single user or a group of users is the best possible route to reach their destination. The notion of best depends upon the user’s preferences: it can mean the fastest route or, for instance, a route along which users can do some shopping or meet a friend. For transport providers, a user’s travel-related information (his route and preferences) can help them add buses to a route or remove buses from it, display relevant advertisements on screens installed in buses or offer different kinds of incentives when the user is on board a bus, for instance.

In order to acquire such information, different context recognition components are required. These include activity recognition components, audio recognition components and location recognition components. With the activity recognition components, the system can identify that the user has started moving or is performing an activity that may trigger the system to compute routes to a pre-stored destination for him. Examples of such an activity are climbing down the stairs or closing the door of his office or house. The activity recognition components may also be used to detect whether the user is standing or sitting in the bus. This may help transport providers to alert other users about the crowdedness of the buses. The location recognition component may tell where the user is, and based on this information, service providers can offer different incentives to him. An example of such an incentive could be that, if the transport provider knows that the user is in Bus 933 near the University of Duisburg-Essen, it may alert the user to visit a newly opened café in the vicinity. The audio recognition components may sample and record ambient sounds to alert the user of any important announcement being made in the bus, or they can be used to find out the level of noise pollution in the environment.

3.1.1 Location Recognition

In order to perform location recognition, our components will rely on radio signal-based and GPS techniques as opposed to image processing or any other kind of techniques. As mentioned earlier, the context recognition for the user will be done using the user’s smart mobile phone; therefore, we will be relying solely on the phone’s on-board sensors to gather the information on these radio signals. Specifically, we will be using information from the GPS, GSM and Wi-Fi sensors of the phone. Each of them has its own advantages and limitations, but their collective use can provide efficient and accurate location recognition. With the widespread use of Wi-Fi, a user can typically see multiple Wi-Fi access points in the surroundings. Due to the limited range and signal strength of a typical Wi-Fi signal, a user sees a different set of access points as he moves from one location to another. Thus, capturing this information alone can already provide a fair idea of the user’s location. However, in places where Wi-Fi signals are not available or are very weak, e.g. outdoor locations such as shopping markets, playgrounds, etc., GSM signals can be used. Typically, a mobile phone can report up to 6 neighboring cell towers. Though the range of a GSM cell tower is usually large, so that different locations may receive the same cell information, GSM together with Wi-Fi can provide accurate location information. Lastly, we will also make use of GPS signals in outdoor locations where Wi-Fi and GSM signals are not present or are very weak. Since each of these technologies has different energy requirements, energy-efficient techniques will be developed to perform the location recognition.

A typical approach to recognizing a user’s location is to gather fingerprints of the locations. This means that, in the testing or development phase, the user takes the phone to different places and the phone performs Wi-Fi, GSM and GPS scans depending on the user’s preferences. At a particular location, these scans return a list of Wi-Fi access points with their signal strengths and a list of neighboring GSM cells with their signal strengths. This set of signal values acts as a unique signature for that location. The user then marks that location with a label. Once all the places of interest are marked by the user, the system uses them to identify the user’s location in real time.
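The matching step of this fingerprinting approach can be sketched as follows: each labeled place stores the set of beacon identifiers observed there, and at runtime the label whose fingerprint overlaps most with the current scan is reported. This sketch deliberately ignores signal strengths and uses a simple Jaccard similarity over identifier sets; all names are illustrative assumptions, not the GAMBAS implementation.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of fingerprint-based location recognition over sets of observed
// Wi-Fi/GSM identifiers. A real system would also weigh signal strengths.
public class FingerprintSketch {

    final Map<String, Set<String>> fingerprints = new HashMap<String, Set<String>>();

    // Store the fingerprint gathered at a labeled place during training.
    void label(String place, Set<String> scan) {
        fingerprints.put(place, scan);
    }

    // Jaccard similarity between two sets of beacon identifiers.
    static double similarity(Set<String> a, Set<String> b) {
        Set<String> intersection = new HashSet<String>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<String>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    // Return the label whose fingerprint best matches the current scan.
    String locate(Set<String> scan) {
        String best = null;
        double bestScore = -1.0;
        for (Map.Entry<String, Set<String>> entry : fingerprints.entrySet()) {
            double score = similarity(scan, entry.getValue());
            if (score > bestScore) {
                bestScore = score;
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        FingerprintSketch recognizer = new FingerprintSketch();
        recognizer.label("office", new HashSet<String>(Arrays.asList("ap1", "ap2", "cell7")));
        recognizer.label("canteen", new HashSet<String>(Arrays.asList("ap5", "ap6", "cell9")));
        Set<String> scan = new HashSet<String>(Arrays.asList("ap1", "ap2", "cell8"));
        System.out.println(recognizer.locate(scan)); // prints "office"
    }
}
```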

3.1.2 Trip Recognition