5.1 Assumptions and methodology

o Max. time to first byte received (sec.)
o Highest observed request load average
o Server bandwidth usage (MB/sec.)

- Concurrent users for the test – For each test, users were added incrementally to avoid an initial overloading condition on the VM. Each user was added to the test within a period corresponding to 10% of the duration of the test; i.e., for the 1,000 concurrent users test, one user was added every 0.06 sec.

- Generation of map requests – For all tests, each map request was generated randomly by a custom stress-tester tool within a window covering 95% of the image data set. Each user issued a randomly generated request every 4 seconds. Each map request was issued within a large geographic region at one of the selected multiple resolutions. No caching mechanisms were used, and all cache images temporarily generated during a test by the client application and the web server were cleared after each test.

- Distribution of map requests – For all tests, each map was generated following a known distribution of map requests based on image resolution. The distribution was selected to match a typical request pattern from someone using a GIS or a web browser application to access maps from an OGC-compliant map server (a sketch of this request generation follows the list):
  o 30% of map requests were selected between 0.5m and 4m per pixel,
  o 40% of map requests were selected between 4m and 50m per pixel, and
  o 30% of map requests were selected between 50m and 850m per pixel.
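For illustration, the request generation described above could be sketched as follows. This is a minimal sketch, not the testbed's actual stress-tester tool: the endpoint URL, layer name, data-set extent, and 256-pixel request size are hypothetical placeholders.

```python
# Minimal sketch of the random GetMap generation described above (not the
# testbed's actual tool). ENDPOINT, LAYER, the extent, and TILE_PX are all
# hypothetical placeholders.
import random

ENDPOINT = "http://mapserver.example.com/wms"   # hypothetical WMS endpoint
LAYER = "imagery"                               # hypothetical layer name
TILE_PX = 256                                   # assumed request size (pixels)
# Window covering 95% of a hypothetical data-set extent, in meters.
XMIN, YMIN, XMAX, YMAX = 0.0, 0.0, 1_000_000.0, 1_000_000.0

# The 30/40/30 distribution over the three resolution bands (m/pixel).
BANDS = [(0.5, 4.0), (4.0, 50.0), (50.0, 850.0)]
WEIGHTS = [0.30, 0.40, 0.30]

def random_getmap_url() -> str:
    """Build one random WMS 1.3.0 GetMap URL following the test distribution."""
    lo, hi = random.choices(BANDS, weights=WEIGHTS)[0]
    res = random.uniform(lo, hi)          # meters per pixel in the chosen band
    size_m = res * TILE_PX                # ground extent of the request
    x0 = random.uniform(XMIN, XMAX - size_m)
    y0 = random.uniform(YMIN, YMAX - size_m)
    bbox = f"{x0},{y0},{x0 + size_m},{y0 + size_m}"
    return (f"{ENDPOINT}?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap"
            f"&LAYERS={LAYER}&STYLES=&CRS=EPSG:3857&BBOX={bbox}"
            f"&WIDTH={TILE_PX}&HEIGHT={TILE_PX}&FORMAT=image/png")
```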

5.1.5 Use of Cloud computing resources

Assumptions were made regarding the use of resources required for the project. For the "EBS configuration", we used the following resources:

- A "c3.8xlarge" EC2 instance (64-bit) with 18TB of standard direct disk-attached EBS storage. Because the maximum size of an EBS volume is 1TB, we created a RAID 0 set of 18 EBS volumes (a provisioning sketch follows this list). The c3.8xlarge is described by AWS as a compute-optimized VM with 32 hardware hyperthreads running on 2.8 GHz Intel Xeon E5-2680 v2 (Ivy Bridge) processors, with 60GB of available memory, on a 10 Gigabit Ethernet network.
- WMS and WMTS data servers.
- Stress tester software deployed on "m3.2xlarge" instances. Of the many EC2 types available, the m3.2xlarge is categorized by AWS as a general-purpose VM type. The stress tester tool was used to generate random map requests as per the methodology described above.

For the "S3 configuration", we used the same resources as above except for the data storage:

- A "c3.8xlarge" EC2 instance (64-bit) with S3 network-attached storage. The c3.8xlarge is described by AWS as a compute-optimized VM with 32 hardware hyperthreads running on 2.8 GHz Intel Xeon E5-2680 v2 (Ivy Bridge) processors, with 60GB of available memory, on a 10 Gigabit Ethernet network.
- WMS and WMTS data servers.
- Stress tester software deployed on Amazon "m3.2xlarge" instances. Of the many EC2 types available, the m3.2xlarge is categorized by AWS as a general-purpose VM type. The stress tester tool was used to generate random map requests as per the methodology described above.
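Provisioning such an instance could look roughly like the following sketch. This is an assumption-laden illustration using boto3, not the testbed's provisioning code; the region and AMI ID are placeholders, and the RAID 0 striping itself would be done on the instance (e.g. with mdadm).

```python
# Hypothetical sketch (boto3): launching a c3.8xlarge with 18 x 1TB standard
# EBS volumes, to be striped into a single RAID 0 set on the instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# 18 volumes of 1TB each: EBS volumes were capped at 1TB at the time, hence
# the RAID 0 set described above. Devices map to /dev/sdf .. /dev/sdw.
block_devices = [
    {
        "DeviceName": f"/dev/sd{chr(ord('f') + i)}",
        "Ebs": {"VolumeSize": 1024,          # GiB
                "VolumeType": "standard",    # "standard" = magnetic EBS
                "DeleteOnTermination": True},
    }
    for i in range(18)
]

ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder AMI
    InstanceType="c3.8xlarge",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=block_devices,
)
# On the instance, the volumes would then be striped into one array, e.g.:
#   mdadm --create /dev/md0 --level=0 --raid-devices=18 /dev/xvdf ... /dev/xvdw
```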

5.1.6 Architecture diagram

5.1.6.1 Amazon EC2 with Elastic Block Storage (EBS)

The architecture diagram presented below illustrates resources used at typical data provider sites for deploying OGC map services. Whether the data provider uses a virtual or standard computer, the approach is the same. In the diagram below, we simulate access to map services by using a "stress tester" tool deployed on a number of "m3.2xlarge" computer instances. These computers were used as client computers to generate continuous map requests to a single "c3.8xlarge" map server instance. The "stress tester" tool deployed on different instances was used to simulate access to map services from a large number of concurrent users.

This use case aims to measure the performance and scalability of a single virtual computer and its ability to support the largest possible number of concurrent users requesting maps from both OGC-compliant WMS and WMTS servers. Since the objective is to measure the capacity of the data service to serve maps to users, data latency between the client environment and the map server environment was kept to a minimum. To that end, the "stress tester" tool was deployed in the same AWS availability zone as the map server. This was done to avoid masking the performance and scalability results with client-side latency issues.

The diagram in Figure 1 below illustrates the Amazon EC2 with an EBS configuration. Each m3.2xlarge computer resource is used to simulate end-users requesting maps from the map server on a continuous basis. Each m3.2xlarge computer could support 1,600 open connections to the c3.8xlarge instance hosting the map server (a sketch of one such client driver appears after Figure 1).

Figure 1 – Diagram illustrating concurrent user access to OGC-compliant map servers using an Amazon EC2 with an EBS direct disk-attached storage configuration
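A single client driver of this kind could be sketched as follows. This is a minimal illustration, not the testbed's actual tool: the 10-minute test duration and 1,000-user count are taken from the methodology above, and random_getmap_url() is the hypothetical request generator sketched earlier in this section.

```python
# Minimal sketch of one m3.2xlarge client driver: users are ramped up over the
# first 10% of the test, and each user then issues one random GetMap request
# every 4 seconds. random_getmap_url() is the generator sketched previously.
import threading
import time
import urllib.request

TEST_DURATION_S = 600          # assumed 10-minute test
USERS = 1_000                  # concurrent users simulated by this client
RAMP_S = TEST_DURATION_S / 10  # users added over 10% of the test duration
REQUEST_INTERVAL_S = 4.0       # one request per user every 4 seconds

def simulate_user(stop_at: float) -> None:
    while time.time() < stop_at:
        started = time.time()
        try:
            with urllib.request.urlopen(random_getmap_url(), timeout=30) as r:
                r.read()  # drain the map image; a real tool would record timings
        except OSError:
            pass  # a real tool would log errors and latency percentiles
        # Hold the 4-second pacing between requests from the same user.
        time.sleep(max(0.0, REQUEST_INTERVAL_S - (time.time() - started)))

end = time.time() + TEST_DURATION_S
threads = []
for _ in range(USERS):
    t = threading.Thread(target=simulate_user, args=(end,), daemon=True)
    t.start()
    threads.append(t)
    time.sleep(RAMP_S / USERS)  # one new user every 0.06 s for 1,000 users
for t in threads:
    t.join()
```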

5.1.6.2 Amazon EC2 with Simple Storage Service (S3)

The diagram in Figure 2 below illustrates the Amazon EC2 with an S3 configuration. Each m3.2xlarge computer resource is used to simulate end-users requesting maps from the map server on a continuous basis. In this environment, the imagery data is hosted in an S3 network-attached storage system. Each m3.2xlarge computer could support 1,600 open connections to the c3.8xlarge instance hosting the map server. A sketch contrasting the two storage back ends follows below.
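The essential difference between the two configurations, from the map server's point of view, is where a read of imagery data goes. The sketch below illustrates this under stated assumptions: the file path, bucket, and key names are hypothetical, and this is not the map server's actual I/O code.

```python
# Hypothetical sketch contrasting the two storage back ends: in the EBS
# configuration the map server reads imagery from the local RAID 0 file
# system, while in the S3 configuration each read is a GetObject call to
# network-attached storage. All names below are placeholders.
import boto3

s3 = boto3.client("s3")

def read_tile_ebs(path: str) -> bytes:
    # EBS configuration: ordinary file I/O against the striped local volumes.
    with open(path, "rb") as f:
        return f.read()

def read_tile_s3(bucket: str, key: str) -> bytes:
    # S3 configuration: the same bytes fetched over the network.
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# Example usage (hypothetical names):
#   read_tile_ebs("/mnt/raid0/imagery/z12/x2048/y1365.jp2")
#   read_tile_s3("ogc-imagery-bucket", "imagery/z12/x2048/y1365.jp2")
```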