A Comparative Study of a Client Based Vendor Neutral Cloud QoS Monitoring Tool and Cloud Providers’ Platform Integrated QoS Monitoring Tools

Abstract—Cloud service providers have QoS monitoring capabilities integrated into their cloud platforms, both to monitor platform performance and to confirm Service Level Agreement compliance to clients. Unfortunately, this arrangement serves the interest of the cloud provider more than the cloud client, since the service providers gauge their services using their own tools. This paper presents a comparative study of the capabilities of a client-based, vendor neutral QoS tool, developed from a vendor neutral QoS monitoring model, against the cloud providers' integrated QoS monitoring tools. The comparison was done on four global SaaS cloud service providers, namely Salesforce, Google, Hubspot and Shopify. From the comparative study, it emerged that the client-based vendor neutral tool has more capabilities than the provider-integrated tools, since it can monitor three key QoS metrics, namely service response time, service availability and service stability, as opposed to the cloud providers' tools, which offer only one quantitative capability. Further, the vendor neutral model can be used on any cloud platform that is accessible via a web browser. This provides a capability for cross-platform performance comparison across the various cloud providers, which can aid decision making on which cloud service provider to procure based on the desired performance.


I. INTRODUCTION
Cloud computing involves the delivery of hardware and software resources and services to users over the Internet [1]. Due to the inherent characteristics of the cloud, namely its dynamic nature, scalability, resource pooling and on-demand self-service, the ability to monitor, measure and report on the received QoS is a key feature that has to be available for cloud computing services. From [2], there are still many technical reservations relating to the features of cloud computing and the provision of quality service, leading to a delay in adopting cloud computing.
The need to monitor and differentiate cloud services per cloud provider is necessitated further by the fact that otherwise identical cloud services are differentiated solely by price [3].
Published on January 11, 2020. Frankline Makokha, School of Computing and Informatics, University of Nairobi, Kenya.
To enable fair comparison on the QoS from the cloud service providers, a neutral mechanism is required that can perform the monitoring and measurements on all cloud providers.
According to [4] there are reservations regarding security, privacy and trust that deter the adoption of cloud computing in spite of several beneficial features. This further hints to a need for a neutral cloud QoS monitoring framework, especially one that is client centric.
It is on this basis that [5] concludes that trust has to be made measurable, in order to represent it in decision making contexts like provider selection. Some of the factors that form a basis for cloud trust establishment between the cloud providers and their clients include QoS, SLAs, publicly available reviews, audits based on established standards and Client support [5].
QoS in cloud computing is complicated further by the fact that conventional frameworks for measuring quality of service, such as ISO 9126, are limited in evaluating the quality of SaaS, mainly due to the gap between conventional computing paradigms and the cloud computing paradigm: conventional quality measurement frameworks do not effectively evaluate cloud-specific quality aspects [6].
According to [7], firms worry whether cloud computing solutions have sufficient availability, and [7] therefore proposes the use of multiple cloud providers for redundancy. This, however, introduces the need for a cross-platform tool that can monitor and measure across different cloud platforms for comparison purposes.
To select the most suitable cloud provider based on organization needs, cloud users require a method to recognize and evaluate crucial performance standards [8].
In an environment where many providers are offering similar cloud services, a client might need a pragmatic assessment tool to discriminate between providers, as well as checking the validity of the provisioned QoS levels [9].
From [10] performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, prediction and assurance. In cloud environments, performance could be affected by limited bandwidth, disk space, memory, and the CPU cycle and network connection latency [11]. The performance issue in cloud platforms is hinged on load balancing, an inherent problem in cloud computing platforms [12]. This is because load balancing optimizes resource use, maximizes throughput, minimizes response time, and avoids overload [13].
Further, the auto-provisioning feature based on dynamic user needs, praised as a strength of cloud computing, poses a performance bottleneck, since demand sometimes increases so rapidly that resources are not available, resulting in delayed services or non-availability of services during peak load [14].
To aid in cross-platform cloud QoS monitoring, a vendor neutral model was proposed by [15]. The model is not tied to the underlying architecture of any cloud service provider. It is client based, in that it resides on the user's terminal, where the results are stored, and QoS is measured as experienced at the user's end, thereby monitoring end-to-end cloud service QoS.
Further, among QoS metrics, values such as response time and user-observed availability are essential to measure at the client side, as it is impractical to get such QoS information from service providers, since these values are susceptible to the uncertain Internet environment [16].
The proposed vendor neutral model was implemented by [17] as a browser-integrated vendor neutral model on the Chrome browser, and experimented on four cloud service providers, namely Salesforce, Google, Shopify and Hubspot.
The implemented vendor neutral model works only for Software as a Service (SaaS) cloud computing solutions that are accessible via the web browser. The QoS metrics used by the vendor neutral model are service response time, platform availability and platform stability.
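As a minimal sketch of how a browser-resident tool could take such client-side measurements, the following hypothetical helper times an arbitrary request against a SaaS platform; the function and parameter names are illustrative assumptions, not the actual implementation from [17]:

```javascript
// Hypothetical sketch of client-side, end-to-end response-time measurement,
// not the implementation described in [17]. `requestFn` stands in for any
// action against the SaaS platform (e.g. fetching a Google Docs page).
async function measureResponseTime(requestFn) {
  const start = performance.now();
  try {
    await requestFn();
    // Successful request: record the elapsed time as seen at the client.
    return { available: true, responseTimeMs: performance.now() - start };
  } catch (err) {
    // A failed request is recorded as an unavailability instance.
    return { available: false, responseTimeMs: null };
  }
}
```

Because the timing is taken at the user's terminal, it captures the end-to-end delay, including network latency, rather than only the server-side processing time that a provider-integrated tool would report.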
To be able to gauge and validate the credibility of the vendor neutral model, this paper performs a comparative study of the vendor neutral tool against the cloud service providers' tools.

II. RELATED WORK
The need to compare the services of cloud service providers has been, and remains, an ongoing task in the field of cloud computing.
According to [18], it is now of paramount importance for cloud consumers to understand the functional and non-functional requirements of the cloud provider's service quantitatively, so that the services can be benchmarked for quality against the services provided by the multitude of cloud providers.
To abate the cloud computing decision problem, various frameworks have been developed for ranking cloud services.
One of the frameworks is the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), developed by [19]. This framework uses both qualitative and quantitative metrics to calculate and rank cloud services based on user requirements. Its architecture is composed of cloud administrator, cloud data discovery, cloud service discovery and cloud services.
The shortcoming of this framework is that it is not based on the quality of service as experienced by the customer at the moment of using the cloud services, but on historical data of previously met service level agreements of other clients, weights attached to each user requirement, customer feedback and advertised cloud provider capability. This makes it more of a mathematical model than a pragmatic, live-usage-based comparison model.
Another framework, proposed by [20] compares the cloud monitoring tools based on the monitoring level at which the tool operates, namely User level, Application/Service level and Infrastructure/Resource level. The comparison is also based on the monitoring technique used by the tool, like scripts injected in the virtual machine, plugin, kernel data and REST protocol.
At user level, data generated by the user during interaction with the cloud platform is analyzed. This framework categorizes the cloud monitoring tools based on their capability, and therefore does not rank the performance of the cloud service. This comparison does not aid in decision making on which cloud service to procure.
From the analysis done by [21], cloud monitoring tools have been compared using various dimensions, namely: visibility (layer agnostic or layer specific), which is the cloud layer at which the tool operates (IaaS, PaaS and SaaS); the monitoring architecture of the tools (centralized or distributed); interoperability across different cloud providers (cloud agnostic or cloud dependent); single or composite parameters that the tool can monitor; and the programming interface provided by the tool (command line, API and widgets).
Whereas the tool comparison done by [21] is comprehensive, the results from this analysis cannot be used in the cloud provider selection decision.
The effectiveness of job scheduling algorithms used by different cloud service providers, with a view of improving the quality of services offered by their cloud platforms, has been reviewed in [22]. This analysis involved identification of the various QoS parameters optimized by the different job scheduling heuristics. The findings were categorized based on better load balancing, better resource utilization and optimized execution time.
This analysis by [22] is beneficial to cloud providers, as it would enable them to decide which algorithm to use in their cloud platform. However, as far as cloud service users are concerned, it would be a daunting task to choose a cloud provider based on the underlying job scheduling algorithms utilized.
Other cloud QoS comparison frameworks analyzed by [23] are use of private agents for data prioritization, Cross Layer Multi-Cloud Application Monitoring-as-a-Service (CLAMS), Cloud2Bubble and the Mobile Cloud Gaming (CMG) framework.
The comparison by [23] was done using deployment parameters, analysis support, network and client monitoring, reporting, policy change and types of QoE supported. These Quality of Experience frameworks lack monitoring of the user device for resources and of the services received at the client side.
A heterogeneous similarity metric (HSM) for cloud service ranking and selection was proposed by [24]. This approach uses three quantitative and three qualitative attributes to rank SaaS services: service response time, availability, cost, security, usability and flexibility.
The shortcoming of this approach is that it used synthetically generated datasets because a QoS dataset for cloud services that perfectly fit the context of the experiment could not be found [24].
It is on the basis of the shortcomings in these existing cloud QoS tool comparisons that this paper introduces a pragmatic approach, based on real data from live use of cloud services.
The results from this comparison can be used in SaaS cloud service selection as well as validation of cloud QoS values as reported by the cloud provider tools.

III. RESEARCH DESIGN
The comparative study involved collection of primary data using both the cloud providers' integrated QoS monitoring tools and the new vendor neutral QoS monitoring tool. For the new vendor neutral model, the collected QoS data were service response time, service availability and service stability. For the vendor-integrated QoS monitoring tools, the measured parameters were as designed by each cloud provider.
The QoS data collection method adopted was live use of the cloud provider's platform in a way similar to normal usage of the cloud services, using the Chrome browser, which had been integrated with the new vendor neutral QoS monitoring tool.
During use of the cloud provider's platform, the cloud provider's tools monitored and measured the QoS. Simultaneously, the browser integrated vendor neutral model was also monitoring the QoS as experienced at the client side.
The cloud services chosen for live usage for comparison purposes were Google Docs, Salesforce, Hubspot and Shopify. These were chosen because, according to [25], they are among the most widely used Software as a Service offerings. One of the main uses of cloud computing is the creation of a virtual office, and Google Docs is the most popular suite for running an office, as compared to ThinkFree and Microsoft Office Live [26].
The QoS metrics used in the experimentation were service response time, service availability and service stability. These are derived from the Quality of experience model as highlighted by [27]. According to [27] QoS Metrics mainly focus on quality of platform (QoP), quality of application (QoA) and quality of experience (QoE).
The definitions for the identified SaaS QoS metrics as well as other QoS metrics are as highlighted in [28].
The Quality of Platform (QoP) consists of transparency, location-aware capability, SLA management, portability and data auditing; the Quality of Application (QoA) consists of multitenancy, configuration, interoperability and software fault tolerance; while the Quality of Experience (QoE) focuses on service availability, usability, performance and response timeliness. This experiment focused on the QoE metrics, defined as follows: service response time is the average time it took for the user-specified service to be initialized and ready for use; availability was measured as the number of instances in which the user requested a service and received it, against the number of instances in which the requested service was not available; and service stability was computed using the standard deviation of the response times. A standard deviation greater than the mean means the system is not stable, while a standard deviation less than the mean implies the system is stable.
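From these definitions, the three QoE metrics can be computed directly from a log of observations. The following is an illustrative sketch; the function and variable names are hypothetical, not taken from the vendor neutral tool's code:

```javascript
// Illustrative computation of the three QoE metrics from logged samples.
// responseTimesMs: response times of successful requests, in milliseconds.
// successCount / failureCount: instances where the service was / was not available.
function computeQoSMetrics(responseTimesMs, successCount, failureCount) {
  const n = responseTimesMs.length;
  // Service response time: average over the observed samples.
  const mean = responseTimesMs.reduce((a, b) => a + b, 0) / n;
  // Availability: successful instances against total requested instances.
  const availability = successCount / (successCount + failureCount);
  // Stability: standard deviation of response times; per the paper's rule,
  // the service is classified as stable when stdev < mean.
  const variance =
    responseTimesMs.reduce((acc, t) => acc + (t - mean) ** 2, 0) / n;
  const stdev = Math.sqrt(variance);
  return { meanResponseTimeMs: mean, availability, stdev, stable: stdev < mean };
}
```

For example, samples of 100, 120, 110 and 130 ms give a mean of 115 ms and a standard deviation of about 11.2 ms; since the deviation is well below the mean, the service would be classed as stable under this rule.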
The vendor neutral platform configuration for experimentation purposes was done as highlighted in [17].
Sampling and selection of the tasks to be undertaken on each cloud platform was done using random selection from the tasks available on the platform, and at times using judgmental selection, where tasks viewed as critical, most commonly used and likely to yield better results were selected.

A. Experimentation with Existing Cloud Provider's Platforms
The experimentation methodology involved creating accounts on the cloud provider's platform and using the platform in the way an ordinary user would use the services. In cases where challenges were encountered or clarifications were needed during system use, online chats, video calls and emails were used to get help from the cloud providers.
On Gsuite, the process involved opening Google Docs, Sheets, Forms and Slides. The opened apps were used in the same way an ordinary user would open, use, close and re-open them.
The Salesforce experiment involved creating an account on the platform, creating products for sale, configuring prices, providing clients with quotes and responding to queries from clients.
The Hubspot experiment involved creating an account on the cloud provider's platform, configuring customers in the Hubspot Customer Relationship Management system, configuring products for marketing and setting prices.
The Shopify experiment involved creating an account on the platform, setting up an online store, configuring products and prices, and creating sales as well as generating invoices.
While the tasks stated for Gsuite, Salesforce, Hubspot and Shopify were being executed, the vendor neutral model was running in the background, monitoring the performance of the cloud services.
Service response time was computed as the average time it took for the user-specified service to be initialized and ready for use; availability was measured as the number of instances in which the user requested a service and received it against the number of instances in which the requested service was not available; and service stability was computed using the standard deviation, with a value greater than the mean indicating an unstable system and a value less than the mean indicating a stable one.
The experiments were done randomly during official working days and hours, from 3 September 2019 to 7 December 2019. This was done to be in line with the way a client would use cloud computing services.

B. Experimentation Platform
The comparison experiments were measured under the same system and Internet conditions, namely a MacBook Pro laptop with an Intel(R) Core(TM) i5-4288U CPU @ 2.60GHz and an average effective Internet connection type of 3G.
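The "effective connection type" condition could, for example, be recorded via the Network Information API (`navigator.connection`) available in Chrome. The sketch below is an assumption about how such logging might be done, not part of the tool's documented setup; the connection object is passed in as a parameter so the same logic can be exercised outside a browser:

```javascript
// Sketch of recording the experiment's network condition via the Network
// Information API. In Chrome, pass navigator.connection; the parameter
// form keeps the function testable outside a browser.
function describeNetwork(connection) {
  if (!connection) {
    // API not available (e.g. unsupported browser): record an unknown state.
    return { effectiveType: "unknown", downlinkMbps: null };
  }
  return {
    effectiveType: connection.effectiveType, // e.g. "slow-2g", "2g", "3g", "4g"
    downlinkMbps: connection.downlink ?? null, // estimated bandwidth in Mbps
  };
}
```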

IV. FINDINGS
The QoS monitoring results from the experiments done on the selected global cloud providers were documented and analyzed for comparison purposes.

A. Gsuite
Gsuite offers its clients a dashboard which provides the current performance status of the app they are using. The dashboard is accessed via the link https://www.google.com/appsstatus#hl=en&v=status. The performance metrics for Gsuite are grouped as No Issues, Service Disruption and Service Outage. No Issues means the app is up and running, Service Disruption means the app has been switched off temporarily for maintenance purposes, while Service Outage means the app is not functional due to a technical problem.
A sample screenshot for the dashboard is shown in figure 1.

B. SalesForce
Salesforce offers its clients a platform to check on the status of the services they have subscribed to. The platform has four metrics, namely Available, Performance Degradation, Service Disruption and Maintenance.
The dashboard is accessed via the link: https://status.salesforce.com/products/all. A sample screenshot from the monitoring platform is shown in figure 2.

D. Shopify
The platform provides quantitative metrics in terms of the average time taken by the platform to respond to user requests. The performance of the platform can be accessed via the link https://status.shopify.com.
A screenshot of the Shopify QoS monitoring platform is as shown in figure 4.

E. The Vendor Neutral Model
The QoS results from the vendor neutral model, across the selected global cloud providers, are as shown in Table 1. The results were measured under the same system and Internet conditions described in Section III.B.
From Table 1, a summary of the capability of the QoS tools used by the four cloud service providers is given in Table 2.

V. CONCLUSION
The vendor neutral model outperforms the vendor-specific models from the client's perspective, in the sense that the tool can be used across all SaaS cloud providers, hence providing room for performance comparison. Further, the vendor neutral model measures all three key quantitative metrics, namely service response time, service availability and service stability, unlike the vendor-specific tools, which only monitor one parameter.
With the vendor neutral tool being more user centric, in the sense that it measures end-to-end QoS as experienced at the user's terminal, its adoption will increase confidence in cloud platforms, owing to the validation capability it gives users against provider-reported QoS values and its applicability to QoS comparison across cloud providers.