
Dissertation: Performance Characterization in a Virtualization-Based Environment

Published: 24 July 2014

Many service providers today use virtualization because of its various benefits. Virtualization is
useful for resource utilization, testing, and better server management. Several applications are
being consolidated into virtual environments using various virtualization techniques. One of the
most popular uses of virtualization is cloud computing, where service providers can deliver continuous
service to their clients with minimum resource usage and the ability to scale up for sudden load. It
is important for each service provider to identify performance changes in order to provide better
service to its clients. The main aim of this seminar is to study the performance impact on applications
running in different virtual environments. Here we focus on three types of performance measurement in
virtualization-based environments: first, performance characterization of applications in virtual
machines; second, performance characterization of different virtualization solutions; and third,
comparing performance measurements of cloud applications and providers.
In the IT industry the use of virtualization is increasing rapidly. Every service provider must offer
availability and quality of service to its clients, and should further ensure that resources are
utilized efficiently without affecting those services. Service providers have therefore started using
virtualization techniques. Virtualization enables various services in cloud computing, such as hosting
multiple operating systems on the same physical machine, dynamic allocation of resources to an OS
according to need, and live migration. A virtual machine monitor, or hypervisor, is used to create and
manage virtual machines. The hypervisor is responsible for sharing physical resources among virtual
machines and for running each guest OS and its applications on the physical machine. Because of
virtualization overhead, resource sharing, and scheduling decisions, the performance of an application
running on a guest OS is affected. A performance-critical application may not provide the same level
of service in a virtualized environment as it does in a native environment.
Need of Performance Measurement
Performance measurement of a system or application server helps identify whether the server can
fulfill clients' requirements. Benchmarking a server is also helpful for finding improvements in a
system and for comparison among systems. For a service provider it is essential to provide quality
services to its clients without discontinuity, and quality of service is essentially a comparison of
customers' expectations with observed performance. Server benchmarking is done by many system
developers and service providers to determine the resources a server requires for a target workload
and the performance level the system can achieve. David Patterson famously said: 'For better or worse,
benchmarks shape a field.'
Need of Performance Measurement in Virtualized Environment
Virtualization allows multiple OSes to run simultaneously on the same physical hardware for better
resource utilization. Applications running on those OSes suffer a performance impact because of
virtualization overhead and resource sharing, so the performance of an application in a native
environment may differ from its performance in a virtualized environment. Various virtualization
techniques are used for running different operating systems on the same physical machine. These
techniques use different methods to interact with and share hardware among multiple virtual machines,
so the performance of an application under one technique may differ from another. Hardware resources
are shared and allocated to each VM, and the resources allocated to each VM can be changed.
Virtualization allows the virtual machine monitor to over-commit hardware resources to VMs, but
managing over-committed VMs needs extra processing, which also affects the performance of applications
running on the guest OS. CPU and memory sharing is done by the hypervisor, so the hypervisor itself
needs some processing time to divide resources and schedule VMs on the available resources, which
induces a performance overhead.
Server virtualization has several advantages such as resource utilization, application isolation, and
reduced server downtime; therefore many applications are being migrated to virtualized environments.
Several data centers use virtualization for resource utilization and easy server management. Cloud
service providers typically offer three types of services:
1. Infrastructure as a Service (IaaS): This is the most basic cloud-service model, where the provider
supplies virtual machines and other resources to clients. This type of service includes virtual
machines, servers, storage, and networking.
2. Platform as a Service (PaaS): In this model, cloud providers supply a computing platform including
an operating system, database, and web server. A customer can run application software without the
complexity of buying and installing hardware and software.
3. Software as a Service (SaaS): In this model, service providers deliver access to application
software and databases. Clients need not manage the infrastructure or software but can use the
installed software directly.
Cloud providers use virtualization to deliver these services, and different providers may use
different methods to do so. Benchmarking these cloud providers helps a customer choose one or more
among them.
So far we have discussed how performance measurement in a virtualized environment differs from a
native environment. Performance measurement is useful for answering the following questions in a
virtualization-based environment:
1. How much workload can a system sustain for a given amount of resources allocated to a VM? What
are the maximum achievable throughput and response time of a system running in a virtualized
environment?
2. What resources are required to sustain a specified load?
3. What is the relation between VM resources and the throughput of an application?
4. Which virtualization techniques can be used to achieve better performance at lower cost?
5. Which cloud provider gives better performance?
6. What is the performance impact of other VMs sharing the same physical resources?
Answers to these questions allow a user to predict the cost of providing services, to decide whether
virtualization should be used to provide a service at all, and to choose among the various
virtualization solutions so that the initial and ongoing costs of running a server are minimized.
Before starting with performance characterization, we need to understand the different techniques of
virtualization. The next section describes virtualization and its types.
Virtualization and Its Types
Virtualization adds an extra virtual hardware layer over the physical hardware so that we can run
applications (OSes) on that virtual hardware as if it were physical hardware. Virtualization allows
multiple OSes to run simultaneously on one physical machine. The virtual machine monitor (VMM), or
hypervisor, is responsible for communication between the guest OS (the OS running in a virtual
machine) and the physical hardware. There are three techniques by which virtualization can be
implemented. All of them share a few common characteristics, but the implementations differ. Based on
implementation we can classify virtualization into three types:
• Full Virtualization: In full virtualization the guest OS instructions are translated and run on the
physical machine with the help of the hypervisor. The hypervisor interacts directly with all physical
resources such as CPU, memory, and I/O devices. In full virtualization the guest OS is kept
independent and unaware of virtualization; no modification of the guest operating system is required.
In fully virtualized mode, the host OS runs at the highest privilege level (ring 0) and the guest OS
runs at a lower privilege level, so the hypervisor must trap and emulate all privileged instructions,
and the guest OS therefore runs slower. Some VMMs, such as VMware, provide full virtualization to the
guest OS.
• Para-virtualization: In para-virtualized mode the guest OS is modified so that it can run privileged
instructions. All privileged instructions are run directly on the physical machine through hypercalls.
The cost of modifying the guest OS is low, whereas in full virtualization the cost of trapping and
emulating is much higher. In para-virtualization the guest OS is aware of virtualization, so much less
processing power is needed to manage the guest operating system compared to full virtualization. The
Xen virtual machine monitor is based on para-virtualization. The figure below shows the architecture
of Xen.
• Hardware-Assisted Virtualization: In hardware-assisted virtualization the VMM uses processor
extensions to execute privileged instructions issued by the guest OS. The processor extension adds a
guest mode which has all the privilege levels of the normal processing mode. Whenever the guest wants
to run a privileged instruction, the processor switches from kernel mode to guest mode, runs the
privileged instruction on the hardware, and on exit switches back to kernel mode. When the processor
is in guest mode, everything appears normal to the unmodified guest OS. Hardware vendors such as Intel
and AMD have added extensions such as Intel VT and AMD-V, respectively. KVM uses hardware-assisted
virtualization.
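As a quick practical aside, on Linux one can check whether the host CPU advertises these extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. The sketch below is a minimal illustration; the helper name and the choice of passing the file contents in as text are our own:

```python
def detect_virt_extensions(cpuinfo_text):
    """Report which hardware virtualization extension, if any,
    a /proc/cpuinfo dump advertises in its CPU flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
            return None        # flags line present, but no HW-virt flag
    return None                # no flags line found

# Illustrative call on a sample flags line; on a real Linux host one
# would pass open("/proc/cpuinfo").read() instead.
sample = "flags\t\t: fpu vmx sse sse2"
print(detect_virt_extensions(sample))  # Intel VT-x
```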
Scope Of Seminar
The scope of this seminar is to study the methodologies used for measuring performance. The next
section covers the various types of performance measurement. Each type involves a different
methodology and requires different parameters to be measured and different tools for measurement. The
main aim of the seminar is to understand the methodology used for performance measurement of
applications in different virtualization scenarios.
Performance Characterization
Virtualization has various benefits such as reduced hardware cost, reduced power consumption, and a
scalable working environment, but against these benefits we lose some performance. The overhead of
virtualization, and the way virtualization is used, both affect the performance of applications. In
this section we will study the parameters and techniques used to measure the performance of an
application or system.
Basic Technique to Measure Performance
For performance measurement of a server, a common approach is to measure the server's behavior under
load from a workload generator. Multiple such experiments are required, with different combinations of
server configuration and workload parameters, to achieve a benchmarking objective. To measure
performance we need a performance metric that is measured under some specified load. The generated
load should match real-world behavior so that the measurement of the performance metric is correct and
valid for the real scenario in which the server will provide service. The type of workload should be
decided on the basis of the service provided by the server and the resources allocated to it. The
amount of load should be controlled by the load generator, and the performance parameters are measured
while the load is being generated. The benchmarking technique should support multiple virtualization
techniques and should be common across platforms. Tight synchronization of load generation and
profiling is required to maintain correctness.
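The basic technique above can be sketched as a small closed-loop load generator: each simulated client repeatedly issues a request, records its response time, and waits for a think time, after which throughput and mean response time are computed over the run. This is an illustrative sketch, not a real tool; send_request stands in for whatever protocol-specific call the real load generator would make:

```python
import random
import threading
import time

def run_load(send_request, num_clients, duration, mean_think_time):
    """Closed-loop load generator: each client loops
    request -> record latency -> think, for `duration` seconds."""
    latencies, lock = [], threading.Lock()
    deadline = time.time() + duration

    def client():
        while time.time() < deadline:
            start = time.time()
            send_request()                     # protocol-specific work
            elapsed = time.time() - start
            with lock:
                latencies.append(elapsed)
            # exponentially distributed think time
            time.sleep(random.expovariate(1.0 / mean_think_time))

    threads = [threading.Thread(target=client) for _ in range(num_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    throughput = len(latencies) / duration               # transactions/sec
    mean_rt = sum(latencies) / max(len(latencies), 1)    # mean response time
    return throughput, mean_rt
```

For example, run_load(lambda: my_http_get(url), num_clients=50, duration=60, mean_think_time=0.5) would drive a hypothetical HTTP server with 50 concurrent clients for a minute (my_http_get and url are placeholders).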
Types of Performance Measurement
We can divide performance measurement into three types:
1. Performance characterization of applications in virtual machines: In this category, we
measure the performance of an application in a virtual environment. Various tools are available for
measuring the performance of an application, but they were not built for virtualized environments:
they are unaware of the flexibility in resource allocation and of virtualization features such as VM
migration. So there was a need for virtualization-aware tools that can be used for capacity planning.
2. Performance measurement on the cloud: In this category, we measure the performance of
applications running on the cloud and compare the performance of different cloud providers. Different
cloud providers offer different types of service, and on the back end they use different techniques to
provide those services. Comparing cloud providers helps a user choose one according to their needs,
and may also help providers improve where they lag behind other providers.
3. Virtualization solution benchmarking: In this category, we learn how virtualization
solutions affect the performance of the overall system. We will also look at new virtualization
solutions designed to achieve better performance.
Parameters to be Measured
Different parameters are used for different categories of performance measurement. The performance of
an application affects the transaction time for a user, so to measure application behavior, response
time and throughput (transactions per second) are valid parameters. For comparing different cloud
providers, the parameters should be chosen such that they are common to all providers. The time taken
by each cloud provider to compute a task can be used to compare performance, but providers also differ
in their pricing options, so the cost of running a particular task is also a parameter for comparison.
Virtualization solution benchmarking parameters depend on the type of solution itself.
• Application behavior: response time and throughput (transactions per second)
• Comparing cloud services: benchmarking time and cost
The following sections of this report describe each type of performance measurement in detail.
Performance characterization of application
Several server applications are being consolidated into virtual environments using different
virtualization technologies. To estimate the resource requirements and the achievable performance
level, a service provider needs to benchmark the server. There are various benchmarking tools for
applications running on native machines, but they are not virtualization-aware: they have no
information about dynamic resource allocation or virtualization-specific services such as VM
migration. So we need a separate framework and policies to conduct server benchmarking in a
virtualized environment automatically and efficiently. Such benchmarking is useful for a variety of
purposes: virtual machine capacity planning, comparing competing products, predicting performance for
loads that cannot practically be tested, and so on. As discussed earlier, the basic approach is to
generate load for a server and measure its performance in terms of response time or throughput.
Various load generator tools are available for different server protocols and applications. The common
goal is to find the peak throughput attainable by a given server configuration under a given set of
workload conditions, where peak throughput may be the throughput at which response time exceeds some
threshold under some load. An automated performance profiling tool can measure the performance of an
application under different virtualization scenarios and different types of workloads.
Virtualization-specific processes such as VM migration affect the performance of the server, so a
benchmarking tool can also be used to measure the performance of an application during migration.
Objective of application benchmarking
Application benchmarking in a virtualized environment is used for evaluating competing products, for
capacity planning (choosing a server configuration for a target workload), and for determining the
performance level achievable with given resources. Performance is measured in terms of a performance
metric; metrics such as response time and throughput are well suited to application performance
measurement. The goal is to find the maximum achievable throughput (saturation throughput) or peak
rate, where the peak rate is the highest arrival rate of requests that does not drive the server into
the saturation state. The saturation state can be defined in two ways: either the throughput
(transactions per second) falls below a specified level, or the response time exceeds a specified
threshold. The final goal may be a mapping between resource allocation and peak rate or load (number
of concurrent users).
• Finding the peak rate: Given a resource allocation and configuration, the maximum achievable
performance is expressed by the peak rate. The peak rate is the highest request arrival rate (or,
equivalently, the maximum load generated on the server) before the server reaches the saturation
state, i.e. the state where the response time exceeds a certain threshold or the throughput
(transactions per second) falls below some value.
• Mapping the response surface: Another useful goal of benchmarking is to map the response surface
by varying the workload over different server configurations and resource allocations. This helps in
finding the relation between VM resources and response time or throughput. One can decide the
resources required for an application by looking at the mapping between peak rate and resource
allocation.
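Mapping the response surface then reduces to a sweep over resource allocations, recording the peak rate at each point. In the sketch below, find_peak_rate is a placeholder for any peak-rate search procedure, and the parameter names are invented for illustration; it seeds each search with the previous peak, anticipating the seeding observation made later in this report:

```python
def map_response_surface(find_peak_rate, cpu_options, mem_options):
    """Sweep <CPU, memory> allocations and record the peak rate
    achieved at each configuration, building the response surface."""
    surface = {}
    seed = None
    for cpus in cpu_options:
        for mem in mem_options:
            # Seed each search with the previous peak rate so that the
            # load-picking procedure converges faster.
            peak = find_peak_rate(cpus, mem, seed)
            surface[(cpus, mem)] = peak
            seed = peak
    return surface
```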
Requirements of benchmarking tool
A virtualization-aware performance benchmarking tool should have the following essential properties:
• Support for multiple virtualization techniques such as Xen, KVM, and VMware
• Tight synchronization between workload generators and profiling tools
• Selection of load parameters according to the nature of the application
• Detection of the warm-up and stable periods of the application, for correct profiling
• Support for multi-tier applications
• Support for real load scenarios, request-level think-time configuration, limits on the resource
availability of a VM, and mapping of VMs to CPU cores
• Accurate results, with multiple trials at the same load
Inputs to Benchmarking Tool
The performance of a server depends on its workload, its configuration, and the hardware resources
allocated to it. The benchmarking tool takes workload information as an input because the load
parameters should match the nature of the application, and the user knows what application is running
on the server, so the user should provide workload information accordingly. Configuration may include
the location of the server and the hosts on which the different tiers of an application are running.
We now describe how workload, configuration, and resource allocation information serve the
benchmarking objectives.
1. Workload description (W): The workload to be applied to the server can be given as a number
of concurrent users, a number of concurrent requests per user, and a think-time probability
distribution along with its parameters. To simulate a real-world scenario, the workload should be a
mix of requests, which can be derived from a mix of transactions according to a customer behavior
model graph. The workload should match the behavior of the server application; e.g. Fstress can be
used for rating a network file system (NFS), and RUBiS for rating CPU behavior.
• Think-time specification: Think time is the average time a client remains idle after receiving
a response from the server and before sending the next request. Different think-time distributions
can be used according to real-world behavior. 'VirtPerf currently supports Poisson, uniform and
Exponential distribution and has been architected so that additional distribution can be added with
no change in VirtPerf code.'[1]
• Mix of transactions: Real scenarios can be emulated by generating a workload that contains a
mix of transactions, covering all the types of request made to the server in the real world. The user
can specify the transaction mix manually, or use a customer behavior model graph (CBMG). A CBMG is an
n × n matrix in which each entry represents the probability of a transition between states; the states
vary with the application, and each state represents a step in an end-to-end transaction.
2. Deployment and configuration information (C): Configuration information includes the
location of the server (its IP address), the port number on which the server receives requests, and
the server process identifier. It may also include the number of tiers and other deployment
configuration of the application.
3. Resource allocation information (R): Resource allocation parameters include the CPU cores,
the range of CPU capacity to be allocated to host and guest VMs, the memory size of the VMs, the
number of disks, CPU pinning information, the migration start time (if migration is being done), and
any other resource allocation settings for which the server is to be profiled.
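To make the workload description concrete, the sketch below samples think times from the distributions mentioned above and picks the next transaction type from a small CBMG. The three states and all transition probabilities here are invented purely for illustration:

```python
import math
import random

# Hypothetical 3-state CBMG; each row sums to 1 and gives the
# transition probabilities out of that state.
STATES = ["browse", "search", "buy"]
CBMG = [
    [0.5, 0.4, 0.1],   # from browse
    [0.3, 0.4, 0.3],   # from search
    [0.8, 0.2, 0.0],   # from buy
]

def next_state(current):
    """Pick the next transaction type using the CBMG transition row."""
    row = CBMG[STATES.index(current)]
    return random.choices(STATES, weights=row)[0]

def think_time(dist, mean):
    """Sample a think time from one of the supported distributions."""
    if dist == "exponential":
        return random.expovariate(1.0 / mean)
    if dist == "uniform":
        return random.uniform(0, 2 * mean)    # mean preserved
    if dist == "poisson":                     # discrete; mean = lambda
        # simple Knuth-style Poisson sampler
        L, k, p = math.exp(-mean), 0, 1.0
        while p > L:
            k += 1
            p *= random.random()
        return k - 1
    raise ValueError("unknown distribution: " + dist)
```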
Architecture of a benchmarking tool
The benchmarking tool has a master-slave architecture and consists of four main modules: the
controller, the load generator, the profiling agents, and the VMM agent. It may also have an analyzer
that analyzes previous results so that the next load parameter can be decided accordingly. The
controller and load generator are part of the master; the profiling agent and VMM agent are part of
the slave. Figure 1 shows the architecture of a benchmarking tool.
Figure 1: Architecture of an Automated Profiling Tool
1. Controller: The controller coordinates load generation and resource configuration, and
monitors the performance and resource parameters. It reads the input files given to it and controls
the other modules of the system by sending them the relevant commands, such as configuration
information to the VMM agent and transaction information to the load generator. The load generator
starts generating load at the selected level (based on the think-time distribution given in the input
files). Warm-up detection is done by the controller on the basis of a stable response time from the
server. After warm-up, the controller commands the profiling agents to start monitoring the server.
After the load generation phase completes, result data from both the load generator and the profiling
agents is collected in the 'Experiment Result DB' and given to the analyzer; these results drive the
selection of the next load level. The result data includes resource usage metrics from the VMM agent
and profiling agent after each round of load generation.
2. Load generator: The load generator generates requests at the specified load level, which
includes the number of clients, the number of requests per client, and the think-time distribution.
The load generator initially generates load and measures the response time to detect the warm-up
state. When warm-up is detected, the controller commands the load generator to generate load on the
server in a controlled manner and starts the profilers to monitor the servers.
3. Server profiling agents: The profiling agent collects the resource usage inside the VM while
load is being generated. These data can provide more accurate results than a simple black-box
approach.
4. VMM agent: The VMM agent measures per-VM statistics such as the memory and CPU usage of each
VM. It is also used, under the control of the controller, to assign resources to VMs: the controller
signals the VMM agent to change the resources allocated to the VMs according to the input
configuration, and the VMM agent applies the change so that the server's behavior can be measured
with those resources.
Working of the Benchmarking Tool
Figure 2 explains the working of the benchmarking tool.
Finding peak rate
The controller allocates resources to the server and commands the load generator to generate load at
a specified rate. After a specific run length, the controller commands the load generator to increase
the load by some factor. The load is increased until the server reaches the saturation state.
Figure 2: Working of automated benchmarking tool
Response Surface Mapping
For each server configuration < C, R, W >, the controller iterates to find the peak rate and maps the
output of each round to the allocated resources.
Choosing Run-length and Number of Trials
'Benchmarking can never produce an exact result because complex systems exhibit inherent variability
in their behavior.'[2] So the best we can do is make a probabilistic claim about an interval: for
example, by observing the mean response time over multiple trials at a test load Y, we may be able to
claim with 95% confidence that the mean server response time at that load level lies within the range
[Z1, Z2]. Such probabilistic claims are characterized by a confidence level (95% in the example) and a
confidence interval ([Z1, Z2] in the example) at that confidence level. 'Suppose the mean server
response time for t trials is μ and the standard deviation is σ; then the confidence interval for μ at
confidence level c is given by:

    [ μ − Zp·σ/√t , μ + Zp·σ/√t ]

Zp is a reading from a table of quantiles for the unit normal distribution, and p is a function of c,
such that Zp increases with c: p = (1 + c)/2.'[2] To capture the true value of the metric accurately,
a tight confidence interval is required. The accuracy of a given interval [low, high] can be defined
as:

    accuracy = 1 − error = (1 − (high − low)/(high + low)) × 100%
Number of Trials
For a test load X, first conduct two trials to generate an initial confidence interval for the
response time at that load, at the 95% confidence level. From the next trial onwards, check whether
the confidence interval of the current trial overlaps the accumulated interval. If the regions
overlap, the accuracy is computed from the interval; if the accuracy reaches the target accuracy
level, the average response time is mapped to the current load X; otherwise more trials are conducted
at the same load X.
A fixed run length is used for each experiment. The run length can be determined manually for each
type of workload generator, such that the response time does not show much variability and fewer
trials are required.
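Putting the interval, accuracy, and trial-count rules together, a minimal sketch might look like the following. It assumes a fixed 95% confidence level and, as a simplification of the procedure described above, pools the per-trial mean response times into one sample set rather than checking interval overlap:

```python
import statistics

Z95 = 1.96  # unit-normal quantile for p = (1 + 0.95) / 2

def confidence_interval(samples):
    """95% confidence interval for the mean response time."""
    mean = statistics.mean(samples)
    half = Z95 * statistics.stdev(samples) / (len(samples) ** 0.5)
    return mean - half, mean + half

def accuracy(low, high):
    """Accuracy of an interval [low, high], per the definition above (%)."""
    return (1 - (high - low) / (high + low)) * 100

def trials_until_accurate(run_trial, target_accuracy, max_trials=20):
    """Repeat trials at a fixed load until the 95% CI is tight enough;
    return the mean response time and the number of trials used."""
    samples = [run_trial(), run_trial()]        # two initial trials
    while len(samples) < max_trials:
        low, high = confidence_interval(samples)
        if accuracy(low, high) >= target_accuracy:
            return statistics.mean(samples), len(samples)
        samples.append(run_trial())
    return statistics.mean(samples), len(samples)
```

Here run_trial is a placeholder for one benchmarking run at the test load, returning its mean response time.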
Load Picking algorithms
Load picking is useful for decreasing the cost of benchmarking. A good load-picking algorithm
converges to the peak rate earlier than the basic linear load-picking algorithm, in which the load is
simply incremented by some fixed amount each time.
Binsearch Load-Picking Algorithm
The Binsearch load-picking algorithm allows the controller to find the peak rate using a logarithmic
number of test loads:
1. Initialization:
   if current_load == 0:
       next_load = 50   # requests/sec, initial seed
       start the geometric phase and return next_load
2. Geometric phase:
   if current_response_time < saturation_response_time:
       return next_load = current_load * 2
   # server has saturated: end the geometric phase, start binary search
   binsearch_low = previous_load
   binsearch_high = current_load
   go to step 3
3. Binary-search phase:
   if binsearch_high - binsearch_low <= tolerance:
       return peak_rate = binsearch_low
   if current_response_time < saturation_response_time:
       binsearch_low = current_load
   else:
       binsearch_high = current_load
   return next_load = (binsearch_low + binsearch_high) / 2
Linear Load-Picking Algorithm
The linear load-picking algorithm is the same as Binsearch except that in the geometric phase the next
load differs from the previous load by a small fixed increment. This algorithm can take much longer to
reach the peak rate for some configurations.
Better seeding is important for fast convergence, so when the goal is to plot a response surface, the
seed can be chosen as the peak rate of the previous sample.
Performance Measurement on cloud
Cloud computing has become one of the most popular buzzwords in computer science, and virtualization
is its enabler. Cloud customers can run applications and store their computational data in the cloud,
paying the provider for service usage on demand. Working in the cloud frees the customer from
managing and installing the hardware and software required for their applications. A customer need
not pay a large up-front cost for hardware, or for upgrading resources to meet future demand; instead
the cloud's pay-as-you-go model lets customers pay only for the resources they use. There are
multiple cloud service providers, such as Amazon, Google, Microsoft, and Rackspace, offering
different options in pricing, performance, and feature set. For example, some cloud providers offer
Platform as a Service (PaaS), where a customer builds and runs an application using APIs provided by
the cloud provider, while others offer Infrastructure as a Service (IaaS), where the customer runs
the application inside a virtual machine. Providers also differ in pricing model: Google's App Engine
charges by the number of CPU cycles consumed by the customer's application, whereas Amazon's AWS
charges by the duration and count of VM instances used. A particular cloud provider may also offer
different types of platform or infrastructure with different capabilities. The resources given to a
customer depend on what the customer pays; for example, Amazon EC2 provides different instance types
for IaaS, which differ in allocated resources and in cost.
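As a toy illustration of how pricing models complicate comparison, the sketch below bills the same hypothetical workload under a per-instance-hour model (AWS-style) and a per-CPU-consumed model (App Engine-style). Every rate and workload number here is invented and reflects no real provider's pricing:

```python
def cost_per_instance_hour(instances, hours, rate_per_instance_hour):
    """Instance-hour billing: pay for every hour each instance is up."""
    return instances * hours * rate_per_instance_hour

def cost_per_cpu_hour(cpu_hours_consumed, rate_per_cpu_hour):
    """CPU-consumption billing: pay only for CPU actually consumed."""
    return cpu_hours_consumed * rate_per_cpu_hour

# Hypothetical workload: 4 instances up for a 720-hour month,
# but busy (consuming CPU) only 25% of the time.
instance_model = cost_per_instance_hour(4, 720, 0.10)   # dollars
cpu_model = cost_per_cpu_hour(4 * 720 * 0.25, 0.25)     # dollars
```

With these invented rates the idle-heavy workload is cheaper under CPU-consumption billing, which is exactly why the cost of running a particular task, not just raw performance, belongs in the comparison.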
Because of these differences among cloud providers, customers are interested in questions such as:
1. How well does a cloud provider perform compared to other providers?
2. Which cloud provider will deliver better service at lower cost for a particular type of
application?
3. How is performance affected on machine instances with different capabilities?
In this section, we will learn how to measure performance on the cloud and compare cloud providers on
both cost and performance.
Comparing Public cloud providers
As discussed earlier, different cloud providers have different options for performance, cost, and
feature set. A comparison among cloud providers helps a user choose one for running or developing an
application. The challenge is that every provider uses a different approach to deliver services to its
customers, including different virtualization technologies and solutions, so comparing providers
requires a common methodology that covers all dimensions (compute, storage, network, scaling, and
cost). Several technical challenges arise when comparing cloud providers:
1. The first is to identify measurement parameters that are common to all service providers,
regardless of differences in implementation.
2. The second is to measure the performance metrics from the customer's perspective.
3. The third is to account for the cost of measuring the performance metrics for each cloud provider.
Goals and Approach
In this section, we will describe design Goals for a Comparator of various cloud provider and briefly
describe the solution approach:
‘ Customer’s Choice of Provider: The primary goal of a comparator is to provide cost and
performance information for various cloud provider to a customer. This information can be used
to select the right cloud provider for a customer for his application. The choice of cost and
performance metric will be relevant to all typical application a customer my deploy. The metrics
cover all cloud services including elastic computing, persistent storage, wide area network and
intra-cloud network.
‘ Relevant to Cloud Provider: A comparison will be useful for a provider to identify the under-
performing service as compared to other providers. The reasons along with comprehensive set of
measurement result can be used for improvement of any service of a provider.
- Fairness: The same sets of workloads and metrics are used for each provider; instead of comparing provider-specific features, the focus is on the set of common services offered by each provider.
- Throughput vs. measurement cost: For various cloud providers, throughput is compared by measuring all cloud providers across all their data centers. This incurs significant measurement overhead and monetary cost. Each provider is measured at different times of day, across all its locations, periodically.
- Coverage vs. Development Cost: Running the benchmark on every cloud provider takes significant cost and time, so we choose cloud providers which have a large number of customers and which represent different models like IaaS and PaaS. The comparator should also be extensible to other providers.
Measurement Methodology
1. Identify Common Services: Each cloud provider includes some common sets of functionality:
(a) Elastic Compute Cluster: A cluster has multiple virtual machines running the application; each virtual machine may have different hardware resources allocated to it, and the underlying physical hardware may also differ.
(b) Persistent Storage: Storage is used to hold the application's data and state; this data can be accessed through APIs.
(c) Intra-Cloud Network: The virtual instances are connected to each other by the intra-cloud network, over which they can share services and connect application instances.
(d) Wide-Area Network: The cloud is connected to the Internet, and users can access content from multiple data centers.
2. Choosing Performance & Cost Metrics: For each service type, the performance metrics are chosen such that they are common to every cloud provider:
(a) Metrics for Elastic Compute Clusters: The server hardware may differ in resources and configuration; for example, instances in a higher tier may have faster and more CPU cores. The customer can dynamically scale the number of instances up and down according to the application's requirements. The compute cluster charges per usage: some providers charge for the time an instance is up, and some also charge for how many CPU cycles the customer's application consumes (for example, Google's PaaS service). So for comparing providers on this type of service we use three metrics: benchmark finish time, cost, and scaling latency.
Benchmark finish time is the time taken by each provider to run the benchmark, which stresses all computing resources: CPU, memory, disk I/O and network.
Cost is the total monetary cost to complete each benchmarking task. This metric is useful when a customer wants the best service within a cost budget.
Scaling latency is the time taken by each provider to allocate a new instance to handle a workload change; scaling latency may also affect the performance of the application. There are other metrics, such as automation in manageability and customization of virtual instances, but these are hard to quantify, so we focus on the performance and cost of running an application.
(b) Metrics for Persistent Storage: Cloud storage providers offer different types of services, such as Table, Blob and Queue. Most of these storage services are implemented over HTTP tunnels, and the user interfaces are almost identical across providers. Performance is affected by operation time, availability and consistency. There are three metrics to compare storage providers.
Operation response time: the time taken by a service provider to complete an operation. The operations chosen are common to all providers and popular among customers (operations may include lookup, write and update in a database table, across all service types: table, blob or queue).
Time to consistency: the time between when a datum is written to the storage service and when all reads of the datum return consistent and valid results. This metric is useful because a customer may want immediate availability of data with consistency.
Cost per operation: how much each operation costs, used to weigh cost against performance for each cloud provider.
(c) Intra-Cloud Network: The intra-cloud network is useful for communication between servers within the cloud; for example, one instance may divide incoming requests among multiple instances of the same server, which requires consistency among all server instances. All cloud providers promise high intra-cloud network bandwidth. To compare intra-cloud networks we can use TCP throughput, because TCP is the most common traffic, and path latency, which affects both TCP throughput and end-to-end response time.
(d) Wide-Area Network: Data centers differ in location, and one cloud provider may have multiple data centers in different locations. To compare the effect of the wide-area network on performance we use the optimal wide-area network latency: the minimum latency between the customer's location and any data center owned by the provider.
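The time-to-consistency storage metric above can be measured by writing a datum and then repeatedly reading it until the new value is returned. A minimal sketch, where `write_fn` and `read_fn` are hypothetical wrappers around a provider's storage API (not any specific SDK):

```python
import time

def time_to_consistency(write_fn, read_fn, key, value, timeout=60.0, poll=0.05):
    """Write a datum, then poll reads until the new value becomes visible.

    Returns the elapsed seconds between the write and the first consistent
    read. write_fn/read_fn are assumed wrappers around the provider's
    storage API (table, blob or queue).
    """
    start = time.monotonic()
    write_fn(key, value)
    while time.monotonic() - start < timeout:
        if read_fn(key) == value:
            return time.monotonic() - start
        time.sleep(poll)
    raise TimeoutError("datum not consistent within timeout")
```

In practice the reads would be issued from several vantage points, since different replicas may lag by different amounts.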
Working of Comparator Tool
The performance measurements are done for each cloud provider. Different cloud providers offer different types of instances, so the benchmarking tasks should be executed on each instance type with varying numbers of instances, and the performance metrics and cost measured for each cloud provider. Different experiments with different service types are useful for application-specific comparison. For comparing cloud providers, the following experiments may be helpful:
- Finish time and cost of each cloud provider for CPU, memory and disk I/O operations separately.
- Scaling latency for the lowest instance type of each cloud provider.
- Response time comparison for each operation among various storage providers [response time vs. cumulative fraction].
- Consistency time comparison among storage providers.
- TCP throughput comparison within the cloud for each cloud provider.
- TCP throughput between two different data centers for each cloud provider [TCP throughput vs. cumulative fraction].
- Operation time comparison among storage providers [operation time vs. no. of concurrent operations for each provider].
- Round trip time comparison for each cloud provider [cumulative fraction vs. round trip time (for different customer locations) for each provider].
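Given measurements like these, provider selection under the "best service within cost budget" criterion from the compute metrics can be sketched as follows; the provider names and numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ComputeResult:
    provider: str
    finish_time_s: float      # benchmark finish time
    cost_usd: float           # total monetary cost of the benchmark run
    scaling_latency_s: float  # time to bring up one more instance

def best_within_budget(results, budget_usd):
    """Lowest benchmark finish time among providers whose run cost fits
    the customer's budget; None if no provider is affordable."""
    feasible = [r for r in results if r.cost_usd <= budget_usd]
    return min(feasible, key=lambda r: r.finish_time_s) if feasible else None
```

The same pattern extends to other criteria, e.g. minimizing scaling latency subject to a finish-time bound.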
Impact of virtualization on Cloud Network
Most cloud providers use virtualization to provide flexible and cost-effective resource sharing. A cloud provider runs multiple VM instances on a physical machine, and the hardware is shared among them. Cloud providers also offer multiple types of instances with different capabilities. A comparison between different instance types, and of the impact of virtualization on those instances, will help a user choose an instance type according to need. The performance of an instance may be affected by the sharing of resources among VMs; it depends on the number of VMs running on the physical machine, the physical hardware, and other factors.
We will describe a method to measure the impact of virtualization on performance:
Measurement Tools and properties
1. Processor Sharing: In the cloud, hardware is shared by multiple instances, so a user may ask: 'How does this cloud provider assign physical processors to my instance?' We run a CPU-test program to answer this question. The program consists of a loop of 1 million iterations; in each iteration it simply calls gettimeofday() and saves the timestamp to memory. If the processor is not shared, then each iteration will take the same amount of time (assuming only this application is running on the processor). Virtual machine scheduling may cause some iterations to take more time than others. From the differences in iteration gaps we can estimate the processor-sharing property of a cloud provider.
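A sketch of such a CPU-test program in Python; the original presumably calls gettimeofday() in C, and a monotonic nanosecond clock plays the same role here. The 10x-median cutoff used to classify a gap as "descheduled" is an assumed heuristic:

```python
import time

def cpu_share_estimate(iterations=1_000_000):
    """Timestamp every iteration; unusually large gaps between consecutive
    timestamps suggest the vCPU was descheduled by the hypervisor."""
    stamps = []
    for _ in range(iterations):
        stamps.append(time.monotonic_ns())
    gaps = [b - a for a, b in zip(stamps, stamps[1:])]
    typical = max(sorted(gaps)[len(gaps) // 2], 1)  # median per-iteration cost
    # Time spent in "normal" iterations over total wall time approximates
    # the share of the physical CPU this instance actually received.
    scheduled = sum(g for g in gaps if g < 10 * typical)
    return scheduled / (stamps[-1] - stamps[0])
```

On a dedicated core the estimate should be close to 1.0; on a shared instance the long gaps left by the scheduler pull it down toward the instance's CPU share.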
2. TCP-UDP throughput: UDP throughput can be measured by running a UDP server on the instance and a UDP client that receives the data at the highest possible speed. If the hypervisor cannot handle the data, it may drop some packets. The final throughput is the total data received by the client over time. TCP throughput can be measured the same way, with the maximum window size fixed. We can also study TCP and UDP throughput over much smaller time windows to see how resource sharing affects throughput.
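A minimal UDP receiver for such a throughput test (the port and durations below are arbitrary; a real run would place the sender on another instance and run far longer):

```python
import socket
import time

def udp_sink(port, duration=2.0, bufsize=65536):
    """Receive UDP datagrams as fast as possible; report goodput in bits/s.

    Packets dropped by the hypervisor or kernel never arrive, so drops show
    up directly as lower measured throughput.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", port))
    s.settimeout(0.5)
    total = 0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        try:
            data, _ = s.recvfrom(bufsize)
            total += len(data)
        except socket.timeout:
            pass
    s.close()
    return total * 8 / (time.monotonic() - start)
```

Sampling `total` at fine-grained intervals instead of once at the end reveals the on/off pattern caused by VM scheduling.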
3. Packet Loss: Badabing is a state-of-the-art loss-rate estimation tool, which we can use here to estimate packet loss.
Various cloud providers offer multiple instance types, so for comparison we can pick any two instance types to work with. For example, Amazon EC2 provides small and medium instances, where a medium instance has higher capacity than a small one. For experimentation, both instance types are compared to identify the differences. Amazon uses the Xen hypervisor for virtualization. We do a spatial experiment to measure the network performance difference caused by different network locations, and a temporal experiment to evaluate the network performance of a cloud provider over a long time period. The performance metrics are RTT and TCP/UDP throughput, because they are valid metrics for network performance. Experiments are performed on Amazon EC2, which offers two instance types, small and medium, with different hardware capabilities. EC2 has data centers in different physical locations and allows customers to choose a data center by its location.
1. Processor Sharing
To measure processor sharing, different instance types are taken and the CPU-test program is run on them. In [cite], G. Wang compared three systems: a non-virtualized computer with an AMD dual-core Opteron 252 2.2GHz, an EC2 High-CPU medium instance with an Intel Xeon dual-core 2.4GHz, and an EC2 small instance sharing one core of an AMD Opteron dual-core 2218E 2.8GHz. Figure 3 [cite] shows the CPU-test program trace, where the small instance exhibits huge timestamp gaps. Figure 4 [cite] shows the distribution of processor share: the small instance always shares CPU resources and always gets 40% to 50% of the physical CPU, while medium instances always have more than 95% of the CPU share; it could be 100%, but context switches between the kernel and the CPU-test program may affect it.
Figure 3: CPU-Test Trace plot
Figure 4: Distribution of CPU sharing
2. TCP-UDP Throughput: The experiment done in [cite] shows the bandwidth difference between small and medium EC2 instances. We will look at the bandwidth measurement results of the spatial experiment and the TCP/UDP throughput of small instances at fine granularity. In the experiment, the TCP/UDP throughput of 750 pairs of small instances and 150 pairs of medium instances was measured. The TCP window size used in the experiments is 256KB, which can achieve 4Gb/s throughput if the network allows. Figure 5 [cite] shows how the TCP and UDP throughput of small and medium instances differ from each other. UDP throughput is higher than TCP because Xen stores UDP packets in a burst buffer and delivers them all at once when the VM is scheduled. In Figure 6 [cite] we can see the timestamp gaps in the throughput.
Figure 5: The Distribution of bandwidth measurement results in spatial experiment
Figure 6: TCP and UDP throughput of small instances at fine granularity
Virtualization Solution Benchmarking
In this section we will study the effect of different virtualization solutions on performance, methods to measure that performance, and solutions to improve it. Virtual machines running on a physical machine cause performance overhead because of the facilities they provide and the mechanisms they use. Here we will focus on two virtualization solutions and their effect on performance.
Identifying and Managing Performance Interference in Virtualized Environment
Various IaaS providers use virtualization to package an application into one or more virtual machines; this helps users isolate misbehaving applications, and service providers offer facilities to manage VMs at low operating cost. When multiple VMs run on a physical machine, one virtual machine may experience performance interference from the other machines running on the system. Detecting interference and dealing with it efficiently is useful for a user. Detecting and managing interference involves three steps:
1. Detecting Interference:
To detect interference we divide this step into two modules. The first measures low-level metrics of the VM and monitors its behavior; if it finds a change in the system, it calls the second module, where a deep analysis of the VM is done to verify whether the behavior change is due to a change in workload or due to interference from other VMs.
To detect the behavior change, the first module uses low-level performance metrics so that exact information about the system's behavior is available. This module learns the old behavior of the system from previously recorded data and keeps analyzing the current behavior; if it finds a change, it transfers control to the second module, which analyses the system in detail.
The second module clones the VM into a sandboxed environment where it cannot be affected by any other VM, and then measures the performance metrics again. It also has the performance metric values the first module used when it flagged the VM for a behavior change. If the previous values match the current values of the performance metrics, then the behavior change is due to a workload increase; otherwise it is due to interference.
Now we describe the low-level metrics used to differentiate VM behavior and the metrics which can be used to confirm that a behavior change is due to interference.
Figure 7 shows the low-level metrics used for detecting the behavior change.
Figure 7: Low-level metrics used to differentiate normal VM behavior from interference. iostat and netstat can be used to approximate the I/O-related stalls associated with different VMs.
To confirm interference, we clone the VM into a sandbox environment and replay copies of the requests served in the real environment. Since the VM is now isolated from other VMs, the behavior change can be analyzed by taking the ratio of the performance metrics; if the ratio is above some threshold, interference is declared.
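The threshold test just described can be sketched as follows; the metric names and the 1.3 threshold are illustrative assumptions, not values from the source:

```python
def interference_detected(live_metrics, sandbox_metrics, threshold=1.3):
    """Compare each low-level metric from production against the same metric
    from the cloned VM replaying identical requests in the sandbox.

    If any live/sandbox ratio exceeds the threshold, the behavior change is
    attributed to interference rather than to a workload shift.
    """
    for name, live in live_metrics.items():
        sandbox = sandbox_metrics.get(name)
        if sandbox and live / sandbox > threshold:
            return True
    return False
```

For example, a cycles-per-instruction reading of 2.6 in production against 1.8 in the sandbox (ratio 1.44) would be flagged as interference.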
2. Selecting an appropriate destination physical machine
To select a physical machine (PM) which can run the VM without affecting its performance, we run a synthetic benchmark on the candidate machine. The synthetic benchmark generates the same type of load on the physical machine as the actual VM; if the machine can sustain the load without affecting the performance of its other VMs, it is chosen. If the machine's performance is affected, we run the same benchmark on another physical machine. The destination PM must have the resources the VM requires available.
3. Migrating the VM to the selected PM
File Cache Deduplication
The gap between CPU and disk I/O speed is among the most significant factors affecting a computing system's performance, so every system uses file cache management to reduce it. The operating system normally dedicates a large space in memory to caching file data, which helps reduce the time to access a file. File read operations are synchronous and heavily used, so they are the most sensitive to the effectiveness of cache management. Virtual machines are often over-committed to achieve higher economies of scale, but this also creates memory pressure. Deduplication between the host and guest caches is therefore important for increasing performance.
1. Previous techniques of cache deduplication
First, the cooperative cache mechanism, where cache information is shared among all the virtual machines and the physical machine, and the selection of what to cache is made jointly. It involves communication among the virtual machines and the physical machine, which adds complexity in development and maintenance.
Second, the page sharing mechanism, where each memory page is scanned and data blocks with identical content are merged. Scanning memory pages frequently induces performance overhead on the system.
2. I/O performance analysis with cache
If an application running on the guest wants to access a file, the request is first checked against the guest page cache; on a hit, the access latency is 1 memory copy. On a miss, the request is forwarded to the guest virtual disk and a new cache page is allocated in the guest; the virtual disk read is then converted by the I/O virtualization layer into a read from the image file. This read is checked against the host cache; on a hit, the host block cache is copied into the guest block cache and the overall access latency is 2 memory copies. On a miss in the host cache, the total access latency is 2 memory copies + 1 physical device transfer.
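The three cases above combine into a simple expected-latency model. A sketch, with latency measured in memory-copy units and an assumed illustrative device-transfer cost:

```python
def expected_access_cost(guest_hit, host_hit, mem_copy=1.0, device=100.0):
    """Expected file-read cost given guest/host cache hit rates:
    guest hit -> 1 memory copy; guest miss + host hit -> 2 memory copies;
    both miss -> 2 memory copies + 1 physical device transfer.
    host_hit is the hit rate conditional on a guest-cache miss; the
    device cost of 100 memory-copy units is an arbitrary illustration."""
    guest_miss = 1.0 - guest_hit
    return (guest_hit * mem_copy
            + guest_miss * host_hit * 2 * mem_copy
            + guest_miss * (1.0 - host_hit) * (2 * mem_copy + device))
```

The model makes the motivation for deduplication concrete: raising either hit rate sharply reduces the expected cost because the device transfer dominates.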
3. Functionally partitioned file caching
The goal is to develop a caching mechanism which utilizes the cache space in the VM and PM through deduplication. We use functional partitioning, which applies simple rules for each component to identify its private data and store it in its own cache. On the host side, the VM base images are the most important data to cache, because multiple VMs use the same base image. Most of the content of VM images is identical, so storing only the base image in the host file cache saves a great deal of space. For each VM, private data should have higher priority to stay in the guest cache, because two VMs rarely share the same private data. Keeping private data in the guest cache, and base-image (common) data in the host cache, improves the performance of the system. To identify whether a block belongs to the base image or to private data, we use a simple heuristic: if the block was changed after guest OS boot time, it more likely belongs to private data. When a guest wants to access data belonging to the base image, the check against the guest cache is skipped using the O_DIRECT flag.
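The boot-time heuristic can be written down directly; the timestamps here are in arbitrary units:

```python
def likely_private(block_mtime, guest_boot_time):
    """Heuristic from the text: a block modified after the guest OS booted
    more likely holds VM-private data; otherwise it is treated as
    base-image data shared across VMs."""
    return block_mtime > guest_boot_time

def cache_target(block_mtime, guest_boot_time):
    """Private data stays in the guest cache; base-image (common) data is
    cached once on the host and deduplicated across VMs."""
    return "guest" if likely_private(block_mtime, guest_boot_time) else "host"
```

A base-image block (unmodified since boot) is routed to the host cache, and the guest reads it with O_DIRECT so it is not cached twice.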
Experimentation Methodology
In this section, we will describe how the experiments for performance measurement are done.
Selection Of Performance Metrics
To measure the performance of any system, we first need to list the performance metrics according to the type of measurement. As discussed in the Introduction section, the performance parameters should be related to what you want to benchmark.
Selection of workload tool
Various workload tools are available for generating different types of workloads, and the choice of tool should match the experiment. Some tools commonly used in experiments for various purposes:
1. SPECsfs97
SPECsfs is the Standard Performance Evaluation Corporation's file server benchmark; it is useful for measuring the performance of a system file server or network file server.
2. SPEC CPU2006
'CPU2006 is SPEC's next-generation, industry-standardized, CPU-intensive benchmark suite, stressing a system's processor, memory subsystem and compiler. SPEC designed CPU2006 to provide a comparative measure of compute-intensive performance across the widest practical range of hardware using workloads developed from real user applications.' [cite] This can be used for generating CPU- and memory-intensive load.
3. Iperf
Iperf is a commonly used benchmarking tool for creating UDP and TCP streams. It can be used to measure the TCP throughput between two systems.
4. Ping
Ping is a network utility to test whether a network host is up. Ping can also be used to measure the round trip time between systems.
5. RUBiS
RUBiS is a benchmark prototype modeling an auction site; it is used to evaluate application design patterns and application server performance scalability. It includes basic functionality like selling, browsing and bidding.
