
(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 5, No. 2, 2014

Evaluation of Different Hypervisors' Performance in the Private Cloud with SIGAR Framework

P. Vijaya Vardhan Reddy
Department of Computer Science & Engineering, University College of Engineering, Osmania University, Hyderabad, India

Dr. Lakshmi Rajamani
Department of Computer Science & Engineering, University College of Engineering, Osmania University, Hyderabad, India

Abstract— To make the cloud computing model practical and to give it essential characteristics such as rapid elasticity, resource pooling, on-demand access and measured service, two prominent technologies are required: the internet and virtualization technology. Virtualization technology plays a major role in the success of cloud computing. A virtualization layer that provides infrastructural support to multiple virtual machines above it, by virtualizing hardware resources such as CPU, memory, disk and NIC, is called a hypervisor. It is interesting to study how different hypervisors perform in the private cloud. Hypervisors come in paravirtualized, full virtualized and hybrid flavors, and comparing them in the private cloud environment is a novel idea. This paper conducts different performance tests on three hypervisors, XenServer, ESXi and KVM, and gathers results using the SIGAR API (System Information Gatherer and Reporter) along with the Passmark benchmark suite. In the experiment, CloudStack 4.0.2 (open source cloud computing software) is used to create a private cloud, in which the management server is installed on the Ubuntu 12.04 64-bit operating system. The hypervisors XenServer 6.0, ESXi 4.1 and KVM (Ubuntu 12.04) are installed as hosts in their respective clusters, and their performance has been evaluated in detail using the SIGAR framework, Passmark and Netperf.

Keywords—CloudStack; Hypervisor; Management Server; Private Cloud; Virtualization Technology; SIGAR; Passmark

I. INTRODUCTION

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction [1].

Virtualization, in computing, refers to the act of creating a virtual version of something, including but not limited to a virtual computer hardware platform, operating system, storage device, or computer network resources. Storage virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Server virtualization is the partitioning of a physical server into smaller virtual servers. Operating system-level virtualization is a type of server virtualization technology that works at the operating system (kernel) layer. Network virtualization is the use of network resources through logical segmentation of a single physical network. Virtualization is the technology that increases the utilization of physical servers and enables portability of virtual servers between physical servers. Virtualization technology gives the benefits of workload isolation, workload migration and workload consolidation.

To reduce hardware cost, cloud computing uses virtualization. Virtualization technology has evolved quickly during the past few years, in particular due to hardware advances made by AMD and Intel.
Virtualization is a technology that combines or divides computing resources to present one or many operating environments using methodologies like hardware and software partitioning or aggregation, partial or complete machine simulation, emulation, timesharing, and many others [2]. A virtualization layer provides infrastructural support, using the lower-level resources to create multiple virtual machines that are independent of and isolated from each other. Such a virtualization layer is also called a hypervisor [2].

Cloud computing allows customers to reduce hardware cost by providing resources on demand. Customers of the service also need a guarantee that the service provided by the cloud functions properly. The Service Level Agreement brokered between the cloud provider and the customers is the provider's guarantee that the service will be delivered properly [3].

This paper provides a quantitative comparison of three hypervisors, XenServer 6.0, VMware ESXi Server 4.1 and KVM (Ubuntu 12.04), in the private cloud environment. Microsoft Windows 2008 R2 server is installed on the three hypervisors as a guest operating system, a series of performance experiments is conducted on the respective guest OS, and results are gathered using SIGAR [36], Passmark [16] and Netperf [35]. This paper presents and analyses the results of these experiments. The discussion should help both IT decision makers and end users choose the right virtualization hypervisor for their respective private cloud environments. The experimental results indicate that XenServer and VMware ESXi Server deliver almost equal and near-native performance in all the tests, except that in the CPU test ESXi performs marginally better than XenServer, and in the memory test XenServer performs slightly better than ESXi Server. Furthermore, KVM's performance is noticeably lower than that of XenServer and ESXi Server; it needs to improve in all performance aspects.

II. VIRTUALIZATION TECHNIQUES

This section describes the different virtualization techniques, namely full virtualization and paravirtualization, used by different hypervisors.

x86 operating systems are designed to run directly on the bare-metal hardware, so they naturally assume they fully 'own' the computer hardware. The x86 architecture offers four levels of privilege, known as Rings 0, 1, 2 and 3, to operating systems and applications to manage access to the computer hardware. While user-level applications typically run in Ring 3, the operating system needs direct access to the memory and hardware and must execute its privileged instructions in Ring 0. Virtualizing the x86 architecture requires placing a virtualization layer under the operating system (which expects to be in the most privileged Ring 0) to create and manage the virtual machines that deliver shared resources. Three alternative techniques now exist for handling sensitive and privileged instructions to virtualize the x86 architecture. The full virtualization [17] approach translates kernel code to replace non-virtualizable instructions with new sequences of instructions that have the intended effect on the virtual hardware. This combination of binary translation and direct execution provides full virtualization, as the guest OS is fully abstracted (completely decoupled) from the underlying hardware by the virtualization layer. The full virtualization approach allows datacenters to run an unmodified guest operating system, thus maintaining existing investments in operating systems and applications and providing a non-disruptive migration to virtualized environments. VMware ESXi server uses a combination of direct execution and binary translation techniques [4] to achieve full virtualization of an x86 system. Paravirtualization [17] involves modifying the OS kernel to replace non-virtualizable instructions with hypercalls that communicate directly with the virtualization-layer hypervisor. The hypervisor also provides hypercall interfaces for other critical kernel operations such as memory management, interrupt handling and timekeeping. The paravirtualization approach modifies the guest operating system to eliminate the need for binary translation. It therefore offers potential performance advantages for certain workloads but requires specially modified operating system kernels [4]. The Xen open source project was designed initially to support paravirtualized operating systems. While it is possible to modify open source operating systems, such as Linux and OpenBSD, it is not possible to modify "closed" source operating systems such as Microsoft Windows. Hardware vendors are rapidly embracing virtualization and developing new features to simplify virtualization techniques. First-generation enhancements include Intel Virtualization Technology (VT-x) and AMD's AMD-V, which both target privileged instructions with a new CPU execution mode feature that allows the VMM to run in a new root mode below Ring 0. The hardware virtualization [17] support enabled by AMD-V and Intel VT technologies introduces virtualization in the x86 processor architecture itself.

III. HYPERVISOR MODELS

All three hypervisors used in the experiment are discussed from the viewpoint of their virtualization technique.

A. Paravirtualized Hypervisor

XenServer - Citrix XenServer is an open-source, complete, managed server virtualization platform built on the powerful Xen hypervisor.
Xen [21] uses paravirtualization. Paravirtualization modifies the guest operating system so that it is aware of being virtualized on a single physical machine, with less performance loss. XenServer is a complete virtual infrastructure solution that includes a 64-bit hypervisor with live migration, a full management console, and the tools needed to move applications, desktops, and servers from a physical to a virtual environment [8]. Based on the open source design of Xen, XenServer is a highly reliable, available, and secure virtualization platform that provides near-native application performance [8]. Xen usually runs at a higher privilege level than the kernels of guest operating systems. This is guaranteed by running Xen in Ring 0 and migrating guest operating systems to Ring 1. When a guest operating system tries to execute a sensitive privileged instruction (e.g., installing a new page table), the processor stops and traps it into Xen [9]. In Xen, guest operating systems are responsible for allocating the hardware page table, but they only have the privilege of direct read, and Xen [9] must validate updates to the hardware page table. Additionally, guest operating systems can access hardware memory only in a non-contiguous way, because Xen occupies the top 64 MB section of every address space to avoid a TLB flush when entering and leaving the hypervisor [9].

B. Full Virtualized Hypervisor

ESXi Server - VMware ESXi is a hypervisor aimed at server virtualization environments, capable of live migration using VMotion and of booting VMs from network-attached devices. VMware ESXi supports full virtualization [7]. The hypervisor handles all the I/O instructions, which necessitates the installation of all the hardware drivers and related software. It implements shadow versions of system structures such as page tables and maintains consistency with the virtual tables by trapping every instruction that attempts to update these structures. Hence, there is an extra level of mapping in the page table. The virtual pages are mapped to physical pages through the guest operating system's page table [6]. The hypervisor then translates the physical page (often called a frame) to the machine page, which eventually is the correct page in physical memory.

This helps the ESXi server better manage overall memory and improve overall system performance [19]. VMware's proprietary ESXi hypervisor, in the vSphere cloud computing platform, provides a host of capabilities not currently available with any other hypervisor. These capabilities include High Availability (the ability to recover virtual machines quickly in the event of a physical server failure), Distributed Resource Scheduling (automated load balancing across a cluster of ESXi servers), Distributed Power Management (automated decommissioning of unneeded servers during non-peak periods), Fault Tolerance (zero-downtime services even in the event of hardware failure), and Site Recovery Manager (the ability to automatically recover virtual environments in a different physical location if an entire datacenter outage occurs) [7].

C. Hybrid Methods

KVM - KVM (Kernel-based Virtual Machine) is, apart from VMware, another open-source hypervisor using full virtualization. As a kernel driver added into Linux, KVM also enjoys all the advantages of the standard Linux kernel and of hardware-assisted virtualization, thus depicting a hybrid model. KVM introduces virtualization capability by augmenting the traditional kernel and user modes of Linux with a new process mode named guest, which has its own kernel and user modes and handles code execution of guest operating systems [9]. KVM comprises two components: a kernel module and a userspace component. The kernel module (namely kvm.ko) is a device driver that presents the ability to manage virtual hardware and the virtualization of memory through a character device, /dev/kvm. With /dev/kvm, every virtual machine can have its own address space, allocated by the Linux scheduler when it is instantiated [9]. The memory mapped for a virtual machine is actually virtual memory mapped into the corresponding process. Translation of memory addresses from guest to host is supported by a set of page tables. KVM can easily manage guest operating systems with the kill command and /dev/kvm. Userspace takes charge of the virtualization of I/O operations. KVM also provides a mechanism for userspace to inject interrupts into guest operating systems. The userspace component is a lightly modified QEMU, which exposes a platform virtualization solution for an entire PC environment, including disks, graphics adapters and network devices [9]. Any I/O requests of guest operating systems are intercepted and routed into user mode to be emulated by QEMU [9].

IV. RELATED WORK

The following papers were studied to understand the relevant work that has taken place in the selected research area.

Benchmark Overview - vServCon, a white paper by FUJITSU [10]: scalability measurements of virtualized environments at Fujitsu Technology Solutions are currently accomplished by means of the internal benchmark "vServCon" (based on ideas from Intel's "vConsolidate"). The abbreviation "vServCon" stands for "virtualization enables SERVer CONsolidation". A representative group of application scenarios is selected in the benchmark and started simultaneously as a group of VMs on a virtualization host when making a measurement. Each of these VMs is operated with a suitable load tool at a defined lower load level. All known virtualization benchmarks are thus based on a mixed approach of operating system and applications plus an "idle" or "standby" VM, which represents the inactive phases of a virtualization environment and simultaneously increases the number of VMs to be managed by the hypervisor [10].

Virtualization overhead involves performance degradation relative to native performance. Research has been carried out to measure the overhead of virtualization for different hypervisors such as Xen, KVM and VMware ESX [11]; [12]; [13]; [14]; [15]. For their research, Menon et al. used a toolkit called Xenoprof, a system-wide statistical tool implemented specially for Xen [13]. With this toolkit they managed to analyse the performance overhead of network I/O devices. Their study was performed on uniprocessor as well as multiprocessor systems. A part of their research was dedicated to performance debugging of Xen using Xenoprof. This work made it possible to correct bugs and thereby improve network performance significantly.
After the debugging part, the focus turned to network performance. It was observed that performance is almost the same between Xen Domain0 and native. However, as the number of interfaces increases, the receive throughput of Domain0 becomes significantly smaller than native performance. This degradation of network performance is caused by increasing CPU utilisation: because of the overhead caused by virtualization, there are more instructions that need to be managed by the CPU, which means more information to process and buffer and hence degraded receive throughput compared to native. More recent studies try to compare the differences between hypervisors, and especially the performance of each one according to its overhead [12]; [15]. They use three different benchmark tools to measure performance: LINPACK, LMbench and IOzone, and their experiment is divided into three parts according to the specific use of each tool. With LINPACK, Jianhua et al. tested floating-point processing efficiency. Different peak values were observed across the systems tested: native, Xen and KVM. The results show that the floating-point processing efficiency of Xen is better than that of KVM, because Fedora 8 virtualized with Xen achieves 97.28% of native performance, whereas Fedora 8 virtualized with KVM achieves only 83.46%. The virtualization of Windows XP gives better performance than the virtualization of Fedora 8 on Xen. The authors explain this by the fact that Xen has fewer enhancement packages for Windows XP than for Fedora 8; because of that, the performance of virtualized Windows XP is slightly better than that of virtualized Fedora 8.

After testing processing efficiency with LINPACK, Jianhua et al. analysed the memory virtualization of Xen and KVM compared to native memory performance with LMbench. It was observed that the memory bandwidth of Xen in reading and writing is really close to native performance. The performance of KVM, however, is slightly slower for reading and significantly slower for writing. The last tool used by Jianhua et al. is IOzone, which performs file system benchmarks. Once again the native performance is compared to the virtualized performance of Xen and KVM. Without an Intel VT processor, the performance of either Xen or KVM is around 6 or 7 times slower than native. With an Intel VT processor, however, the performance of Xen increases significantly, to the point of being even better than native performance. KVM does not exploit the functionality of Intel VT processors and therefore does not improve its performance.

After analysing the relevant work on hypervisor performance, we chose the experiment below to compare the respective hypervisors in a private cloud environment with CloudStack using the SIGAR framework, which is a novel idea.

V. TEST METHODOLOGY - PRIVATE CLOUD: CLOUDSTACK WITH HYPERVISORS

In our experiment, the proposed test environment contains the following infrastructure, built using open source cloud computing software. CloudStack is Infrastructure-as-a-Service (IaaS) cloud software that is able to rapidly build and provide private cloud environments or public cloud services. Supporting KVM, XenServer and VMware ESXi, CloudStack is able to build cloud environments with a mix of multiple different hypervisors. It offers a rich web interface for users and administrators, with cloud use and operation performed in a browser. Additionally, the architecture is made to be scalable for large-scale environments [22]. CloudStack is open source software written in Java that is designed to deploy and manage large networks of virtual machines as a highly available, scalable cloud computing platform. CloudStack offers three ways to manage cloud computing environments: an easy-to-use web interface, a command line, and a full-featured RESTful API [22]. Private clouds are deployed behind the firewall of a company, whereas a public cloud is usually deployed over the internet. It is always ideal to use open source solutions to perform any experiment related to cloud computing.

In our test environment, XenServer, ESXi and KVM are used as hypervisors (hosts) in the CloudStack private cloud. One machine, the Management Server, runs on a dedicated server. It controls the allocation of virtual machines to hosts and assigns storage and IP addresses to the virtual machine instances. The Management Server runs in a Tomcat container and requires a MySQL database for persistence. In the experiment, the Management Server is installed on Ubuntu 12.04 64-bit. On the host servers, the XenServer 6.0, ESXi 4.1 and KVM (Ubuntu 12.04) [31] hypervisors are installed as depicted in Fig. 1. The front end can be any base machine that launches the CloudStack UI through a web interface (with any browser, e.g. IE, Firefox, Safari) to provision the cloud infrastructure by creating the zone, pod, cluster and host in sequential order. After the respective hypervisors are in place, the guest OS, Windows 2008 R2 64-bit [33], is installed on them to carry out all performance tests.

A typical enterprise datacenter runs a mix of CPU-, memory-, and I/O-intensive applications. Hence the test workloads chosen for these experiments comprise several well-known standard benchmark tests. Passmark, a synthetic suite of benchmarks intended to isolate various aspects of workstation performance, was selected to represent desktop-oriented workloads; disk I/O performance is measured using Passmark. CPU and memory performance on the guest OS are measured using the SIGAR framework. SIGAR (System Information Gatherer and Reporter) is a cross-platform, cross-language library and command-line tool for accessing operating system and hardware-level information from Java, Perl and .NET. In the experiment, a Java program was written to gather system information using the SIGAR API, deploying sigar-amd64-winnt.dll for Windows; a sketch of such a program is given below. For network performance, Netperf is used in the experiment, simulating the network usage in a datacenter. The objective of these experiments was to test the performance of the three virtualization hypervisors. The tests were performed using Windows 2008 R2 64-bit as the guest operating system. The benchmark test suites are used in these experiments only to illustrate the performance of the three hypervisors.
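The paper does not reproduce the source of its measurement program, so the following is only a minimal sketch of what a SIGAR-based gatherer might look like. The Sigar, CpuPerc and Mem classes and their methods are from the public SIGAR Java API; the sampling loop, interval and output format are our assumptions, not the authors' code.

import org.hyperic.sigar.CpuPerc;
import org.hyperic.sigar.Mem;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;

/*
 * Minimal sketch: samples guest-OS CPU utilization and available
 * memory via the SIGAR API, the two metrics reported in Figs. 2 and 3.
 * Requires sigar.jar plus the matching native library
 * (sigar-amd64-winnt.dll on 64-bit Windows) on java.library.path.
 */
public class HypervisorProbe {
    public static void main(String[] args) throws SigarException, InterruptedException {
        Sigar sigar = new Sigar();
        for (int i = 0; i < 10; i++) {            // 10 samples, 1 s apart (assumed interval)
            CpuPerc cpu = sigar.getCpuPerc();     // aggregated CPU usage since the last call
            Mem mem = sigar.getMem();             // snapshot of system memory
            System.out.printf("cpu=%s freeMem=%d MB%n",
                    CpuPerc.format(cpu.getCombined()),      // e.g. "2.3%"
                    mem.getActualFree() / (1024 * 1024));   // bytes -> MB
            Thread.sleep(1000);
        }
        sigar.close();
    }
}

Averaging such samples per hypervisor, while the same Windows 2008 R2 guest runs the same workload, yields numbers directly comparable across XenServer, ESXi and KVM.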
VI. RESULTS

This section provides the detailed results for each of the benchmarks run. Disk I/O and network performance results have been normalized to native performance measures. Native performance is normalized at 1.0 and all other benchmark results are shown relative to that number; hence a benchmark result of 90% of native performance would be shown as 0.9 on the scale in the graph. Higher numbers indicate better performance of the particular virtualization platform, unless indicated otherwise. Near-native performance also indicates that more virtual machines can be deployed on a single physical server, resulting in higher consolidation ratios. This can help even if an enterprise plans to standardize on virtual infrastructure for server consolidation alone. The CPU utilization tests are evaluated using the SIGAR API; for these, lower CPU utilization indicates a better hypervisor. In the memory tests, also gathered using SIGAR, higher available memory indicates better performance of a hypervisor. The normalization itself is a simple ratio; a short sketch follows Fig. 1 below.

A. SIGAR

CPU utilization on the guest operating system is captured while it is running on the respective hypervisor. CPU utilization details are captured through a Java program using the SIGAR API on the guest OS for each hypervisor. As shown in Fig. 2, ESXi shows less CPU utilization for its guest OS compared to the other hypervisors; lower CPU utilization indicates better performance for a hypervisor. XenServer also shows low CPU utilization for its guest OS, but a little higher than ESXi. On the other hand, KVM's CPU utilization for its guest OS is slightly high compared to the other two hypervisors.

Memory performance is evaluated by considering the available memory on the respective hypervisor when the single guest operating system is given the full available memory.

Fig. 1. Test Environment Architecture – Private Cloud (CloudStack with Multiple Hypervisors)
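As promised above, here is an illustrative sketch of the normalization applied to the disk I/O and Netperf charts. The class, method and the raw scores are hypothetical, used only to make the arithmetic concrete; they are not the paper's measured values.

/*
 * Illustrative only: normalizes a hypervisor's raw benchmark score
 * against the bare-metal (native) score, so native plots as 1.0,
 * matching the scale described for Figs. 4 and 5.
 */
public final class Normalize {
    static double toNative(double hypervisorScore, double nativeScore) {
        return hypervisorScore / nativeScore;   // e.g. 90% of native -> 0.9
    }

    public static void main(String[] args) {
        // Hypothetical sequential-read throughputs (MB/s); not measured values.
        double nativeScore = 100.0;
        double xenScore = 97.3;
        System.out.println(toNative(xenScore, nativeScore));  // prints 0.973
    }
}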

Fig. 2. CPU Utilization captured using SIGAR (lower value is better)

Fig. 3 shows the available memory on the respective hypervisor when the guest OS is running. Memory details are captured using the Java program with the SIGAR API on the guest OS. XenServer shows the maximum available memory for its guest OS compared to the other hypervisors; higher available memory indicates better performance for a hypervisor. ESXi also exhibits high available memory, only slightly less than XenServer. KVM shows marginally less available memory compared to the other hypervisors.

Fig. 3. Available Memory captured using SIGAR (higher value is better)

B. PASSMARK

Fig. 4 shows the benchmark results for the Passmark disk I/O read/write tests. Sequential Read and Sequential Write are the disk mark tests conducted on the three hypervisors in the private cloud environment. Both XenServer and ESXi perform almost equal to native.

Fig. 4. Passmark – Disk I/O Read/Write results compared to native (higher values are better)

In Sequential Read and Sequential Write, XenServer shows slightly better performance than VMware ESXi Server. In overall disk mark performance, XenServer shows 2.7% overhead versus native, whereas ESXi shows 3.4% overhead versus native. KVM falls significantly behind the other two hypervisors and native as well.

C. NETPERF

For the experiment, in the private cloud for all three hypervisors, the Netperf test involved running a single client communicating with a single virtual machine through a dedicated physical Ethernet adapter and port. All tests are based on the Netperf TCP_STREAM test. Fig. 5 shows the Netperf results for the send and receive tests. XenServer and ESXi demonstrated near-native performance in the Netperf test, while KVM lags behind the other hypervisors and native. A sketch of how such a run can be driven appears after Fig. 5.

Fig. 5. Netperf results compared to native (higher values are better)
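The paper does not say how Netperf was invoked; as a rough illustration, a TCP_STREAM run could be scripted from the Java test harness as below. The -H, -t and -l options are standard Netperf flags; the target IP and the 60-second duration are placeholders, and the paper's actual run parameters may differ.

import java.io.BufferedReader;
import java.io.InputStreamReader;

/*
 * Illustrative driver for a Netperf TCP_STREAM run against a guest VM.
 * Assumes netperf is installed on the client and netserver is running
 * inside the VM; 192.168.1.50 and the 60 s duration are placeholders.
 */
public class NetperfRunner {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder(
                "netperf", "-H", "192.168.1.50",  // VM under test (placeholder IP)
                "-t", "TCP_STREAM",               // bulk-throughput test used in the paper
                "-l", "60")                       // run length in seconds (assumed)
                .redirectErrorStream(true)
                .start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);         // final column is throughput in 10^6 bits/s
            }
        }
        p.waitFor();
    }
}

Repeating the run per hypervisor and dividing each throughput by the bare-metal result gives the normalized values plotted in Fig. 5.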

VII. DISCUSSION ON RESULTS

The performance results show convincingly that XenServer and ESXi Server both perform equally well in all experiments, close to near-native performance and without signs of any virtualization overhead, while KVM falls behind the other two hypervisors and native as well.

In the CPU utilization tests, ESXi's CPU utilization is 0.06% less than XenServer's and 0.24% less than KVM's, thus exhibiting the best CPU utilization. In the memory tests, XenServer's available memory is 1% more than ESXi Server's and 6% more than KVM's, hence showing the best memory performance among the three hypervisors. In the I/O tests XenServer scores over ESXi and KVM: XenServer shows 4% overhead in sequential read and 6% overhead in sequential write compared to native; ESXi shows 5% overhead in sequential read and 7% overhead in sequential write compared to native; and KVM shows 35% overhead in sequential read and 36% overhead in sequential write compared to native. In the network performance tests, both XenServer and ESXi give near-native performance and KVM falls marginally behind the other two hypervisors. In the Client-Receive tests both XenServer and ESXi give performance equal to native; in the Client-Send tests XenServer gives performance equal to native, but ESXi shows 3% overhead compared to native. In the Client-Send and Client-Receive tests KVM shows 22% overhead compared to native.

Overall, the XenServer and ESXi hypervisors are reliable and affordable and offer the IT professional running Windows or any other guest operating system a high-performance platform for server consolidation of production workloads. KVM needs to improve on almost all fronts if it is to become on par with the other two hypervisors. ESXi and XenServer are mature hypervisors compared to KVM, and their Reliability, Availability and Serviceability (RAS) is significantly higher than that of KVM.

VIII. CONCLUSION AND FUTURE WORK

The objective of this experiment was to evaluate the performance of the VMware ESXi Server, XenServer and KVM hypervisors in the private cloud environment. The evaluation results indicate that the XenServer and ESXi hypervisors exhibit impressive performance in comparison with KVM. Virtualization infrastructure should offer certain enterprise-readiness capabilities such as maturity, ease of deployment, performance, and reliability. From the test results, VMware ESXi Server and XenServer are better equipped to meet the demands of an enterprise datacenter than the KVM hypervisor, and KVM needs significant improvement to become an enterprise-ready hypervisor. The series of tests conducted for this paper shows that VMware ESXi Server and XenServer deliver the production-ready performance needed to implement an efficient and responsive datacenter in the private cloud environment.

The performance tests were conducted in the private cloud with a 64-bit Windows guest operating system. While evaluating network performance, single-client send and receive tests were performed on the three hypervisors supported by the CloudStack private cloud platform. Future work can include multiple-client send and receive network tests for the hypervisors. Experiments can also be carried out with a paravirtualized Linux guest operating system. Scalability tests with more workloads can be performed with other hypervisors not covered in the present experiment, and future work can also consider the public cloud environment for such experiments.

REFERENCES

[1] Mell, P. & Grance, T.
(2009) The NIST Definition of Cloud Computing, Version 15, 10-7-09. National Institute of Standards and Technology, Information Technology Laboratory.
[2] Nanda, S. & Chiueh, T., "A Survey on Virtualization Technologies," Technical report, Department of C
