This document is a collection of notes I made while learning VMware vSphere. I hope it helps.
Contents
CONCEPT
Hosted Virtualization
Bare-Metal Virtualization
Infrastructure Services
VMware vCompute
Resource Management
Virtualization
vSphere 4
vCenter
Understanding the Architecture of VMware ESXi
vMOTION
Migration
Cold Migration
HOT Migration
Architecture Considerations
Application Services
Availability
NETWORKING
SNAPSHOT
CONCEPT
VMware vSphere 4
The VMware vSphere product suite includes
• VMware ESX and ESXi
• VMware Virtual Symmetric Multi-Processing (SMP)
• VMware vCenter Server
• VMware vCenter Update Manager
• VMware vSphere Client
• VMware VMotion and Storage VMotion
• VMware Distributed Resource Scheduler (DRS)
• VMware High Availability (HA)
• VMware Fault Tolerance (FT)
• VMware Consolidated Backup
• VMware vShield Zones
• VMware vCenter Orchestrator
Type 1 and Type 2 Hypervisors
Hypervisors are generally grouped into two classes: type 1 hypervisors and type 2 hypervisors. Type 1 hypervisors run directly on the system hardware and thus are often referred to as bare-metal hypervisors. Type 2 hypervisors require a host operating system, and the host operating system provides I/O device support and memory management. VMware ESX and ESXi are both type 1 bare-metal hypervisors. Other type 1 bare-metal hypervisors include Microsoft Hyper-V, KVM, and products based on the open source Xen hypervisor such as Citrix XenServer and Oracle VM.
VMware ESX consists of two components that interact with each other: the Service Console and the VMkernel.
The Service Console is the operating system used to interact with VMware ESX and the virtual machines that run on the server. The Linux-derived Service Console includes services found in traditional operating systems, such as a firewall, Simple Network Management Protocol (SNMP) agents, and a web server.
The second installed component is the VMkernel. While the Service Console gives you access to the VMkernel, the VMkernel is the real foundation of the virtualization process. The VMkernel manages the virtual machines' access to the underlying physical hardware by providing CPU scheduling, memory management, and virtual switch data processing.
A bare-metal hypervisor that enables full virtualization of industry-standard x86 hardware forms the foundation of this virtualization platform. In addition to this hypervisor, vSphere includes several advanced features that support innovative applications of virtualization technology, such as resource pooling, dynamic balancing of workloads, high availability, and disaster recovery.
Hosted Virtualization
Hosted virtualization was the first x86 virtualization technology widely available to the public to virtualize an x86 PC or server and its available hardware. This type of virtualization is known as a type 2 or hosted hypervisor because it runs on top of the Windows or Linux host operating system. As shown in Figure 2.1, the hosted hypervisor allows you to run virtual machines as applications alongside other software on the host machine.
Figure 2.1: Hosted virtualization products run atop a host operating system
VMware Workstation, Fusion (for Mac), and VMware Server are examples of hosted virtualization products.
The host operating system controls access to the physical resources. The guest OS inside the virtual machine has to traverse the host operating system layer before it can access these physical resources, which introduces significant performance degradation within the virtualized guest OS environment. This type of virtualization is suitable for test and development environments, where developers can try out their code in self-contained sandboxes.
This technology is not well suited for production use. We strongly recommend that you do not deploy any application that must maintain data integrity, or whose loss would interrupt your daily business operations, on a hosted hypervisor, because of possible data corruption in the event of a virtual machine failure.
Bare-Metal Virtualization
The second type of hypervisor is known as type 1 native or bare-metal. This is commonly implemented as software running directly on top of the hardware with direct access and control of the hardware's resources. Since it has direct access to the hardware resources rather than going through an operating system (Figure 2.2), the hypervisor is more efficient than a hosted architecture and delivers greater scalability, robustness, and performance.
Figure 2.2: Bare-metal hypervisors have direct access to the physical hardware, providing better performance than hosted virtualization products
VMware ESX is an example of the bare-metal hypervisor architecture. We strongly recommend that you use a bare-metal hypervisor such as VMware ESX (or its free edition, ESXi) when deploying server applications in a virtual environment.
Because the hypervisor directly controls the physical resources and the virtual machine hardware is abstracted and captured in a set of files, this type of virtualization also enables interesting use cases, such as VMotion and VMware HA, which can significantly benefit your application availability.
Infrastructure Services
vSphere infrastructure services are the core set of services that allows you to virtualize x86 servers (see Figure 1.1). First, these services abstract the physical x86 hardware resources, such as CPU, memory, storage, and network adapters, into virtual hardware to create virtual machines (VMs). Next, these services enable vSphere to transform resources from individual x86 servers into a shared computing platform with several operating systems and applications running simultaneously in different virtual machines. Finally, the infrastructure services provide several sophisticated features to optimize resources in such a shared environment. Figure 1.1 provides an overview of all the services in vSphere 4.
VMware classifies all these vSphere features into this set of services:
• Infrastructure services
• Application services
• Management services
The Infrastructure and Application services are part of vSphere, and the Management services are provided by VMware vCenter Server.
Figure 1.1: VMware vSphere overview
VMware vSphere provides the following types of infrastructure services:
• VMware vCompute
• VMware vStorage
• VMware vNetwork
VMware vCompute
VMware vCompute services virtualize CPU and memory resources in an x86 server. The vCompute services also aggregate these resources from several discrete servers into shared logical pools that can be allocated to applications running inside virtual machines.
The vCompute services comprise the following:
• VMware ESX (and VMware ESXi) A bare-metal hypervisor that runs directly on server hardware. It supports different x86 virtualization technologies such as VMware-invented binary translation, hardware-assisted virtualization, and paravirtualization. VMware ESXi is a free version of ESX with a smaller footprint that minimizes the surface area for potential security attacks, making it more secure and reliable. ESX also includes several advanced CPU scheduling capabilities, as well as unique memory management features such as transparent page sharing and memory ballooning. These sophisticated features enable ESX to achieve higher consolidation ratios compared to its competition.
• VMware Distributed Resource Scheduler (DRS) Extends the resource management features in ESX across multiple physical servers. It aggregates CPU and memory resources across many physical servers into a shared cluster and then dynamically allocates these cluster resources to virtual machines based on a set of configurable options. DRS makes sure that resource utilization is continuously balanced across different servers in the shared cluster.
• VMware Distributed Power Management (DPM) Included with VMware DRS, DPM automates energy efficiency in VMware DRS clusters. It continuously optimizes server power consumption within each cluster by powering on or off vSphere servers as needed.
Virtualization Technologies
VMware ESX and ESXi offer a choice of three virtualization technologies (Figure 1.2):
• Binary translation
• Hardware-assisted virtualization
• Paravirtualization
• Binary translation is the virtualization technique that VMware invented for x86 servers. The x86 processors were not designed with virtualization in mind. These processors have 17 CPU instructions that require special privileges and can result in operating system instability when virtualized. The binary translation technique translates these privileged instructions into equivalent safe instructions, thus enabling virtualization for x86 servers. Binary translation does not require any specific features in the x86 processors and hence enables you to virtualize any x86 server in the data center without modifying the guest operating system or the applications running on it.
• Hardware-assisted virtualization relies on the CPU instruction set and memory management virtualization features that both AMD and Intel have recently introduced in the x86 processors. The first generation of these hardware-assisted virtualization processors, called AMD-SVM and Intel-VT, only supported CPU instruction set virtualization in the processors. This alone did not perform fast enough for all different workloads, compared to the binary translation technology.
Recently, AMD and Intel have introduced newer processors that also support memory management virtualization. Virtualization using these second-generation hardware-assisted processors usually performs better than binary translation. Consequently, with the release of vSphere, VMware ESX and ESXi now default to hardware-assisted virtualization out of the box, but you do have the choice to override this setting.
• Paravirtualization VMware vSphere also supports paravirtualized Linux guest operating systems—Linux kernels that include Virtual Machine Interface (VMI) support—that are virtualization-aware. Because the VMI standard is supported out of the box in newer Linux kernels, there is no need to maintain separate distributions of Linux specifically for virtualization.
Advanced Memory Management
VMware vSphere uses several advanced memory management features to efficiently use the physical memory available. These features make sure that in a highly consolidated environment virtual machines are allocated the required memory as needed without impacting the performance of other virtual machines. These advanced features include the following:
• Memory over-commitment Similar to CPU over-commitment, memory over-commitment improves memory utilization by enabling you to configure virtual machine memory that exceeds the physical server memory. For example, the total amount of memory allocated for all virtual machines running on a vSphere host can be more than the total physical memory available on the host.
• Transparent page sharing Transparent page sharing uses available physical memory more efficiently by sharing identical memory pages across multiple virtual machines on a vSphere host. For example, multiple virtual machines running Windows Server 2008 will have many identical memory pages. ESX will store a single copy of these identical memory pages in memory and create additional copies only if a memory page changes.
• Memory ballooning Memory ballooning dynamically transfers memory from idle virtual machines to active ones. It puts artificial memory pressure on idle virtual machines, forcing them to use their own paging areas and release memory. This allows active virtual machines in need of memory to use it. Keep in mind that ESX ensures a virtual machine's memory usage cannot exceed its configured memory.
• Large memory pages Newer x86 processors support the use of large 2 MB memory pages in addition to the small 4 KB pages. Operating systems rely on the translation lookaside buffers (TLBs) inside the processor to translate virtual to physical memory addresses. Larger page sizes mean that a TLB cache of the same size can keep track of larger amounts of memory, thus avoiding costly TLB misses. Enterprise applications such as database servers and Java virtual machines commonly use large memory pages to increase TLB access efficiency and improve performance. ESX supports the use of large memory pages in virtual machines and backs them with its own large memory pages to maintain efficient memory access.
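As a rough illustration of how over-commitment and transparent page sharing interact, consider this back-of-the-envelope sketch. The VM sizes and the 25% sharing factor are invented for illustration; this is not how the VMkernel actually computes anything:

```python
# Hypothetical illustration (not a VMware API or algorithm): how memory
# over-commitment and transparent page sharing interact on a single host.
PHYSICAL_MEM_GB = 32

# Configured memory per VM, in GB (invented example values).
vms = {"web1": 8, "web2": 8, "db1": 16, "db2": 16}

configured_total = sum(vms.values())                # 48 GB configured
overcommit_ratio = configured_total / PHYSICAL_MEM_GB

# Suppose page sharing deduplicates 25% of the VMs' pages (identical
# guest OS pages); effective physical demand drops accordingly.
SHARING_FACTOR = 0.25
effective_demand = configured_total * (1 - SHARING_FACTOR)

print(f"configured: {configured_total} GB on a {PHYSICAL_MEM_GB} GB host")
print(f"over-commit ratio: {overcommit_ratio:.2f}")
print(f"demand after page sharing: {effective_demand:.0f} GB")
```

Here 48 GB is configured on a 32 GB host (a 1.5x over-commit), but sharing brings the effective demand down to 36 GB, which ballooning and paging would then have to cover.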
Resource Management
• VMware vSphere allows you to establish minimum, maximum, and proportional resource shares for CPU, memory, disk, and network bandwidth for virtual machines. The minimum resource setting or reservation guarantees the amount of CPU and memory resources for a virtual machine, while the maximum resource setting or limit caps the amount of CPU and memory resources a virtual machine can use. The proportional resource allocation mechanism provides three levels—normal, low, and high—out of the box. These settings help configure virtual machine priority for CPU and memory resources relative to each other. These can be set at the resource pool level and are inherited or overridden at the individual virtual machine level. You can leverage these resource allocation policies to improve service levels for your software applications. The key advantage of these settings is that you can change resource allocations while virtual machines are running, and the changes will take place immediately without any need to reboot.
• You need to be careful when assigning minimum settings or reservations because they guarantee resources to a virtual machine. If too many CPU and memory resources are reserved, you may not be able to start additional virtual machines.
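The admission check described above can be sketched as follows. This is a simplified model, not the actual VMkernel admission-control algorithm, and the function name and numbers are hypothetical:

```python
# Simplified sketch of reservation-based admission control: a VM cannot
# power on if its reservation would push total reservations past capacity.
def can_power_on(host_capacity_mhz, running_reservations, new_reservation):
    """Return True if the new VM's CPU reservation still fits on the host."""
    return sum(running_reservations) + new_reservation <= host_capacity_mhz

host_mhz = 32000                       # e.g. 8 cores x 4 GHz
running = [8000, 8000, 10000]          # reservations of powered-on VMs, MHz

print(can_power_on(host_mhz, running, 4000))   # 30000 <= 32000: fits
print(can_power_on(host_mhz, running, 8000))   # 34000 > 32000: rejected
```

The same idea applies to memory reservations, which is why over-generous reservations can prevent VMs from powering on even when the host looks idle.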
Processor Scheduling (CPU Over-Commitment)
VMware vSphere includes a sophisticated CPU scheduler that enables it to efficiently run several virtual machines on a single ESX host. The CPU scheduler allows you to over-commit available physical CPU resources; in other words, the total number of virtual CPUs allocated across all virtual machines on a vSphere host can be more than the number of physical CPU cores available.
Starting with the Virtual Infrastructure 3 (VI3) release, ESX has gradually shifted from "strict" to "relaxed" co-scheduling of virtual CPUs. Strict co-scheduling required that a virtual machine would run only if all its virtual CPUs could be scheduled to run together. With relaxed co-scheduling, ESX can schedule a subset of virtual machine CPUs as needed without causing any guest operating system instability.
HOSTS CLUSTERS and RESOURCE POOLS
A host represents the aggregate computing and memory resources of a physical x86 server. For example, if a physical x86 server has four dual-core CPUs running at 4GHz each with 32GB of system memory, then the host has 32GHz of computing power and 32GB of memory available for running the virtual machines that are assigned to it.
A cluster represents the aggregate computing and memory resources of a group of physical x86 servers sharing the same network and storage arrays. For example, if a group contains eight servers, each with four dual-core CPUs running at 4GHz and 32GB of memory, the cluster has 256GHz of computing power and 256GB of memory available for running the virtual machines assigned to it.
Resource pools provide a flexible and dynamic way to divide and organize computing and memory resources from a host or cluster. Any resource pools can be partitioned into smaller resource pools at a fine-grain level to further divide and assign resources to different groups, or to use resources for different purposes.
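The aggregation arithmetic in the host and cluster examples above can be checked with a short sketch. The 75/25 production/test pool split is an invented example of subdividing a cluster into resource pools:

```python
# Illustrative arithmetic only, matching the figures in the text:
# aggregating host resources into a cluster, then carving resource pools.
cores_per_host = 8          # four dual-core CPUs
ghz_per_core = 4
mem_gb_per_host = 32
hosts_in_cluster = 8

host_ghz = cores_per_host * ghz_per_core            # 32 GHz per host
cluster_ghz = host_ghz * hosts_in_cluster           # 256 GHz
cluster_mem = mem_gb_per_host * hosts_in_cluster    # 256 GB

# A resource pool carves out a slice of the cluster and can be subdivided
# further; here production gets 75% and test gets the remainder.
prod_pool = {"cpu_ghz": cluster_ghz * 0.75, "mem_gb": cluster_mem * 0.75}
test_pool = {"cpu_ghz": cluster_ghz - prod_pool["cpu_ghz"],
             "mem_gb": cluster_mem - prod_pool["mem_gb"]}

print(f"cluster: {cluster_ghz} GHz, {cluster_mem} GB")
print(f"prod pool: {prod_pool}, test pool: {test_pool}")
```

The key point is that VMs draw from the pool they are assigned to, not from a specific physical host.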
Virtualization
Virtualization exists in almost every layer, from the application to the operating system, server, networks, and storage devices.
APPLICATION: application clustering technologies such as Microsoft Clusters and Oracle's Real Application Clusters (RAC).
OS:
SERVER: Logical Volume Manager (groups of volumes or LUNs), RAID
NETWORK: VLANs
STORAGE: thin provisioning (dynamic)
VSphere
Application lifecycle: Provisioning OS, testing in labs
ESX 4.0: 8 CPUs, 256GB memory, 200000 IO / 40GB
ESX 3.5: 4 CPUs, 64GB memory, 100000 IO / 9GB
Standardize on preconfigured gold images
Preconfigured VM Templates
Provision on Demand
ESX server configuration
4GB RAM per core; for example, a CPU with 4 cores requires 16GB of RAM.
4 to 6 VMs can run per core.
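The sizing rules of thumb above can be expressed as a small planning helper. These are the heuristics from these notes, not an official VMware formula, and the function name is my own:

```python
# Rule-of-thumb host sizing from the notes above: 4 GB RAM per core and
# 4 to 6 VMs per core. A planning sketch, not an official VMware formula.
def size_host(cores, gb_per_core=4, vms_per_core=(4, 6)):
    """Return (RAM in GB, low VM estimate, high VM estimate) for a host."""
    ram_gb = cores * gb_per_core
    vm_low = cores * vms_per_core[0]
    vm_high = cores * vms_per_core[1]
    return ram_gb, vm_low, vm_high

ram, low, high = size_host(4)
print(f"4-core host: {ram} GB RAM, roughly {low}-{high} VMs")
```

As always, treat the VM count as an upper bound: the real limit depends on the workloads, as the capacity-planning notes later in this document point out.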
vSphere 4:
A virtual datacenter operating system. It virtualizes servers, storage, and networks.
The key components are vCenter and ESX.
vCenter
vCenter Server is a single point of control for the datacenter, used for performance monitoring and configuration. It can manage multiple ESX hosts.
vCenter can be accessed through the vSphere Client, a web browser, or Terminal Services.
Key components of vCenter Server: database, APIs for third parties, Active Directory interface, user access control, and vServices.
The vCenter database is either a SQL Server or an Oracle database, and it stores VM configuration, logs, etc.
Components that use the database include vCenter and VUM.
VMware vCenter Update Manager (VUM) is a tool packaged with vCenter Server that provides automatic patching in Windows environments for guest OSes as well as automatic updates to underlying drivers from VMware, including VMware Tools and VM hardware modules.
Understanding the Architecture of VMware ESXi
The VM platform provided by an ESX host is independent of the underlying system hardware.
Although functionally equivalent to ESX, ESXi eliminates the Linux-based Service Console that is required for management of ESX. Removing the Service Console from the architecture results in a hypervisor without any general operating system dependencies, which improves reliability and security. The result is a footprint of less than 90MB, allowing ESXi to be embedded onto a host's flash device and eliminating the need for a local boot disk.
The heart of ESXi is the VMkernel. All other processes run on top of the VMkernel, which controls all access to the hardware in the ESXi host. The VMkernel is a POSIX-like OS developed by VMware and is similar to other OSs in that it uses process creation, file systems, and process threads. Unlike a general OS, the VMkernel is designed exclusively around running virtual machines, thus the hypervisor focuses on resource scheduling, device drivers, and input/output (I/O) stacks.
VMware provides two remote command-line options: the vCLI and PowerCLI.
ESXi can be deployed in the following two formats: Embedded and Installable. With ESXi Embedded, your server comes preloaded with ESXi on a flash device. You simply need to power on the host and configure your host as appropriate for your environment.
ESX and ESXi both are considered bare-metal installations. VMware ESX and ESXi differ in how they are packaged.
vMOTION
With vMotion you can migrate a running VM from one physical ESX host to another within a datacenter.
Storage vMotion moves a VM's storage between LUNs; it relocates the VM's home directory and virtual disks to the new storage.
Migration
Cold migration: performed when the VM is powered off or suspended. It does not require shared storage, and it supports VM migration between datacenters.
Hot migration: performed when the VM is powered on. It requires shared storage and does not support VM migration between datacenters. For vMotion, the source and destination processors should be from the same family and vendor (for example, Intel to Intel, P3 to P3).
Concepts
Virtual SMP: allows a single virtual machine to use up to eight CPUs simultaneously.
Architecture Considerations
1. Define Key requirements that impact architecture
• Workload Capacity
• Availability & Risk
• Storage & Backup
Design Virtual Infrastructure steps
• Capacity planning and ROI: determine today's needs and plan for growth.
• Design and build the virtual infrastructure: ESX.
• Build virtual servers: P2V.
• Backup VM's
2. Questions to ask before Virtualization
• Is the application code written to support multithreading across more than three or so processors (i.e., can the software take advantage of more than two CPU cores)?
• Is the application 32-bit or 64-bit? (A 32-bit application can run on a 64-bit OS.)
• Is the application supported by its vendor when running in a VM environment?
• Does the VM require HA and FT (vMotion with SRM)?
You must take into account your business requirements regarding the following:
• Access security
• Performance
• Infrastructure and application maintenance
• Service-level agreements (SLAs)
• Disaster recovery and business continuity
Application Services
VMware vSphere provides the following types of application services:
• Availability
• Security
• Scalability
Availability
Improving availability for applications is probably the most innovative and exciting use of virtualization technology. With availability services in vSphere, you can lower both planned and unplanned downtime for all applications running inside VMware virtual machines. Furthermore, vSphere enables this high availability without the need for complex hardware or software clustering solutions.
To minimize service disruptions because of planned hardware downtime, VMware vSphere includes the following availability services:
• VMware VMotion Using VMware VMotion, you can migrate running virtual machines from one vSphere server to another without impacting the applications running inside virtual machines. The end users do not experience any loss of service. You can leverage VMotion to move virtual machines off a vSphere server for any scheduled hardware maintenance without the need for any application downtime.
• VMware Storage VMotion VMware Storage VMotion enables similar functionality at the storage level. You can migrate virtual disks of running virtual machines from one storage array to another with no disruption or downtime. Storage VMotion will help you avoid any application downtime because of planned storage maintenance or during storage migrations.
• To reduce service disruptions because of unplanned hardware downtime, VMware vSphere availability services include the following features:
• VMware High Availability (HA) If a virtual machine goes down because of a hardware failure, VMware HA automatically restarts the virtual machine on another ESX server within minutes of the event. HA does not protect you from application failure or OS corruption; neither does FT in vSphere. With an FT-enabled VM, note that when the primary VM blue-screens, so does the secondary, leaving you with two identical servers, neither of them functioning.
• VMware Fault Tolerance (FT) VMware FT improves high availability beyond VMware HA. By maintaining a shadow instance of a virtual machine and allowing immediate failover between the two instances, VMware FT avoids even the virtual machine reboot time required in the case of VMware HA. Thus, it prevents any data loss or downtime even if server hardware fails. Like VMware HA, it can be a cheaper and simpler alternative to traditional clustering solutions.
• VMware Data Recovery VMware Data Recovery enables a simple disk-based backup and restore solution for all of your virtual machines. It does not require you to install any agents inside virtual machines and is completely integrated into VMware vCenter Server. VMware Data Recovery leverages data deduplication technology to avoid saving duplicate storage blocks twice, thus saving both backup time and disk space.
VMware VMotion
VMotion enables live migration of running virtual machines from one ESX server to another with no downtime (Figure 1.11). This allows you to perform hardware maintenance without any disruption of business operations. The migration of the virtual machine is seamless and transparent to the end user. When you initiate a VMotion, the current state of the virtual machine, along with its active memory, is quickly transferred from one ESX server to another over a dedicated network link, and the destination ESX server takes control of the virtual machine's storage through VMFS. The virtual machine retains the same IP address after the migration. VMotion is the key enabling technology that allows VMware DRS to create a self-managing, highly optimized, and efficient virtual environment with built-in load balancing.
HA:
ESX hosts and VMs are protected at different levels: ESX hosts use HA, while individual VMs use FT.
HA monitors ESX hosts, while FT monitors individual VMs.
• Server failure (ESX host failure): if a server fails, HA restarts its VMs on another ESX host within the cluster.
• Application failure: if a failure is detected in an application, HA restarts the application.
FT
Fault Tolerance provides protection in terms of data and transactions.
FT creates a duplicate copy of each protected running VM, which takes over when the primary VM fails.
NETWORKING
TERMS
Licensing: vCPUs, Cores, Sockets
Dual-core CPU: each core is treated as a separate CPU.
If workloads are already running, you can use VMware CapacityIQ to predict how many more workloads you can add.
The number of VMs supported depends on the workload.
According to VMware Inc.'s website, server consolidation ratios commonly exceed 10 virtual machines per physical processor; so presumably a blade server with two CPUs should be able to support at least 20 VMs.
Within HP's ProLiant blade server line, the ProLiant BL460c/465c and BL680c/BL685c would be good choices for a virtual server platform, primarily because they offer a large memory footprint (which means more than 16 VMs per blade in both cases), plus more network expansion and storage performance.
Dell: the recommended number of VMs per physical core is 4:1, that is, 4 VMs per physical core.
SNAPSHOT
Snapshot Size
We’ll try to keep things simple and not go in too much detail. To start off let’s imagine you have a virtual machine with a single hard disk. This disk is represented as a “vmdk” file in the datastore (e.g. VM.vmdk). Whenever you write something on the hard disk of the VM it is saved in that VM.vmdk file (figure-1).
When you create a snapshot, this VM.vmdk file gets “frozen” and a new one (VM-1.vmdk) is created. The “frozen” VM.vmdk file represents the exact state of the VM’s hard disk at the time the snapshot was taken. From that point on, all changes to the VM’s hard disk are reflected in the new VM-1.vmdk file. Also, a new “vmsn” file is created (e.g. Snapshot1.vmsn) which represents the state of the VM’s memory at the time of the snapshot creation (figure-2).
Similarly when a second snapshot is taken VM-1.vmdk gets “frozen” and VM-2.vmdk and Snapshot2.vmsn files are created, the same thing happens when we create a third snapshot (figure-3).
So how is the snapshot size calculated?
For each snapshot the size includes the sizes of the files needed to capture the state of the VM at snapshot time (e.g. hard disk and memory).
For Snapshot2 (figure-3) these files are Snapshot2.vmsn and VM-1.vmdk. The VM-1.vmdk file contains all changes made after the first snapshot and is a required part of Snapshot2.
For the currently active snapshot (e.g. Snapshot3), its size also includes the file that stores disk changes made after the snapshot (e.g. VM-3.vmdk, figure-3). Thus Snapshot3's files are Snapshot3.vmsn, VM-2.vmdk, and VM-3.vmdk; VM-2.vmdk contains all changes since the previous snapshot, and VM-3.vmdk contains the current changes.
The root snapshot (e.g. Snapshot1) is based directly on the VM’s disk (e.g. VM.vmdk) but its size is not calculated (it’s calculated towards the size of the hard disk itself, not the snapshot). That way the files calculated in the size of each snapshot are the ones marked in orange in figures 2 and 3.
(Note: if we consider figure 3 and imagine that Snapshot2 is the currently active snapshot, then the size of VM-3.vmdk will be calculated in Snapshot2’s size, not Snapshot3’s)
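The sizing rule described above can be sketched in code. The file sizes (in MB) are invented for illustration; the file names follow the figures referenced in the text:

```python
# Sketch of the snapshot sizing rule described above. Sizes are in MB and
# are hypothetical; names match the figures (VM-N.vmdk deltas, .vmsn memory).
files = {
    "VM.vmdk": 11000,        # base disk: counted toward the disk, not a snapshot
    "VM-1.vmdk": 500, "Snapshot1.vmsn": 4096,
    "VM-2.vmdk": 300, "Snapshot2.vmsn": 4096,
    "VM-3.vmdk": 100, "Snapshot3.vmsn": 4096,
}

# Snapshot2's size = its memory file + the delta written after Snapshot1.
snapshot2 = files["Snapshot2.vmsn"] + files["VM-1.vmdk"]

# The active snapshot (Snapshot3) also counts the currently growing delta.
snapshot3 = files["Snapshot3.vmsn"] + files["VM-2.vmdk"] + files["VM-3.vmdk"]

print(f"Snapshot2: {snapshot2} MB, Snapshot3: {snapshot3} MB")
```

Note how the base VM.vmdk never appears in any snapshot's size, exactly as the root-snapshot rule above states.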
Now for the “SizeMB” property of the Snapshot object. When calculating its value, we use the above-mentioned approach, and the size is calculated correctly for ESX 3.0 and 3.5. However, there are some changes in the API behavior from 3.5 to 4.0 that we overlooked, resulting in an incorrect calculation of the snapshot size on ESX 4.0.
This issue will be fixed in a future release; until then, you can use the attached script to get the correct snapshot sizes. The script works on all ESX versions and uses the above-mentioned approach for calculation.
• Snapshots are not complete copies of the original vmdk disk files. The change log in the snapshot file combines with the original disk files to make up the current state of the virtual machine. If the base disks are deleted, the snapshot files are useless.
• Snapshot files can grow to the same size as the original base disk file, which is why the provisioned storage size of a virtual machine increases by an amount equal to the original size of the virtual machine multiplied by the number of snapshots on the virtual machine.
VM.vmdk (11GB): original disk
VM-001.vmdk (1000K): delta copy
VM-001.vmsd (0.70K): snapshot configuration database
VM-001.vmsn (4GB): maximum size of the RAM allocated to the original VM