[NetApp] LUN copy utility to replicate our workstation copies off of a master image.”
Ballard stores his images on NetApp’s SAN arrays that have their own utility — called FlexClone — to make virtual copies of the data. “We had EMC and also looked at IBM, but both of them had limited dynamic-provisioning features,” he says, adding that a VMware upgrade that required 4.5TB on J&B Group’s old SAN now uses just 1.5TB on the company’s new storage infrastructure.
3. More granularity in backup and restoration of virtual servers.
Product picks: Vizioncore vRanger Pro, Symantec NetBackup, Asigra Cloud Backup
When combined with de-duplication technologies, more granular backups make for efficient data protection — particularly in virtualized environments, where storage requirements quickly balloon and backups can take longer than an overnight window. Backup vendors are getting better at enabling recoveries that understand the data structure of VM images and can extract just the necessary files without restoring an entire VM disk image. Symantec NetBackup and Vizioncore vRanger both have this feature, which makes them handy to have when configuration or user files are accidentally deleted. For its part, Asigra Cloud Backup can protect server resources both inside the data center and in the cloud.
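To see why de-duplication pairs so well with VM backups, here is a toy sketch in Python of block-level dedup: identical chunks (and VM images cloned from a master are mostly identical) are stored only once, keyed by their hash. The class and chunking scheme are illustrative assumptions, not any vendor's implementation — real products typically use variable-size chunking and on-disk indexes.

```python
import hashlib

class DedupStore:
    """Toy block-level de-duplicating backup store (illustrative only)."""
    CHUNK = 4096  # fixed-size chunks; real products often use variable-size chunking

    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk bytes, stored once
        self.backups = {}  # backup name -> ordered list of digests

    def backup(self, name, data):
        digests = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks cost nothing extra
            digests.append(digest)
        self.backups[name] = digests

    def restore(self, name):
        return b"".join(self.chunks[d] for d in self.backups[name])
```

Two VM images that share most of their blocks end up sharing most of their stored chunks, which is exactly why backing up dozens of near-identical VMs doesn't need dozens of times the space.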
4. Live migrations and better integration of VM snapshots make it easier to back up, copy and patch VMs.
Product picks: FalconStor FDS, VMware vMotion and vStorage APIs, Citrix XenServer
VMware vStorage API for Data Protection facilitates LAN-free backup of VMs from a central proxy server rather than directly from an ESX Server. Users can do centralized backups without the overhead and hassle of having to run separate backup tasks from inside each VM. These APIs were formerly known as the VMware Consolidated Backup, and the idea behind them is to offload the ESX server from the backup process. This involves taking VM snapshots at any point in time to facilitate the backup and recovery process, so an entire .VMDK image doesn’t have to be backed up from scratch. It also shortens recovery time.
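The snapshot mechanics behind this can be sketched with a toy copy-on-write model in Python: once a snapshot is taken, the running VM's writes land in a delta, so a backup proxy can read a frozen, consistent view of the disk without pausing the VM. The class and method names here are hypothetical illustrations, not the actual vStorage API, which operates at the hypervisor level.

```python
class DiskSnapshot:
    """Toy copy-on-write snapshot of a virtual disk (illustrative only).

    After the snapshot, the VM's writes go into a delta map, leaving the
    base blocks frozen for a backup proxy to read at its leisure.
    """
    def __init__(self, blocks):
        self.base = list(blocks)  # disk contents frozen at snapshot time
        self.delta = {}           # block index -> data written after the snapshot

    def write(self, idx, data):
        """The running VM keeps writing; nothing in base is overwritten."""
        self.delta[idx] = data

    def read_live(self, idx):
        """The view the VM sees: delta first, then the frozen base."""
        return self.delta.get(idx, self.base[idx])

    def read_snapshot(self, idx):
        """The view the backup proxy reads: always the frozen base."""
        return self.base[idx]
```

Because the proxy reads only the frozen base, the backup is consistent even while the VM continues to write — and only the delta needs to be merged back later.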
Enhanced VM storage management also includes the ability to perform live VM migrations without having to shut down the underlying OS. Citrix Systems XenServer offers this feature in version 5.5, and VMware has several tools, including vMotion and vSphere, that can make it easier to add RAM and disk storage to a running VM.
Finally, vendors are getting wise to the fact that many IT engineers carry smartphones, and are developing software specifically to help them manage their virtualization products. VMware has responded to this trend with vCenter Mobile Access, which allows users to start, stop, copy and manage their VMs from their BlackBerry devices. Citrix also has its Receiver for iPhone client, which makes it possible to remotely control a desktop from an iPhone and run any Windows apps on XenApp 5- or Presentation Server 4.5-hosted servers. While looking at a Windows desktop from the tiny iPhone and BlackBerry screens can be frustrating — and a real scrolling workout — it can also be helpful in emergencies when you can't get to a full desktop and need to fix something quickly on the fly.
5. Thin and dynamic provisioning of storage to help moderate storage growth.
Product picks: Symantec/Veritas Storage Foundation Manager, Compellent Dynamic Capacity, Citrix XenServer Essentials, 3Par Inserv
There are probably more than a dozen different products in this segment that are getting better at detecting and managing storage needs. A lot of space can be wasted setting up new VMs on SAN arrays, and these products can reduce that waste substantially. This happens because, when provisioning SANs, users generally don’t know exactly how much storage they’ll need, so they tend to err on the high side by creating volumes that are large enough to meet their needs for the life of the server. The same thing happens when they create individual VMs on each virtual disk partition.
With dynamic-provisioning applications, as application needs grow, SANs automatically extend the volume until it reaches the configured maximum size. This allows users to over-provision disk space, which is fine if their storage needs grow slowly. However, because VMs can consume a lot of space in a short period of time, this can also lead to problems. Savvy users will deal with this situation by monitoring their storage requirements with Storage Resource Management tools and staying on top of what has been provisioned and used.
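The thin-provisioning idea — advertise a large virtual volume but allocate physical blocks only when they're first written, up to a configured cap — can be sketched as a toy Python class. The names and block-map design are illustrative assumptions, not any array vendor's implementation.

```python
class ThinVolume:
    """Toy thin-provisioned volume (illustrative only).

    A large virtual size is advertised to the host, but physical blocks
    are allocated only on first write, up to a configured physical cap.
    """
    def __init__(self, virtual_blocks, max_physical_blocks):
        self.virtual_blocks = virtual_blocks        # size the host sees
        self.max_physical_blocks = max_physical_blocks  # size actually backed
        self.allocated = {}                         # block index -> data

    def write(self, idx, data):
        if not 0 <= idx < self.virtual_blocks:
            raise IndexError("write outside the advertised volume size")
        if idx not in self.allocated and len(self.allocated) >= self.max_physical_blocks:
            # this is the over-provisioning risk the article warns about
            raise RuntimeError("physical pool exhausted")
        self.allocated[idx] = data

    def physical_usage(self):
        """Blocks actually consumed, regardless of the advertised size."""
        return len(self.allocated)
```

The failure mode is visible in miniature: everything works until cumulative first-writes exceed the physical pool, which is why the article recommends watching actual usage with SRM tools rather than trusting the advertised capacity.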
Savvis is using the 3Par InServ Storage Servers for thin provisioning. “We don’t have to worry about mapping individual logical units to specific physical drives — we just put the physical drives in the array and 3Par will carve them up into usable chunks of storage. This gives us much higher storage densities and less wasted space,” says Doerr.
Citrix XenServer Essentials includes both thin- and dynamic-provisioning capabilities, encoding differentials between virtual disk images so that multiple VMs consume a fraction of the space they would otherwise require, because identical files aren't stored twice. Dynamic workload streaming can be used to rapidly deploy server workloads to the most appropriate server resources — physical or virtual — at any time during the week, month, quarter or year. This is particularly useful for applications that are regularly migrated between testing and production environments, or for systems that require physical deployments to handle peak user activity during the business cycle.
Compellent offers another unique feature: the ability to reclaim unused space. Its software searches for storage blocks that belong to deleted files and marks them as unused so that Windows OSes can overwrite them.
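The reclamation idea reduces to a simple set operation: given a map of which file owns each block and the set of files that still exist, any block whose owner is gone can be returned to the array's free pool. This toy Python function is an illustrative sketch of the concept, not Compellent's actual algorithm.

```python
def reclaim_unused_blocks(block_owner, live_files):
    """Toy space reclamation (illustrative only).

    block_owner: dict mapping block index -> name of the file that wrote it
    live_files:  set of file names that still exist on the filesystem
    Returns the sorted list of block indexes whose owning files were
    deleted, i.e. blocks the array can mark as free for reuse.
    """
    return sorted(idx for idx, owner in block_owner.items()
                  if owner not in live_files)
```

In practice the hard part is that the array can't see the filesystem's metadata directly, which is why this kind of reclamation needs cooperation between the storage layer and the guest OS.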
6. Greater VM densities per host will improve storage performance and management.
Product pick: Cisco Unified Computing System
As corporations make use of virtualization, they find that it can have many applications in a variety of areas. And nothing — other than video — stretches storage faster than duplicating a VM image or setting up a bunch of virtual desktops. With these greater VM densities comes a challenge to keep up with the RAM requirements needed to support them.
In this environment, we’re beginning to see new classes of servers that can handle hundreds of gigabytes of RAM. For example, the Cisco Systems Unified Computing System (UCS) supports large memory configurations and high VM density (see Figure 2): In one demonstration from VirtualStorm last fall at VMworld, more than 400 VMs ran Windows XP on each of six blades in one Cisco UCS. Each XP instance had more than 90GB of applications contained in its Virtual Desktop Infrastructure image, which was very impressive.
“It required a perfect balance between the desktops, the infrastructure, the virtualization and the management of the desktops and their applications in order to scale to thousands of desktops in a single environment,” says Erik Westhovens, one of the VirtualStorm engineers, in a blog entry about the demonstration.
Savvis is an early UCS customer. “I like where Cisco is taking this platform; combining more functionality within the data center inside the box itself,” Doerr says. “Having the switching and management under the hood, along with native virtualization support, helps us to save money and offer different classes of service to our Symphony cloud customers and ultimately a better cloud-computing experience.”
“If you don’t buy enough RAM for your servers, it doesn’t pay to have the higher-priced VMware licenses,” says an IT manager for a major New York City-based law firm that uses EMC SANs. “We now have five VMware boxes running 40 VMs apiece, and bought new servers specifically to handle this.”
As users run more guest VMs on a single physical server, they’ll find they need to have more RAM installed on the server to maintain performance. This may mean they need to move to a more expensive, multiple-CPU server to handle the larger RAM requirements. Cisco has recognized that many IT shops are over-buying multiple-CPU servers just so they can get enough dual in-line memory module slots to install more RAM. The Cisco UCS hardware will handle 384GB of RAM and not require the purchase of multiple processor licenses for VMware hypervisors, which saves money in the long run.
James Sokol, the CTO for a benefits consultancy in New York City, points out that good hypervisor planning means balancing the number of guest VMs with the expanded RAM required to best provision each guest VM. “You want to run as many guests per host [as possible] to control the number of host licenses you need to purchase and maintain,” Sokol says. “We utilize servers with dual quad-core CPUs and 32GB of RAM to meet our hosted-server requirements.”
A good rule of thumb for Windows guest VMs is to use a gigabyte of RAM for every guest VM that you run.
7. Better high-availability integration and more fault-tolerant operations.
Product picks: VMware vSphere 4 and Citrix XenServer 5.5
The latest hypervisors from VMware and Citrix include features that expedite failover to a backup server and enable fault-tolerant operations. This makes it easier for VMs to be kept in sync when they’re running on different physical hosts, and enhances the ability to move the data stored on one host to another without impacting production applications or user computing. The goal is to provide mainframe-class reliability and operations to virtual resources.
One area where virtualized resources are still playing catch-up to the mainframe computing world is security policies and access controls. Citrix still lacks role-based access controls, and VMware has only recently added this to its vSphere line. This means that in many shops, just about any user can start and stop a VM instance without facing difficult authentication hurdles. There are third-party security tools — such as the HyTrust Appliance for VMware — that allow more granularity over which users have what kind of access to particular VMs. Expect other third-party virtualization management vendors to enter this market in the coming year. (To get an idea of how HyTrust’s software operates, check out the screencast I prepared for them here.)
8. Private cloud creation and virtualized networks — including vendor solutions that offer ways to virtualize your data center entirely in the cloud.
Product picks: Amazon Virtual Private Cloud, VMware vSphere vShield Zones, ReliaCloud, Hexagrid VxDataCenter
Vendors are virtualizing more and more pieces of the data center and using virtual network switches — what VMware calls vShield Zones — to ensure that your network traffic never leaves the virtualized world but still retains nearly the same level of security found in your physical network. For example, you can set up firewalls that stay with the VMs as they migrate between hypervisors, create security policies and set up virtual LANs. Think of it as setting up a security perimeter around your virtual data center.
Amazon has been hard at work on Elastic Compute Cloud (EC2), its virtualized cloud computing service, and last summer added Virtual Private Cloud to its offerings (see Figure 3). This enables users to extend their VPNs to include the Amazon cloud, further mixing the physical and virtual network infrastructures. It’s also possible to extend any security device on your physical network to cover the Amazon cloud-based servers. The same is true with Amazon Web Services, where customers pay on a usage-only basis with no long-term contracts or commitments.
Microsoft has a series of new projects to extend its Windows Azure cloud-based computing to private clouds. They include ventures such as “Project Sydney,” which enables customers to securely link their on-premises and cloud servers; AppFabric, which is a collection of existing Windows Azure developer components; and updates to Visual Studio 2010.
Some of these are, or soon will be, available in beta. But as with other such efforts, federated security between the cloud and in-house servers will need to improve before these new offerings can be dependably used by most enterprises.
Two new entrants to the cloud computing services arena are Hexagrid Inc. and ReliaCloud, both of which offer a wide range of infrastructure services, including high availability, hardware firewalls and load balancing. With these companies, all cloud servers are assigned private IP addresses and have persistence, meaning that users treat them as real servers even though they’re residing in the cloud. Expect more vendors to offer these and other features that allow IT managers to combine physical and cloud resources.
9. Better application awareness of cloud-based services.
Product picks: Exchange 2010, Sparxent MailShadow
It isn’t just about networks in the cloud, but actual applications too, such as Microsoft Exchange services. The days are coming when you’ll be able to run an Exchange server in a remote data center and fail over without anyone noticing. Part of this stems from improvements Microsoft is making to the upcoming 2010 release of its popular e-mail server software. Another part is how virtualization and third-party vendors are incorporating disaster recovery into their software offerings. An example of the latter is MailShadow from Sparxent Inc. This cloud-based service makes a “shadow” copy of each user’s Exchange mailbox that’s kept in constant synchronization. Numerous cloud-based Exchange hosting providers have offered their services over the past few years, and Microsoft is working on its own cloud-based solutions as well.
10. Start learning the high-end, metric system measurements of storage.
If you thought you knew the difference between gigabytes and terabytes, start boning up on the higher end of the metric scale. SAN management vendor DataCore Software Corp. now supports arrays that can contain up to a petabyte — a thousand terabytes — of data. Savvis sells 50GB increments of its SAN utility storage to its co-location customers, which Doerr says has been very well received. “It’s for customers that don’t want to run their own SANs or just want to run the compute-selected functions,” he states. “There’s a lot of variation across our customers. You have to be flexible if you want to win their business.” Given that it wasn’t too long ago when no one could purchase a 50GB hard drive, he says this shows that, “we’re going to be talking exabytes when it comes to describing our storage needs before too long.” Next up: zettabytes and yottabytes.
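For reference, the decimal (SI) scale climbs in factors of 1,000: kilobyte, megabyte, gigabyte, terabyte, petabyte, exabyte, zettabyte, yottabyte. A small Python helper makes the jumps concrete; it uses the decimal definitions, not the binary (1,024-based) ones.

```python
def to_bytes(value, unit):
    """Convert a decimal (SI) storage figure to bytes.

    Uses the SI scale, where each step is a factor of 1,000:
    KB = 10**3, MB = 10**6, ... PB = 10**15, ... YB = 10**24.
    (Binary units such as GiB = 2**30 are a separate convention.)
    """
    units = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]
    return value * 10 ** (3 * units.index(unit))
```

So DataCore's petabyte-capable arrays hold a thousand of Savvis' terabytes, and a yottabyte is a trillion of those petabytes — plenty of runway for the scale Doerr predicts.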