Symantec Enables Customers to Virtualize Business Critical Applications with Confidence

August 27th, 2010

ApplicationHA and VirtualStore to protect systems from downtime and optimize storage in VMware environments


Symantec Corp. (Nasdaq: SYMC) today announced that it will offer Symantec ApplicationHA and Symantec VirtualStore, two solutions that give customers the ability to confidently virtualize their business critical applications and minimize storage costs on the VMware platform. ApplicationHA and VirtualStore are the result of extensive collaboration between Symantec and VMware, as the companies work closely to help customers accelerate the adoption of virtualization for mainstream applications.

Symantec ApplicationHA, based on industry-leading Veritas Cluster Server technology, will provide high availability for business critical applications through application level visibility and control in VMware environments. Symantec VirtualStore, based on Veritas Storage Foundation technology, is a software-based storage management solution for VMware virtual machines that will provide rapid provisioning of servers and virtual desktops, efficient cloning and accelerated boot up of virtual machines. Both ApplicationHA and VirtualStore are seamlessly integrated with VMware management tools such as VMware vCenter Server, enabling customers to deploy these tools without impact to their operational model.

Customers have been aggressively virtualizing non-critical applications, and now that they are starting to move business critical applications like SAP and MS SQL Server databases to virtual platforms, they require high availability of the applications inside VMware virtual machines. Symantec ApplicationHA, for the first time, will ensure application high availability by providing visibility, control and integration with VMware vCenter Server and VMware High Availability (HA).

Symantec ApplicationHA:

  • Monitors applications’ health status and detects failures in the virtual machine
  • Restarts failed applications
  • Coordinates with VMware HA to restart the virtual machine, if needed
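
Conceptually, the three behaviors listed above form a monitor-restart-escalate loop. The sketch below is purely illustrative and assumes hypothetical helper functions (check_health, restart_application, request_vm_restart) standing in for ApplicationHA's real agents and its VMware HA hand-off, which the announcement does not detail.

```python
"""Illustrative sketch of the monitor -> restart -> escalate cycle described above.
Not Symantec's actual code: check_health(), restart_application() and
request_vm_restart() are hypothetical stand-ins for ApplicationHA's agents
and its VMware HA integration."""

MAX_RESTART_ATTEMPTS = 3   # after this many failed in-guest recoveries, escalate

# Scripted probe results so the example terminates: one healthy check, then failures.
SCRIPTED_HEALTH = iter([True, False, False, False, False])

def check_health(app_name: str) -> bool:
    return next(SCRIPTED_HEALTH, False)

def restart_application(app_name: str) -> None:
    print(f"Restarting {app_name} inside the guest...")

def request_vm_restart() -> None:
    # ApplicationHA would hand this step off to VMware HA via vCenter.
    print("In-guest recovery exhausted; asking the HA layer to restart the VM.")

def monitor(app_name: str) -> None:
    failures = 0
    while True:
        if check_health(app_name):
            failures = 0
            print(f"{app_name} is healthy.")
        else:
            failures += 1
            print(f"{app_name} failed health check ({failures}/{MAX_RESTART_ATTEMPTS}).")
            if failures < MAX_RESTART_ATTEMPTS:
                restart_application(app_name)
            else:
                request_vm_restart()
                return

if __name__ == "__main__":
    monitor("SQLServer")
```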


Administrators can fully manage all operations of Symantec ApplicationHA through VMware vCenter Server, avoiding the need for additional tools and associated training. Deep discovery and auto-configuration allows administrators to easily install, configure and administer Symantec ApplicationHA with a few clicks.

Built on the industry-leading Veritas Cluster Server technology, ApplicationHA supports a wide range of applications including MS SQL Server, Exchange, IIS, Oracle and SAP. It provides consistent functionality and usability across both Windows and Linux operating systems and is fully compatible with VMware vMotion and VMware Distributed Resource Scheduler (DRS).

Symantec VirtualStore: Address Storage Challenges in Virtual Infrastructures

As organizations scale their virtual environments, they find themselves challenged by the ever increasing storage requirements and performance bottlenecks that are associated with retaining hundreds or even thousands of virtual machine images on traditional hardware filers. VirtualStore will enable administrators to fully benefit from their virtualization investments with a software-based NAS solution that scales servers and storage independently, efficiently provisions virtual machines, and delivers advanced storage optimization capabilities for VMware environments.

Like ApplicationHA, VirtualStore fully integrates with VMware vCenter Server. Based on Symantec’s industry leading Veritas Storage Foundation technology, VirtualStore can:

  • Help customers reduce storage costs associated with virtual machine sprawl and improve the performance of virtual infrastructures
  • Enable administrators to dramatically reduce the cost per virtual machine by repurposing existing storage investments or using inexpensive or commodity storage
  • Help IT organizations reduce their storage footprint by storing only the differences between the parent virtual machine image and each clone
  • Significantly drive down total cost of ownership by taking advantage of the benefits of thin provisioning


Virtual Desktop Infrastructure (VDI) environments can also be managed more efficiently. VirtualStore’s ‘FileSnap’ feature lets administrators easily and rapidly clone and provision thousands of virtual machines in minutes through its VMware vCenter Server integration. Through innovative page caching, VirtualStore also eliminates the performance bottlenecks created when multiple users boot up their virtual machines (‘Bootstorm’).
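
The storage savings claimed above rest on a copy-on-write idea: each clone starts as a reference to the parent ("golden") image, and only the blocks a clone changes consume new space. The sketch below models that accounting only; it is not VirtualStore's implementation, and the image sizes and per-desktop change rates are assumptions.

```python
"""Conceptual copy-on-write clone accounting (not VirtualStore's implementation).
Each clone stores only the blocks it has modified relative to the parent image."""

class ParentImage:
    def __init__(self, name: str, size_gb: int):
        self.name = name
        self.size_gb = size_gb

class LinkedClone:
    def __init__(self, parent: ParentImage):
        self.parent = parent
        self.changed_gb = 0.0          # space unique to this clone

    def write(self, gb: float) -> None:
        self.changed_gb += gb          # only deltas consume new space

def space_used(parent: ParentImage, clones: list[LinkedClone]) -> float:
    return parent.size_gb + sum(c.changed_gb for c in clones)

if __name__ == "__main__":
    golden = ParentImage("win7-golden", size_gb=20)
    desktops = [LinkedClone(golden) for _ in range(1000)]
    for d in desktops:
        d.write(0.5)                   # assume ~0.5 GB of per-desktop changes

    full_copies = golden.size_gb * len(desktops)
    cow_copies = space_used(golden, desktops)
    print(f"Full copies would need   {full_copies:,.0f} GB")
    print(f"Copy-on-write clones use {cow_copies:,.0f} GB")
```

With 1,000 desktops cloned from a 20GB parent and roughly 0.5GB of unique changes each, full copies would need about 20TB while the linked clones in this toy model need a little over 500GB.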

Source: Symantec.com

Simplifying IT with Dell EqualLogic

August 23rd, 2010

The Dell EqualLogic PS Series is a family of virtualized iSCSI storage arrays that combine intelligence and automation with fault tolerance to provide simplified administration, rapid deployment, enterprise performance and reliability, and seamless scalability. The PS Series high performance storage arrays deliver a range of capacity points from 400 GB to 48 TB in either a 3U or 4U chassis. PS Series arrays can be combined to create a virtualized SAN that scales up to 768 TB under a single management interface.

Ease of use

• Intelligent, automated management helps minimize tedious administrative tasks
• From box to operating SAN in minutes
• Monitor petabytes of storage across dozens of SANs from a single console

Scalability

• Modular design allows growth when needed
• Online expansion between hardware generations
• Linear scalability — scale capacity and performance together
• Manage a growing pool of storage from a single user interface
• Thin provisioning to increase space efficiency for optimal capacity utilization
• Expand overall group capacity by mixing pools of 6000 and 6500 arrays

Enterprise efficiency

• Addition of 10GbE supports high-performance, high-bandwidth applications such as data warehouses and streaming media
• Maximize your IT investments with an end-to-end unified fabric data center encompassing servers, EqualLogic storage and networking
• Enterprise-level virtualized storage that matches virtualized server environments
• Support for multi-tiered application designs with automated tiering included in hybrid models (6000XVS & 6010XVS)

Enterprise performance

• Exceptional performance for both sequential and transactional applications with linear scalability as arrays are added
• Automated, real-time load balancing across drives, RAID sets, connections, cache and controllers for optimized performance and resource utilization
• Pooling capability enables appropriate service levels for individual applications

Reliability

• Fault-tolerant, fully redundant dual controllers
• Designed for 99.999% availability
• Enterprise-class RAID protection
• Full hardware redundancy — hot-swappable controllers, fans, power supplies, disks

Affordability

• All-inclusive enterprise features and functionality with no additional software licenses to purchase
• Easy connection via iSCSI
• Automated features help to eliminate highly specialized administrative costs
• Adopt 10GbE and run both 1GbE and 10GbE in the same environment without devaluing legacy equipment
• EqualLogic SANs have the lowest TCO of common storage array architectures — fully 1/3 to 1/2 the total cost of competitors over a five-year period


Coretek Services is a Michigan-based systems integration and IT consulting company that works with virtualization infrastructure such as VMware, Citrix XenServer, and Microsoft Hyper-V, and is also a reseller of Dell EqualLogic SANs. Please contact us today for any virtualization requirements, storage requirements, or specific Dell EqualLogic SAN needs.

Source: Dell.com

Top 10 Storage Virtualization Trends of 2010

August 4th, 2010

The storage area network (SAN) is now an essential technology for many large and midsize enterprises. Over the years SANs have become more sophisticated as vendors have rolled out systems that deliver better storage utilization and functionality. Based on these positive developments, 2010 should bring new and interesting products in several key areas. Here are our top 10 trends to keep an eye on in the coming year — along with the insights of key IT managers who are looking to optimize their existing storage and virtualization strategies.

1. Integration of solid state with rotating media for higher performance and lower energy costs.
Product picks: EMC FAST, Fusion-io, Compellent Storage Center

In an effort to provide the best possible storage solutions, many storage vendors are looking for ways to marry the high performance of solid-state memory with the lower cost of rotating media. As prices continue to drop for all storage technologies — and as hard drives get faster and cheaper — vendors are specifically working to incorporate the latest solid-state drive technologies into traditional SAN arrays. EMC Corp. and Compellent both offer fully automated storage tiering, which places data on a storage tier that matches the needs of the application: more-frequently accessed files are stored on faster-performing disks, while less-frequently needed files are moved to slower, cheaper media.

“We’re using the Compellent product as part of our new Savvis Symphony cloud infrastructure service offering,” says Bryan Doerr, CTO of St. Louis-based services provider Savvis Inc. “We like how it has a policy that sits between the application and the array to control how each block of data is written to the physical media, based on frequency of usage.”

Doerr is pleased that these decisions are made automatically. “We don’t have to map tables or keep track of what files are stored where, and that’s a very powerful benefit to us,” he says. “Compellent can move individual blocks from a low-cost and low-performing SATA drive to a solid-state drive for the most-frequently updated data.”

One of the more interesting products is a hardware accelerator plug-in adapter card from Fusion-io that can pre-cache data using solid-state memory for SAN arrays and other large-scale storage applications.

2. De-duplication technology — on storage and backups — can help reclaim unused space.
Product picks: EMC Avamar, Symantec/Veritas Netbackup PureDisk, IBM/Tivoli Storage Manager, NetApp FlexClone

De-duplication technologies can provide a powerful way to quickly reclaim storage and minimize backup jobs. When users first start applying these technologies, they’re frequently surprised at how much duplication actually exists. As depicted in Figure 1, with PureDisk software from Symantec Corp., users can drill into a backup job and see that they could save more than 95 percent of their storage by getting rid of duplicate data. This capability offers huge potential savings, particularly when backing up virtual machine (VM) collections and remote offices.

Part of the challenge when using VMs is dealing with the fact that they share many common files inside each virtual image — the boot files for the operating system, the applications and so forth. A de-duplication product can leverage this by making only a single copy of common files.
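
Underneath, de-duplication amounts to fingerprinting data and keeping one copy per unique fingerprint. The toy sketch below shows the idea with whole-file hashing; products such as PureDisk de-duplicate at a finer, sub-file level and track far more metadata, so treat this strictly as an illustration.

```python
"""Toy illustration of de-duplication by content fingerprinting.
Real backup de-duplication works at a segment level and is far more sophisticated."""

import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def dedupe(files: dict[str, bytes]) -> tuple[int, int]:
    """Return (logical_bytes, stored_bytes) after keeping one copy per unique chunk."""
    store: dict[str, bytes] = {}
    logical = 0
    for name, data in files.items():
        logical += len(data)
        store.setdefault(fingerprint(data), data)   # only the first copy is kept
    stored = sum(len(d) for d in store.values())
    return logical, stored

if __name__ == "__main__":
    os_files = b"common Windows system files" * 1000
    backups = {f"vm{i}/system.bin": os_files for i in range(30)}   # 30 similar VMs
    backups["vm1/userdata.bin"] = b"unique user data" * 100

    logical, stored = dedupe(backups)
    print(f"Logical data: {logical:,} bytes; stored after de-dup: {stored:,} bytes "
          f"({100 * (1 - stored / logical):.0f}% saved)")
```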

PureDisk is typical of de-duplication products in that it operates in two different ways. For starters, you can use a PureDisk client or agent that runs on each VM and reports the unique files back to the central PureDisk backup server. And PureDisk can also back up the entire VMware VMDK image file without any agents on the separate VMs. This offloads backup from the ESX server and enables single-pass backups to protect all the files — whether they’re in use or not — that comprise the VM.

“De-duplication gives us big storage savings,” says Chuck Ballard, network and technical services manager at food manufacturer J&B Group, based in St. Michael, Minn. “We have 30 machines, each with a 20GB virtual hard drive, on our SAN. Rather than occupy 600GB, we have about a third of that, and we can grow and shrink our volumes as our needs dictate. We use the [NetApp] LUN copy utility to replicate our workstation copies off of a master image.”

Ballard stores his images on NetApp’s SAN arrays that have their own utility — called FlexClone — to make virtual copies of the data. “We had EMC and also looked at IBM, but both of them had limited dynamic-provisioning features,” he says, adding that a VMware upgrade that required 4.5TB on J&B Group’s old SAN now uses just 1.5TB on the company’s new storage infrastructure.

3. More granularity in backup and restoration of virtual servers.
Product picks: Vizioncore vRanger Pro, Symantec Netbackup, Asigra Cloud Backup

When combined with de-duplication technologies, more granular backups make for efficient data protection — particularly in virtualized environments where storage requirements quickly balloon and it can take longer than overnight to make backups. Backup vendors are getting better at enabling recoveries that understand the data structure of VM images and can extract just the necessary files without having to restore an entire VM disk image. Symantec Netbackup and Vizioncore vRanger both have this feature, which makes them handy products to have in the case of accidentally deleted configuration or user files. For its part, Asigra Cloud Backup can protect server resources both inside the data center and the cloud.

4. Live migrations and better integration of VM snapshots make it easier to back up, copy and patch VMs.
Product picks: FalconStor FDS, VMware vMotion and vStorage APIs, Citrix XenServer

VMware vStorage API for Data Protection facilitates LAN-free backup of VMs from a central proxy server rather than directly from an ESX Server. Users can do centralized backups without the overhead and hassle of having to run separate backup tasks from inside each VM. These APIs were formerly known as the VMware Consolidated Backup, and the idea behind them is to offload the ESX server from the backup process. This involves taking VM snapshots at any point in time to facilitate the backup and recovery process, so an entire .VMDK image doesn’t have to be backed up from scratch. It also shortens recovery time.
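
Stripped to its essentials, the proxy-based workflow described above is: snapshot the VM, let the backup proxy read the frozen disk from shared storage, then release the snapshot. The sketch below shows only that sequence with hypothetical placeholder functions; it does not use the actual vStorage API calls.

```python
"""Sequence sketch of a proxy-based (LAN-free) VM backup.
take_snapshot(), copy_disk_from_proxy() and delete_snapshot() are hypothetical
placeholders for the hypervisor's snapshot/backup interfaces, not real API calls."""

def take_snapshot(vm: str) -> str:
    print(f"Quiescing and snapshotting {vm} ...")
    return f"{vm}-snap-0001"            # snapshot identifier

def copy_disk_from_proxy(vm: str, snapshot_id: str, target: str) -> None:
    # The proxy reads the frozen virtual disk directly from shared storage,
    # so neither the ESX host nor the VM itself does any backup work.
    print(f"Proxy copying {vm} disk (snapshot {snapshot_id}) to {target} ...")

def delete_snapshot(vm: str, snapshot_id: str) -> None:
    print(f"Releasing snapshot {snapshot_id} so {vm} resumes normal disk writes.")

def backup_vm(vm: str, target: str) -> None:
    snap = take_snapshot(vm)
    try:
        copy_disk_from_proxy(vm, snap, target)
    finally:
        delete_snapshot(vm, snap)       # always clean up, even if the copy fails

if __name__ == "__main__":
    backup_vm("exchange01", "\\\\backupserver\\vm-backups")
```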

Enhanced VM storage management also includes the ability to perform live VM migrations without having to shut down the underlying OS. Citrix Systems XenServer offers this feature in version 5.5, and VMware has several tools including vMotion and vSphere that can make it easier to add additional RAM and disk storage to a running VM.

Finally, vendors are getting wise to the fact that many IT engineers are carrying smartphones and developing specific software to help them manage their virtualization products. VMware has responded to this trend with vCenter Mobile Access, which allows users to start, stop, copy and manage their VMs from their BlackBerry devices. Citrix also has its Receiver for iPhone client, which makes it possible to remotely control a desktop from an iPhone and run any Windows apps on XenApp 5- or Presentation Server 4.5-hosted servers. While looking at a Windows desktop from the tiny iPhone and BlackBerry screens can be frustrating — and a real scrolling workout — it can also be helpful in emergency situations when you can’t get to a full desktop and need to fix something quickly on the fly.

5. Thin and dynamic provisioning of storage to help moderate storage growth.
Product picks: Symantec/Veritas Storage Foundation Manager, Compellent Dynamic Capacity, Citrix XenServer Essentials, 3Par Inserv

There are probably more than a dozen different products in this segment that are getting better at detecting and managing storage needs. A lot of space can be wasted setting up new VMs on SAN arrays, and these products can reduce that waste substantially. This happens because, when provisioning SANs, users generally don’t know exactly how much storage they’ll need, so they tend to err on the high side by creating volumes that are large enough to meet their needs for the life of the server. The same thing happens when they create individual VMs on each virtual disk partition.

With dynamic-provisioning applications, as application needs grow, SANs automatically extend the volume until it reaches the configured maximum size. This allows users to over-provision disk space, which is fine if their storage needs grow slowly. However, because VMs can consume a lot of space in a short period of time, this can also lead to problems. Savvy users will deal with this situation by monitoring their storage requirements with storage resource management (SRM) tools and staying on top of what has been provisioned and used.
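
A thin-provisioned volume consumes physical blocks only as data is written, up to its configured maximum, which is exactly where the over-commit risk comes from. The toy model below illustrates both behaviors; the pool sizes, volume maximums and write amounts are invented for the example.

```python
"""Toy model of a thin-provisioned storage pool (illustrative only)."""

class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0          # physical space actually consumed
        self.provisioned_gb = 0        # sum of volume maximums promised to hosts

    def create_volume(self, max_gb: int) -> dict:
        # Creating a volume consumes no physical space yet.
        self.provisioned_gb += max_gb
        return {"max_gb": max_gb, "used_gb": 0}

    def write(self, volume: dict, gb: int) -> None:
        if volume["used_gb"] + gb > volume["max_gb"]:
            raise RuntimeError("volume full (hit its configured maximum)")
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("POOL EXHAUSTED: over-committed thin pool ran out of disk")
        volume["used_gb"] += gb
        self.allocated_gb += gb

    def report(self) -> None:
        ratio = self.provisioned_gb / self.physical_gb
        print(f"Provisioned {self.provisioned_gb} GB on {self.physical_gb} GB physical "
              f"(over-commit {ratio:.1f}x), {self.allocated_gb} GB actually used")

if __name__ == "__main__":
    pool = ThinPool(physical_gb=1000)
    volumes = [pool.create_volume(max_gb=500) for _ in range(4)]   # 2,000 GB promised
    for v in volumes:
        pool.write(v, 100)             # each VM initially writes only 100 GB
    pool.report()                      # fine today; growth must be monitored
```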

Savvis is using the 3Par InServ Storage Servers for thin provisioning. “We don’t have to worry about mapping individual logical units to specific physical drives — we just put the physical drives in the array and 3Par will carve them up into usable chunks of storage. This gives us much higher storage densities and less wasted space,” says Doerr.

Citrix XenServer Essentials includes both thin- and dynamic-provisioning capabilities, storing only the differences between virtual disk images so that multiple VMs consume a fraction of the space they otherwise would, because the same files aren’t duplicated. Dynamic workload streaming can be used to rapidly deploy server workloads to the most appropriate server resources — physical or virtual — at any time during the week, month, quarter or year. This is particularly useful for applications that may be regularly migrated between testing and production environments or for systems that require physical deployments for peak user activity during the business cycle.

Compellent offers another unique feature: the ability to reclaim unused space. Its software searches for storage blocks that belonged to deleted files and marks them as unused so that they can be overwritten.

6. Greater VM densities per host will improve storage performance and management.
Product pick: Cisco Unified Computing System

As corporations make use of virtualization, they find that it can have many applications in a variety of areas. And nothing — other than video — stretches storage faster than duplicating a VM image or setting up a bunch of virtual desktops. With these greater VM densities comes a challenge to keep up with the RAM requirements needed to support them.

In this environment, we’re beginning to see new classes of servers that can handle hundreds of gigabytes of RAM. For example, the Cisco Systems Unified Computing System (UCS) supports large amounts of memory and VM density (see Figure 2): In one demonstration from VirtualStorm last fall at VMworld, there were more than 400 VMs running Windows XP on each of six blades on one Cisco UCS. Each XP instance had more than 90GB of applications contained in its Virtual Desktop Infrastructure image, which was very impressive.

“It required a perfect balance between the desktops, the infrastructure, the virtualization and the management of the desktops and their applications in order to scale to thousands of desktops in a single environment,” says Erik Westhovens, one of the engineers from VirtualStorm, writing in a blog entry about the demonstration.

Savvis is an early UCS customer. “I like where Cisco is taking this platform; combining more functionality within the data center inside the box itself,” Doerr says. “Having the switching and management under the hood, along with native virtualization support, helps us to save money and offer different classes of service to our Symphony cloud customers and ultimately a better cloud-computing experience.”

“If you don’t buy enough RAM for your servers, it doesn’t pay to have the higher-priced VMware licenses,” says an IT manager for a major New York City-based law firm that uses EMC SANs. “We now have five VMware boxes running 40 VMs apiece, and bought new servers specifically to handle this.”

As users run more guest VMs on a single physical server, they’ll find they need to have more RAM installed on the server to maintain performance. This may mean they need to move to a more expensive, multiple-CPU server to handle the larger RAM requirements. Cisco has recognized that many IT shops are over-buying multiple-CPU servers just so they can get enough dual in-line memory module slots to install more RAM. The Cisco UCS hardware will handle 384GB of RAM and not require the purchase of multiple processor licenses for VMware hypervisors, which saves money in the long run.

James Sokol, the CTO for a benefits consultancy in New York City, points out that good hypervisor planning means balancing the number of guest VMs with the expanded RAM required to best provision each guest VM. “You want to run as many guests per host [as possible] to control the number of host licenses you need to purchase and maintain,” Sokol says. “We utilize servers with dual quad-core CPUs and 32GB of RAM to meet our hosted-server requirements.”

A good rule of thumb for Windows guest VMs is to use a gigabyte of RAM for every guest VM that you run.
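
As a back-of-the-envelope check of that RAM-bound math, the sketch below estimates guest density from host RAM, a per-guest allowance and a reserve for the hypervisor itself; the reserve figure is an assumption, not a vendor specification.

```python
"""Back-of-the-envelope VM density estimate (RAM-bound).
The hypervisor overhead reserve below is an assumed figure, not a vendor number."""

def guests_per_host(host_ram_gb: int, per_guest_gb: float = 1.0,
                    hypervisor_reserve_gb: float = 4.0) -> int:
    usable = host_ram_gb - hypervisor_reserve_gb
    return max(0, int(usable // per_guest_gb))

if __name__ == "__main__":
    for ram in (32, 96, 384):           # e.g. the 32GB and 384GB hosts mentioned above
        print(f"{ram:>3} GB host, 1 GB/guest rule of thumb -> "
              f"~{guests_per_host(ram)} Windows guests")
```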

7. Better high-availability integration and more fault-tolerant operations.
Product picks: VMware vSphere 4 and Citrix XenServer 5.5

The latest hypervisors from VMware and Citrix include features that expedite failover to a backup server and enable fault-tolerant operations. This makes it easier for VMs to be kept in sync when they’re running on different physical hosts, and enhances the ability to move the data stored on one host to another without impacting production applications or user computing. The goal is to provide mainframe-class reliability and operations to virtual resources.

One area where virtualized resources are still playing catch-up to the mainframe computing world is security policies and access controls. Citrix still lacks role-based access controls, and VMware has only recently added this to its vSphere line. This means that in many shops, just about any user can start and stop a VM instance without facing difficult authentication hurdles. There are third-party security tools — such as the HyTrust Appliance for VMware — that allow more granularity over which users have what kind of access to particular VMs. Expect other third-party virtualization management vendors to enter this market in the coming year. (To get an idea of how HyTrust’s software operates, check out the screencast I prepared for them here.)

8. Private cloud creation and virtualized networks — including vendor solutions that offer ways to virtualize your data center entirely in the cloud.
Product picks: Amazon Virtual Private Cloud, VMware vSphere vShield Zones, ReliaCloud, Hexagrid VxDataCenter

Vendors are virtualizing more and more pieces of the data center and using virtual network switches — what VMware calls vShield Zones — to ensure that your network traffic never leaves the virtualized world but still retains nearly the same level of security found in your physical network. For example, you can set up firewalls that stay with the VMs as they migrate between hypervisors, create security policies and set up virtual LANs. Think of it as setting up a security perimeter around your virtual data center.

Amazon has been hard at work on its Elastic Compute Cloud — its cloud-based, virtualized hosting service — and last summer added Virtual Private Cloud to its offerings (see Figure 3). This enables users to extend their VPNs to include the Amazon cloud, further mixing the physical and virtual network infrastructures. It’s also possible to extend any security device on your physical network to cover the Amazon cloud-based servers. The same is true with Amazon Web Services, where customers pay on a usage-only basis with no long-term contracts or commitments.

Microsoft has a series of new projects to extend its Windows Azure cloud-based computing to private clouds. They include ventures such as “Project Sydney,” which enables customers to securely link their on-premises and cloud servers; AppFabric, which is a collection of existing Windows Azure developer components; and updates to Visual Studio 2010.

Some of these are, or soon will be, available in beta. But like other efforts, more federated security between the cloud and in-house servers will require improvements before these new offerings can be dependably used by most enterprises.

Two new entrants to the cloud computing services arena are Hexagrid Inc. and ReliaCloud, both of which offer a wide range of infrastructure services, including high availability, hardware firewalls and load balancing. With these companies, all cloud servers are assigned private IP addresses and have persistence, meaning that users treat them as real servers even though they’re residing in the cloud. Expect more vendors to offer these and other features that allow IT managers to combine physical and cloud resources.

9. Better application awareness of cloud-based services.
Product picks: Exchange 2010, Sparxent MailShadow
It isn’t just about networks in the cloud, but actual applications too, such as Microsoft Exchange services. The days are coming when you’ll be able to run an Exchange server in a remote data center and fail over without anyone noticing. Part of this has to do with improvements Microsoft is making to the upcoming 2010 release of its popular e-mail server software. This also has to do with how the virtualization and third-party vendors are incorporating and integrating disaster recovery into their software offerings. An example of the latter is MailShadow from Sparxent Inc. This cloud-based service makes a “shadow” copy of each user’s Exchange mailbox that’s kept in constant synchronization. There are numerous cloud-based Exchange hosting providers that have offered their services over the past few years, and Microsoft is working on its own cloud-based solutions as well.

10. Start learning the high-end, metric system measurements of storage.
If you thought you knew the difference between gigabytes and terabytes, start boning up on the higher end of the metric scale. SAN management vendor DataCore Software Corp. now supports arrays that can contain up to a petabyte — a thousand terabytes — of data. Savvis sells 50GB increments of its SAN utility storage to its co-location customers, which Doerr says has been very well received. “It’s for customers that don’t want to run their own SANs or just want to run the compute-selected functions,” he states. “There’s a lot of variation across our customers. You have to be flexible if you want to win their business.” Given that it wasn’t too long ago when no one could purchase a 50GB hard drive, he says this shows that, “we’re going to be talking exabytes when it comes to describing our storage needs before too long.” Next up: zettabytes and yottabytes.

Source: Redmondmag.com

Virtual Servers, Real Growth

July 12th, 2010


If you follow tech industry trends, you’ve probably heard of cloud computing, an increasingly popular approach of delivering technology resources over the Internet rather than from on-site computer systems.

Chances are, you’re less familiar with virtualization — the obscure software that makes it all possible.

The concept is simple: rather than having computers run a single business application — and sit idle most of the time — virtualization software divides a system into several “virtual” machines, all running software in parallel.

The technology not only squeezes more work out of each computer, but makes large systems much more flexible, letting data-center techies easily deploy computing horsepower where it’s needed at a moment’s notice.

The approach cuts costs, reducing the amount of hardware, space and energy needed to power up large data centers. Maintaining these flexible systems is easier, too, because managing software and hardware centrally requires less tech support.

The benefits of virtualization have made cloud computing an economical alternative to traditional data centers.

“Without virtualization, there is no cloud,” said Charles King, principal analyst of Pund-IT.

That’s transforming the technology industry and boosting the fortunes of virtualization pioneers such as VMware (NYSE:VMW) and Citrix Systems (NMS:CTXS), two of the best-performing stocks in IBD’s specialty enterprise software group. As of Friday, the group ranked No. 24 among IBD’s 197 Industry Groups, up from No. 121 three months ago.

1. Business

Specialty enterprise software represents a small but fast-growing segment of the overall enterprise software market, which according to market research firm Gartner is set to hit $229 billion this year.

As with most software, the segment is a high-margin business. With high upfront development costs but negligible manufacturing and distribution expenses, specialty software companies strive for mass-market appeal. Once developers recoup their initial development costs, additional sales represent pure profit.

Software developers also make money helping customers install and run their software, another high-margin business.

But competition is fierce. Unlike capital-intensive businesses, software companies require no factory, heavy equipment, storefront or inventory to launch. Low barriers to entry mean a constant stream of new competitors looking to out-innovate incumbents.

In addition to the virtualization firms, notable names in the group include CA Technologies (NMS:CA) and Compuware (NMS:CPWR).

All offer infrastructure software to manage data centers.

“Big-iron” mainframe computers began using virtualization in the 1970s, around the time when CA and Compuware were founded.

In the late 1990s, VMware brought the technology to low-cost systems running ordinary Intel (NMS:INTC) chips. VMware has since emerged as the dominant player in virtualization.

Citrix has added a twist to the concept, virtualizing desktop computers. Rather than installing workers’ operating system and applications on hundreds of PCs spread across the globe, companies can use the technology to run PCs from a bank of central servers. Workers, who access their virtual PCs over the Internet, don’t know the difference.

Microsoft (NMS:MSFT) has jumped in with its own virtualization product, Hyper-V, which it bundles free into Windows Server software packages. Oracle (NMS:ORCL) and Red Hat (NYSE:RHT) have launched virtualization products as well.

Meanwhile, CA and Compuware are racing to move beyond their mainframe roots to support virtualization and cloud-computing-enabled data centers. In February, CA said it would buy 3Tera to build services and deploy applications aimed at the cloud-computing market.

And Compuware bought privately held Gomez, Inc. last fall to manage cloud application performance.

Name Of The Game: Innovate. With a fast-moving market and steady influx of new competitors, keeping customers happy with good service and money-saving breakthroughs is vital.

2. Market

Nearly everyone who runs a corporate computer system is a potential buyer of virtualization software. Companies ramping up their information-technology purchases use the software to manage their sprawling infrastructure; others with limited budgets use it to squeeze more out of their existing systems.

Sales of server-virtualization software are set to grow 14% this year to $1.28 billion, according to a report by Lazard Capital Markets. Sales of software to manage virtual environments will grow 44% in 2010 to $1.88 billion.

Desktop virtualization revenue will rise 184% this year to $847.8 million. Citrix has the edge in this budding market with its XenDesktop product.

VMware is dominant among large enterprises, controlling about 85% of the server virtualization market. Microsoft is favored by small and midsize companies.

Virtualization is seen as “a strategic asset” for enabling cloud computing, and continues to gain momentum, says Lazard analyst Joel Fishbein.

VMware has the early-mover advantage in this market with its vSphere platform and has stayed ahead by adding new features such as data security and disaster recovery, analysts say.

But Citrix is partnering closely with Microsoft to take on VMware in virtualization.

3. Climate

Competition is heating up as companies scramble to adopt virtualization. Before 2009, just 30% of companies used virtualization, says analyst Fishbein. This year, that will double to 60%. Most of the gain is coming from small and midsize customers.

In addition, virtual servers are soon expected to more than double as a percentage of the overall server workload, from 18% today to 48% by 2012.

VMware can stay a step ahead of the pack by building new features into its products, says Dan Chu, VMware’s vice president of cloud infrastructure and services.

“We have a large technology lead with what we enable for our customers,” Chu said. “We are several years ahead of what the others are doing.”

Citrix CEO Mark Templeton says his firm’s broadening strategy — offering a variety of products with multiple licensing options and distribution channels — will grow sales.

“What’s going on is a massive shift in how computing gets delivered,” Templeton said. “In an environment that’s changing so dramatically, the highest-risk thing you can do is not act.”

4. Technology

The first virtualization boom stemmed from a shift over the last decade away from big expensive mainframes and minicomputers to massive banks of cheap Intel-powered machines. Virtualization gave these low-cost systems some of the high-end features of their pricier counterparts.

Virtualization software makers are betting on a second wave of growth fueled by the industrywide shift to cloud computing.

Technology managers use virtualization to run cloud computing in their own data centers. And large tech vendors such as Microsoft use the technology for cloud-computing services they sell to customers.

Dividing computers into isolated virtual machines gives cloud service providers the benefits of shared computing resources without the security downsides.

VMware has the early lead in virtualization. But the technology is quickly becoming a commodity as Microsoft and others bundle it into their broader platforms.

“VMware is known as a virtualization company, and Microsoft is a platform company,” said David Greschler, who heads up Microsoft’s virtualization efforts. “Their strategy is to sell virtualization, but our strategy is to make virtualization available as part of a larger platform at no extra cost.”

At the same time, a shift toward a world of cloud-computing services hosted by the likes of Microsoft, Amazon.com (NMS:AMZN) and Google (NMS:GOOG) could lead to fewer companies purchasing virtualization software themselves.

Source: Investor’s Business Daily

Avoiding the Pitfalls of Virtualization

July 8th, 2010

Virtualization is rarely as simple to implement and manage as it has been made out to be. Here’s what to look out for when planning your organization’s next virtualization project.

No technology in recent memory has come with as many promises as server virtualization. As I’m sure you know, all of these promises can be broken down into one simple concept: Virtualization allows you to consolidate a bunch of underutilized servers into a single server, which allows the organization to save a bundle on maintenance costs.

So with server virtualization promising such a dramatic boost to an organization’s return on investment (ROI), even in a bad economy, what’s not to like? What many organizations are finding out is that in practice, virtualization is rarely as simple to implement and manage as it has been made out to be. In fact, there are numerous potential pitfalls associated with the virtualization process. In this article, I want to take a look at some of these pitfalls, and at how they can impact an organization.

Subpar Performance
While it’s true that virtualizing your data center has the potential to make better use of server resources, any increase in ROI can quickly be consumed by decreased user productivity if virtual servers fail to perform as they did prior to being virtualized. In fact, it has been said that subpar performance is the kiss of death for a virtual data center.

So how do you make sure that your servers are going to perform as well as they do now when virtualized? One common solution is to work through some capacity planning estimates, and then attempt to virtualize the server in an isolated lab environment. But this approach will only get you so far. Lab environments do not experience the same loads as production environments, and while there are load simulation tools available, the reliability of these tools decreases dramatically when multiple virtual servers are being tested simultaneously.

While proper capacity planning and testing are important, you must be prepared to optimize your servers once they have been virtualized. Optimization means being aware of what hardware resources are being used by each virtual server, and making any necessary adjustments to the way hardware resources are distributed among the virtual machines (VMs) in an effort to improve performance across the board for all of the guest servers on a given host.

Network Management Difficulties
When organizations initially begin to virtualize their servers, they’re often surprised by how difficult it can be to manage those virtual servers using their legacy network management software. While any decent network management application will perform application metering, compile a software inventory, and allow remote control sessions for both physical and virtual servers, there are some areas in which traditional network management software is not well-equipped to deal with VMs.

One example of such a problem is that most of the network management products on the market are designed to compile a hardware inventory of all managed computers. If such an application is not virtualization-aware, then the hardware inventory will be misreported.

Likewise, some of the network management applications on the market track server performance, but performance metrics can be greatly skewed in a virtual server environment. While the skewed data may not be a problem in and of itself, it is important to remember that some network management products contain elaborate alerting and automated remediation mechanisms that engage when certain performance problems are detected. These types of mechanisms can wreak havoc on virtual servers.

Finally, legacy network management software is not able to tell you on which host machine a virtual server is currently running. It also lacks the ability to move virtual servers between hosts. While most virtualization products come with their own management consoles, it’s far more efficient to manage physical and virtual servers through a single console.

Virtual Server Sprawl
So far I have talked about various logistical and performance issues associated with managing a virtual data center. Believe it or not, though, a virtual server deployment that works a little too well can be just as big a problem. Some organizations find virtualization to be so effective that virtual server sprawl ends up becoming an issue.

One organization deployed so many virtual servers that it ended up with more server hardware than it had before it decided to consolidate. This completely undermined its stated goal of reducing hardware costs.

For other organizations, virtual machine sprawl has become a logistical nightmare, as virtual servers are created so rapidly that it becomes difficult to keep track of each one’s purpose, and of which ones are currently in use.

There are some key practices to help avoid virtual server sprawl. One of them is helping management and administrative staff to understand that there are costs associated with deploying virtual servers. Many people I have talked to think of virtual servers as being free because there are no direct hardware costs, and in some cases there’s no cost for licensing the server’s OS. However, most virtual servers do incur licensing costs in the form of anti-virus software, backup agents and network management software. These are in addition to the cost of the license for whatever application the virtual server is running. There are also indirect costs associated with things like system maintenance and hardware resource consumption.

Another way to reduce the potential for VM sprawl is educating the administrative staff on some of the dangers of excessive VM deployments. By its very nature, IT tends to be reactive. I have lost count of the number of times when I have seen a virtual server quickly provisioned in response to a manager’s demands. Such deployments tend to be performed in a haphazard manner because of the pressure to bring a new virtual server online quickly. These types of deployments can undermine security, and may impact an organization’s regulatory compliance status.

Learning New Skills
One potential virtualization pitfall often overlooked is the requirement for the IT staff to learn new skills.

“Before deploying virtualization solutions, we encourage our customers to include storage and networking disciplines into the design process,” says Bill Carovano, technical director for the Datacenter and Cloud Division at Citrix Systems Inc. “We’ve found that a majority of our support calls for XenServer tend to deal with storage and networking integration.”

Virtualization administrators frequently find themselves having to learn about storage and networking technologies, such as Fibre Channel, that connect VMs to networked storage. The issue of learning new skill sets is particularly problematic in siloed organizations where there’s a dedicated storage team, a dedicated networking team and a dedicated virtualization team.

One way Citrix is trying to help customers with such issues is through the introduction of a feature in XenServer Essentials called StorageLink. StorageLink is designed to reduce the degree to which virtualization and storage administrators must work together. It allows the storage admins to provide virtualization admins with disk space that can be sub-divided and used on an as-needed basis.

In spite of features such as StorageLink, administrators in siloed environments must frequently work together if an organization’s virtualization initiative is to succeed. “A virtualization administrator with one of our customers was using XenServer with a Fibre Channel storage array, and was experiencing performance problems with some of the virtual machines,” explains Carovano.

He continues: “After working with the storage admin, it turned out that the root of the problem was that the VMs were located on a LUN cut from relatively slow SATA disks. A virtualization administrator who just looked at an array as a ‘black box’ would have had more difficulty tracking down the root cause.”

Underestimating the Required Number of Hosts
Part of the capacity planning process involves determining how many host servers are going to be required. However, administrators who are new to virtualization often fail to realize that hardware resources are not the only factor in determining the number of required host servers. There are some types of virtual servers that simply should not be grouped together. For example, I once saw an organization place all of its domain controllers (DCs) on a single host. If that host failed, there would be no DCs remaining on the network.

One of the more comical examples of poor planning that I have seen was an organization that created a virtual failover cluster. The problem was that all of the cluster nodes were on the same host, which meant that the cluster was not fault tolerant.

My point is that virtual server placement is an important part of the capacity planning process. It isn’t enough to consider whether or not a host has the hardware resources to host a particular VM. You must also consider whether placing a virtual server on a given host eliminates any of the redundancy that has intentionally been built into the network.
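
One way to express that placement rule is a simple anti-affinity check: before placing a VM on a host, verify that no other member of the same redundancy group already lives there. The sketch below is a generic illustration of the idea, not any hypervisor's actual rule engine or DRS syntax.

```python
"""Generic anti-affinity placement check (illustrative, not a vendor's rule engine)."""

from collections import defaultdict

# Map of host -> VMs currently placed on it.
placement: defaultdict[str, list[str]] = defaultdict(list)

# Redundancy groups whose members must not share a host.
anti_affinity_groups = {
    "domain-controllers": {"dc01", "dc02"},
    "sql-cluster": {"sqlnode1", "sqlnode2"},
}

def violates_anti_affinity(vm: str, host: str) -> bool:
    for members in anti_affinity_groups.values():
        if vm in members and any(other in members for other in placement[host]):
            return True
    return False

def place(vm: str, host: str) -> None:
    if violates_anti_affinity(vm, host):
        print(f"REFUSED: {vm} on {host} would co-locate redundant peers")
        return
    placement[host].append(vm)
    print(f"Placed {vm} on {host}")

if __name__ == "__main__":
    place("dc01", "esx-host-a")
    place("dc02", "esx-host-a")      # refused: both domain controllers on one host
    place("dc02", "esx-host-b")
    place("sqlnode1", "esx-host-b")
    place("sqlnode2", "esx-host-b")  # refused: both cluster nodes on one host
```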

Multiple Eggs into a Single Basket
On a similar note, another common virtualization pitfall is the increasingly high-stakes game of server management. A server failure in a non-virtualized data center is inconvenient, but not typically catastrophic. The failure of a host server in a virtual data center can be a different issue altogether, because the failure of a single host can mean the unavailability of multiple virtual servers.

I’ll concede that both VMware Inc. and Microsoft offer high-availability solutions for virtual data centers, but it’s worth noting that not all organizations are taking advantage of these solutions. Besides, sometimes it’s the virtualization software that ends up causing the problem. Take for instance a situation that recently faced Troy Thompson, of the Department of Defense Education Activity division.

Thompson was running VMware ESX version 3.5, and decided to upgrade his host servers to version 4.0. While the upgrade itself went smoothly, there were nine patches that needed to be applied to the servers when the upgrade was complete. Unfortunately, the servers crashed after roughly a third of the patches had been applied. Although the virtual servers themselves were unharmed, the crash left the host servers in an unbootable state. Ultimately, VMware ESX 4.0 had to be reinstalled from scratch.

My point is that in this particular situation, a routine upgrade caused a crash that resulted in an extended amount of downtime for three virtual servers. In this case, all three of the virtual servers were running mission-critical applications: a Unity voice mail system, and two Cisco call managers. Granted, these servers were scheduled to be taken offline for maintenance, but because of the problems with the upgrade, the servers were offline for much longer than planned. This situation might have been avoided had the upgrade been tested in a lab.

Best Practice Recommendations
I do not claim to have all of the answers to creating a standardized set of best practices for virtualization. Even so, here are a few of my own recommended best practices.

Test Everything Ahead of Time
I’ve always been a big believer in testing upgrades and configuration changes in a lab environment prior to making modifications to production servers. Using this approach helps to spot potential problems ahead of time.

Although lab testing works more often than not, experience has shown me that sometimes lab servers do not behave identically to their production counterparts. There are several reasons why this occurs. Sometimes an earlier modification might have been made to a lab server, but not to a production box, or vice versa. Likewise, lab servers do not handle the same workload as a production server, and they usually run on less-powerful hardware.

When it comes to your virtual data center, though, there may be a better way of testing host server configuration changes. Most larger organizations today seem to think of virtualization hosts less as servers, and more as a pool of resources that can be allocated to VMs. As such, it’s becoming increasingly common to have a few well-equipped but currently unused host servers online. These servers make excellent candidates for testing host-level configuration changes because they should be configured identically to the other host servers on the network, and are usually equipped with comparable hardware.

Some Servers Not Good for Virtualization
Recently, I’ve seen a couple of different organizations working toward virtualizing every server in their entire data center. The idea behind this approach isn’t so much about server consolidation as it is about fault tolerance and overall flexibility.

Consider, for example, a database server that typically carries a heavy workload. Such a server would not be a good candidate for consolidation, because the server’s hardware is not being underutilized. If such a server were virtualized, it would probably have to occupy an entire host all by itself in order to maintain the required level of performance. Even so, virtualizing the server may not be a bad idea because doing so may allow it to be easily migrated to more powerful hardware as the server’s workload increases in the future.

At the same time, there are some servers in the data center that are poor candidates for virtualization. For example, some software vendors copy-protect their applications by requiring USB-based hardware keys. Such keys typically won’t work with a virtual server. Generally speaking, any server that makes use of specialized hardware is probably going to be a poor virtualization candidate. Likewise, servers with complex storage architecture requirements may also make poor virtualization candidates because moving such a server from one host to another may cause drive mapping problems.

Virtualization technology continues to improve, so I expect that in a few years fully virtualized data centers will be the norm. For right now, though, it’s important to accept that some servers should not be virtualized.

Consider Replacing Your Network Management Software
As I stated earlier, legacy network management software is often ill-equipped to manage both physical and virtual servers. As such, virtual server-aware management software is usually a wise investment.

Avoid Over-Allocating Server Resources
It’s important to keep in mind that each host server contains a finite set of hardware resources. Some of the virtualization products on the market will allow you to over-commit the host server’s resources, but doing so is almost always a bad idea. Microsoft Hyper-V Server, for example, has a layer of abstraction between virtual CPUs and Logical CPUs (which map directly to the number of physical CPU cores installed in the server). Because of this abstraction, it’s possible to allocate more virtual CPUs than the server has logical CPUs.

Choosing not to over-commit hardware resources is about more than just avoiding performance problems; it’s about avoiding surprises. For example, imagine that a virtual server has been allocated two virtual CPUs, and that both of those virtual CPUs correspond to physical CPU cores. If you move that virtual server to a different host, you can be relatively sure that its performance will be similar to what it was on its previous host so long as the same hardware resources are available on the new server. Once moved, the virtual server might be a little bit faster or a little bit slower, but there shouldn’t be a major difference in the way that it performs, assuming that the underlying hardware is comparable.

Now, imagine what would happen if you moved the virtual server to a host whose processor cores were already spoken for. The virtualization software would still allocate CPU resources to the recently moved server, but now its performance is directly tied to other virtual servers’ workloads, making it impossible to predict how any of the virtual servers will perform at a given moment.
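
A practical guardrail is to compute the vCPU-to-logical-CPU ratio before every placement and refuse moves that push it past your policy limit. The sketch below shows that check; the 1.0 threshold mirrors the article's advice and is a policy choice, not a hypervisor limit.

```python
"""vCPU over-commit check. The 1.0 threshold is a policy choice, not a hypervisor limit."""

def overcommit_ratio(vcpus_allocated: int, physical_cores: int) -> float:
    return vcpus_allocated / physical_cores

def can_place(new_vm_vcpus: int, vcpus_allocated: int, physical_cores: int,
              max_ratio: float = 1.0) -> bool:
    return overcommit_ratio(vcpus_allocated + new_vm_vcpus, physical_cores) <= max_ratio

if __name__ == "__main__":
    host_cores = 16                      # logical CPUs on the host
    allocated = 14                       # vCPUs already promised to running guests

    for vm, vcpus in [("small-app", 2), ("big-db", 4)]:
        ok = can_place(vcpus, allocated, host_cores)
        print(f"{vm} ({vcpus} vCPU) on {host_cores}-core host with {allocated} allocated: "
              f"{'OK' if ok else 'would over-commit'}")
```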

As you can see, there is much more to server virtualization than meets the eye. Virtualization is an inexact science with numerous potential pitfalls that can only be avoided through proper planning and testing.

Source: Redmondmag.com, by Brien Posey

Why You Need User Virtualization

June 30th, 2010

User virtualization offers IT shops more control, makes remote management easier and improves the overall experience for end-users.

Think about how you treat your users’ workspaces. When a problem occurs with their desktops or laptops, what do you do? Do you “rebuild” the computer to return it to a base configuration? When you’re forced to do that rebuild, do you save their personal items? Is that preservation a painful process, involving manually locating their personal data and uploading it to a remote file server?

Expand this situation beyond just the simple desktop. Do users’ workspaces automatically transfer to their laptops when they go out on the road? Are they present when users connect to a RemoteApp or a published desktop via Remote Desktop Services? Do the workspaces synchronize themselves into their virtual desktop infrastructure (VDI)-hosted virtual desktops?

For most of us, the unfortunate answer is no. Today’s technologies with Windows profiles give IT environments a limited set of tools for maintaining a user’s access mechanisms on roaming workspaces. Users find themselves repeatedly wasting productive time simply getting their workspaces arranged to their liking.

Personality and Control
The primary problem here is that the combination of local and roaming profiles no longer serves the needs of today’s business climate.

That’s why third-party companies are creating solutions for managing user state. Unlike Windows profiles, these tools take a vastly different approach to delivering users’ workspaces to whatever access mechanism they need. With buzzwords like “user state management,” “user workspace management” and “user virtualization,” among others, these add-on solution sets make sure users are always presented with their comfortable workspaces no matter how they connect.

User virtualization solutions often leverage an external database for storing workspace characteristics. By eliminating the transfer-the-entire-folder approach of Windows profiles in favor of a database-driven architecture, it’s possible to compose each workspace on-demand. Individual personality settings are delivered to the users’ connections as they’re demanded, as opposed to users waiting on profiles to download. Further, because the user state is virtualized outside the desktop, the solution can manage personality customizations across all simultaneous connections at once. Should a user make a desktop change in one session, that change can be seamlessly synchronized to every other session to maintain the comfortable workspace.
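
Conceptually, a database-driven user state can be as simple as a keyed settings store that each session reads at logon and writes through as the user works, so every concurrent session sees the same values. The sketch below is a toy model of that idea under those assumptions, not any vendor's product.

```python
"""Toy model of database-backed user-state ("user virtualization") sync.
Not any vendor's product: sessions read/write a shared settings store on demand."""

class UserStateStore:
    """Stands in for the external database that holds per-user workspace settings."""
    def __init__(self):
        self._settings: dict[str, dict[str, str]] = {}

    def get(self, user: str) -> dict[str, str]:
        return dict(self._settings.get(user, {}))

    def set(self, user: str, key: str, value: str) -> None:
        self._settings.setdefault(user, {})[key] = value

class Session:
    """A desktop, VDI or published-app session composing its workspace on demand."""
    def __init__(self, name: str, user: str, store: UserStateStore):
        self.name, self.user, self.store = name, user, store

    def logon(self) -> None:
        print(f"[{self.name}] composed workspace for {self.user}: {self.store.get(self.user)}")

    def change_setting(self, key: str, value: str) -> None:
        self.store.set(self.user, key, value)      # written through to the store
        print(f"[{self.name}] {self.user} set {key}={value}")

if __name__ == "__main__":
    store = UserStateStore()
    laptop = Session("laptop", "jsmith", store)
    vdi = Session("vdi-desktop", "jsmith", store)

    laptop.logon()
    laptop.change_setting("wallpaper", "blue")
    vdi.logon()        # the VDI session sees the change made on the laptop
```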

The ubiquity of user virtualization solutions also gives IT an incredible amount of control. When such a solution is always present at every user connection, you can centrally enforce IT and business policies at those connections. Access to applications can be limited, operating system functions can be locked down and network connections can be filtered. A user virtualization solution lets you control the workspace and, at the same time, enable a subset of settings to remain personalized — and, thus, comfortable — for each user.

Finally, user virtualization supports rapid desktop refresh. If you’ve ever been faced with replacing a user’s desktop, you know the pain of manually locating and storing the user’s personal data. With a user virtualization solution in place, refreshing that desktop requires little more than rebuilding it and asking the user to log in again.

You can find user virtualization solutions from a number of vendors today. AppSense, RES Software, Atlantis Computing, Tranxition and RingCube, among others, are all vendors with products in this space. While their services come at a cost, their productivity benefits and enhanced control often greatly outweigh their prices. And who wouldn’t mind a little extra comfort as they sit down to do their jobs?

By Greg Shields, 06/01/2010


Source: Redmondmag.com

VDI Performance Acceleration – Atlantis Computing’s ILIO

June 21st, 2010

VDI platforms keep desktop images on centrally located shared storage. However, Windows operating systems were designed to run against a low-latency, dedicated local disk for every desktop. The Microsoft Windows family of operating systems routinely performs input/output (IO)-intensive tasks such as file layout optimization, background defragmentation, antivirus scanning and virtual memory paging. In a VDI environment, these tasks place a heavy tax on the shared storage infrastructure as each user, application and desktop competes for limited IO capacity (measured in input/output operations per second, or IOPS). Without adequate storage IOPS, applications and virtual machines take longer to boot and applications respond sluggishly, leaving users frustrated.

Atlantis ILIO is a revolutionary approach to deploying VDI that makes the Windows operating system perform well without massive investments in storage infrastructure. Atlantis ILIO boosts VDI desktop performance by transparently offloading IO intensive Windows operations from VDI shared storage. ILIO terminates operating system and application traffic on the same rack as the VDI servers before traffic hits the storage fabric. The result is a 10x performance increase for VDI desktops, which translates into faster VM boot times, logon, and overall application performance. Atlantis ILIO also eliminates VDI IO bottlenecks caused by boot storms, logon storms and antivirus scanning.
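
The boot-storm problem is easiest to see with rough IOPS arithmetic: per-desktop boot IO multiplied by the number of desktops booting at once, compared against what the shared array can deliver. Every figure in the sketch below is an assumption for illustration, not an Atlantis or array specification.

```python
"""Rough boot-storm IOPS estimate. All per-desktop and array figures are assumptions."""

def storm_iops(desktops: int, iops_per_desktop: int) -> int:
    return desktops * iops_per_desktop

if __name__ == "__main__":
    desktops_booting = 500
    boot_iops_per_desktop = 50          # assumed Windows boot-time IOPS per desktop
    array_iops = 10_000                 # assumed capability of the shared array

    demand = storm_iops(desktops_booting, boot_iops_per_desktop)
    print(f"Boot storm demand: {demand:,} IOPS vs array capability {array_iops:,} IOPS")
    print(f"Shortfall factor: {demand / array_iops:.1f}x "
          f"-> desktops queue on storage unless IO is offloaded closer to the hosts")
```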

Source: Atlantis Computing

15 Reasons to Consider Virtualization

2017-07-27T00:01:11+00:00 June 17th, 2010|Uncategorized|

Virtualization is a computing environment that allows multiple “virtual machines” to reside and run concurrently on a single hardware platform.

A virtual machine acts like a server, but it is implemented in software rather than in additional hardware. In essence, virtualization separates the operating system from the underlying hardware, providing better IT resource utilization, greater application flexibility and hardware independence.

Several virtual machines, each running its own operating system, can operate in isolation side by side on the same physical hardware, and each “virtual machine” has, in essence, its own set of virtual hardware. The guest operating system therefore sees a controlled, consistent set of hardware regardless of the physical components underneath.

In addition, the virtualization layer controls the CPU, memory and storage assigned to each “virtual machine” and allows a running operating system to move from one physical machine to another. Because each “virtual machine” is encapsulated in a small set of files, it can be quickly saved, copied and migrated to another host, providing zero-downtime maintenance and controlled workload consolidation.
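
As a toy illustration of that encapsulation, the sketch below treats a virtual machine as nothing more than a directory of files, so a backup or a cold migration amounts to a file copy. The file names are illustrative, not a real hypervisor layout.

    # Conceptual sketch only: a VM modelled as the set of files that encapsulate it.
    import shutil
    from pathlib import Path

    def copy_vm(vm_dir: Path, target_dir: Path) -> Path:
        """Backing up or cold-migrating a VM is, at heart, copying its files."""
        return Path(shutil.copytree(vm_dir, target_dir / vm_dir.name))

    # Example layout (illustrative):
    #   web-vm/
    #     web-vm.vmx    configuration, i.e. the virtual hardware definition
    #     web-vm.vmdk   virtual disk contents
    # copy_vm(Path("/vms/web-vm"), Path("/backup"))   # backup is a file copy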

Saving Time and Cutting Down Costs Are Just the Beginning

Over the past several years, data centers have moved toward horizontal scaling, or decentralization, because large centralized servers were viewed as too expensive to acquire and maintain. As a result, applications were moved from a single large shared server onto their own individual machines.

Although decentralization made it easier to maintain each application and improved security by isolating one system from another on the network, it also increased power consumption, floor-space requirements and management effort.

According to xensource.com, “… these areas have been known to account for up to $10,000 in annual maintenance cost per machine and decrease the efficiency of each machine by 85% due to idle time.”

The long and the short of it is that virtualization is a midpoint between centralized and decentralized environments. You no longer need to purchase a separate piece of hardware for each application. When each application is given its own operating environment on a single piece of hardware, you reap the security and stability benefits of isolation while taking full advantage of the hardware.

Virtual machines are also isolated from the host, so if one virtual machine crashes, the other environments remain unaffected. Data does not leak across virtual machines, and applications can communicate with one another only over an explicitly configured network connection. Each virtual machine is stored as a self-contained set of files, which makes backups, copies and moves simple.

Why Use Virtualization? Let Me Count The Ways …

Because virtualization decouples the operating system from the hardware, there are several important reasons to consider it:

  1. Data center consolidation and decreased power consumption
  2. Simplified disaster recovery solutions
  3. The ability to run Windows, Solaris, Linux and NetWare operating systems and applications concurrently on the same server
  4. Increased CPU utilization, from 5-15% to 60-80%
  5. The ability to move a “virtual machine” from one physical server to another without reconfiguring it, which is valuable when migrating off hardware that is out of date or has failed
  6. Better security through isolation: each “virtual machine” is separated from the others on the network, and if one crashes it does not affect the other environments
  7. The ability to capture (snapshot) the entire state of a “virtual machine” and roll back to that configuration, which is ideal for testing and training environments (see the sketch after this list)
  8. Centralized management of the IT infrastructure
  9. A “virtual machine” can run on any x86 server
  10. A “virtual machine” can access all of the physical host’s hardware
  11. The ability to re-host legacy operating systems, such as Windows NT Server 4.0 and Windows 2000, on new hardware and operating systems
  12. The ability to designate multiple “virtual machines” as a team that administrators can power on and off, suspend or resume as a single object
  13. The ability to simulate hardware, for example mounting an ISO file as a CD-ROM or a .vmdk file as a hard disk
  14. The ability to configure network adapters to use NAT through the host machine rather than bridging, which would require an IP address for each machine on the network
  15. The ability to test live CDs without first burning them to disc or rebooting the computer
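
For reason 7, a toy Python model of snapshot-and-rollback is shown below. It is purely conceptual: real hypervisors capture disk, memory and device state, not a dictionary.

    # Toy model of capturing a VM's state and rolling back to it later.
    import copy

    class ToyVM:
        def __init__(self):
            self.state = {"disk": {}, "memory": {}, "powered_on": False}
            self.snapshots = {}

        def snapshot(self, name):
            # Capture the entire current state so it can be restored later.
            self.snapshots[name] = copy.deepcopy(self.state)

        def rollback(self, name):
            # Discard everything done since the snapshot; ideal for test labs.
            self.state = copy.deepcopy(self.snapshots[name])

    vm = ToyVM()
    vm.snapshot("clean-install")
    vm.state["disk"]["c:/temp/test.dat"] = "scratch data from a test run"
    vm.rollback("clean-install")
    assert "c:/temp/test.dat" not in vm.state["disk"]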

The time for virtualization has come and the possibilities are endless.

The benefits range from server consolidation and containment to minimized downtime, easier recovery and elegant solutions to many security problems. In a test and development environment, for instance, each developer can have their own virtual machine, isolated from other developers’ code and from production environments.

For the modern data center, virtualization is definitely the way to go.

Source: trainsignaltraining.com

Deliver Desktops as a Managed Service with VMware

2017-07-27T00:01:11+00:00 June 9th, 2010|Uncategorized|

Streamline Desktop Management and Deployment

Instantly provision desktops to local and remote users from the datacenter with VMware View. Standardize desktop deployments by creating images of approved desktops, then automatically provision as many as you need. Easily manage groups of users from a single desktop image. Delivering desktops as a managed service with VMware View typically lets you reduce your overall desktop costs by 50% by centralizing management and resources and removing IT infrastructure from remote offices.
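
Conceptually, image-based provisioning stamps many desktops out of one approved image, so a change to the image changes the definition for the whole group. The Python sketch below models that idea with hypothetical GoldenImage and Desktop classes; it is not the VMware View API.

    # Conceptual model of provisioning a pool of desktops from one golden image.
    from dataclasses import dataclass, field

    @dataclass
    class GoldenImage:
        name: str
        apps: list = field(default_factory=list)

    @dataclass
    class Desktop:
        user: str
        image: str
        apps: list

    def provision(image: GoldenImage, users: list) -> list:
        # Every desktop starts from the same standardized image.
        return [Desktop(user=u, image=image.name, apps=list(image.apps)) for u in users]

    corp = GoldenImage("win7-corp-v12", apps=["Office", "VPN client"])
    pool = provision(corp, ["alice", "bob", "carol"])
    print(len(pool), "desktops provisioned from", corp.name)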

Improve User Satisfaction

Provide a superior end-user desktop experience over any network using VMware View with PCoIP, a high-performance display protocol. Virtual desktop users can instantly access their personalized desktops, including data, applications and settings, from anywhere, without suffering the lag times of earlier technologies. Play rich media content, use multiple-monitor configurations and seamlessly access attached peripherals.

Standardize Your Virtualization Platform

Enjoy a unified management experience through VMware View’s integration with VMware vSphere, the only virtualization platform tuned and optimized for desktops. Dynamically load balance virtual desktop machines to optimize computing resources. Power on thousands of desktops at once, without any performance degradation. Bring the power of the datacenter to the desktop and use a common platform to manage both servers and desktops from the datacenter to the cloud.

Easily Deliver Desktops to Remote Users

Deliver cost-effective virtual desktops and applications securely to call centers, government agencies, healthcare providers, branch campuses or offshore facilities. Give partner organizations and independent service providers temporary access to corporate desktops through a secure network connection. Perform maintenance and troubleshoot user issues without engaging in expensive on-site visits or maintaining remote IT staff and infrastructure. Deliver a high performance desktop experience to users—wherever they are.

Deliver Built-in Business Continuity for the Desktop

Meet work-from-home mandates in the event of quarantine or natural disaster with an emergency preparedness policy and VMware View 4, the only desktop virtualization solution that provides built-in business continuity and disaster recovery for the desktop at no additional cost.

Provide high availability for your desktops within your virtualized environment without the cost or complexity of traditional clustering solutions, and back them up nightly in the datacenter as a business process. VMware View with VMware vSphere for Desktops lets you extend enterprise features such as VMotion, High Availability and Distributed Resource Scheduler to the desktop, ensuring an “always on” desktop. With VMware View you can:

  • Let your desktops take advantage of key features in the vSphere platform such as VMotion, High Availability, Distributed Resource Scheduler and Consolidated Backup.
  • Access virtual desktops from a wide variety of devices over any network connection to ensure business continuity.
  • Quickly provision virtual desktops to remote users or groups of users in the event of a pandemic or catastrophic event when it is difficult for workers to access the office.
  • Keep desktops running even when server hardware goes down with automated failover and recovery.
  • Shift desktop computing resources automatically as user needs and application loads change with dynamic load balancing.
  • Give remote users a high performance desktop experience with VMware View with PCoIP while protecting access to sensitive data.

Ensure Security with Centralized Control and Management

Protect your organization’s information assets and ensure compliance with industry and government regulations such as HIPAA, SOX, government mandates on settings for PCs and more with desktop virtualization. VMware desktop virtualization solutions enable enterprises to secure data, centralize control of desktops and software access and maintain compliance without sacrificing end user needs.

Ensure Corporate Security Compliance with End User Restricted Entitlements

Ensure appropriate access for end users with specified credentials by restricting access to desktops based on the authentication method. In some business environments, this may mean, for example, allowing access only to users who have authenticated with a smart card. VMware desktop virtualization solutions are modular by design, enabling integration with third-party software so that security requirements can be extended further.
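
The idea can be pictured as a simple policy lookup keyed on the authentication method. The sketch below is illustrative only and does not represent VMware View's actual entitlement interface; the pool names and methods are made up.

    # Illustrative policy check: gate desktop access on how the user authenticated.
    ALLOWED_METHODS_BY_POOL = {
        "finance-desktops": {"smart_card"},             # high-sensitivity pool
        "general-desktops": {"smart_card", "password"},
    }

    def may_connect(pool: str, auth_method: str) -> bool:
        return auth_method in ALLOWED_METHODS_BY_POOL.get(pool, set())

    assert may_connect("finance-desktops", "smart_card")
    assert not may_connect("finance-desktops", "password")  # denied: wrong method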

Maintain Locked-down Desktops without Restricting Application Access

VMware virtualized applications run entirely in user mode and can be deployed to fully locked down desktops. No administrative rights are needed to execute these virtualized applications. Additionally, applications can be encapsulated in a fully encrypted file for deployment. IT staff can be assured security compliance needs are met while enabling end users to easily and safely access business critical applications.

Manage and Track Software Application Usage & Deployment

Monitor software license usage and ensure compliance with vendors without imposing new process or enforcement tools. VMware desktop virtualization solutions provide out-of-the-box integrations with existing policy-based usage mechanisms such as Active Directory and LDAP. VMware View seamlessly plugs into existing management frameworks for discovery of software assets and systems, so you can use your existing monitoring and distribution tools to maintain single-pane-of-glass control of software license compliance.

Decouple hardware, applications and operating systems to eliminate compatibility issues during OS migrations and upgrades. Get the benefits of newer platforms without paying heavy support costs to test and troubleshoot integration issues. Run a single image of a new operating system across a variety of hardware types. Encapsulate older applications and even run multiple versions of the same application side by side.

Source: VMware
