Application Strategy in the New Enterprise…

December 14th, 2011

Why is the right application strategy important?

Whether it is physical or virtual, the endpoint device won’t matter if you can’t get to your data, and it’s through applications that you get to your critical data.  But managing applications can be an administrative burden.  How can you take application administration to the next level?

The right application virtualization tool will:

  • Decrease your time to market by 20-40%
  • Decrease your software license spend by 30-50%
  • Reduce or eliminate your need to rewrite legacy applications
  • Allow central management of all your apps
  • Improve software license management and compliance

For example, consider the common case of resetting a hung application.  Without an application virtualization tool, the average help desk ticket for an app reset costs $345.  With the proper tool, an app reset can be done in 18 seconds, virtually eliminating that cost.  The benefit is two-fold: decreased end-user downtime and decreased IT support costs.  But just having a tool to handle these situations does not, by itself, solve all your problems; you must have a strategy.
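To make the math concrete, here is a back-of-envelope sketch in Python using the figures above. The annual ticket volume and the end-user hourly cost are hypothetical assumptions; substitute your own numbers.

```python
# Rough model of help desk savings from application virtualization.
# The $345 ticket cost and 18-second reset come from the article above;
# TICKETS_PER_YEAR and HOURLY_COST are hypothetical placeholders.

TICKET_COST = 345.00        # average cost of an app-reset help desk ticket ($)
RESET_SECONDS = 18          # time to reset a virtualized app
HOURLY_COST = 40.00         # assumed loaded cost of an end user's hour ($)
TICKETS_PER_YEAR = 500      # assumed annual volume of app-reset tickets

cost_without_tool = TICKETS_PER_YEAR * TICKET_COST
cost_with_tool = TICKETS_PER_YEAR * (RESET_SECONDS / 3600) * HOURLY_COST

print(f"Annual reset cost without a tool: ${cost_without_tool:,.2f}")
print(f"Annual reset cost with a tool:    ${cost_with_tool:,.2f}")
print(f"Estimated annual savings:         ${cost_without_tool - cost_with_tool:,.2f}")
```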

The right application strategy requires a 3-pronged approach

To arrive at an optimized virtual user-centric experience requires a three-part strategic focus that encompasses the following:

  1. Desktop Strategy
  2. Application Strategy
  3. User Strategy 

Each of these pieces is equally important.  While in some cases you can have an application strategy without a desktop strategy, you should never have a desktop strategy without an application strategy.  From this perspective, it becomes clear that an application strategy can actually be more important than a desktop strategy.  

How it can go wrong

My thoughts based on what I see from a sales and trending perspective:

  • Over time, server virtualization created such a positive ROI for both capex and opex that it was assumed desktop virtualization would be another no-brainer to implement.  Companies that have embarked on VDI pilots and initiatives have quickly become disillusioned, realizing that the efficiencies gained at the server level do not necessarily apply at the desktop. Eventually, they are forced to rethink their strategy.
  • Companies that embark on Win7 migrations – and do not take the time to make a strategic decision about how they will manage their applications – may become disillusioned as well, as they feel the pain of long cycles to virtualize their applications for a new OS and new endpoint device.  And in addition to the long preparation cycles, there are the inevitable challenges with legacy apps and conflicting apps.
  • Aging infrastructures and desktop devices create projects driven by an “endpoint strategy” (translated as endpoint device only), where the only thing taken into consideration is the device.  The whole strategy revolves around choosing thin client, zero client, fat client, etc., without thinking about the delivery method or the user profile.

Overall, as companies make strategic decisions about their Virtual Desktop Strategy, there can be tunnel vision about the desktop piece as the only strategic piece, with applications and users being an afterthought.

How to make it right

Herein lies my mission:  To educate those embarking on a VDI initiative about the importance of choosing the right application strategy.

 

Citrix XenClient – What is XenClient?

December 8th, 2010

What is Citrix XenClient?

Virtual desktops… to go

XenClient is a client-side hypervisor that enables virtual desktops to run directly on client devices. By separating the operating system from the underlying hardware, desktop images can now be created, secured, deployed and moved across any supported hardware, greatly reducing the maintenance burden on IT and simplifying disaster recovery for laptop users. Optimized for Intel vPro, XenClient delivers the high-definition experience that users expect.

XenClient Video

Learn how XenClient can bring security, performance and flexibility to both IT and your laptop users, extending the benefits of desktop virtualization to users who need to work from anywhere, at any time.

Citrix XenDesktop

December 8th, 2010

What is Citrix XenDesktop?

Citrix XenDesktop transforms Windows desktops into an on-demand service that can be accessed by any user, on any device, anywhere, with unparalleled simplicity and scalability. Whether workers are using the latest tablets, smartphones, laptops or thin clients, XenDesktop can quickly and securely deliver virtual desktops and applications to them with a high-definition user experience.

Why use XenDesktop to transform your desktop computing environment?

Innovative technologies in XenDesktop enable you to turn your vision for a more flexible, mobile and agile desktop computing environment into a reality.

Citrix Receiver, a lightweight universal client, enables any PC, Mac, smartphone, tablet or thin client to access corporate applications and desktops—easily and securely.

Citrix HDX technology delivers a rich, complete user experience that rivals a local PC, from optimized graphics and multimedia, to high-definition webcam, broad USB device support and high-speed printing.

Going far beyond the limitations of VDI-only solutions, Citrix FlexCast delivery technology gives each type of worker in your enterprise the virtual desktop that’s right for them—hosted or local, online or offline, standardized or personalized—through a single solution.

Members of your workforce can access any Windows, web or SaaS application on demand through a single interface. Simple, self-service provisioning reduces desktop management costs and complexity.

Built on an open, scalable, proven architecture, XenDesktop gives you the simplicity, flexibility and scalability to meet any requirement, while fully leveraging current and future investments.

XenDesktop Video

Citrix Extends Client Virtualization

August 30th, 2010

Virtualization software maker Citrix Systems last week unveiled the world’s first bare-metal client hypervisor, announced a new version of its server virtualization platform and welcomed news from several partners.

Citrix used its annual Synergy show, held this year in San Francisco, to let partners and customers know that it is aiming to extend its ecosystem.

The new XenClient product is a “super fast, 64-bit, bad-to-the-bone hypervisor — a true Type 1 hypervisor that bonds to the laptop and delivers a bare metal experience to the apps, OS and things that run on top of it,” said Citrix CEO Mark Templeton, speaking in his keynote address. The company made an “express kit” trial version available for download and promised general availability later this year.

“Desktop virtualization is going mainstream,” Templeton said. “It’s becoming more and more of the fabric of enterprise computing.” Computer makers Dell and Hewlett-Packard disclosed plans at the show to roll out new laptops designed to support the new XenClient hypervisor. The bare-metal client hypervisor is essentially the same technology used on servers, but designed for a client machine.

Although it’s possible to use a server hypervisor on a client machine, it’s not made for that hardware, hence it lacks support for USB devices, graphics accelerators and other features essential to the client. Templeton declared that XenClient would “change the game” by adding a local hypervisor to the laptop, allowing a single-client box to run multiple VMs.

The advantages of running multiple VMs on a single corporate laptop are myriad: A user can, for example, keep personal computing files and apps on a corporate laptop securely isolated in a separate VM. IT can provide a temporary employee or contractor with a VM loaded with corporate apps.

And client-side hypervisors make provisioning to mobile client machines much simpler. “People forget that last [point],” said Ovum senior analyst Tim Stammers. “But if you talk to IT departments, they’ll tell you making images for machines is a real pain. The local hypervisor solves that problem.”

Both Citrix and rival virtualization company VMware promised in 2008 to deliver a client-side hypervisor in 2009. “The fact that they were both late shows that this is very hard stuff,” Stammers said.

Native Bare Metal Hypervisor

XenClient is a Type 1 hypervisor, a native hypervisor that runs on bare metal. Existing Type 2 hypervisors, which have been around for a long time and allow users to do things like run Windows on a Mac (such as Player and Parallels), aren’t as secure as the native versions, Stammers said. Type 2s run on an operating system that can be hacked.

XenClient was developed in collaboration with chip maker Intel, and optimized for Intel Core 2 desktops and laptops with vPro technology. The hypervisor serves as “a foundation for centrally managed OS/user environments to be streamed, cached and executed locally on desktop/laptop devices, including off-network mobility,” the two companies said in a statement.

According to sources close to the company, VMware is concentrating on refining its Type 2 virtualization technology, rather than pursuing a bare-metal client strategy. VMware had not returned calls for comment at press time. But Stammers believes that VMware will probably come out with a native client hypervisor later this year.

Conference attendee Larry Cohen, a systems administrator for a Silicon Valley manufacturer he preferred not to name, was impressed by the XenClient technology, but said he wished the company would focus more on XenCenter, the company’s XenServer management console. In particular, he’d like to see a better event viewer and logging capabilities. “It would make troubleshooting issues on the physical hardware a lot easier,” Cohen said.

Server Upgrade

Citrix also launched XenServer 5.6 at the show. The latest version of its server virtualization platform mainly fills in some gaps in the previous version, Stammers said. Memory management was one of the key enhancements, he said, but he also pointed to new features in the Enterprise and Platinum editions, including automatic workload balancing, power management and storage integration with StorageLink, Citrix’s platform for linking server virtualization to storage resources.

“This market has become a constant race to add tools,” Stammers said. “I often say that server virtualization gives you great flexibility, but flexibility can tie you in knots. So we do need these tools, and different shops need different tools.”

XenServer 5.6 comes in four editions: Free, Advanced, Enterprise and Platinum. Each edition provides additional features.

The free version of XenServer has become an “entryway for new virtualization customers” for Citrix, said IDC analyst Al Gillen. IDC is seeing a growing number of infrastructure vendors using the “free-plus-premium” offering strategy (sometimes called “freemium”) to build market share, Gillen said.

Stammers applauded both Citrix releases, but said that the future of XenServer is uncertain. Increasingly, this market looks like it’s going to come down to Microsoft’s Hyper-V and VMware ESX, he said. He points to statements by Citrix executives, who, as recently as 18 months ago, said that in the future most of Citrix’s business will come from the sale of tools used to manage Hyper-V.

HP Readies XenClient Notebooks

HP made a splash at the show with demos of the industry’s first Citrix-ready XenClient platforms. “Using a local hypervisor, the ability to bring the virtual machine down and run it locally, allows you to be productive whether you’re connected or not,” said Jeff Groudan, director of thin client solutions for HP’s personal systems group. “So you have the mobility, but also a lot of the management capabilities inherent in VDI (virtual desktop infrastructure), such as being able to manage the image centrally.”

HP also gave a nod to Adobe’s recently beleaguered Flash technology with an enhancement of the Remote Desktop Protocol (RDP) 6. RDP 6 is one of the most common VDI protocols used by VMware View and Microsoft Remote Desktop Services environments, but it doesn’t support Flash natively. The RDP Enhancements for Flash is a component that runs on the thin client machine and allows the server to redirect the Flash content down to the client, which also decompresses the file.

“One of the challenges of client virtualization, whether it’s Citrix or someone else’s VDI environment, is they don’t handle Flash very elegantly,” Groudan said. “The experience may not be very good, or it may overly load down the server when they do the decompression for the thin client. The RDP Enhancements fix that problem.

“It was clear to us that the complexity of client virtualization has been an inhibitor of growth in this area,” Groudan added. “So we have a laser focus on simplifying the process, but also on optimizing the end-user experience.”

HP also unveiled VDI reference architectures for XenDesktop and XenServer at the Synergy event. Joseph George, client virtualization business lead for HP’s infrastructure software and blades division, said the reference architectures are the fruit of his company’s ongoing strategy of “converged infrastructure.” HP believes that that strategy can accelerate the delivery of client virtualization.

“We’ve got the best portfolio out there when it comes to converged infrastructure and client virtualization,” he said. “And the expertise we have in our ranks has allowed us to put together these new reference architectures.”

The new HP and Citrix VDI reference architectures provide the functionality of a stand-alone desktop, George said, while enabling unified management of both physical and virtual infrastructures from the same centralized console.

The HP/Citrix VDI solution supports more than 1,000 users of XenDesktop 4.0, XenServer 5.5 or Provisioning Server 5.1, George said. It leverages HP BladeSystem’s c-Class or HP ProLiant servers with HP Flex-10 technology, HP storage and networking and a choice of HP t5740 or HP t5325 thin client machines.

The big gadget news at the event came from Dell CEO and founder Michael Dell, who surprised conference attendees by officially unveiling his company’s new mini-tablet PC during his keynote. It was actually more of a teaser than an unveiling of the device, a MID (mobile Internet device) dubbed The Streak, which Dell casually pulled from his pocket while onstage.

“The device we use to access our information shouldn’t matter anymore,” Dell said. “Whether it’s a phone, or a notebook, a netbook or a desktop PC, your client image can follow you everywhere.” Dell then took the wraps off The Streak, which was loaded with the Android OS and Citrix’s virtual desktop software. Dell said The Streak would be available first in Europe in June, with a U.S. launch planned for later this summer. The carrier will be AT&T.

  • By John K. Waters
    • 05/17/2010

 

Source: Redmondmag.com

Running IE v6 on Windows 7 with Symantec Workspace Virtualization

August 6th, 2010

Wednesday, August 4, 2010

If you are looking for a way to run IE v6 on Windows 7 desktops, take a look at Symantec’s Endpoint Virtualization technologies. The software includes two modules that are free of charge: the Workspace Virtualization administration tool and the Browser Selector tool. They are fairly simple to set up and flexible enough to isolate individual applications from the host Windows 7 OS.
Symantec Endpoint Virtualization
http://www.symantec.com/connect/endpoint-virtualization
Entry-level pricing starts at $45 per node

Source:  WebInformant.blogspot.com  By: David Strom

Virtual Servers, Real Growth

July 12th, 2010

 

If you follow tech industry trends, you’ve probably heard of cloud computing, an increasingly popular approach of delivering technology resources over the Internet rather than from on-site computer systems.

Chances are, you’re less familiar with virtualization — the obscure software that makes it all possible.

The concept is simple: rather than having computers run a single business application — and sit idle most of the time — virtualization software divides a system into several “virtual” machines, all running software in parallel.

The technology not only squeezes more work out of each computer, but makes large systems much more flexible, letting data-center techies easily deploy computing horsepower where it’s needed at a moment’s notice.

The approach cuts costs, reducing the amount of hardware, space and energy needed to power up large data centers. Maintaining these flexible systems is easier, too, because managing software and hardware centrally requires less tech support.

The benefits of virtualization have made cloud computing an economical alternative to traditional data centers.

“Without virtualization, there is no cloud,” said Charles King, principal analyst of Pund-IT.

That’s transforming the technology industry and boosting the fortunes of virtualization pioneers such as VMware (NYSE:VMW) and Citrix Systems (NMS:CTXS), two of the best-performing stocks in IBD’s specialty enterprise software group. As of Friday, the group ranked No. 24 among IBD’s 197 Industry Groups, up from No. 121 three months ago.

1. Business

Specialty enterprise software represents a small but fast-growing segment of the overall enterprise software market, which according to market research firm Gartner is set to hit $229 billion this year.

As with most software, the segment is a high-margin business. With high upfront development costs but negligible manufacturing and distribution expenses, specialty software companies strive for mass-market appeal. Once developers recoup their initial development costs, additional sales represent pure profit.

Software developers also make money helping customers install and run their software, another high-margin business.

But competition is fierce. Unlike capital-intensive businesses, software companies require no factory, heavy equipment, storefront or inventory to launch. Low barriers to entry mean a constant stream of new competitors looking to out-innovate incumbents.

In addition to the virtualization firms, notable names in the group include CA Technologies (NMS:CA) and Compuware (NMS:CPWR).

All offer infrastructure software to manage data centers.

“Big-iron” mainframe computers began using virtualization in the 1970s, around the time when CA and Compuware were founded.

In the late 1990s, VMware brought the technology to low-cost systems running ordinary Intel (NMS:INTC) chips. VMware has since emerged as the dominant player in virtualization.

Citrix has added a twist to the concept, virtualizing desktop computers. Rather than installing workers’ operating system and applications on hundreds of PCs spread across the globe, companies can use the technology to run PCs from a bank of central servers. Workers, who access their virtual PCs over the Internet, don’t know the difference.

Microsoft (NMS:MSFT) has jumped in with its own virtualization product, Hyper-V, which it bundles free into Windows Server software packages. Oracle (NMS:ORCL) and Red Hat (NYSE:RHT) have launched virtualization products as well.

Meanwhile, CA and Compuware are racing to move beyond their mainframe roots to support virtualization and cloud-computing-enabled data centers. In February, CA said it would buy 3Tera to build services and deploy applications aimed at the cloud-computing market.

And Compuware bought privately held Gomez, Inc. last fall to manage cloud application performance.

Name Of The Game: Innovate. With a fast-moving market and steady influx of new competitors, keeping customers happy with good service and money-saving breakthroughs is vital.

2. Market

Nearly everyone who runs a corporate computer system is a potential buyer of virtualization software. Companies ramping up their information-technology purchases use the software to manage their sprawling infrastructure; others with limited budgets use it to squeeze more out of their existing systems.

Sales of server-virtualization software are set to grow 14% this year to $1.28 billion, according to a report by Lazard Capital Markets. Sales of software to manage virtual environments will grow 44% in 2010 to $1.88 billion.

Desktop virtualization revenue will rise 184% this year to $847.8 million. Citrix has the edge in this budding market with its XenDesktop product.

VMware is dominant among large enterprises, controlling about 85% of the server virtualization market. Microsoft is favored by small and midsize companies.

Virtualization is seen as “a strategic asset” for enabling cloud computing, and continues to gain momentum, says Lazard analyst Joel Fishbein.

VMware has the early-mover advantage in this market with its vSphere platform and has stayed ahead by adding new features such as data security and disaster recovery, analysts say.

But Citrix is partnering closely with Microsoft to take on VMware in virtualization.

3. Climate

Competition is heating up as companies scramble to adopt virtualization. Before 2009, just 30% of companies used virtualization, says analyst Fishbein. This year, that will double to 60%. Most of the gain is coming from small and midsize customers.

In addition, virtual servers are soon expected to more than double as a percentage of the overall server workload, from 18% today to 48% by 2012.

VMware says it can stay a step ahead of the pack by building new features into its products, says Dan Chu, VMware’s vice president of cloud infrastructure and services.

“We have a large technology lead with what we enable for our customers,” Chu said. “We are several years ahead of what the others are doing.”

Citrix CEO Mark Templeton says his firm’s broadening strategy — offering a variety of products with multiple licensing options and distribution channels — will grow sales.

“What’s going on is a massive shift in how computing gets delivered,” Templeton said. “In an environment that’s changing so dramatically, the highest-risk thing you can do is not act.”

4. Technology

The first virtualization boom stemmed from a shift over the last decade away from big expensive mainframes and minicomputers to massive banks of cheap Intel-powered machines. Virtualization gave these low-cost systems some of the high-end features of their pricier counterparts.

Virtualization software makers are betting on a second wave of growth fueled by the industrywide shift to cloud computing.

Technology managers use virtualization to run cloud computing in their own data centers. And large tech vendors such as Microsoft use the technology for cloud-computing services they sell to customers.

Dividing computers into isolated virtual machines gives cloud service providers the benefits of shared computing resources without the security downsides.

VMware has the early lead in virtualization. But the technology is quickly becoming a commodity as Microsoft and others bundle it into their broader platforms.

“VMware is known as a virtualization company, and Microsoft is a platform company,” said David Greschler, who heads up Microsoft’s virtualization efforts. “Their strategy is to sell virtualization, but our strategy is to make virtualization available as part of a larger platform at no extra cost.”

At the same time, a shift toward a world of cloud-computing services hosted by the likes of Microsoft, Amazon.com (NMS:AMZN) and Google (NMS:GOOG) could lead to fewer companies purchasing virtualization software themselves.

Source: Investor’s Business Daily

Avoiding the Pitfalls of Virtualization

July 8th, 2010

Virtualization is rarely as simple to implement and manage as it has been made out to be. Here’s what to look out for when planning your organization’s next virtualization project.

No technology in recent memory has come with as many promises as server virtualization. As I’m sure you know, all of these promises can be broken down into one simple concept: Virtualization allows you to consolidate a bunch of underutilized servers into a single server, which allows the organization to save a bundle on maintenance costs.

So with server virtualization promising such a dramatic boost to an organization’s return on investment (ROI), even in a bad economy, what’s not to like? What many organizations are finding out is that in practice, virtualization is rarely as simple to implement and manage as it has been made out to be. In fact, there are numerous potential pitfalls associated with the virtualization process. In this article, I want to take a look at some of these pitfalls, and at how they can impact an organization.

Subpar Performance
While it’s true that virtualizing your data center has the potential to make better use of server resources, any increase in ROI can quickly be consumed by decreased user productivity if virtual servers fail to perform as they did prior to being virtualized. In fact, it has been said that subpar performance is the kiss of death for a virtual data center.

So how do you make sure that your servers are going to perform as well as they do now when virtualized? One common solution is to work through some capacity planning estimates, and then attempt to virtualize the server in an isolated lab environment. But this approach will only get you so far. Lab environments do not experience the same loads as production environments, and while there are load simulation tools available, the reliability of these tools decreases dramatically when multiple virtual servers are being tested simultaneously.

While proper capacity planning and testing are important, you must be prepared to optimize your servers once they have been virtualized. Optimization means being aware of what hardware resources are being used by each virtual server, and making any necessary adjustments to the way hardware resources are distributed among the virtual machines (VMs) in an effort to improve performance across the board for all of the guest servers on a given host.

Network Management Difficulties
When organizations initially begin to virtualize their servers, they’re often surprised by how difficult it can be to manage those virtual servers using their legacy network management software. While any decent network management application will perform application metering, compile a software inventory, and allow remote control sessions for both physical and virtual servers, there are some areas in which traditional network management software is not well-equipped to deal with VMs.

One example of such a problem is that most of the network management products on the market are designed to compile a hardware inventory of all managed computers. If such an application is not virtualization-aware, then the hardware inventory will be misreported.
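As a rough illustration of what “virtualization-aware” means at the inventory level, the sketch below checks the manufacturer string a machine reports before trusting its hardware inventory. The vendor markers are illustrative assumptions, not an exhaustive list, and real management products use far more signals.

```python
# Minimal sketch: decide whether a managed node is a VM before filing
# its "hardware" inventory. Vendor strings vary by hypervisor, so this
# marker list is illustrative only.
import platform
import subprocess

VM_MARKERS = ("vmware", "xen", "kvm", "qemu", "virtualbox", "innotek",
              "microsoft corporation")   # Hyper-V guests report this vendor

def manufacturer() -> str:
    """Return the reported system manufacturer, lowercased."""
    if platform.system() == "Windows":
        out = subprocess.run(["wmic", "computersystem", "get", "manufacturer"],
                             capture_output=True, text=True).stdout
        lines = [line.strip() for line in out.splitlines() if line.strip()]
        return lines[1].lower() if len(lines) > 1 else ""
    try:
        # On Linux the kernel exposes DMI data under /sys.
        with open("/sys/class/dmi/id/sys_vendor") as f:
            return f.read().strip().lower()
    except OSError:
        return ""

def is_virtual() -> bool:
    vendor = manufacturer()
    return any(marker in vendor for marker in VM_MARKERS)

if __name__ == "__main__":
    print("virtual" if is_virtual() else "physical or unrecognized")
```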

Likewise, some of the network management applications on the market track server performance, but performance metrics can be greatly skewed in a virtual server environment. While the skewed data may not be a problem in and of itself, it is important to remember that some network management products contain elaborate alerting and automated remediation mechanisms that engage when certain performance problems are detected. These types of mechanisms can wreak havoc on virtual servers.

Finally, legacy network management software is not able to tell you on which host machine a virtual server is currently running. It also lacks the ability to move virtual servers between hosts. While most virtualization products come with their own management consoles, it’s far more efficient to manage physical and virtual servers through a single console.

Virtual Server Sprawl
So far I have talked about various logistical and performance issues associated with managing a virtual data center. Believe it or not, though, a virtual server deployment that works a little too well can be just as big a problem. Some organizations find virtualization to be so effective that virtual server sprawl ends up becoming an issue.

One organization deployed so many virtual servers that it ended up with more server hardware than it had before it decided to consolidate its servers. This completely undermined its stated goal of reducing hardware costs.

For other organizations, virtual machine sprawl has become a logistical nightmare, as virtual servers are created so rapidly that it becomes difficult to keep track of each one’s purpose, and of which ones are currently in use.

There are some key practices to help avoid virtual server sprawl. One of them is helping management and administrative staff to understand that there are costs associated with deploying virtual servers. Many people I have talked to think of virtual servers as being free because there are no direct hardware costs, and in some cases there’s no cost for licensing the server’s OS. However, most virtual servers do incur licensing costs in the form of anti-virus software, backup agents and network management software. These are in addition to the cost of the license for whatever application the virtual server is running. There are also indirect costs associated with things like system maintenance and hardware resource consumption.

Another way to reduce the potential for VM sprawl is educating the administrative staff on some of the dangers of excessive VM deployments. By its very nature, IT tends to be reactive. I have lost count of the number of times when I have seen a virtual server quickly provisioned in response to a manager’s demands. Such deployments tend to be performed in a haphazard manner because of the pressure to bring a new virtual server online quickly. These types of deployments can undermine security, and may impact an organization’s regulatory compliance status.

Learning New Skills
One potential virtualization pitfall often overlooked is the requirement for the IT staff to learn new skills.

“Before deploying virtualization solutions, we encourage our customers to include storage and networking disciplines into the design process,” says Bill Carovano, technical director for the Datacenter and Cloud Division at Citrix Systems Inc. “We’ve found that a majority of our support calls for XenServer tend to deal with storage and networking integration.”

Virtualization administrators frequently find themselves having to learn about storage and networking technologies, such as Fibre Channel, that connect VMs to networked storage. The issue of learning new skill sets is particularly problematic in siloed organizations where there’s a dedicated storage team, a dedicated networking team and a dedicated virtualization team.

One way Citrix is trying to help customers with such issues is through the introduction of a feature in XenServer Essentials called StorageLink. StorageLink is designed to reduce the degree to which virtualization and storage administrators must work together. It allows the storage admins to provide virtualization admins with disk space that can be sub-divided and used on an as-needed basis.

In spite of features such as StorageLink, administrators in siloed environments must frequently work together if an organization’s virtualization initiative is to succeed. “A virtualization administrator with one of our customers was using XenServer with a Fibre Channel storage array, and was experiencing performance problems with some of the virtual machines,” explains Carovano.

He continues: “After working with the storage admin, it turned out that the root of the problem was that the VMs were located on a LUN cut from relatively slow SATA disks. A virtualization administrator who just looked at an array as a ‘black box’ would have had more difficulty tracking down the root cause.”

Underestimating the Required Number of Hosts
Part of the capacity planning process involves determining how many host servers are going to be required. However, administrators who are new to virtualization often fail to realize that hardware resources are not the only factor in determining the number of required host servers. There are some types of virtual servers that simply should not be grouped together. For example, I once saw an organization place all of its domain controllers (DCs) on a single host. If that host failed, there would be no DCs remaining on the network.

One of the more comical examples of poor planning that I have seen was an organization that created a virtual failover cluster. The problem was that all of the cluster nodes were on the same host, which meant that the cluster was not fault tolerant.

My point is that virtual server placement is an important part of the capacity planning process. It isn’t enough to consider whether or not a host has the hardware resources to host a particular VM. You must also consider whether placing a virtual server on a given host eliminates any of the redundancy that has intentionally been built into the network.
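That placement rule is easy to encode. Here is a minimal sketch, with hypothetical VM, host and group names, that flags any anti-affinity group whose members share a host:

```python
# Sketch of an anti-affinity placement check: VMs that provide
# redundancy for one another must not run on the same host.
from collections import defaultdict

def placement_violations(placement, anti_affinity_groups):
    """placement: VM name -> host; anti_affinity_groups: group -> set of VMs."""
    problems = []
    for group, vms in anti_affinity_groups.items():
        by_host = defaultdict(list)
        for vm in vms:
            if vm in placement:
                by_host[placement[vm]].append(vm)
        for host, colocated in by_host.items():
            if len(colocated) > 1:
                problems.append(f"{group}: {', '.join(sorted(colocated))} all on {host}")
    return problems

# Hypothetical example: both domain controllers landed on host-a.
placement = {"dc1": "host-a", "dc2": "host-a",
             "sql-node1": "host-b", "sql-node2": "host-c"}
groups = {"domain-controllers": {"dc1", "dc2"},
          "sql-cluster": {"sql-node1", "sql-node2"}}

for problem in placement_violations(placement, groups):
    print("VIOLATION:", problem)   # domain-controllers: dc1, dc2 all on host-a
```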

Multiple Eggs into a Single Basket
On a similar note, another common virtualization pitfall is the increasingly high-stakes game of server management. A server failure in a non-virtualized data center is inconvenient, but not typically catastrophic. The failure of a host server in a virtual data center can be a different issue altogether, because the failure of a single host can mean the unavailability of multiple virtual servers.

I’ll concede that both VMware Inc. and Microsoft offer high-availability solutions for virtual data centers, but it’s worth noting that not all organizations are taking advantage of these solutions. Besides, sometimes it’s the virtualization software that ends up causing the problem. Take for instance a situation that recently faced Troy Thompson, of the Department of Defense Education Activity division.

Thompson was running VMware ESX version 3.5, and decided to upgrade his host servers to version 4.0. While the upgrade itself went smoothly, there were nine patches that needed to be applied to the servers when the upgrade was complete. Unfortunately, the servers crashed after roughly a third of the patches had been applied. Although the virtual servers themselves were unharmed, the crash left the host servers in an unbootable state. Ultimately, VMware ESX 4.0 had to be reinstalled from scratch.

My point is that in this particular situation, a routine upgrade caused a crash that resulted in an extended amount of downtime for three virtual servers. In this case, all three of the virtual servers were running mission-critical applications: a Unity voice mail system, and two Cisco call managers. Granted, these servers were scheduled to be taken offline for maintenance, but because of the problems with the upgrade, the servers were offline for much longer than planned. This situation might have been avoided had the upgrade been tested in a lab.

Best Practice Recommendations
I do not claim to have all of the answers to creating a standardized set of best practices for virtualization. Even so, here are a few of my own recommended best practices.

Test Everything Ahead of Time
I’ve always been a big believer in testing upgrades and configuration changes in a lab environment prior to making modifications to production servers. Using this approach helps to spot potential problems ahead of time.

Although lab testing works more often than not, experience has shown me that sometimes lab servers do not behave identically to their production counterparts. There are several reasons why this occurs. Sometimes an earlier modification might have been made to a lab server, but not to a production box, or vice versa. Likewise, lab servers do not handle the same workload as a production server, and they usually run on less-powerful hardware.

When it comes to your virtual data center, though, there may be a better way of testing host server configuration changes. Most larger organizations today seem to think of virtualization hosts less as servers, and more as a pool of resources that can be allocated to VMs. As such, it’s becoming increasingly common to have a few well-equipped but currently unused host servers online. These servers make excellent candidates for testing host-level configuration changes because they should be configured identically to the other host servers on the network, and are usually equipped with comparable hardware.

Some Servers Not Good for Virtualization
Recently, I’ve seen a couple of different organizations working toward trying to virtualize every server in their entire data center. The idea behind this approach isn’t so much about server consolidation as it is about fault tolerance and overall flexibility.

Consider, for example, a database server that typically carries a heavy workload. Such a server would not be a good candidate for consolidation, because the server’s hardware is not being underutilized. If such a server were virtualized, it would probably have to occupy an entire host all by itself in order to maintain the required level of performance. Even so, virtualizing the server may not be a bad idea because doing so may allow it to be easily migrated to more powerful hardware as the server’s workload increases in the future.

At the same time, there are some servers in the data center that are poor candidates for virtualization. For example, some software vendors copy-protect their applications by requiring USB-based hardware keys. Such keys typically won’t work with a virtual server. Generally speaking, any server that makes use of specialized hardware is probably going to be a poor virtualization candidate. Likewise, servers with complex storage architecture requirements may also make poor virtualization candidates because moving such a server from one host to another may cause drive mapping problems.

Virtualization technology continues to improve, so I expect that in a few years fully virtualized data centers will be the norm. For right now, though, it’s important to accept that some servers should not be virtualized.

Consider Replacing Your Network Management Software
As I stated earlier, legacy network management software is often ill-equipped to manage both physical and virtual servers. As such, virtual server-aware management software is usually a wise investment.

Avoid Over-Allocating Server Resources
It’s important to keep in mind that each host server contains a finite set of hardware resources. Some of the virtualization products on the market will allow you to over-commit the host server’s resources, but doing so is almost always a bad idea. Microsoft Hyper-V Server, for example, has a layer of abstraction between virtual CPUs and Logical CPUs (which map directly to the number of physical CPU cores installed in the server). Because of this abstraction, it’s possible to allocate more virtual CPUs than the server has logical CPUs.

Choosing not to over-commit hardware resources is about more than just avoiding performance problems; it’s about avoiding surprises. For example, imagine that a virtual server has been allocated two virtual CPUs, and that both of those virtual CPUs correspond to physical CPU cores. If you move that virtual server to a different host, you can be relatively sure that its performance will be similar to what it was on its previous host so long as the same hardware resources are available on the new server. Once moved, the virtual server might be a little bit faster or a little bit slower, but there shouldn’t be a major difference in the way that it performs, assuming that the underlying hardware is comparable.

Now, imagine what would happen if you moved the virtual server to a host whose processor cores were already spoken for. The virtualization software would still allocate CPU resources to the recently moved server, but now its performance is directly tied to other virtual servers’ workloads, making it impossible to predict how any of the virtual servers will perform at a given moment.
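A simple admission check captures this reasoning before a migration. The sketch below uses hypothetical hosts and figures: it refuses any move that would commit more virtual CPUs than the destination host has logical CPUs, unless you deliberately allow an over-commit ratio above 1.0.

```python
# Sketch: warn before a migration over-commits the destination host's CPUs.

def can_place(vm_vcpus, host_logical_cpus, allocated_vcpus, max_ratio=1.0):
    """max_ratio=1.0 forbids over-commit; raise it only as a deliberate choice."""
    committed = sum(allocated_vcpus.values()) + vm_vcpus
    return committed <= host_logical_cpus * max_ratio

# Hypothetical destination host with 8 logical CPUs and three resident VMs.
host_b_vms = {"web1": 2, "web2": 2, "db1": 4}

print(can_place(2, 8, host_b_vms))                  # False: 10 vCPUs > 8 cores
print(can_place(2, 8, host_b_vms, max_ratio=1.5))   # True: 10 <= 12 allowed
```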

As you can see, there is much more to server virtualization than meets the eye. Virtualization is an inexact science with numerous potential pitfalls that can only be avoided through proper planning and testing.

  • Source: Redmondmag.com  By Brien Posey

Why You Need User Virtualization

June 30th, 2010

User virtualization offers IT shops more control, makes remote management easier and improves the overall experience for end-users.

Think about how you treat your users’ workspaces. When a problem occurs with their desktops or laptops, what do you do? Do you “rebuild” the computer to return it to a base configuration? When you’re forced to do that rebuild, do you save their personal items? Is that preservation a painful process, involving manually locating their personal data and uploading it to a remote file server?

Expand this situation beyond just the simple desktop. Do users’ workspaces automatically transfer to their laptops when they go out on the road? Are they present when users connect to a RemoteApp or a published desktop via Remote Desktop Services? Do the workspaces synchronize themselves into their virtual desktop infrastructure (VDI)-hosted virtual desktops?

For most of us, the unfortunate answer is no. Today’s technologies with Windows profiles give IT environments a limited set of tools for maintaining a user’s access mechanisms on roaming workspaces. Users find themselves repeatedly wasting productive time simply getting their workspaces arranged to their liking.

Personality and Control
The primary problem here is that the combination of local and roaming profiles no longer serves the needs of today’s business climate.

That’s why third-party companies are creating solutions for managing user states. Different than Windows profiles, these tools take a vastly different approach to delivering the users’ workspaces to whatever access mechanism they need. With buzzwords like “user state management,” “user workspace management” and “user virtualization,” among others, these add-on solution sets make sure users are always presented with their comfortable workspaces no matter how they connect.

User virtualization solutions often leverage an external database for storing workspace characteristics. By eliminating the transfer-the-entire-folder approach of Windows profiles in favor of a database-driven architecture, it’s possible to compose each workspace on-demand. Individual personality settings are delivered to the users’ connections as they’re demanded, as opposed to users waiting on profiles to download. Further, because the user state is virtualized outside the desktop, the solution can manage personality customizations across all simultaneous connections at once. Should a user make a desktop change in one session, that change can be seamlessly synchronized to every other session to maintain the comfortable workspace.
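As an illustration of that database-driven architecture, here is a minimal sketch: individual settings are written as the user changes them and composed on demand at logon, instead of copying a whole profile folder. The schema and setting names are hypothetical, and real products layer caching, conflict resolution and policy on top.

```python
# Toy model of database-backed user state: settings are stored per user
# and per key, written on change and read on demand at connection time.
import sqlite3

conn = sqlite3.connect(":memory:")   # a real product would use a central DB
conn.execute("CREATE TABLE user_settings ("
             "user TEXT, key TEXT, value TEXT, PRIMARY KEY (user, key))")

def save_setting(user, key, value):
    # Called when the user changes something in any session; every
    # other session picks the change up on its next read.
    conn.execute("INSERT OR REPLACE INTO user_settings VALUES (?, ?, ?)",
                 (user, key, value))

def compose_workspace(user):
    # Called at logon: pull only this user's settings, on demand,
    # rather than downloading an entire roaming profile.
    rows = conn.execute("SELECT key, value FROM user_settings WHERE user = ?",
                        (user,))
    return dict(rows.fetchall())

save_setting("jdoe", "wallpaper", "bliss.jpg")
save_setting("jdoe", "default_printer", "HQ-Floor2")
print(compose_workspace("jdoe"))
# {'wallpaper': 'bliss.jpg', 'default_printer': 'HQ-Floor2'}
```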

The ubiquity of user virtualization solutions also gives IT an incredible amount of control. When such a solution is always present at every user connection, you can centrally enforce IT and business policies at those connections. Access to applications can be limited, operating system functions can be locked down and network connections can be filtered. A user virtualization solution lets you control the workspace and, at the same time, enable a subset of settings to remain personalized — and, thus, comfortable — for each user.

Finally, user virtualization supports rapid desktop refresh. If you’ve ever been faced with replacing a user’s desktop, you know the pain of manually locating and storing the user’s personal data. With a user virtualization solution in place, refreshing that desktop requires little more than rebuilding it and asking the user to log in again.

You can find user virtualization solutions from a number of vendors today. AppSense, RES Software, Atlantis Computing, Tranxition and RingCube, among others, are all vendors with products in this space. While their services come at a cost, their productivity benefits and enhanced control often greatly outweigh their prices. And who wouldn’t mind a little extra comfort as they sit down to do their jobs?

  • By Greg Shields
    • 06/01/2010

 

Source: Redmondmag.com

VMware Workstation 7 – What’s New

June 21st, 2010

Introducing VMware Workstation 7

Winner of more than 50 industry awards, VMware Workstation transforms the way technical professionals develop, test, demo, and deploy software. Innovative features help software developers, QA engineers, sales professionals, and IT administrators reduce hardware costs, minimize risk, and streamline tasks that save time and improve productivity.

Optimized for Windows 7

Run Windows 7 in a virtual machine with the industry’s first support for Windows Aero 3D graphics. Install 32-bit and 64-bit versions of Windows 7 in a virtual machine even more easily than on your physical PC. VMware Workstation 7 works with Flip 3D and Aero Peek to show live thumbnails of your virtual machines and is optimized for maximum performance when running on Windows 7 PCs.

 

Best 3D Graphics Just Got Better

VMware Workstation was the first to support 3D graphics in virtualized environments and is now the first to support Windows Aero in Windows Vista and Windows 7 virtual machines. Run even more 3D applications with support for DirectX 9.0c Shader Model 3 and OpenGL 2.1 3D graphics in Windows virtual machines.


Most Advanced Virtualization Platform

Create virtual machines with up to 8 virtual processors or 8 virtual cores and up to 32GB of memory per virtual machine. Driverless printing makes your PC printers automatically accessible to your Windows and Linux virtual machines—no configuration or drivers required. Smart card authentication enables you to dedicate a smart card reader to a virtual machine or to share access among virtual machines.

Features Professionals Cannot Live Without

  • Better than Windows XP Mode: run Windows XP with advanced 3D graphics, faster performance, and tighter integration with Unity, shared folders, and drag-and-drop convenience.
  • Install and run VMware vSphere 4 and VMware ESXi in a virtual machine
  • New IDE integrations for the SpringSource Tools Suite and Eclipse IDE for C/C++
  • Replay debugging is now easier and faster
  • Remote Replay Debugging makes it easier to share virtual machine recordings for analysis

More Refined Than Ever

Protect from Prying Eyes
Protect your virtual machines from prying eyes with 256-bit AES encryption.

Printing that Just Works
Driverless printing makes your PC printers automatically accessible to your Windows and Linux VMs—no configuration or drivers required. Your PC’s default printer even shows up as the default, too.

Go Back in Time
Buggy applications, hardware failures, viruses and other malware do not give you fair warning to take a manual snapshot. Luckily, AutoProtect automatically takes snapshots at set intervals, protecting you from unexpected bumps in the road and making it easy to go back in time to when things were good.
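For the curious, roughly the same interval-snapshot idea can be scripted outside the GUI with vmrun, the command-line utility that ships with VMware Workstation. This is a sketch, not the AutoProtect implementation; the .vmx path and interval are placeholders for your own values.

```python
# Take a named snapshot of a running VM every 30 minutes via vmrun.
# Stop with Ctrl+C. Assumes vmrun is on PATH and the .vmx path is valid.
import subprocess
import time
from datetime import datetime

VMX = r"C:\VMs\win7\win7.vmx"    # hypothetical path to the VM's .vmx file
INTERVAL_SECONDS = 30 * 60       # snapshot interval

while True:
    name = "auto-" + datetime.now().strftime("%Y%m%d-%H%M%S")
    # "-T ws" targets Workstation; "snapshot" creates a named snapshot.
    subprocess.run(["vmrun", "-T", "ws", "snapshot", VMX, name], check=True)
    print(f"Took snapshot {name}")
    time.sleep(INTERVAL_SECONDS)
```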

Free Up System Resources
Pause a virtual machine to free up CPU resources for use by other running virtual machines or demanding applications.

What’s New in VMware Workstation 7.1

  • Support for 8 virtual processors (or 8 virtual cores) and 2 TB virtual disks.
  • Support for OpenGL 2.1 for Windows Vista and Windows 7 guests.
  • Greatly improved DirectX 9.0 graphics performance for Windows Vista and Windows 7 guests. Up to 2x faster than Workstation 7.
  • Launch virtualized applications directly from the Windows 7 taskbar to create a seamless experience between applications in your virtual machines and the desktop.
  • Optimized performance for Intel’s Core i3, i5, i7 processor family for faster virtual machine encryption and decryption.
  • Support for more host and guest operating systems. Hosts: Windows 2008 R2, Ubuntu 10.04, RHEL 5.4, and more. Guests: Fedora 12, Ubuntu 10.04, RHEL 5.4, SLES 11 SP1, and more.
  • Now includes a built-in Automatic Updates feature to check for, download, and install VMware Workstation updates.
  • Ability to import and export Open Virtualization Format (OVF 1.0) packaged virtual machines and upload directly to VMware vSphere, the industry’s best platform for building cloud infrastructures.

 
