Virtual Servers, Real Growth

July 12th, 2010

 

If you follow tech industry trends, you’ve probably heard of cloud computing, an increasingly popular approach of delivering technology resources over the Internet rather than from on-site computer systems.

Chances are, you’re less familiar with virtualization — the obscure software that makes it all possible.

The concept is simple: rather than having computers run a single business application — and sit idle most of the time — virtualization software divides a system into several “virtual” machines, all running software in parallel.

The technology not only squeezes more work out of each computer, but makes large systems much more flexible, letting data-center techies easily deploy computing horsepower where it’s needed at a moment’s notice.

The approach cuts costs, reducing the amount of hardware, space and energy needed to power up large data centers. Maintaining these flexible systems is easier, too, because managing software and hardware centrally requires less tech support.

The benefits of virtualization have made cloud computing an economical alternative to traditional data centers.

“Without virtualization, there is no cloud,” said Charles King, principal analyst of Pund-IT.

That’s transforming the technology industry and boosting the fortunes of virtualization pioneers VMware (NYSE:VMW) and Citrix Systems (NMS:CTXS), two of the best-performing stocks in IBD’s specialty enterprise software group. As of Friday, the group ranked No. 24 among IBD’s 197 Industry Groups, up from No. 121 three months ago.

1. Business

Specialty enterprise software represents a small but fast-growing segment of the overall enterprise software market, which market research firm Gartner says is set to hit $229 billion this year.

As with most software, the segment is a high-margin business. With high upfront development costs but negligible manufacturing and distribution expenses, specialty software companies strive for mass-market appeal. Once developers recoup their initial development costs, additional sales represent pure profit.

Software developers also make money helping customers install and run their software, another high-margin business.

But competition is fierce. Unlike capital-intensive businesses, software companies require no factory, heavy equipment, storefront or inventory to launch. Low barriers to entry mean a constant stream of new competitors looking to out-innovate incumbents.

In addition to the virtualization firms, notable names in the group include CA Technologies (NMS:CA) and Compuware (NMS:CPWR).

All offer infrastructure software to manage data centers.

“Big-iron” mainframe computers began using virtualization in the 1970s, around the time when CA and Compuware were founded.

In the late 1990s, VMware brought the technology to low-cost systems running ordinary Intel (NMS:INTC) chips. VMware has since emerged as the dominant player in virtualization.

Citrix has added a twist to the concept, virtualizing desktop computers. Rather than installing workers’ operating systems and applications on hundreds of PCs spread across the globe, companies can use the technology to run PCs from a bank of central servers. Workers, who access their virtual PCs over the Internet, don’t know the difference.

Microsoft (NMS:MSFT) has jumped in with its own virtualization product, Hyper-V, which it bundles free into Windows Server software packages. Oracle (NMS:ORCL) and Red Hat (NYSE:RHT) have launched virtualization products as well.

Meanwhile, CA and Compuware are racing to move beyond their mainframe roots to support virtualization and cloud-computing-enabled data centers. In February, CA said it would buy 3Tera to build services and deploy applications aimed at the cloud-computing market.

And Compuware bought privately held Gomez, Inc. last fall to manage cloud application performance.

Name Of The Game: Innovate. With a fast-moving market and steady influx of new competitors, keeping customers happy with good service and money-saving breakthroughs is vital.

2. Market

Nearly everyone who runs a corporate computer system is a potential buyer of virtualization software. Companies ramping up their information-technology purchases use the software to manage their sprawling infrastructure; others with limited budgets use it to squeeze more out of their existing systems.

Sales of server-virtualization software are set to grow 14% this year to $1.28 billion, according to a report by Lazard Capital Markets. Sales of software to manage virtual environments will grow 44% in 2010 to $1.88 billion.

Desktop virtualization revenue will rise 184% this year to $847.8 million. Citrix has the edge in this budding market with its XenDesktop product.

VMware is dominant among large enterprises, controlling about 85% of the server virtualization market. Microsoft is favored by small and midsize companies.

Virtualization is seen as “a strategic asset” for enabling cloud computing, and continues to gain momentum, says Lazard analyst Joel Fishbein.

VMware has the early-mover advantage in this market with its vSphere platform and has stayed ahead by adding new features such as data security and disaster recovery, analysts say.

But Citrix is partnering closely with Microsoft to take on VMware in virtualization.

3. Climate

Competition is heating up as companies scramble to adopt virtualization. Before 2009, just 30% of companies used virtualization, says analyst Fishbein. This year, that will double to 60%. Most of the gain is coming from small and midsize customers.

In addition, virtual servers are soon expected to more than double as a percentage of the overall server workload, from 18% today to 48% by 2012.

VMware can stay a step ahead of the pack by building new features into its products, says Dan Chu, VMware’s vice president of cloud infrastructure and services.

“We have a large technology lead with what we enable for our customers,” Chu said. “We are several years ahead of what the others are doing.”

Citrix CEO Mark Templeton says his firm’s broadening strategy — offering a variety of products with multiple licensing options and distribution channels — will grow sales.

“What’s going on is a massive shift in how computing gets delivered,” Templeton said. “In an environment that’s changing so dramatically, the highest-risk thing you can do is not act.”

4. Technology

The first virtualization boom stemmed from a shift over the last decade away from big expensive mainframes and minicomputers to massive banks of cheap Intel-powered machines. Virtualization gave these low-cost systems some of the high-end features of their pricier counterparts.

Virtualization software makers are betting on a second wave of growth fueled by the industrywide shift to cloud computing.

Technology managers use virtualization to run cloud computing in their own data centers. And large tech vendors such as Microsoft use the technology for cloud-computing services they sell to customers.

Dividing computers into isolated virtual machines gives cloud service providers the benefits of shared computing resources without the security downsides.

VMware has the early lead in virtualization. But the technology is quickly becoming a commodity as Microsoft and others bundle it into their broader platforms.

“VMware is known as a virtualization company, and Microsoft is a platform company,” said David Greschler, who heads up Microsoft’s virtualization efforts. “Their strategy is to sell virtualization, but our strategy is to make virtualization available as part of a larger platform at no extra cost.”

At the same time, a shift toward a world of cloud-computing services hosted by the likes of Microsoft, Amazon.com (NMS:AMZN) and Google (NMS:GOOG) could lead to fewer companies purchasing virtualization software themselves.

Source: Investor’s Business Daily

Avoiding the Pitfalls of Virtualization

July 8th, 2010

Virtualization is rarely as simple to implement and manage as it has been made out to be. Here’s what to look out for when planning your organization’s next virtualization project.

No technology in recent memory has come with as many promises as server virtualization. As I’m sure you know, all of these promises can be broken down into one simple concept: Virtualization allows you to consolidate a bunch of underutilized servers into a single server, which allows the organization to save a bundle on maintenance costs.

So with server virtualization promising such a dramatic boost to an organization’s return on investment (ROI), even in a bad economy, what’s not to like? What many organizations are finding out is that in practice, virtualization is rarely as simple to implement and manage as it has been made out to be. In fact, there are numerous potential pitfalls associated with the virtualization process. In this article, I want to take a look at some of these pitfalls, and at how they can impact an organization.

Subpar Performance
While it’s true that virtualizing your data center has the potential to make better use of server resources, any increase in ROI can quickly be consumed by decreased user productivity if virtual servers fail to perform as they did prior to being virtualized. In fact, it has been said that subpar performance is the kiss of death for a virtual data center.

So how do you make sure that your servers are going to perform as well as they do now when virtualized? One common solution is to work through some capacity planning estimates, and then attempt to virtualize the server in an isolated lab environment. But this approach will only get you so far. Lab environments do not experience the same loads as production environments, and while there are load simulation tools available, the reliability of these tools decreases dramatically when multiple virtual servers are being tested simultaneously.

While proper capacity planning and testing are important, you must be prepared to optimize your servers once they have been virtualized. Optimization means being aware of what hardware resources are being used by each virtual server, and making any necessary adjustments to the way hardware resources are distributed among the virtual machines (VMs) in an effort to improve performance across the board for all of the guest servers on a given host.
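As a rough illustration of that kind of post-virtualization monitoring, the sketch below samples per-VM CPU and memory figures on a libvirt-managed host (KVM or Xen) and flags guests that look starved. The connection URI, sampling interval, and the 80% "hot" threshold are assumptions for the example, not values from the article; VMware or Hyper-V shops would use their own tooling.

```python
# Sketch: flag guests that look starved or oversized on one libvirt host.
# Assumes the libvirt Python bindings and a qemu:///system (or xen:///) URI.
import time
import libvirt

HOT_CPU_PCT = 80       # assumed threshold, tune for your environment
SAMPLE_SECONDS = 5

def vm_usage_report(uri="qemu:///system"):
    conn = libvirt.openReadOnly(uri)
    doms = [d for d in conn.listAllDomains() if d.isActive()]
    first = {d.name(): d.info() for d in doms}   # (state, maxMem, mem, vcpus, cpuTime)
    time.sleep(SAMPLE_SECONDS)
    for dom in doms:
        name = dom.name()
        state, max_kib, mem_kib, vcpus, cpu_t0 = first[name]
        cpu_t1 = dom.info()[4]
        # cpuTime is cumulative nanoseconds across all vCPUs
        cpu_pct = (cpu_t1 - cpu_t0) / (SAMPLE_SECONDS * vcpus * 1e9) * 100
        flag = "HOT" if cpu_pct > HOT_CPU_PCT else ""
        print(f"{name:20s} vCPUs={vcpus} mem={mem_kib // 1024} MiB cpu={cpu_pct:5.1f}% {flag}")
    conn.close()

if __name__ == "__main__":
    vm_usage_report()
```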

Network Management Difficulties
When organizations initially begin to virtualize their servers, they’re often surprised by how difficult it can be to manage those virtual servers using their legacy network management software. While any decent network management application will perform application metering, compile a software inventory, and allow remote control sessions for both physical and virtual servers, there are some areas in which traditional network management software is not well-equipped to deal with VMs.

One example of such a problem is that most of the network management products on the market are designed to compile a hardware inventory of all managed computers. If such an application is not virtualization-aware, then the hardware inventory will be misreported.
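One common way an inventory agent becomes "virtualization-aware" is to check the machine's DMI/SMBIOS vendor string before trusting its hardware report. The sketch below is a Linux-guest heuristic only, and the vendor markers listed are common examples rather than an exhaustive set.

```python
# Sketch: heuristic check for whether a Linux machine is a virtual guest,
# so an inventory agent can label (or skip) hardware details accordingly.
from pathlib import Path

VM_MARKERS = ("vmware", "microsoft corporation", "xen", "qemu", "kvm", "innotek")

def looks_like_virtual_guest():
    vendor_path = Path("/sys/class/dmi/id/sys_vendor")
    if not vendor_path.exists():
        return None                      # unknown (non-Linux or no DMI exposed)
    vendor = vendor_path.read_text().strip().lower()
    return any(marker in vendor for marker in VM_MARKERS)

if __name__ == "__main__":
    print("virtual guest:", looks_like_virtual_guest())
```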

Likewise, some of the network management applications on the market track server performance, but performance metrics can be greatly skewed in a virtual server environment. While the skewed data may not be a problem in and of itself, it is important to remember that some network management products contain elaborate alerting and automated remediation mechanisms that engage when certain performance problems are detected. These types of mechanisms can wreak havoc on virtual servers.

Finally, legacy network management software is not able to tell you on which host machine a virtual server is currently running. It also lacks the ability to move virtual servers between hosts. While most virtualization products come with their own management consoles, it’s far more efficient to manage physical and virtual servers through a single console.

Virtual Server Sprawl
So far I have talked about various logistical and performance issues associated with managing a virtual data center. Believe it or not, though, a virtual server deployment that works a little too well can be just as big a problem. Some organizations find virtualization to be so effective that virtual server sprawl ends up becoming an issue.

One organization deployed so many virtual servers that it ended up with more server hardware than it had before it decided to consolidate. This completely undermined its stated goal of reducing hardware costs.

For other organizations, virtual machine sprawl has become a logistical nightmare, as virtual servers are created so rapidly that it becomes difficult to keep track of each one’s purpose, and of which ones are currently in use.

There are some key practices to help avoid virtual server sprawl. One of them is helping management and administrative staff to understand that there are costs associated with deploying virtual servers. Many people I have talked to think of virtual servers as being free because there are no direct hardware costs, and in some cases there’s no cost for licensing the server’s OS. However, most virtual servers do incur licensing costs in the form of anti-virus software, backup agents and network management software. These are in addition to the cost of the license for whatever application the virtual server is running. There are also indirect costs associated with things like system maintenance and hardware resource consumption.
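A simple cost model helps make those per-VM costs concrete for managers. The sketch below is only illustrative; every dollar figure is a placeholder, not a number from the article.

```python
# Sketch: rough annual cost of a "free" virtual server.
# All prices are illustrative placeholders -- substitute your own contracts.
PER_VM_AGENTS = {
    "anti_virus": 35.00,
    "backup_agent": 60.00,
    "network_mgmt_agent": 25.00,
}

def annual_vm_cost(app_license=0.0, os_license=0.0,
                   maintenance_hours=4, hourly_rate=75.0,
                   host_overhead=120.0):
    """Direct licensing plus indirect maintenance and hardware-share costs."""
    agents = sum(PER_VM_AGENTS.values())
    labor = maintenance_hours * hourly_rate
    return agents + app_license + os_license + labor + host_overhead

# Example: a VM running a licensed application on an already-licensed OS
print(f"Estimated annual cost: ${annual_vm_cost(app_license=500):,.2f}")
```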

Another way to reduce the potential for VM sprawl is educating the administrative staff on some of the dangers of excessive VM deployments. By its very nature, IT tends to be reactive. I have lost count of the number of times when I have seen a virtual server quickly provisioned in response to a manager’s demands. Such deployments tend to be performed in a haphazard manner because of the pressure to bring a new virtual server online quickly. These types of deployments can undermine security, and may impact an organization’s regulatory compliance status.

Learning New Skills
One potential virtualization pitfall often overlooked is the requirement for the IT staff to learn new skills.

“Before deploying virtualization solutions, we encourage our customers to include storage and networking disciplines into the design process,” says Bill Carovano, technical director for the Datacenter and Cloud Division at Citrix Systems Inc. “We’ve found that a majority of our support calls for XenServer tend to deal with storage and networking integration.”

Virtualization administrators frequently find themselves having to learn about storage and networking technologies, such as Fibre Channel, that connect VMs to networked storage. The issue of learning new skill sets is particularly problematic in siloed organizations where there’s a dedicated storage team, a dedicated networking team and a dedicated virtualization team.

One way Citrix is trying to help customers with such issues is through the introduction of a feature in XenServer Essentials called StorageLink. StorageLink is designed to reduce the degree to which virtualization and storage administrators must work together. It allows the storage admins to provide virtualization admins with disk space that can be sub-divided and used on an as-needed basis.

In spite of features such as StorageLink, administrators in siloed environments must frequently work together if an organization’s virtualization initiative is to succeed. “A virtualization administrator with one of our customers was using XenServer with a Fibre Channel storage array, and was experiencing performance problems with some of the virtual machines,” explains Carovano.

He continues: “After working with the storage admin, it turned out that the root of the problem was that the VMs were located on a LUN cut from relatively slow SATA disks. A virtualization administrator who just looked at an array as a ‘black box’ would have had more difficulty tracking down the root cause.”

Underestimating the Required Number of Hosts
Part of the capacity planning process involves determining how many host servers are going to be required. However, administrators who are new to virtualization often fail to realize that hardware resources are not the only factor in determining the number of required host servers. There are some types of virtual servers that simply should not be grouped together. For example, I once saw an organization place all of its domain controllers (DCs) on a single host. If that host failed, there would be no DCs remaining on the network.

One of the more comical examples of poor planning that I have seen was an organization that created a virtual failover cluster. The problem was that all of the cluster nodes were on the same host, which meant that the cluster was not fault tolerant.

My point is that virtual server placement is an important part of the capacity planning process. It isn’t enough to consider whether or not a host has the hardware resources to host a particular VM. You must also consider whether placing a virtual server on a given host eliminates any of the redundancy that has intentionally been built into the network.
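That placement rule can be captured as a simple anti-affinity check. The sketch below is a generic illustration — the group names and placement mapping are hypothetical — not a feature of any particular hypervisor, though most enterprise platforms offer equivalent rules.

```python
# Sketch: verify that redundant servers are not co-located on one host.
from collections import defaultdict

def anti_affinity_violations(placement, groups):
    """placement: {vm_name: host_name}; groups: {group: [vm_names that must be spread]}."""
    violations = []
    for group, vms in groups.items():
        by_host = defaultdict(list)
        for vm in vms:
            by_host[placement.get(vm, "unplaced")].append(vm)
        for host, colocated in by_host.items():
            if len(colocated) > 1:
                violations.append((group, host, colocated))
    return violations

placement = {"DC1": "host-a", "DC2": "host-a", "NODE1": "host-b", "NODE2": "host-c"}
groups = {"domain-controllers": ["DC1", "DC2"], "failover-cluster": ["NODE1", "NODE2"]}
for group, host, vms in anti_affinity_violations(placement, groups):
    print(f"WARNING: {group} has {vms} together on {host}")
# -> WARNING: domain-controllers has ['DC1', 'DC2'] together on host-a
```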

Multiple Eggs into a Single Basket
On a similar note, another common virtualization pitfall is the increasingly high-stakes game of server management. A server failure in a non-virtualized data center is inconvenient, but not typically catastrophic. The failure of a host server in a virtual data center can be a different issue altogether, because the failure of a single host can mean the unavailability of multiple virtual servers.

I’ll concede that both VMware Inc. and Microsoft offer high-availability solutions for virtual data centers, but it’s worth noting that not all organizations are taking advantage of these solutions. Besides, sometimes it’s the virtualization software that ends up causing the problem. Take for instance a situation that recently faced Troy Thompson, of the Department of Defense Education Activity division.

Thompson was running VMware ESX version 3.5, and decided to upgrade his host servers to version 4.0. While the upgrade itself went smoothly, there were nine patches that needed to be applied to the servers when the upgrade was complete. Unfortunately, the servers crashed after roughly a third of the patches had been applied. Although the virtual servers themselves were unharmed, the crash left the host servers in an unbootable state. Ultimately, VMware ESX 4.0 had to be reinstalled from scratch.

My point is that in this particular situation, a routine upgrade caused a crash that resulted in an extended amount of downtime for three virtual servers. In this case, all three of the virtual servers were running mission-critical applications: a Unity voice mail system, and two Cisco call managers. Granted, these servers were scheduled to be taken offline for maintenance, but because of the problems with the upgrade, the servers were offline for much longer than planned. This situation might have been avoided had the upgrade been tested in a lab.

Best Practice Recommendations
I do not claim to have all of the answers to creating a standardized set of best practices for virtualization. Even so, here are a few of my own recommended best practices.

Test Everything Ahead of Time
I’ve always been a big believer in testing upgrades and configuration changes in a lab environment prior to making modifications to production servers. Using this approach helps to spot potential problems ahead of time.

Although lab testing works more often than not, experience has shown me that sometimes lab servers do not behave identically to their production counterparts. There are several reasons why this occurs. Sometimes an earlier modification might have been made to a lab server, but not to a production box, or vice versa. Likewise, lab servers do not handle the same workload as a production server, and they usually run on less-powerful hardware.

When it comes to your virtual data center, though, there may be a better way of testing host server configuration changes. Most larger organizations today seem to think of virtualization hosts less as servers, and more as a pool of resources that can be allocated to VMs. As such, it’s becoming increasingly common to have a few well-equipped but currently unused host servers online. These servers make excellent candidates for testing host-level configuration changes because they should be configured identically to the other host servers on the network, and are usually equipped with comparable hardware.

Some Servers Not Good for Virtualization
Recently, I’ve seen a couple of different organizations working to virtualize every server in their data centers. The idea behind this approach isn’t so much about server consolidation as it is about fault tolerance and overall flexibility.

Consider, for example, a database server that typically carries a heavy workload. Such a server would not be a good candidate for consolidation, because the server’s hardware is not being underutilized. If such a server were virtualized, it would probably have to occupy an entire host all by itself in order to maintain the required level of performance. Even so, virtualizing the server may not be a bad idea because doing so may allow it to be easily migrated to more powerful hardware as the server’s workload increases in the future.

At the same time, there are some servers in the data center that are poor candidates for virtualization. For example, some software vendors copy-protect their applications by requiring USB-based hardware keys. Such keys typically won’t work with a virtual server. Generally speaking, any server that makes use of specialized hardware is probably going to be a poor virtualization candidate. Likewise, servers with complex storage architecture requirements may also make poor virtualization candidates because moving such a server from one host to another may cause drive mapping problems.
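Those criteria lend themselves to a quick screening checklist during planning. The sketch below simply restates the cases from this section as code and is not an exhaustive test.

```python
# Sketch: screen a server against the "poor virtualization candidate" criteria above.
def virtualization_concerns(uses_usb_dongle=False, specialized_hardware=False,
                            complex_storage=False, sustained_heavy_load=False):
    reasons = []
    if uses_usb_dongle:
        reasons.append("USB hardware key may not work inside a VM")
    if specialized_hardware:
        reasons.append("depends on specialized hardware the hypervisor cannot present")
    if complex_storage:
        reasons.append("complex storage mappings may break when the VM moves between hosts")
    if sustained_heavy_load:
        reasons.append("already saturates its hardware; expect a dedicated host, not consolidation")
    return reasons

print(virtualization_concerns(uses_usb_dongle=True, sustained_heavy_load=True))
```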

Virtualization technology continues to improve, so I expect that in a few years fully virtualized data centers will be the norm. For right now, though, it’s important to accept that some servers should not be virtualized.

Consider Replacing Your Network Management Software
As I stated earlier, legacy network management software is often ill-equipped to manage both physical and virtual servers. As such, virtual server-aware management software is usually a wise investment.

Avoid Over-Allocating Server Resources
It’s important to keep in mind that each host server contains a finite set of hardware resources. Some of the virtualization products on the market will allow you to over-commit the host server’s resources, but doing so is almost always a bad idea. Microsoft Hyper-V Server, for example, has a layer of abstraction between virtual CPUs and logical CPUs (which map directly to the physical CPU cores installed in the server). Because of this abstraction, it’s possible to allocate more virtual CPUs than the server has logical CPUs.

Choosing not to over-commit hardware resources is about more than just avoiding performance problems; it’s about avoiding surprises. For example, imagine that a virtual server has been allocated two virtual CPUs, and that both of those virtual CPUs correspond to physical CPU cores. If you move that virtual server to a different host, you can be relatively sure that its performance will be similar to what it was on its previous host so long as the same hardware resources are available on the new server. Once moved, the virtual server might be a little bit faster or a little bit slower, but there shouldn’t be a major difference in the way that it performs, assuming that the underlying hardware is comparable.

Now, imagine what would happen if you moved the virtual server to a host whose processor cores were already spoken for. The virtualization software would still allocate CPU resources to the recently moved server, but now its performance is directly tied to other virtual servers’ workloads, making it impossible to predict how any of the virtual servers will perform at a given moment.
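A quick way to see whether a host's processor cores are already "spoken for" is to compare allocated vCPUs against logical CPUs. The sketch below uses the libvirt bindings as one example; the 1:1 target ratio is an assumption, and Hyper-V or vSphere environments would use their own management tooling instead.

```python
# Sketch: report the vCPU-to-logical-CPU over-commit ratio on a libvirt host.
import libvirt

def vcpu_overcommit(uri="qemu:///system", max_ratio=1.0):
    conn = libvirt.openReadOnly(uri)
    logical_cpus = conn.getInfo()[2]          # host CPU count
    allocated = sum(d.info()[3] for d in conn.listAllDomains() if d.isActive())
    conn.close()
    ratio = allocated / logical_cpus if logical_cpus else 0
    status = "OK" if ratio <= max_ratio else "OVER-COMMITTED"
    return allocated, logical_cpus, ratio, status

alloc, logical, ratio, status = vcpu_overcommit()
print(f"{alloc} vCPUs on {logical} logical CPUs (ratio {ratio:.2f}) -> {status}")
```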

As you can see, there is much more to server virtualization than meets the eye. Virtualization is an inexact science with numerous potential pitfalls that can only be avoided through proper planning and testing.

Source: Redmondmag.com, by Brien Posey

Integrating Dell EqualLogic SANs with Citrix XenServer

May 21st, 2010

INTEGRATING DELL EQUALLOGIC SANS WITH CITRIX XENSERVER

VIRTUALIZED STORAGE TO OPTIMIZE THE XENSERVER ENVIRONMENT

Server virtualization can offer strategic value to organizations in a number of arenas. By creating an abstraction layer between operating systems and physical hardware, IT administrators can consolidate server capacity, parcel out computing resources as needed, and streamline system provisioning and deployment, all while improving service levels and network security. Server virtualization also facilitates support for multiple operating systems, broadened choices among vendors and solutions, and employee empowerment across organizations, regardless of location.

In implementing virtualization solutions, however, IT managers often overlook a major opportunity: the integration of the virtualization platform and the shared storage backend. The result of this oversight: two critical components that effectively operate as discrete silos, creating new inefficiencies and increasing administrative workloads.

SAN AND SERVER MANAGEMENT VIA A SINGLE CONSOLE

To address this problem, Citrix and Dell have introduced the next generation of virtualization and storage with the Citrix XenServer Adapter for Dell EqualLogic PS Series SAN arrays. Available as part of Citrix XenServer 5.0 Dell Edition, this module allows IT administrators to hand off storage virtualization operations from the server virtualization software over to the EqualLogic SAN, for effective handling and superb overall performance.

The adapter integrates EqualLogic control interfaces directly into the XenCenter Management Client, helping improve overall system performance, enabling more efficient disk utilization, and more fully realizing the benefits of storage consolidation. The integration also leverages the automated features designed into both XenServer and the PS Series arrays, reducing administrator workloads and increasing efficiency over more conventional, stove-piped deployments.

Through this partnered integration, Dell and Citrix offer organizations the full benefit of virtualization, including centralized storage, live migration, high availability, real-time streaming and improved management of workload and life-cycle costs.

VIRTUALIZED STORAGE FOR THE VIRTUALIZED ENVIRONMENT

With their ease of use, storage virtualization, and automation features, EqualLogic SANs provide a flexible means for managing storage resources, scaling gracefully, and reducing overall complexity, resulting in a low total cost of ownership (TCO). When deployed within a XenServer environment, EqualLogic SANs can simplify the tasks and routines by which applications and data are stored and protected. Unlike other SAN platforms, the EqualLogic PS Series arrays share processor and storage resources dynamically and perform load balancing continuously, allowing more efficient use of these vital resources.

The Citrix XenServer Adapter for Dell EqualLogic capitalizes on this intelligence and goes even further, allowing simple management of these resources — storage provisioning, intelligent backup, rapid recovery, and capacity growth — live in production environments, with no application interruption or downtime. The XenServer Direct Storage Adapter enables IT administrators to perform unique EqualLogic capabilities through the Citrix XenCenter Management Client, automating the creation and assignment of dedicated storage volumes to each Virtual Machine (VM), without the manual work required by classic direct-storage mapping technologies.

As an integrated virtualization solution, XenServer and EqualLogic maintain high operating efficiency by delegating such advanced capabilities as Thin Provisioning, Fast Cloning, and Automated Snapshots to the EqualLogic SAN. Thin Provisioning helps IT administrators control costs by dedicating only the storage capacity needed in the short term, and maintaining unallocated storage in a common pool for later use by applications or user groups as disk resources are actually consumed. Fast Cloning lets storage administrators create copies of entire volumes (e.g., virtual drives or LUNs) as a background process, without disrupting network operations. Once created, clones can be used to accelerate the provisioning and deployment of standardized VMs, as well as to test new applications, configurations or procedures. Snapshots are space-efficient, point-in-time captures of storage volumes that can be created without disrupting network operations, for use in backing up or testing data.
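EqualLogic implements thin provisioning inside the array firmware, but the underlying idea is easy to demonstrate at the filesystem level: a volume advertises its full size while consuming physical space only as data is written. The sketch below is a generic sparse-file analogy on a Unix filesystem, not the EqualLogic or XenServer API.

```python
# Sketch: filesystem-level analogy for thin provisioning using a sparse file.
import os

def create_thin_volume(path, virtual_size_bytes):
    """Advertise virtual_size_bytes while consuming almost no physical space yet."""
    with open(path, "wb") as f:
        f.truncate(virtual_size_bytes)       # extends the file without writing blocks
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512    # (apparent size, space actually allocated)

apparent, allocated = create_thin_volume("demo.img", 10 * 1024**3)   # a "10 GB" volume
print(f"apparent size: {apparent} bytes, allocated on disk: {allocated} bytes")
```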

In addition, XenServer supports iSCSI multipath I/O (MPIO) and simplified disaster recovery, two strategic tools for improving business continuity even in the event of network failures or other outages. MPIO support allows multiple network paths — e.g., separate subnetworks or VLANs — for both the SAN arrays and the virtualization servers, as a means of both improving performance and safeguarding against Ethernet switch failures or other network problems. Disaster recovery tools apply snapshot and fast cloning technologies to the processes of initial VM placement, the real-time movement of VMs via XenMotion, and automatic high availability.

CITRIX XENSERVER DELL EDITION: A POWERFUL VIRTUALIZATION SOLUTION

Dell and Citrix have partnered to bring virtualization-ready platforms to today’s dynamic and growing data centers. With the 64-bit open-source Xen hypervisor at its core, Citrix XenServer Dell Edition is a powerful virtualization solution that enables efficient resource consolidation, utilization, dynamic provisioning, and integrated systems management. XenServer Dell Edition has a small footprint and is optimized to run from internal flash storage in Dell PowerEdge servers.

EqualLogic SANs deliver the benefits of consolidated networked storage in a self-managing, iSCSI storage area network that is affordable and easy to use, regardless of scale. By eliminating complex tasks and enabling fast and flexible storage provisioning, EqualLogic solutions can reduce the costs of storage acquisition and ongoing operations.

Citrix XenServer Dell Edition is certified and fully supported by Dell for select server and storage configurations. Citrix XenServer Dell Edition comes pre-installed with Dell OpenManage Server Administrator, which enables systems management right out of the box without any additional need to install an agent on the host.

Source: www.equallogic.com

Bandwidth Requirements for XenDesktop 4

May 17th, 2010

A great resource for determining bandwidth requirements for XenDesktop.
 
The graph in the source article summarizes the average bandwidth consumed by native Citrix XenDesktop and the performance gains when Citrix Branch Repeater is integrated into the Citrix XenDesktop solution. All of the numbers are for the respective workflows over a WAN connection of 1.5 Mbps, 80 ms latency and 1% packet loss (with the exception of the HD video, which required a 10 Mbps connection because of the video’s high data rate).

See the source below for the complete article.

Source: Citrix

Wyse Xenith™ – The zero client built for Citrix

May 17th, 2010

Meet Wyse Xenith

Zero delays, Zero management, Zero security issues – Citrix HDX.

Outfit your cloud with the fastest, easiest to manage, and most secure Citrix client we’ve ever built – and that’s saying something.

From the company that invented the category and built more zero clients than anyone.

Yes, anyone.


 

Zero Delays

 

Go from 0 to productivity faster than a Ferrari.

Wyse Xenith is ready to work in six seconds because its dynamically delivered firmware is smaller than a single digital photograph. Its efficient design is three times faster than competing devices and sports gigabit LAN and wireless-n, so it’s ready for serious tasks whether you’re wired in or not. With an industry-first hardware media decoder, Wyse Xenith will be able to deliver HD video without taxing your server, network or patience.


 

Zero Management

 

That’s right, no management. It’s a zero configuration client.

Most thin clients require you to add software, tweak settings, or configure them in some way before you can use them. Not Wyse Xenith. Just take it out of the box and connect it to your network – your Citrix XenDesktop server configures it to your preferences.

Wyse Xenith is completely configurable, yet no management software is needed. So unlike some clients, when a hot new feature is released, you won’t need to buy new ones to get it.


 

Zero Security Issues

 

It’s one less thing to worry about.

Wyse Xenith is the only dynamically configurable zero client that is virus and malware immune.

Yes, immune.

There are no Windows or Linux APIs for viruses to latch on to, so even network and memory-borne viruses can’t attack. Unlike other HDX compatible clients, Wyse Xenith needs absolutely no local firewall or anti-virus protection.


 

Zero Energy Use (Almost)

 

Save $70 a year in energy versus a PC*

Wyse Xenith draws less than 7 watts of power – in full operation.

That’s less than every PC on the planet. And no multimedia rich client on the market today uses less energy.

When you hug a tree – it might hug back.


 

Zero Compromise User Experience

 

Wyse Xenith is built for Citrix environments.

XenApp, XenDesktop, it’s what Wyse Xenith lives for.
It’s quite simply the best Citrix HDX client this side of Win32.
With HDX support that goes beyond any non-Windows device on the market today.

The desktop just got a lot easier.

Specs

Processor: VIA 1GHz
Chipset: VIA VX855
Memory: 128MB Flash / 512MB RAM DDR2
I/O peripheral support: One DVI-I port, DVI to VGA (DB-15) adapter included
Dual-video Support with optional DVI-I to DVI-D plus VGA-monitor splitter cable (sold separately)
Four USB 2.0 ports (2 on front, 2 on back)
Two PS/2 ports
One Mic In
One Line Out
Enhanced PS/2 Keyboard with Windows Keys (104 keys)
PS/2 Optical mouse included
Networking: 10/100/1000 Base-T Gigabit Ethernet
Internal 802.11 b/g/n (optional) eliminates theft of external wireless adapters
Display: VESA monitor support with Display Data Control (DDC) for automatic setting of resolution and refresh rate
Dual monitor supported
Single: 1920×1200@60Hz, Color depth: 8, 15, 16, 24 or 32bpp
Two independent full resolution frame buffers
Dual: 1920×1200@60Hz, Color Depth: 8, 15, 16, 24 or 32bpp
Audio: Output: 1/8-inch mini jack, full 16 bit stereo, 48KHz sample rate
Input: 1/8-inch mini jack, 8 bit stereo microphone
Physical characteristics: Height: 1.38 inches (34mm)
Width: 6.94 inches (177mm)
Depth: 4.75 inches (121mm)
Shipping Weight: 6 lbs. (2.7kg)
Mountings: Horizontal feet (optional vertical stand)
Optional VESA mounting bracket
Device Security: Built-in Kensington security slot (cable sold separately)
Power: Worldwide auto-sensing 100-240 VAC, 50/60 Hz. Energy Star V.5.0 compliant power supply
Average power usage with device connected to 1 keyboard with 1 PS/2 mouse and 1 monitor: Under 7 Watts
Temperature Range: Horizontal and Vertical positions: 50° to 104° F (10° to 40° C)
Humidity: 20% to 80% condensing
10% to 95% non-condensing
Safety Certifications: German EKI-ITB 2000, ISO 9241-3/-8
cULus 60950, TÜV-GS, EN 60950
FCC Class B, CE, VCCI, C-Tick
WEEE, RoHS Compliant
Warranty: Three-year hardware warranty

Wyse Announces the First Zero Footprint Software Engine for Cloud Client Computing

May 12th, 2010

Today at Citrix Synergy 2010, the conference where virtualization, networking and cloud computing meet, Wyse Technology revealed the Wyse Zero™ engine. Wyse Zero™ is software that simplifies the development of cloud connected devices. Wyse Zero™, which connects users to cloud computing services and virtual desktops with efficient communications and protocol technology, is already in use in millions of devices, including thin clients, handheld smart devices, and zero clients. Specifically, the Wyse Zero™ engine is currently powering all devices that are utilizing Wyse ThinOS, all implementations of Wyse PocketCloud and — as of today — all Wyse Xenith™ devices.

For current users of Wyse ThinOS, the highly optimized, management-free solution for Citrix XenApp, Citrix XenDesktop, Microsoft Terminal Server and VMware View virtual desktop environments, Wyse Zero™ ensures that every Wyse thin client delivers flexible and secure user access, boots up in seconds, updates itself automatically, and provides IT managers with simple, scalable administration to suit their organization’s needs; all while featuring an unpublished API that delivers built-in protection from viruses and malware.

For the thousands of users of Wyse PocketCloud on Apple’s iPhone, iPad and iPod touch, Wyse Zero™ functionality expands end-user browsing capability to include, for example, Adobe Flash support by intelligently using cloud resources.

Announced today in conjunction with Citrix Synergy 2010, Wyse Xenith™ will also capitalize on the Wyse Zero™ engine. This technology foundation completely eliminates the management and security issues associated with traditional clients, while ensuring a high-definition HDX user experience. More information on Wyse Xenith™ is available at http://www.wyse.com/citrix.

“For cloud computing to continue to take hold in the enterprise, OEMs require a software ingredient that simplifies the development of cloud connected devices; one with a lightweight footprint that can perform at the speed of business,” according to Curt Schwebke, Chief Technical Officer at Wyse. “Wyse Zero™ addresses the limitations with current embedded options and is already powering the next generation of smart devices connecting to the cloud to provide virtual desktop access.”

The Wyse Zero™ engine delivers several benefits:

• Rich – includes networking, management, and protocol technology
• Fast – does not require an underlying operating system, starts instantly, and provides a fast user experience
• Secure – no attack surface, so no vulnerability to virus and malware threats
• Green – lowest carbon footprint, energy efficient

About Wyse Technology

Wyse Technology is the global leader in cloud client computing, leveraging its industry-leading thin and zero client computing-based desktop virtualization software, hardware and services. Cloud client computing is the ultimate client computing solution for our time. It replaces the outdated computing model of the unsecure, unreliable, un-green and expensive PC. It delivers security, reliability and a rich user experience with the lowest energy usage and total cost of ownership. It simply connects all the dots: thin and zero client computing, unified communications, desktop virtualization and the web, so users can reach the cloud – whether private, public, government or hybrid. It is software. It is hardware. It is services. It is in business. It is at home. It is on the go. It is freedom – so users can focus on what is important. Wyse partners with industry-leading IT vendors, including Cisco, Citrix, CSC, IBM, Microsoft, and VMware. Wyse also partners with globally recognized distribution and service partners, along with its award-winning partner programs, to serve any customer, anywhere, anytime in the world. Wyse is headquartered in San Jose, California, U.S.A., with offices worldwide.

Source: Wyse via BusinessWire

Local Virtual Machine-based Desktops – Powered by XenClient

May 11th, 2010


The consumerization of IT is having a profound impact on enterprise IT in general and desktop management in particular. An increase in demand for mobility and the features of consumer-oriented devices has led to the introduction of Bring Your Own Computer (BYOC) programs, which offer users the flexibility to choose their own laptop. The benefits for users are clear – BYOC programs enable users to select computers that meet both their personal and professional requirements. For IT, BYOC frees staff to work on strategic business and IT initiatives rather than PC asset management and break-fix.

So what’s the problem? Historically, concerns about security and the applicability of traditional desktop configuration management tools have slowed the adoption of BYOC initiatives. Enter XenClient – a revolutionary new Xen-based client hypervisor that will enable IT to deliver each employee’s corporate desktop into a secure, centrally-managed virtual machine that runs directly on that user’s laptop, making these problems a thing of the past. When available, XenClient will be the industry’s only client hypervisor to offer “100% isolation” to ensure that all corporate applications and data are completely isolated from personal data, increasing security and simplifying regulatory compliance.

When utilized in conjunction with XenDesktop, the new XenClient offering will transform the way corporate desktops are delivered and managed, giving IT the benefits of centralized management and security while also providing users with the performance, personalization and flexibility they demand. Traditional desktop management challenges such as hardware upgrades, employee moves and lost or stolen laptops will become less significant because IT will now be able to centrally terminate the corporate desktop on any device and quickly deliver it on a new laptop. Best of all, because the desktop and applications execute locally, users are free to work online or offline with the rich performance and experience of a traditional laptop.

Wyse – What is Thin Computing and Zero Client Computing?

May 7th, 2010

Thin Computing delivers the productivity people need, at a lower cost than traditional methods, all without compromising security or manageability. Thin Computing replaces the PC with a Thin or Zero Client, making it easier for IT to manage user desktops by moving their complexity to the datacenter. Analysts agree that this approach improves the reliability and security of information, dramatically lowers IT costs, reduces energy consumption, and is far better for the environment. Yet Thin Computing still provides the access to applications and data that people need in order to move the business forward. All while improving on the security, reliability, and availability of PCs.
People often make the mistake of thinking that Thin Computing is just another name for thin-client computing. Actually, Thin Computing includes hardware services and software that work with Thin Clients, Zero Clients, and PCs, as well as wireless devices and other systems. It gives everybody in the organization secure access to the information and the applications they need, without requiring the desktop systems to store them.


Why Thin Computing and Why Now?

Today, as much as 80 percent of IT’s budget is allocated to maintenance, making it very hard for any IT organization to add value to the business. Chief Information Officers have seen their titles evolve to Chief Infrastructure Officer, as they are totally consumed by the need to avoid regulatory problems and keep things running at the same time. Thin Computing not only reduces the cost to deliver desktop computing by as much as 40 percent or more, it also frees IT staff time to focus on more strategic initiatives.

Additionally, the increased availability of high-bandwidth network connections and the sophistication of datacenter architectures based on Microsoft Windows Terminal Services, Citrix Application Delivery, or VMware Virtual Desktop Infrastructure, allow Thin Computing solutions to run at desktop speeds, and display rich multimedia like never before. This makes it easier and more acceptable for business professionals to use Thin Clients in mission-critical applications.


What is Zero Client Computing?

The majority of today’s thin clients have a small OS in them that provides the functionality to securely connect to servers and display applications for workers. The small amount of software in these devices is why they’re called “thin” clients. Enterprises have successfully adopted this technology by the millions of units, and for some, there is an opportunity to make the desktop or mobile device even thinner – called a Zero Client. A Zero Client has no local OS pre-installed on the unit. The needed software is provisioned to the device when it powers up (much as a cell phone is provisioned when purchased), based on the worker’s role in the organization. Zero Clients cost less and don’t need to be managed, but they require more network bandwidth than Thin Clients.

Why Wyse Thin Computing?

Wyse is the worldwide leader in Thin Computing, offering Thin and Zero Clients before anyone else. Wyse software makes it easy to provision, manage, update, and even service any Thin or Zero Client from one central location. After all, it’s much easier and more cost-effective to manage several servers than thousands of individual desktop PCs. And with no moving parts thanks to solid-state technology, Wyse Thin and Zero Clients deliver greater reliability, availability, and lower cost of ownership than other solutions.

The Wyse Thin Computing solution includes:

Thin-Computing Hardware
Thin-Computing Software
Thin-Computing Services

This total solution allows Wyse to deliver improved:

Going forward, the world is only going to get thinner. And Wyse is already moving in that direction. Check back soon to see exactly how we’re getting thinner.

Citrix Receiver for Windows & Mac

April 5th, 2010

On-demand applications from any Windows & Mac device

Citrix Receiver for Windows and Mac is a lightweight software client that runs on laptops, desktops, Macs and netbooks – turning any device into a powerful business tool for accessing virtual applications and desktops. With Receiver installed on a device, IT can rapidly deliver the virtual applications and desktops people need to do their jobs. Receiver for Windows and Mac, along with its innovative management console, Merchandising Server, enables faster roll-out of virtual applications and desktops, simplified client management and updates, and a single unified user experience for everything Citrix.

Flexible Client Software Configuration
Built with a browser-like ‘plug-in’ architecture, Receiver for Windows & Mac enables flexible client software configuration ideal for every user in your organization.  IT administrators can select from plug-in functions for hosted applications, secure VPN access, communications services and self-service applications with Citrix Dazzle.  Receiver also supports 3rd party plug-ins to further simplify client software management.

Safe, Secure, High Definition Experience
Receiver includes built-in HDX technologies to bring a high definition user experience for virtual applications including document-oriented, data-intensive, graphics-rich and multimedia applications – on any network connection.  Receiver supports high performance, standards-based encryption security for all data from the datacenter, over the network to users anywhere.

Zero-Touch, Silent Updates
Receiver allows the use of enterprise computing services without the need to understand or worry about the underlying complexities. Installs are a simple point and click, and once Receiver is installed, it’s pretty much ‘lights out’, with zero-touch, silent updates from there forward.

Auto-Updating with Targeted Deliveries
Receiver is completely under centralized administrative control to allow the entire Receiver setup, including plug-ins, to be kept up-to-date, with scheduled automatic download to users based upon user preferences for update checks, or IT-controlled mandatory updating.  Receiver enables IT to deliver client software updates to users using a rules-based system based on group credentials, IP addresses, machine names, login IDs, or operating systems. This provides the flexibility to ensure plug-ins are delivered to only appropriate, authorized users.
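A rules engine of that sort reduces to a few membership tests. The sketch below is a generic illustration of the idea only; the field names and rule structure are hypothetical, not the Merchandising Server's actual schema.

```python
# Sketch: decide whether a client should receive a plug-in, based on simple rules.
import ipaddress

def plugin_applies(rule, client):
    """rule/client are plain dicts; any criterion left out of the rule matches everything."""
    if "groups" in rule and not set(rule["groups"]) & set(client.get("groups", [])):
        return False
    if "subnet" in rule and ipaddress.ip_address(client["ip"]) not in ipaddress.ip_network(rule["subnet"]):
        return False
    if "os" in rule and client.get("os") not in rule["os"]:
        return False
    return True

rule = {"groups": ["Finance"], "subnet": "10.20.0.0/16", "os": ["Windows 7", "Windows XP"]}
client = {"groups": ["Finance", "AllStaff"], "ip": "10.20.4.17", "os": "Windows 7"}
print(plugin_applies(rule, client))   # True
```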

Receiver for Windows & Mac is free and is available immediately for any Windows or Mac-based device, including PCs, laptops, Macs and netbooks. Receiver is also available for smartphones and the iPad.
