How to import the HKCU values of a different profile into your registry…

August 1st, 2012

When troubleshooting an application installation issue (or an issue with the application itself) on Windows, it's often necessary to view the registry.  The Windows registry holds a wealth of information and settings that determine just about everything related to how Windows operates: what it looks like, what users can access and where, and how applications behave.  If you know where to look, or what you're looking for, this can be a pretty easy task; the "Find" function in the Registry Editor works pretty well.  However, if the issue involves registry keys under the HKEY_CURRENT_USER (HKCU) hive, it can be a challenge.  You see, the keys under the HKCU hive are unique to each user's profile; thus, when you view the registry keys under HKCU in the Registry Editor, you are viewing the keys of the currently logged-in user's profile.

In an enterprise environment, applications are typically distributed via a configuration/distribution system such as Novell ZENworks Configuration Manager (ZCM) or Microsoft System Center Configuration Manager (SCCM).  When software is installed through these systems, the application installer can be configured to run as the logged-on user, under the system profile, or even as a "dynamic" administrator.  If the installer writes values to HKCU, they are written to the profile of the account that ran the installer.  Thus, there is often a need to view the HKCU values of a different user profile.  Adding more complexity, you may find yourself in a situation where the profile that holds the HKCU values in question doesn't have access to the Registry Editor.  In that case, you need to import that profile's HKCU hive into your own registry so you can view it.

The HKCU values for a profile are stored in a file called ntuser.dat, located in the root of the user's profile.  On Windows 7 the path is C:\Users\(profile)\ntuser.dat.  There are a few ways to import the ntuser.dat into the registry with the Registry Editor, but the quickest and easiest way I've found is to run the following command from an elevated command prompt:

reg.exe load HKLM\TempHive c:\users\(profile)\ntuser.dat

Now the HKCU values of the profile you imported can be viewed under a key called “TempHive” in the HKEY_LOCAL_MACHINE hive.
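When you're done viewing or editing the keys, be sure to unload the hive so the profile's ntuser.dat isn't left locked.  Close the Registry Editor (or at least collapse the TempHive key) first, then run:

reg.exe unload HKLM\TempHive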

Hopefully, this will help you to resolve that issue you’re looking into!

 

How to run bginfo.exe at startup on Windows Server 2008 R2

May 9th, 2012

No matter what area of IT you work in, there’s always some important piece of information you frequently need to retrieve from a workstation or server; often, it’s several pieces of information.  A lot of time can be spent searching a system to obtain that info. Fortunately, there’s a tool that’s been around for years that can display system info right on the desktop: bginfo (http://technet.microsoft.com/en-us/sysinternals/bb897557).

 

Bginfo is often a necessity in a lab environment, but it can be used anywhere.  Some of the most popular pieces of information to display are:

  • OS version
  • SP version
  • IP address
  • Boot time
  • Disk “Free Space”

…but there’s a whole lot more.  In fact, you can configure bginfo to display just about any attribute of the system.  (NOTE: A detailed explanation about how to display custom info using bginfo is beyond the scope of this article; but if you would like to learn more, check out Shay Levy’s article here: http://blogs.microsoft.co.il/blogs/scriptfanatic/archive/2008/07/22/bginfo-custom-information.aspx).

It’s nice to run bginfo at startup silently and unattended.  This can be challenging, though, particularly on Windows Server 2008 R2.  To do so, you need to edit the following registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

Credit to James from redkitten.co.uk for documenting this here: http://www.redkitten.co.uk/windows-server/using-bginfo-on-windows-server-2008/

Create a new REG_SZ value under the "Run" key named "bginfo" (or whatever you want).  The value's data is the path to bginfo.exe, plus any parameters you want to pass.  Personally, I've had the best luck passing /silent /accepteula /timer:0 to run bginfo silently at startup.  The help file indicates /NOLICPROMPT will also bypass the Sysinternals license (EULA) dialog, but /accepteula has always worked for me.
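For example, the value can be created from an elevated command prompt with reg.exe.  The paths to Bginfo.exe and the .bgi configuration file below are hypothetical, so adjust them to wherever you keep the tool:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v bginfo /t REG_SZ /d "C:\Tools\Bginfo.exe C:\Tools\default.bgi /silent /accepteula /timer:0"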

Something else to be aware of, especially if you intend to run bginfo in an enterprise with UAC turned on and GPOs applied: keep the output bitmap file in a location where the logged-on user has write permissions.  The reason for this is that bginfo runs in the user's context to display on the user's desktop; and because the information is "dynamic" (at least at each user logon), the output bitmap file needs to be updated.  You can change where bginfo stores the .bmp under the Bitmap -> Location… menu.  The default location is the user's TEMP directory, which should be okay.

 Bginfo is a fun, easy and very useful way to customize your desktop, and I hope this helps you (and other users) be more productive!

 

Remotely Reboot a Bunch of XP Workstations…

April 4th, 2012

I got an interesting question from a co-worker today (we'll call him "Ray" to protect his identity).  Ray wanted to know if it's possible for his customer to reboot a bunch of workstations at once, by some means other than the customer's workstation management system.

The customer is in the midst of a major migration from XP to Windows 7, and simultaneously from Novell ZENworks 7 to Novell ZCM 11.  Ray's team has put together an amazing set of automated deployment steps that take a ZENworks-controlled XP machine all the way to a completely deployed, domain-managed, ZCM-controlled Win 7 machine with all needed applications installed, via a method called "Zero Touch Deployment".  It all kicks off with a reboot; the only problem is that the bundled reboot in the old ZENworks is not always reliable.

Note: This particular customer's machines are all XP, all in one domain, all resolvable (via either WINS or DNS), and all manageable with a single set of credentials, which allows remote administrative execution and permits the following to work.

So when Ray asked the question, I said, "Absolutely!"  I can do that without even breaking out PowerShell (or bash, for that matter).  Ray had nothing more than a list of computer names, but that's all we need.  Let's do it old-school, with a DOS batch "for" loop.  Man, I love my job…

First, the input file; it's just a list of computer names or IP addresses, one per line, in a TXT file.  We put them in a file called C:\TEMP\RemoteRebooter-Input.txt, and here's a varied example of how it might look:

pc1
10.2.1.3
wks22.domain.local
192.168.33.44
amyscomputer

The script is a bit on the simple side, but it does the job.  I call it RemoteRebooter.bat; notice that it references the input file by name:

@ECHO OFF
@Echo Process RemoteRebooter...
@REM Loop through each name/address in the input file and issue a remote reboot.
@For /F "tokens=*" %%Q in (c:\temp\RemoteRebooter-Input.txt) Do @(
1>&2 ECHO Rebooting: %%Q
shutdown -m \\%%Q -r -f -t 20 -c "Rebooting in 20 seconds via %0 -- please save your work quickly."
)
1>&2 type nul
@ECHO Complete!

If you need details on the shutdown flags, type shutdown /? at a command prompt.  And if you want this to work on Windows 7, you have to change the syntax a bit, flipping the hyphens (-) to slashes (/), as shown below.  Of course, this is only one way of doing it, and I know you all have others.
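For instance, the shutdown line in the loop might look like this on Windows 7 (same switches, slash-style):

shutdown /m \\%%Q /r /f /t 20 /c "Rebooting in 20 seconds via %0 -- please save your work quickly."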

Make sure to drop a comment and tell us how *you* do it!

 

 

 

Windows Installer Verbose Log Analyzer…

March 7th, 2012

All software packagers know that an installation log file is critical for understanding and analyzing the behavior of an installation package, particularly with Windows Installer .MSI packages.  Many, however, are not aware of a very helpful utility for reading those log files.

Microsoft provides a tool called "Windows Installer Verbose Log Analyzer" with the Windows SDK for Windows 7 and .NET Framework Service Pack 1.  The log analyzer, WiLogUtl.exe, provides a graphical interface that lets you interact with the log and presents critical installation information in an easy-to-read format.

 

 

The entire SDK is approximately 1.4 GB; but if you only want to install WiLogUtl.exe, along with a few other handy utilities like Orca, select only Win32 Development Tools on the Install Options dialog.  By default the log analyzer is installed to C:\Program Files\Microsoft SDKs\Windows\[version number]\Bin\WiLogUtl.exe.

One particularly handy feature of WiLogUtl.exe is the ability to view a log file in an HTML format.  This special format presents the installer actions in a color-coded layout that allows you to easily distinguish errors from custom actions, standard actions, and other information in the log.  The interface includes buttons to quickly navigate through the log.
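WiLogUtl.exe can also be run from the command line.  If I recall the SDK documentation correctly, the /q (quiet), /l (log file), and /o (output folder) switches will generate the HTML report without the UI; treat the exact switches as an assumption and verify them against the tool's own help:

WiLogUtl.exe /q /l "C:\Temp\install.log" /o "C:\Temp\LogOutput"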

 

Another very useful feature of the log analyzer is the Property button, which allows you to see all the installer properties and, more importantly, their values in one window.  Oftentimes, unexpected installation behavior can be attributed to incorrect property values.

 

 

The States button provides a view of the installation’s features and components; also very handy.

 

Understanding the root cause of unexpected installation behavior and resolving it can help ensure that your package won’t cause problems in production – and the Windows Installer Verbose Setup Log Analyzer can help save time doing it.

 

 

Basic MSI – Public_Property as Component Install Location

February 1st, 2012

Sometimes enterprise-level packaging can be quite complex, having requirements for variables, sources, destinations, etc., that are all over the map, and all over the network.

For instance, what if you are working with application bundle files in an MSI that are not intended to be local to the device?  In that case, how does one use a “public property” that contains a path to a network drive as the destination for files to be installed in a basic MSI?

Well, that leads us to this item from Kathy at the Flexera Community Forum:

“If you’re installing the file through a component, you can use a type 35 custom action sequenced after CostFinalize to set the destination folder for the component to the value in the property.”

This translates to using a “Set Directory” custom action.

Steps:

  1. Create a new component.
  2. Under the new component, create a folder under a location that will always exist (such as under the INSTALLDIR/TARGETDIR location).
  3. Add any files needing to be installed to said folder.
  4. Create a public property with the full destination path as its value (where you want the files to end up eventually).
  5. Create a “Set Directory” custom action.
  6. Source of the CA is the directory created in Step 2.
  7. Target of the CA is the public property created in Step 4 (entered as [PUBLIC_PROPERTY]).
  8. Leave the rest of the CA choices at their default options, aside from the "Install Execute Sequence": change that setting to "After CostFinalize".
  9. Complete the CA with default options.

The public property can be populated through many different means (a VBS, the command line, etc.), making it quite flexible.  The files will install to the location specified in the public property!
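For reference, what these steps produce under the hood is a type 35 custom action.  Here's a minimal sketch of the resulting Windows Installer table rows; the action name SetNetDir, the directory NETDESTDIR (from Step 2), and the property NETWORK_DEST are hypothetical placeholders:

CustomAction table:        Action=SetNetDir  Type=35  Source=NETDESTDIR  Target=[NETWORK_DEST]
InstallExecuteSequence:    Action=SetNetDir  Sequence=(immediately after CostFinalize)

The property can then be supplied at install time, for example:

msiexec /i package.msi NETWORK_DEST="\\server\share\appfiles" /qb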

 

Logging Windows Installer transactions…

January 4th, 2012

Building and deploying Windows packages in enterprise environments can be a challenge even when things go smoothly, and it's especially difficult when the package deployment hits bumps in the road.  Just when you think you've got that application perfectly bundled, tested, and deployed, an unforeseen interaction can knock the legs out from under you.

And of course, the behavior of Windows Installer itself can be frustrating and may even seem a bit mysterious, making app deployment even more of a challenge.  The installer follows very explicit rules for everything it does, and enforces them rigidly.  Creating a log file for an .MSI installation might just be the saving grace, providing insight when troubleshooting installation errors and other unexpected behavior.

Most application packagers are aware that a verbose log file can be generated by passing the parameter /l*v install.log to the Windows Installer engine (msiexec.exe).  But what about when Windows Installer unexpectedly initiates a "self repair", or errors occur during an application uninstall?
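For example, a typical verbose-log invocation looks like this (the package and log file names are placeholders):

msiexec /i MyPackage.msi /l*v "C:\Temp\install.log"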

Here's one thing that can help:  There is a registry value that can be set which causes all Windows Installer transactions on a system (installs, uninstalls, repairs, etc.) to be logged to a file.  Just add the following value to the registry:

Warning:  Don’t goof around in the registry if you don’t know what you’re doing.  Seriously.  Don’t.

Registry key: HKLM\Software\Policies\Microsoft\Windows\Installer
Value Name: Logging
Value Data (REG_SZ): voicewarmup
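Equivalently, the value can be set from an elevated command prompt (one line):

reg add "HKLM\Software\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmup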

Note that the temp directory in which the log file is created depends on the user account under which the Windows Installer transaction ran.  All the relevant information is logged: custom actions, property states, feature states, and error codes.  These can be very helpful in resolving the issue!

Top 10 Storage Virtualization Trends of 2010

August 4th, 2010

The storage area network (SAN) is now an essential technology for many large and midsize enterprises. Over the years SANs have become more sophisticated as vendors have rolled out systems that deliver better storage utilization and functionality. Based on these positive developments, 2010 should bring new and interesting products in several key areas. Here are our top 10 trends to keep an eye on in the coming year — along with the insights of key IT managers who are looking to optimize their existing storage and virtualization strategies.

1. Integration of solid state with rotating media for higher performance and lower energy costs.
Product picks: EMC FAST, Fusion-io, Compellent Storage Center

In an effort to provide the best possible storage solutions, many storage vendors are looking for ways to marry the high performance of solid-state memory with the lower cost of rotating media. As prices continue to drop for all storage technologies, and as hard drives get faster and cheaper, vendors are specifically working to incorporate the latest solid-state drive technologies into traditional SAN arrays. EMC Corp. and Compellent both offer fully automated storage tiering, which is the ability to store data depending on the needs of the application: more-frequently accessed files are stored on faster-performing disks, while less-frequently needed files are moved to tape.

“We’re using the Compellent product as part of our new Savvis Symphony cloud infrastructure service offering,” says Bryan Doerr, CTO of St. Louis-based services provider Savvis Inc. “We like how it has a policy that sits between the application and the array to control how each block of data is written to the physical media, based on frequency of usage.”

Doerr is pleased that these decisions are made automatically. “We don’t have to map tables or keep track of what files are stored where, and that’s a very powerful benefit to us,” he says. “Compellent can move individual blocks from a low-cost and low-performing SATA drive to a solid-state drive for the most-frequently updated data.”

One of the more interesting products is a hardware accelerator plug-in adapter card from Fusion-io that can pre-cache data in solid-state memory for SAN arrays and other large-scale storage applications.

2. De-duplication technology — on storage and backups — can help open unused space.
Product picks: EMC Avamar, Symantec/Veritas Netbackup PureDisk, IBM/Tivoli Storage Manager, NetApp FlexClone

De-duplication technologies can provide a powerful way to quickly reclaim storage and minimize backup jobs. When users first start applying these technologies, they’re frequently surprised at how much duplication actually exists. As depicted in Figure 1, with PureDisk software from Symantec Corp., users can drill into a backup job and see that they could save more than 95 percent of their storage by getting rid of duplicate data. This capability offers huge potential savings, particularly when backing up virtual machine (VM) collections and remote offices.

Part of the challenge when using VMs is dealing with the fact that they share many common files inside each virtual image — the boot files for the operating system, the applications and so forth. A de-duplication product can leverage this by making only a single copy of common files.

PureDisk is typical of de-duplication products in that it operates in two different ways. For starters, you can use a PureDisk client or agent that runs on each VM and reports the unique files back to the central PureDisk backup server. And PureDisk can also back up the entire VMware VMDK image file without any agents on the separate VMs. This offloads backup from the ESX server and enables single-pass backups to protect all the files — whether they’re in use or not — that comprise the VM.

“De-duplication gives us big storage savings,” says Chuck Ballard, network and technical services manager at food manufacturer J&B Group, based in St. Michael, Minn. “We have 30 machines, each with a 20GB virtual hard drive, on our SAN. Rather than occupy 600GB, we have about a third of that, and we can grow and shrink our volumes as our needs dictate. We use the [NetApp] LUN copy utility to replicate our workstation copies off of a master image.”

Ballard stores his images on NetApp’s SAN arrays that have their own utility — called FlexClone — to make virtual copies of the data. “We had EMC and also looked at IBM, but both of them had limited dynamic-provisioning features,” he says, adding that a VMware upgrade that required 4.5TB on J&B Group’s old SAN now uses just 1.5TB on the company’s new storage infrastructure.

3. More granularity in backup and restoration of virtual servers.
Product picks: Vizioncore vRanger Pro, Symantec Netbackup, Asigra Cloud Backup

When combined with de-duplication technologies, more granular backups make for efficient data protection — particularly in virtualized environments where storage requirements quickly balloon and it can take longer than overnight to make backups. Backup vendors are getting better at enabling recoveries that understand the data structure of VM images and can extract just the necessary files without having to restore an entire VM disk image. Symantec Netbackup and Vizioncore vRanger both have this feature, which makes them handy products to have in the case of accidentally deleted configuration or user files. For its part, Asigra Cloud Backup can protect server resources both inside the data center and the cloud.

4. Live migrations and better integration of VM snapshots make it easier to back up, copy and patch VMs.
Product picks: FalconStor FDS, VMware vMotion and vStorage APIs, Citrix XenServer

VMware vStorage API for Data Protection facilitates LAN-free backup of VMs from a central proxy server rather than directly from an ESX Server. Users can do centralized backups without the overhead and hassle of having to run separate backup tasks from inside each VM. These APIs were formerly known as the VMware Consolidated Backup, and the idea behind them is to offload the ESX server from the backup process. This involves taking VM snapshots at any point in time to facilitate the backup and recovery process, so an entire .VMDK image doesn’t have to be backed up from scratch. It also shortens recovery time.

Enhanced VM storage management also includes the ability to perform live VM migrations without having to shut down the underlying OS. Citrix Systems XenServer offers this feature in version 5.5, and VMware has several tools including vMotion and vSphere that can make it easier to add additional RAM and disk storage to a running VM.

Finally, vendors are getting wise to the fact that many IT engineers are carrying smartphones and developing specific software to help them manage their virtualization products. VMware has responded to this trend with vCenter Mobile Access, which allows users to start, stop, copy and manage their VMs from their BlackBerry devices. Citrix also has its Receiver for iPhone client, which makes it possible to remotely control a desktop from an iPhone and run any Windows apps on XenApp 5- or Presentation Server 4.5-hosted servers. While looking at a Windows desktop from the tiny iPhone and BlackBerry screens can be frustrating — and a real scrolling workout — it can also be helpful in emergency situations when you can’t get to a full desktop and need to fix something quickly on the fly.

5. Thin and dynamic provisioning of storage to help moderate storage growth.
Product picks: Symantec/Veritas Storage Foundation Manager, Compellent Dynamic Capacity, Citrix XenServer Essentials, 3Par Inserv

There are probably more than a dozen different products in this segment that are getting better at detecting and managing storage needs. A lot of space can be wasted setting up new VMs on SAN arrays, and these products can reduce that waste substantially. This happens because, when provisioning SANs, users generally don’t know exactly how much storage they’ll need, so they tend to err on the high side by creating volumes that are large enough to meet their needs for the life of the server. The same thing happens when they create individual VMs on each virtual disk partition.

With dynamic-provisioning applications, as application needs grow, SANs automatically extend the volume until it reaches the configured maximum size. This allows users to over-provision disk space, which is fine if their storage needs grow slowly. However, because VMs can consume a lot of space in a short period of time, this can also lead to problems. Savvy users will deal with this situation by monitoring their storage requirements with Storage Resource Management tools and staying on top of what has been provisioned and used.

Savvis is using the 3Par InServ Storage Servers for thin provisioning. “We don’t have to worry about mapping individual logical units to specific physical drives — we just put the physical drives in the array and 3Par will carve them up into usable chunks of storage. This gives us much higher storage densities and less wasted space,” says Doerr.

Citrix XenServer Essentials includes both thin- and dynamic-provisioning capabilities, encoding differentials between the virtual disk images so that multiple VMs consume a fraction of the space required because the same files aren’t duplicated. Dynamic workload streaming can be used to rapidly deploy server workloads to the most appropriate server resources — physical or virtual — at any time during the week, month, quarter or year. This is particularly useful for applications that may be regularly migrated between testing and production environments or for systems that require physical deployments for peak user activity during the business cycle.

Compellent has another unique feature: the ability to reclaim unused space. Its software searches for storage blocks that belong to deleted files and marks them as unused so that Windows operating systems can overwrite them.

6. Greater VM densities per host will improve storage performance and management.
Product pick: Cisco Unified Computing System

As corporations make use of virtualization, they find that it can have many applications in a variety of areas. And nothing — other than video — stretches storage faster than duplicating a VM image or setting up a bunch of virtual desktops. With these greater VM densities comes a challenge to keep up with the RAM requirements needed to support them.

In this environment, we’re beginning to see new classes of servers that can handle hundreds of gigabytes of RAM. For example, the Cisco Systems Unified Computing System (UCS) supports large amounts of memory and VM density (see Figure 2): In one demonstration from VirtualStorm last fall at VMworld, more than 400 VMs ran Windows XP on each of six blades in one Cisco UCS. Each XP instance had more than 90GB of applications contained in its Virtual Desktop Infrastructure image, which was very impressive.

“It required a perfect balance between the desktops, the infrastructure, the virtualization and the management of the desktops and their applications in order to scale to thousands of desktops in a single environment,” says Erik Westhovens, one of the engineers from VirtualStorm writing on a blog entry about the demonstration.

Savvis is an early UCS customer. “I like where Cisco is taking this platform; combining more functionality within the data center inside the box itself,” Doerr says. “Having the switching and management under the hood, along with native virtualization support, helps us to save money and offer different classes of service to our Symphony cloud customers and ultimately a better cloud-computing experience.”

“If you don’t buy enough RAM for your servers, it doesn’t pay to have the higher-priced VMware licenses,” says an IT manager for a major New York City-based law firm that uses EMC SANs. “We now have five VMware boxes running 40 VMs apiece, and we bought new servers specifically to handle this.”

As users run more guest VMs on a single physical server, they’ll find they need to have more RAM installed on the server to maintain performance. This may mean they need to move to a more expensive, multiple-CPU server to handle the larger RAM requirements. Cisco has recognized that many IT shops are over-buying multiple-CPU servers just so they can get enough dual in-line memory module slots to install more RAM. The Cisco UCS hardware will handle 384GB of RAM and not require the purchase of multiple processor licenses for VMware hypervisors, which saves money in the long run.

James Sokol, the CTO for a benefits consultancy in New York City, points out that good hypervisor planning means balancing the number of guest VMs with the expanded RAM required to best provision each guest VM. “You want to run as many guests per host [as possible] to control the number of host licenses you need to purchase and maintain,” Sokol says. “We utilize servers with dual quad-core CPUs and 32GB of RAM to meet our hosted-server requirements.”

A good rule of thumb for Windows guest VMs is to use a gigabyte of RAM for every guest VM that you run.

7. Better high-availability integration and more fault-tolerant operations.
Product picks: VMware vSphere 4 and Citrix XenServer 5.5

The latest hypervisors from VMware and Citrix include features that expedite failover to a backup server and enable fault-tolerant operations. This makes it easier for VMs to be kept in sync when they’re running on different physical hosts, and enhances the ability to move the data stored on one host to another without impacting production applications or user computing. The goal is to provide mainframe-class reliability and operations to virtual resources.

One area where virtualized resources are still playing catch-up to the mainframe computing world is security policies and access controls. Citrix still lacks role-based access controls, and VMware has only recently added this to its vSphere line. This means that in many shops, just about any user can start and stop a VM instance without facing difficult authentication hurdles. There are third-party security tools — such as the HyTrust Appliance for VMware — that allow more granularity over which users have what kind of access to particular VMs. Expect other third-party virtualization management vendors to enter this market in the coming year. (To get an idea of how HyTrust’s software operates, check out the screencast I prepared for them here.)

8. Private cloud creation and virtualized networks — including vendor solutions that offer ways to virtualize your data center entirely in the cloud.
Product picks: Amazon Virtual Private Cloud, VMware vSphere vShield Zones, ReliaCloud, Hexagrid VxDataCenter

Vendors are virtualizing more and more pieces of the data center and using virtual network switches (what VMware calls vShield Zones) to ensure that your network traffic never leaves the virtualized world but still retains nearly the same level of security found in your physical network. For example, you can set up firewalls that stay with the VMs as they migrate between hypervisors, create security policies and set up virtual LANs. Think of it as setting up a security perimeter around your virtual data center.

Amazon has been hard at work on Elastic Compute Cloud (EC2), its cloud-based virtualization hosting, and last summer added Virtual Private Cloud to its offerings (see Figure 3). This enables users to extend their VPNs to include the Amazon cloud, further mixing the physical and virtual network infrastructures. It’s also possible to extend any security device on your physical network to cover the Amazon cloud-based servers. The same is true with Amazon Web Services, where customers pay on a usage-only basis with no long-term contracts or commitments.

Microsoft has a series of new projects to extend its Windows Azure cloud-based computing to private clouds. They can be found here and include ventures such as “Project Sydney,” which enables customers to securely link their on-premises and cloud servers; AppFabric, which is a collection of existing Windows Azure developer components; and updates to Visual Studio 2010.

Some of these are, or soon will be, available in beta. But like other efforts, more federated security between the cloud and in-house servers will require improvements before these new offerings can be dependably used by most enterprises.

Two new entrants to the cloud computing services arena are Hexagrid Inc. and ReliaCloud, both of which offer a wide range of infrastructure services, including high availability, hardware firewalls and load balancing. With these companies, all cloud servers are assigned private IP addresses and have persistence, meaning that users treat them as real servers even though they’re residing in the cloud. Expect more vendors to offer these and other features that allow IT managers to combine physical and cloud resources.

9. Better application awareness of cloud-based services.
Product picks: Exchange 2010, Sparxent MailShadow
It isn’t just about networks in the cloud, but actual applications too, such as Microsoft Exchange services. The days are coming when you’ll be able to run an Exchange server on a remote data center and failover without anyone noticing. Part of this has to do with improvements Microsoft is making to the upcoming 2010 release of its popular e-mail server software. This also has to do with how the virtualization and third-party vendors are incorporating and integrating disaster recovery into their software offerings. An example of the latter is MailShadow from Sparxent Inc. This cloud-based service makes a “shadow” copy of each user’s Exchange mailbox that’s kept in constant synchronization. There are numerous cloud-based Exchange hosting providers that have offered their services over the past few years, and Microsoft is working on its own cloud-based solutions as well.

10. Start learning the high-end, metric system measurements of storage.
If you thought you knew the difference between gigabytes and terabytes, start boning up on the higher end of the metric scale. SAN management vendor DataCore Software Corp. now supports arrays that can contain up to a petabyte — a thousand terabytes — of data. Savvis sells 50GB increments of its SAN utility storage to its co-location customers, which Doerr says has been very well received. “It’s for customers that don’t want to run their own SANs or just want to run the compute-selected functions,” he states. “There’s a lot of variation across our customers. You have to be flexible if you want to win their business.” Given that it wasn’t too long ago when no one could purchase a 50GB hard drive, he says this shows that, “we’re going to be talking exabytes when it comes to describing our storage needs before too long.” Next up: zettabytes and yottabytes.

Source: Redmondmag.com

Small Business Computer Support in Detroit Metro

March 9th, 2010

Did you know that Coretek offers computer and network support to small and medium-sized businesses in the Detroit, Michigan metro area?  We do!

              Contact us today at (248) 684-9400 for more information.

 

SMB Computer Support:

Coretek is committed to providing computer and network solutions that address the needs of small and medium-sized businesses in the Detroit area. Our goal is to help our clients achieve predictable and cost-effective IT support.

  • Cost Effective
  • Computer Support
  • Network Support
  • Server Support

Coretek offers computer and network support on a subscription or time-and-materials basis that saves a company time, worry, and money.  We will proactively update and stabilize a company's computing environment, eliminating problems before they occur.  Coretek utilizes processes and tools that, over time, reduce the number of problems in a network and the amount of time it takes to resolve computer issues when they do occur.  This service makes a company and its employees more productive, which is Coretek's ultimate goal.

Coretek is dedicated to providing its clients with more than just great IT people: innovative computer solutions.  Solutions provided by Coretek include the assessment and design of appropriate technology solutions, technology implementation, ongoing support, and appropriate migration planning and implementation.  Coretek looks forward to building a reputation of excellence with its clients as a provider of computer support services.

We will not waste your employees' time on the phone instructing them on how to fix an issue.  We will either remote-control into the machine with the problem and fix it, or we will send someone on-site to repair it quickly.  Our small-to-medium business team is made up of highly trained consultants and engineers who understand the needs of smaller businesses and have the experience to provide technology recommendations as well as their implementation and support.

 

Technology Solutions Offered: 

  • Computer, Printer and Network Support
  • Server Migrations and Upgrades
  • Network Security
  • Technology Assessments and Strategic Planning
  • IP Telephony Systems and Integration (Voice over IP, VoIP)
  • Server and Desktop Virtualization
  • Project Management
  • Microsoft Integrated Solutions (AD, Exchange, System Center, ISA, IIS, SharePoint, Project Server)

 
