May 19th, 2017 | News

Farmington Hills, MI – May 19, 2017 – Coretek is pleased to announce that Jeff Harvey has joined the company as a Senior Sales Executive. Jeff brings over 15 years of experience in End User Computing and EMM solutions in the healthcare industry. Jeff’s territory will cover Indiana, Illinois, Wisconsin and Minnesota.

“Our business is about our clients and improving their experience. We are thrilled to have Jeff be a part of our team. His experience in the industry will be a huge asset to our company,” said Cyndi Meinke, Director of Sales and Marketing.

Jeff most recently worked at VMware, where he was responsible for solution selling to enterprise and healthcare customers in the central US region. Jeff has also worked at both Oracle and Sun Microsystems, delivering holistic solutions and platforms that helped customers achieve performance gains and cost savings when deploying enterprise EUC/VDI. Jeff lives in Cincinnati with his wife Kim and their four daughters. Jeff’s passions include volunteering at a local community homeless shelter and the Sister Mary Rose Mission food bank, and, when possible, attending and following Formula One racing events.

For more information, contact us or visit the Coretek Services website.



About Coretek Services

Coretek Services is a Systems Integration and IT Consulting Company that delivers high-value and innovative solutions. Coretek works with your team to custom-design an IT architecture based on each client’s unique requirements; the solution encompasses server and desktop virtualization, optimization of virtual desktop environments, cloud desktop, mobile device management, infrastructure consulting, and project management. Our goal is to help our clients achieve Project Success. No exceptions. For more information, visit the Coretek Services website.

The Future of Azure is Azure Stack!

May 18th, 2017 | Azure, blog, Cloud



I realize that the title above might be a bit controversial to some. In this blog post I will attempt to defend that position.

The two diagrams above, taken from recent Microsoft public presentations, symbolically represent the components of Public Azure and Private Azure (Azure Stack). If you think they have a lot in common, you are right. Azure Stack is Azure running in your own data center. Although not every Azure feature will be delivered as part of Azure Stack at its initial release (and some may never be delivered that way because they require enormous scale beyond the reach of most companies), it is fair to say that they are more alike than they are different.

Back in 2012 I wrote a blog post on building cloud-burstable, cloud-portable applications. My thesis in that post was that customers want to be able to run their applications on local hardware in their data center, on resources provided by a cloud provider, or even on resources provided by more than one cloud provider. And they would like a high degree of compatibility that allows them to defer the choice of where to run an application, and even change their mind as workload dictates.

That thesis is still true today. Customers want to be able to run an application in their data center; if they run out of capacity, they would like to shift it to the cloud and later, potentially, shift it back on-premises.

That blog post took an architectural approach using encapsulation and modularity of design to build applications that could run anywhere.

The Birth of Azure

A bit of additional perspective might be useful. Back in 2007 I was working as an Architect for Microsoft when I came across what would eventually become Azure. (In fact, that was before it was even called Azure!) Years earlier I had worked on an experimental cloud project at Bell Labs called Net-1000. At the time, AT&T was planning on turning every telephone central office into a data center providing compute power and data storage at the wall jack. That project failed for various reasons, some technical and some political, as documented in the book The Slingshot Syndrome. The main technical reason was that the computers of the day were minicomputers and mainframes, and the PC was just emerging on the scene, so the technology that makes today’s cloud possible was not yet available. Anyway, I can say that I was present at the birth of Azure. History has proven that attaching myself to Azure was a pretty good decision. :)

The Azure Appliance

What many do not know is that this is actually Microsoft’s third attempt at providing Azure in the data center. Back in 2010 Microsoft announced the Azure Appliance, which was to be delivered by a small number of vendors. It never materialized as a released product.

Azure Pack and the Cloud Platform System

Then came Windows Azure Pack and the Cloud Platform System in 2014, to be delivered, also in appliance form, by a small number of selected vendors.  Although it met with some success, is still available today, and will be supported going forward, its clear successor will be Azure Stack.  (While Azure Pack is an Azure-like emulator built on top of System Center and Windows Server, Azure Stack is real Azure running in your data center.)

Because of this perspective I can say that Azure Stack is Microsoft’s third attempt at Azure in the data center, and one that I believe will be very successful. Third time’s a charm. :)

Azure Stack

The very first appearance of Azure Stack was in the form of a private preview, and later a public preview: “Azure Stack Technical Preview 1”.  During the preview it became clear that those attempting to install it were experiencing difficulties, many of them related to the use of hardware that did not match the recommended minimum specifications.

Since Azure Stack is so important to the future of Azure, Microsoft decided to release it in the form of an appliance to be delivered by three vendors (HP, Dell & Lenovo) in the summer of 2017.  According to Microsoft, that does not mean that there will be no more technical previews, or that no one will be able to install it on their own hardware.  (It is generally expected that there will be additional Technical Previews, perhaps even one at the upcoming Microsoft Ignite conference later this month.)  It simply means that the first generation will be released in a controlled fashion, through appliances provided by those vendors, so that Microsoft and those vendors can ensure its early success.

You may not agree with Microsoft (or me), but I am 100% in agreement with that approach.  Azure Stack must succeed if Azure is to continue to succeed.

This article originally posted 9/20/2016 at Bill’s other blog, Cloudy in Nashville.


May 17th, 2017 | News

May 17, 2017 – Farmington Hills, MI – Coretek Services announced the addition of Bill Zack to their cloud team. Bill Zack is a Cloud Architect with Coretek Services based in Nashville, Tennessee.

“Bill is a powerful addition to our team; his expertise in Microsoft Azure will add another layer of depth to our already growing company,” said Ray Jaksic, Chief Technology Officer at Coretek Services. “We are expanding our cloud offerings, and Bill is a key step in our continued growth.”

Bill Zack was formerly a Principal Architect Evangelist at Microsoft and has been an Azure specialist since before it was called Azure.  When Azure was launched in 2008 he was on a Microsoft team as one of twenty-four Azure Subject Matter Experts world-wide.

For three years, Bill acted as an Architecture Consultant to the Microsoft Azure Product Team. In that capacity, he provided architecture and technical support to Microsoft partners and customers.  During that time, he held the role of Microsoft Virtual Technical Support Professional (V-TSP). He was, and continues to be, an active member of the Azure Advisors group run by the Microsoft Azure Customer Advisory Team.

Bill is also the Founder and President of the Nashville Microsoft Azure Users Group and has published books, white papers and blogs including:  CloudyInNashville, Microsoft Ignition Showcase, and the InfoQ White paper on The Software as a Service Development Life Cycle. Bill is a periodic guest on the Microsoft Azure Podcast. He also presents frequently at community groups and conferences.

Bill Zack – Cloud Architect, Coretek Services


About Coretek Services

Coretek Services is a Systems Integration and IT Consulting Company that delivers high-value and innovative solutions. Coretek works with your team to custom-design an IT architecture based on each client’s unique requirements; the solution encompasses server and desktop virtualization, optimization of virtual desktop environments, cloud desktop, mobile device management, infrastructure consulting, and project management. Our goal is to help our clients achieve Project Success. No exceptions. For more information, visit the Coretek Services website.

How to protect against the next Ransomware Worm

May 15th, 2017 | blog, Ransomware, Security

Hopefully you were one of the prepared organizations that avoided the latest Ransomware worm that made its way around the globe this past week.  This worm crippled dozens of companies and government entities, impacting over 230K computers in 150 countries.  Most of the infections were in Europe, Asia, and the Middle East, so if you did not get hit, you were either prepared or lucky.  This blog post will help you be prepared for when this happens again, so that you don’t have to rely on luck.

Patch everything you can, as quickly as you can

The exploit at the root of this Ransomware worm was resolved by MS17-010, which was released in March of 2017, giving organizations more than enough time to download, test, pilot through UAT (User Acceptance Testing), and deploy to Production.  While introducing new patches and changes to your environment carries the risk of breaking applications, there is far more risk in remaining unpatched – especially for security-specific patches.  Allocate the proper resources to test and roll out patches as quickly as you can.
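As an illustration of the kind of patch audit this implies (a minimal sketch, not part of the original post), the Python snippet below compares the hotfixes Windows reports via wmic against a required list. The KB numbers shown are placeholders; substitute the actual MS17-010 KB IDs for your OS versions from the Microsoft bulletin.

```python
# Minimal patch-audit sketch (illustrative only): compare installed hotfixes against a
# required list. The KB IDs below are placeholders -- MS17-010 maps to different KB
# numbers depending on the OS version, so fill in the right ones for your environment.
import subprocess

REQUIRED_KBS = {"KB4012212", "KB4012215"}  # placeholder examples

def installed_hotfixes():
    """Return the set of installed hotfix IDs reported by Windows via wmic."""
    output = subprocess.check_output(["wmic", "qfe", "get", "HotFixID"], text=True)
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

if __name__ == "__main__":
    missing = REQUIRED_KBS - installed_hotfixes()
    if missing:
        print("Missing patches:", ", ".join(sorted(missing)))
    else:
        print("All required patches are installed.")
```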

Run the newest OS that you can

While the EternalBlue exploit that was patched by MS17-010 was applicable to every Windows OS, you were safe if you were running Windows 10 due to a security feature called ELAM (Early Launch Anti-Malware).  Many of the infected machines were running Windows XP or Server 2003, which did not get the MS17-010 patch (Microsoft has since released a patch for these OS versions; please apply it if you still have them in your environment).  It is not possible to secure Windows XP or Server 2003.  If you insist on running them in your environment, assume that they are already breached and that any information stored on them has already been compromised.  (You don’t have any service accounts with Domain Admin privileges logging into them, right?)


Proper perimeter and host firewall rules help stop and contain the spread of worms.  While there were early reports that the initial attack vector was e-mail, these are unconfirmed.  It appears that the worm was able to spread via the 1.3 million Windows devices that have SMB (port 445) open to the Internet.  Once inside the perimeter, the worm was able to spread to any device that had port 445 open and did not have MS17-010 installed.
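For illustration only, here is a small Python sketch that checks which hosts on a subnet answer on TCP port 445, so you can confirm they are patched or firewalled. The subnet and timeout values are example assumptions; only scan networks you own.

```python
# Illustrative sketch: find hosts on a subnet with TCP 445 (SMB) reachable.
# Only run this against networks you own and are authorized to scan.
import ipaddress
import socket

SUBNET = "10.0.0.0/24"   # example value; replace with your own subnet
TIMEOUT = 0.5            # seconds to wait per connection attempt

def port_445_open(host: str) -> bool:
    """Return True if a TCP connection to port 445 on the host succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(TIMEOUT)
        return sock.connect_ex((host, 445)) == 0

if __name__ == "__main__":
    for ip in ipaddress.ip_network(SUBNET).hosts():
        if port_445_open(str(ip)):
            print(f"{ip}: port 445 is open -- confirm it is patched or firewalled")
```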

Turn off Unnecessary Services

Evaluate the services running in your desktop and server environment, and turn them off if they are no longer necessary.  SMB1 is still enabled by default, even in Windows 10.
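As a hedged example of how you might check that setting, the Python sketch below shells out to PowerShell's Get-SmbServerConfiguration cmdlet (available on Windows 8 / Server 2012 and later, and typically requiring elevation) and reports whether the SMB1 server protocol is enabled; how you then disable SMB1 depends on your OS version, so follow Microsoft's guidance for that step.

```python
# Illustrative check only: report whether the SMB1 server protocol is enabled by asking
# PowerShell's Get-SmbServerConfiguration. Disabling SMB1 should follow Microsoft's
# documented procedure for your specific OS version.
import subprocess

def smb1_enabled() -> bool:
    output = subprocess.check_output(
        ["powershell", "-NoProfile", "-Command",
         "(Get-SmbServerConfiguration).EnableSMB1Protocol"],
        text=True,
    )
    return output.strip().lower() == "true"

if __name__ == "__main__":
    if smb1_enabled():
        print("SMB1 is enabled -- consider disabling it if nothing still depends on it.")
    else:
        print("SMB1 is already disabled.")
```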


These types of attacks are going to be the new normal, as they are extremely lucrative for the organizations behind them.  Proper preparation is key, as boards are starting to hold both the CEO and the CIO responsible in the case of a breach.  While you may have cyber-security insurance, it may not pay out if you are negligent by not patching, or by running an OS that stopped receiving security updates 3 years ago.  I recommend being prepared for the next attack, as you may not be as lucky next time.

Additional Layers of Defense to Consider

For those over-achievers, additional layers of defense can prove quite helpful in containing a breach.
1. Office 365 Advanced Threat Protection – protect against bad attachments
2. Windows Defender Advanced Threat Protection – post-breach response; isolate/quarantine infected machines
3. OneDrive for Business – block known bad file types from syncing

Good luck out there.

Disaster Recovery: Asking the wrong question?

May 11th, 2017 | Azure, blog, Cloud, Disaster Recovery


In my role as an Azure specialist I get asked a lot of questions about Disaster Recovery. IMHO, people almost always ask the wrong question.

Usually it goes something like this: “I need Disaster Recovery protection for my data center. I have N VMs to protect. I have heard that I can use Azure Site Recovery either to facilitate Disaster Recovery to my backup data center, or even to use Azure as my backup data center.” That is true. :)

In a previous lifetime I worked on a Disaster Recovery testing team for a major New York based international bank. We learned two major principles early on:

1. It is all about workloads, since different application workloads have different Disaster Recovery requirements. Every workload has unique Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). Not all workloads are created equal.  For instance, email is a critical workload in the typical enterprise; an outage of more than a few hours would affect business continuity significantly.  Other application workloads (such as Foreign Exchange Trading, Order Processing, Expense Reporting, etc.) have more or less stringent RTO and RPO requirements.

2. So it really is all about Business Continuity. Disaster Recovery is a business case. Providing perfect Disaster Recovery for all application workloads would be extremely expensive, and in many cases not cost effective; in effect there is an exponential cost curve. So it is all about risk management and cost/benefit.
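To make that workload-by-workload thinking concrete, here is a purely illustrative Python sketch that sorts workloads into DR tiers based on their RTO and RPO targets. The workloads, numbers, and tier thresholds are made-up examples, not recommendations.

```python
# Illustrative only: classify workloads into DR tiers from RTO/RPO targets.
# All names, numbers, and thresholds below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    rto_hours: float   # maximum tolerable downtime
    rpo_hours: float   # maximum tolerable data loss

def dr_tier(w: Workload) -> str:
    if w.rto_hours <= 1 and w.rpo_hours <= 0.25:
        return "Tier 1: continuous replication / hot standby"
    if w.rto_hours <= 8:
        return "Tier 2: warm standby (e.g., replicated with Azure Site Recovery)"
    return "Tier 3: backup and restore"

workloads = [
    Workload("Foreign Exchange Trading", rto_hours=0.5, rpo_hours=0.1),
    Workload("Email", rto_hours=4, rpo_hours=1),
    Workload("Expense Reporting", rto_hours=48, rpo_hours=24),
]

for w in workloads:
    print(f"{w.name}: {dr_tier(w)}")
```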

So where does that leave us?

1. Evaluate your Disaster Recovery requirements on a workload-by-workload basis.

2. Plan how to implement them with the objectives of Business Continuity, RTO, and RPO in mind.

3. Use Azure Site Recovery to make it happen. :)

This article originally posted 4/1/2016 at Bill’s other blog, Cloudy in Nashville.

Walk for Wishes 2017…

May 8th, 2017 | Announcements, blog

We did it again!  As the 19th Annual Walk For Wishes® approached, fellow Coreteker Sarah Roland (a Walk veteran since 2011) assembled her annual “Sarah’s Dream Team” and put out the call for participation, as she has done every year since 2013.  And again this year, fellow Walk veterans Avi Moskovitz (since 2016) and I (since 2013) got our families involved, headed to the Detroit Zoo, and joined in the effort.

We are proud to have been a small — but important — part of the more than $500,000 (and counting) raised through the efforts of more than 6,500 walkers combined!  It was a really great crowd, and there were tons of interesting things to do and see (it’s the Detroit Zoo, after all), supported by care and food volunteers along the way.   And given some threatening weather, we were amazed that we managed to get the whole day in with no rain and just a bit of cold.  That’s 5 years in a row of good walking weather; someone must be watching out for us…

We are truly thankful to the folks at Coretek who donated so generously to us and our team, both individually and as a company, raising over $1400 with us by walk time.  We hope you are as moved as we are by the work the organization does and the good it brings to those who — in many cases — don’t have a whole lot of good in their lives at the moment their wish is granted.  For instance, here’s the moment that Wish kid Emily discovered at the Walk that she was heading to Walt Disney World® Resort in about a month!


And did I mention that you can still donate to our team!?!?  Just click here to go to Sarah’s Dream Team page and follow the process to donate to the team or to any of us individually.

Oh, and by the way… we decided that we are on a mission to get more members of our team for 2018, so you may as well just plan to join now…  😉

See you then!

A Cloud is a Cloud is a Cloud?

May 4th, 2017 | Azure, blog, Cloud

It never fails to amaze me that seemingly every vendor in the industry, every hosting company, and every virtualization or database vendor that puts something in one of the Public Clouds is quick to claim that “they have a Cloud”. A while back the industry even invented a term for that: “CloudWashing” (i.e., labeling everything you have as “Cloud”).

Let’s apply some common sense. Back in 2011 the National Institute of Standards and Technology (NIST) produced a concise and clear definition of what it means for something to be a Cloud. You can read about it here. This is the standard against which all Cloud pretenders should be measured. In case you don’t have time to read it, I will summarize.

The NIST model defines a Cloud as “enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

It is composed of:

  • 5 Essential characteristics
  • 3 Service models
  • 4 Deployment models

The 5 essential characteristics are:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

The 3 Service Models are:

  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Infrastructure as a Service (IaaS)

The 4 Deployment Models are:

  • Public Cloud
  • Private Cloud
  • Community Cloud
  • Hybrid Cloud

Let’s take the definitions apart a bit and see what makes them tick:

The 5 Essential Characteristics

On-demand self-service: Being able to self-provision and de-provision resources as needed, without vendor involvement.

Broad network access: Resources are available over a network and accessed through standard protocols from all kinds of devices. For Public Clouds, these resources are typically available all over the world, with data centers that accommodate each country’s needs for data sovereignty and locality.

Resource pooling: Lots of resources that can be allocated from a large pool. Often the user does not know, or have to know, their exact location. Although, in the case of Public Clouds like Microsoft Azure, locality is often under the user’s control. In many cases the pool of resources appears to be nearly infinite.

Rapid elasticity: The ability to scale up and down as needed at any time, in some cases automatically, based on policies set by the user.

Measured service: Providing the ability to track costs and allocate them.  Public Cloud resources are often  funded using an Operating Expenditure (OpEx) rather than the Capital Expenditure (CapEx) accounting model used where hardware purchases are involved.

The 3 Service Models

In the case of Public Clouds, these service models are defined by who supports the various levels in the technology stack: the customer or the cloud vendor.

This Microsoft Azure diagram has been in existence, in one form or another, at least since the 1970s:


It illustrates which parts of each service type are the responsibility of the vendor (shown in blue) and which are the responsibility of the customer (shown in black).

Software as a Service (SaaS) asks the customer to take no responsibility for the underlying hardware and software stack. In fact, the customer’s only responsibility is to use the service and pay for it.

Platform as a Service (PaaS) lets the vendor manage the lower layers of the technology stack, while the customer is responsible for the Application and Data layers.

Infrastructure as a Service (IaaS) corresponds most closely to what the customer must manage in a typical data center. With IaaS, the vendor takes care of everything from part of the operating system layer down to the bottom of the stack. Notice that in Azure the operating system layer is partly the vendor’s responsibility and partly the customer’s; the customer is still responsible for applying operating system patches, for instance. (This may differ slightly for other Public Clouds.)
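As an illustrative aside, the responsibility split described above can be captured in a simple data structure; the layer boundaries below are an approximation (they vary by provider and service), not an official definition.

```python
# Rough, illustrative responsibility map for the three service models.
# The exact split varies by provider; treat this as an approximation of the idea above.
RESPONSIBILITY = {
    "IaaS": {
        "vendor": ["networking", "storage", "servers", "virtualization"],
        "customer": ["operating system (patching)", "middleware", "runtime",
                     "applications", "data"],
    },
    "PaaS": {
        "vendor": ["networking", "storage", "servers", "virtualization",
                   "operating system", "middleware", "runtime"],
        "customer": ["applications", "data"],
    },
    "SaaS": {
        "vendor": ["the entire stack"],
        "customer": ["using the service (and paying for it)"],
    },
}

for model, split in RESPONSIBILITY.items():
    print(f"{model}: vendor manages {', '.join(split['vendor'])}; "
          f"customer manages {', '.join(split['customer'])}")
```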

I often include a slide in my Cloud presentations that addresses the question of what to use when. In my mind the decision tree is fairly obvious:

  • It’s SaaS until it isn’t
    • Then It’s PaaS until it isn’t
      • Then It’s IaaS
  • Or Hybrid with any of the above

If you can find a SaaS service that meets your objectives, use it. Let the vendor have all the stack support headaches. If you can’t, then consider using PaaS by either buying a 3rd party application or building your own.

Finally, if none of these approaches work, you can always take advantage of IaaS, since that most closely matches what you have in your own data center. Even there, however, the vendor will take care of a great deal of the plumbing. (As an aside, IaaS is often the choice for a “lift and shift” migration of what you already have running in your data center up into the cloud.)
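As a purely illustrative aside, that decision tree can be restated in a few lines of Python; the function name and boolean inputs are hypothetical, simply a restatement of the logic above.

```python
# Hypothetical restatement of the "SaaS until it isn't" decision tree.
def choose_service_model(saas_meets_needs: bool, paas_meets_needs: bool) -> str:
    """Pick the highest-level service model that satisfies the requirements."""
    if saas_meets_needs:
        return "SaaS: let the vendor run the entire stack"
    if paas_meets_needs:
        return "PaaS: buy or build the application; the vendor runs the platform"
    return "IaaS: closest to your own data center (a common lift-and-shift target)"

# Example: no suitable SaaS offering exists, but a PaaS platform fits.
print(choose_service_model(saas_meets_needs=False, paas_meets_needs=True))
```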

And yes, I know we haven’t discussed Hybrid yet, but patience, we will get there.

The 4 Deployment Models

A Public Cloud is a pool of computing resources offered by a vendor, typically, on a “pay as you go” basis. It supports Self-Provisioning by its customers as discussed above. It also often has a massive global presence and appears to have near-infinite capacity. In general, it is available to anyone.

Gartner defines a large number of Leadership Quadrants, but the ones that are most relevant to our discussion are the ones for Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). The Gartner Leadership Quadrant for IaaS includes just Amazon and Microsoft. The one for PaaS just includes Microsoft Azure and Salesforce. There are other lesser Public Cloud vendors including Google, IBM and Rackspace.

One other point: Unless a company can spend the many billions of dollars necessary to create a global datacenter presence it is hard for them to be considered a Public Cloud leader.

A Private Cloud, on the other hand, is normally local to a single company and located on its own premises. If a customer creates a data center, or a complex of data centers, that conforms to the NIST definitions discussed here, then it can rightly be called a “Private Cloud”. Be cautious here, however, and remember the term “CloudWashing” defined above: just because a customer has a large number of computers in a data center does not make it a Private Cloud, no matter how vehemently they insist on calling it one.

Although there is no requirement for the architectures of Public and Private Clouds to be the same, most companies agree that compatibility between them would be helpful to support CloudBursting: the ability to move applications and data freely between the data center and the Public Cloud. (See the discussion of Hybrid Cloud below.)

A Community Cloud is a variation of a Public Cloud that restricts its clientele to a specific community of users. For instance, there are Community Clouds for Government and Education as well as for other communities. Each of these may have different levels of security and compliance requirements or other unique characteristics. The NIST document does state that it may be operated for the community by one of the companies in it; however, it is typically operated by a Public Cloud vendor using a walled-off subset of its resources.

A Hybrid Cloud is typically formed by the integration of two or more Public and/or Private Clouds. In the typical case a company utilizes both a Public Cloud and resources in their own data center structured as a private cloud with strong networking integration between them.

A high degree of standardization between them is desirable in order to make it possible to distribute resources across them or to load balance or cloudburst resources between them. This is often done for security and compliance reasons where a company feels that some data and/or processing must remain behind their own firewall, while other data and/or processing can take advantage of a Public Cloud.

In the case of Microsoft technology there is one example where compatibility will be extremely high and advantageous. I expect the Private Cloud space to be dominated, at least in data centers utilizing Microsoft technology, by the new Azure Stack appliance coming out this summer from 5 initial vendors. Make no mistake about it: Azure Stack is Azure running in your own data center. In effect, Azure Stack is pretty much just another Azure Region (JAAR?) to the Public Azure Cloud. Having that high degree of compatibility will help facilitate the Hybrid Cloud architecture discussed above. I have already blogged about Azure Stack and why I feel it is so important, so I will not go into that in detail here. See this blog post: The Future of Azure is Azure Stack.

We also should distinguish between a true Hybrid Cloud and an IT Hybrid Infrastructure. In the case of the latter, the resources in the data center need not be in the form of a Private Cloud as discussed above. Microsoft is the clear leader in this space now because of its traditional enterprise data center presence and its leadership in converting itself into a Cloud Company.

So, don’t be fooled by Cloud Pretenders. Apply the NIST litmus test.

This article originally posted 5/3/2017 at Bill’s other blog, Cloudy in Nashville.