2017-07-27T00:00:55+00:00 June 22nd, 2017|blog, Headquarters|

Technical. Modern. Collaborative.

These three words characterize Coretek Services' gorgeous new headquarters, located just west of downtown Farmington at 34900 Grand River Avenue, Farmington Hills, MI.

On June 8, Coretek showcased the new headquarters with an Open House, welcoming over 300 guests from around the nation, including clients, partners, friends, family, and members of our community. We cannot express enough gratitude to everyone who was able to make it, as well as to the sponsors who made it happen.

The highlight of the event was a Ribbon Cutting Ceremony, featuring speeches from Ron Lisch, CEO of Coretek Services; Drew Coleman, Business Manager of the Michigan Economic Development Corporation; Ken Massey, Mayor of Farmington Hills; and Mary Martin, Executive Director of the Greater Farmington Area Chamber of Commerce; as well as a State Tribute by State Representative Christine Greig.

TEK Talks focused on solving IT business problems were presented periodically throughout the day, including: Azure Management Suite; VDI: Clinical Workflows and Cloud Desktops; Post-Breach Response with Win10; Hyperconvergence with Hybrid Cloud; Transforming Outdated Devices; User-Experience Analytics; and Optimized Managed Infrastructure.

Additionally, the Open House featured a red carpet (see photos here), networking, food and drinks!

Ron Lisch had a vision to create a headquarters that is a tech center for the Metro Detroit area. He envisioned a place where employees, clients, prospective clients, and partners can come together to collaborate on creative solutions hands-on in a lab. The headquarters would foster the Coretek philosophy of bringing great people together to do great work.

Ron’s vision came to full fruition in May 2017, as the Coretek family officially lives and breathes in the new headquarters. Inside the new HQ is an innovation center, which hosts the different technology solutions Coretek offers and allows visitors to test them hands-on. Additionally, there is a large events center where user-group meetings, training sessions, and other events will be held; a company fitness center for employee use; and a collaboration center that gives employees and visitors a space to work together, network, or relax.

The modern space is bright, with the latest technology implemented throughout and pictures from around our community displayed on canvas in the conference rooms. The innovative HQ truly is a hub of technology and collaboration for the Metro Detroit area.

We will continue to host upcoming events at our new headquarters and are excited to welcome even more guests in the future to collaborate in the new work space.

Some photographs of the new space and Open House are below to give you a sneak peek at what we have in store for you when you visit! Thank you to Resa Abbey for the photography!

Ron Lisch – CEO of Coretek Services

Innovation Center – 360 degree view


Clint Adkins providing a demo of Coretek Services Cloud Solutions




Thank you to our Partner Sponsors:

Azure – Next Available IP Address…

2017-06-01T19:53:53+00:00 June 15th, 2017|Azure, blog, Cloud, PowerShell|

The other day, I was working at a customer location with one of their IT admins named Jim, designing an Azure IaaS-based deployment.  During our session, he challenged me to make an easy way for him and his coworkers to see if a particular address is available, or find the next available “local” address in their various Azure VNets.

While addressing can be handled with ARM automation, or just dynamically by the nature of IaaS server creation, in an enterprise it is usually necessary to document and review the build information with a committee before deployment.  As a result, Jim wanted to be able to detect and select the addresses he’d be using as part of build designs, and he wanted his coworkers to be able to do the same without him present.  So, this one is for Jim.

I wanted to write it to be user-friendly, with clear variables, and to be easy to read and easy to run.  …unlike so much of what you find these days…  😉

Copy the contents below to a file (if you wish, edit the dummy values for subscription, Resource Group, and VNet) and run it with PowerShell.  It will do a mild interview of you, asking about the necessary environment values (and whether you need to change them), and then asking you for an address to validate as available.  If the address you specified is available, it tells you so; if it is not, it returns the next few available values in the subnet in which the address you entered resides.

# TestIpaddress - Jeremy Pavlov @ Coretek Services 
# You just want to know if an internal IP address is available in your Azure VNet/Subnet.
# This script will get you there.  
# Pre-reqs:
# You need a recent version of the AzureRm.Network module, which is 4.0.0 at this writing.  
# You can check with this command: 
#     Get-Command -Module AzureRm.Network
# ...and by the way, I had to update my version and overwrite the old version with this command:
#     Install-Module AzureRM -AllowClobber
# Some things may need to be hard-coded for convenience...
$MySubscriptionName = "My Subscription"
$MyResourceGroup = "My Resource Group"
$MyVnet = "My VNet"
Start-Sleep 1
Write-Host ""
Write-Host "Here are the current settings:"
Write-Host "Current subscription: $MySubscriptionName"
Write-Host "Current Resource Group: $MyResourceGroup"
Write-Host "Current VNet: $MyVnet"
Write-Host ""
$ChangeValues = Read-Host "Do you wish to change these values? (Y/N)"
if ($ChangeValues -eq "Y")
{
  Write-Host ""
  $ChangeSub = Read-Host "Change subscription? (Y/N)"
  if ($ChangeSub -eq "Y")
  {
    $MySubscriptionName = Read-Host "Enter subscription name "
  }
  Write-Host ""
  $ChangeRg = Read-Host "Change resource group? (Y/N)"
  if ($ChangeRg -eq "Y")
  {
    $MyResourceGroup = Read-Host "Enter Resource group "
  }
  Write-Host ""
  $ChangeVnet = Read-Host "Change Vnet? (Y/N)"
  if ($ChangeVnet -eq "Y")
  {
    $MyVnet = Read-Host "Enter VNet "
  }
}
try
{
  $MySubs = Get-AzureRmSubscription -ErrorAction Stop
}
catch
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Unable to retrieve subscriptions."
}
$MySubsName = $MySubs.Name
Start-Sleep 1
Write-Host ""
if ($MySubsName -contains "$MySubscriptionName")
{
  Write-Host "You are logged in and have access to `"$MySubscriptionName`"..."
}
else
{
  Write-Host "It appears that you are not logged in."
  Write-Host ""
  $NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
  if ($NeedToLogin -eq "Y")
  {
    # Log in, and then target the desired subscription...
    Login-AzureRmAccount
    Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
  }
  elseif ($NeedToLogin -eq "N")
  {
    Write-Host "You must already be logged in then.  Fine. Continuing..."
    Write-Host ""
  }
  else
  {
    Write-Host "You made an invalid choice.  Exiting..."
    exit
  }
}
Start-Sleep 1
Write-Host ""
Write-Host "We will now check to see if a given IP address is available from any subnet in VNet `"$MyVnet`" "
Write-Host "...and if it is not available, provide the next few available on that subnet."
Start-Sleep 1
Write-Host ""
$MyTestIpAddress = Read-Host "What address do you wish to test for availability?"
$MyNetwork = Get-AzureRmVirtualNetwork -Name $MyVnet -ResourceGroupName $MyResourceGroup
$MyResults = Test-AzureRmPrivateIPAddressAvailability -VirtualNetwork $MyNetwork -IPAddress $MyTestIpAddress
$MyResultsAvailableIPAddresses = $MyResults.AvailableIPAddresses
$MyResultsAvailable = $MyResults.Available
Start-Sleep 1
if ($MyResultsAvailable -eq $False)
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Sorry, but $MyTestIpAddress is not available."
  Write-Host ""
  Write-Host -ForegroundColor Green "However, the following addresses are free to use:"
  Write-Host ""
  $MyResultsAvailableIPAddresses
  Write-Host ""
}
else
{
  Write-Host -ForegroundColor Green "Yes! $MyTestIpAddress is available."
}
Write-Host ""
Write-Host " ...Complete"

Now, if you know a better way to handle it, have tips for improvement, or find a bug, I’d love to hear about it (and so would Jim).  I hope it helps you out there…

Thanks, and enjoy!

Mobile Application Management with Intune

2017-07-27T00:00:55+00:00 June 2nd, 2017|blog, Intune, Mobility|

Mobile Application Management (MAM) is not a new feature.  However, Microsoft is always improving its MAM capabilities, and today Intune supports multiple operating systems on mobile devices.  This is no easy feat, since Microsoft is bound by the APIs that these other platforms, such as iOS and Android, offer.  These non-Microsoft operating systems are the most prevalent on mobile devices today, and with greater access to corporate data they pose a threat of data leakage.


We’ve all used application policies from Microsoft’s wide range of applications that have been around for many years.  For example:

  • GPOs control where icons are, where data is saved, what drives are mapped, etc.
  • Configuration Manager is used to push software out to authorized users and remove applications from those who are not
  • Active Directory provides a way to secure data on the network with Groups and Users

…And while Microsoft released Intune quite a few years back, I’ve only recently become a real fan since I’ve started using Mobile Application Management without enrollment.  Let’s take a quick look at how MAM allows you to offer access to corporate data without compromising too much of that flexibility that users enjoy by choosing their own device platform and bringing their own devices to work.


There’s nothing new about the concept of “Bring Your Own Device” (BYOD); it’s been around for quite some time.  Users bring their own devices and use them for daily business.  Traditionally, users would log on to a segmented Wi-Fi network that has no access to the corporate network.  This allowed IT admins to avoid managing additional network access to company resources, and to provide an open network for these devices as well as for guests visiting their offices.  However, with many companies moving data and apps to “the Cloud”, the focus is no longer on segmenting networks, and is instead on protecting the data.

Traditional office apps like Word, Excel, and PowerPoint have been available on mobile devices for quite some time now too, but they commonly required sending the documents to your phone and then opening them.  With Office 365, SharePoint Online, and OneDrive, these apps now have access to a massive amount of your corporate data.  Without protection, a user accessing this data on a mobile device could download sensitive company information to the device, unencrypted and unprotected from prying eyes.  This is where I think Mobile Application Management really comes into play.

A Real-World Example

Intune’s Mobile Application Management provides the capability to protect your sensitive information on the device, wherever that device is: in a hotel halfway across the world, left behind in a taxi cab, or picked from the pocket of your CEO.  The device may be compromised, but the data is secure, because of the way application management protects the data on the device.  Let me give you an example:

Bob is the CEO of an organization that provides financial information to customers across the financial markets.  The details of those finances could make or break a company’s stock profile if they were leaked.  Bob uses an iPhone to read emails and open documents while riding the subway in New York City.  During a busy morning, he’s shuffling to make his next appointment and accidentally drops his phone while exiting the train.

Because of the rich set of policies that Bob’s admin has configured with MAM, the data Bob accesses is not allowed to be stored on the device, and after 5 unsuccessful attempts to unlock the phone, the corporate apps and data would be wiped.  Even if someone were to guess the PIN on Bob’s phone, they would still have to guess his credentials, which are required to open any of the company apps that Bob uses.  It’s important to understand that:

  • The data is not on the device
  • There’s a high probability that someone would automatically wipe the device by guessing the PIN wrong 5 times
  • By the time Bob realizes he’s lost his phone, a quick call to his IT Department triggers the admin to send a remote wipe request to his device and receive confirmation of success
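
To make this concrete, here is a sketch of how a few of those protections map to settings in an Intune app protection policy, expressed as a Microsoft Graph managedAppProtection payload.  The property selection and values here are illustrative only, not Bob’s actual policy; consult the Graph documentation for the authoritative schema:

```json
{
  "@odata.type": "#microsoft.graph.iosManagedAppProtection",
  "displayName": "Example iOS app protection policy",
  "pinRequired": true,
  "maximumPinRetries": 5,
  "dataBackupBlocked": true,
  "saveAsBlocked": true,
  "periodOfflineBeforeWipeIsEnforced": "P3D"
}
```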

That was just one example; there are many more features that MAM can enable to protect your data.

Bringing MAM Home

Mobile Application Management is easy to enable and deploy to your users.  With proper communication and process, your company data will be secured.  Don’t wait for one of your end users to accidentally leak sensitive information that could damage your organization’s reputation.  Identify those who are using mobile devices and protect them sooner rather than later.

The Future of Azure is Azure Stack!

2017-07-27T00:00:55+00:00 May 18th, 2017|Azure, blog, Cloud|



I realize that the title above might be a bit controversial to some. In this blog post I will attempt to defend that position.

The two diagrams above, taken from recent Microsoft public presentations, symbolically represent the components of Public Azure and Private Azure (Azure Stack).  If you think they have a lot in common, you are right.  Azure Stack is Azure running in your own data center.  Although not every Azure feature will be delivered as part of Azure Stack at its initial release (and some may never be delivered that way, because they require enormous scale beyond the reach of most companies), it is fair to say that the two are more alike than they are different.

Back in 2012 I wrote a blog post on building cloud-burstable, cloud-portable applications.  My thesis in that post was that customers want to be able to run their applications on local hardware in their data center, on resources provided by cloud providers, or even on resources provided by more than one cloud provider, and that they would like a high degree of compatibility that would allow them to defer the choice of where to run an application, and even change their mind as workload dictates.

That thesis is still true today.  Customers want to be able to run an application in their data center; if they run out of capacity, they would like to shift it to the cloud, and later potentially shift it back on-premises.

That blog post took an architectural approach using encapsulation and modularity of design to build applications that could run anywhere.

The Birth of Azure

A bit of additional perspective might be useful.  Back in 2007, I was working as an Architect for Microsoft when I came across what would eventually become Azure (in fact, that was before it was even called Azure!).  I had worked on an experimental cloud project years before at Bell Labs called Net-1000.  At the time, AT&T was planning on turning every telephone central office into a data center providing compute power and data storage at the wall jack.  That project failed for various reasons, some technical and some political, as documented in the book The Slingshot Syndrome.  The main technical reason was that the computers of the day were minicomputers and mainframes, and the PC was just emerging on the scene.  So the technology that makes today’s cloud possible was not yet available.  Anyway, I can say that I was present at the birth of Azure.  History has proven that attaching myself to Azure was a pretty good decision.  🙂

The Azure Appliance

What many do not know is that Azure Stack is actually Microsoft’s third attempt at providing Azure in the data center.  Back in 2010, Microsoft announced the Azure Appliance, which was to be delivered by a small number of vendors.  It never materialized as a released product.

Azure Pack and the Cloud Platform System

Then came Windows Azure Pack and the Cloud Platform System in 2014, also delivered in appliance form by a small number of selected vendors.  Although it met with some success, is still available today, and will be supported going forward, its clear successor will be Azure Stack.  (While Azure Pack is an Azure-like emulator built on top of System Center and Windows Server, Azure Stack is real Azure running in your data center.)

Because of this perspective, I can say that Azure Stack is Microsoft’s third attempt at Azure in the data center, and one that I believe will be very successful.  Third time’s a charm.  🙂

Azure Stack

The very first appearance of Azure Stack was in the form of a private preview, and later a public preview: “Azure Stack Technical Preview 1”.  During the preview it became clear that those attempting to install it were experiencing difficulties, many of them related to the use of hardware that did not match the recommended minimum specifications.

Since Azure Stack is so important to the future of Azure, Microsoft decided to release it in the form of an appliance to be delivered by three vendors (HP, Dell & Lenovo) in the Summer of 2017.  According to Microsoft, that does not mean that there will be no more technical previews, or that no one will be able to install it on their own hardware.  (It is generally expected that there will be additional Technical Previews, perhaps even one at the upcoming Microsoft Ignite conference later this month.)  It simply means that the first generation will be released in a controlled fashion, through appliances provided by those vendors, so that Microsoft and those vendors can ensure its early success.

You may not agree with Microsoft (or me), but I am 100% in agreement with that approach.  Azure Stack must succeed if Azure is to continue to succeed.

This article originally posted 9/20/2016 at Bill’s other blog, Cloudy in Nashville.

How to protect against the next Ransomware Worm

2017-07-27T00:00:58+00:00 May 15th, 2017|blog, Ransomware, Security|

Hopefully you were one of the prepared organizations that avoided the latest ransomware worm that made its way around the globe this past week.  This worm crippled dozens of companies and government entities, impacting over 230K computers in 150 countries.  Most of the infections were in Europe, Asia, and the Middle East, so if you did not get hit, you were either prepared or lucky.  This blog post will help you be prepared for when this happens again, so that you don’t have to rely on luck.

Patch everything you can, as quickly as you can

The exploit at the root of this ransomware worm was resolved by MS17-010, which was released in March of 2017, giving organizations more than enough time to download, test, pilot through UAT (User Acceptance Testing), and deploy to production.  While introducing new patches and changes to your environment carries the risk of breaking applications, there is far more risk in remaining unpatched, especially for security-specific patches.  Allocate the proper resources to test and roll out patches as quickly as you can.
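
Before assuming a machine is covered, you can spot-check it for the fix.  The sketch below uses Get-HotFix; note that the KB number for MS17-010 varies by OS, and KB4012212 (the March 2017 security-only update for Windows 7 / Server 2008 R2) is only an example here, so substitute the right ID for your platform:

```powershell
# Spot-check one machine for an MS17-010-related update (illustrative KB ID).
$Kb = "KB4012212"  # Windows 7 / Server 2008 R2 security-only update; varies by OS
$Patch = Get-HotFix -Id $Kb -ErrorAction SilentlyContinue
if ($Patch)
{
  Write-Host -ForegroundColor Green "$Kb is installed (applied $($Patch.InstalledOn))."
}
else
{
  Write-Host -ForegroundColor Yellow "$Kb was not found; verify MS17-010 coverage another way."
}
```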

Run the newest OS that you can

While the EternalBlue exploit that was patched by MS17-010 was applicable to every Windows OS, you were safe if you were running Windows 10, due to a security feature called ELAM (Early Launch Anti-Malware).  Many of the infected machines were running Windows XP or Server 2003, which did not get the MS17-010 patch (Microsoft has since released a patch for these OS variants; please apply it if you still have them in your environment).  It is not possible to secure Windows XP or Server 2003.  If you insist on running them in your environment, assume that they are already breached and that any information stored on them has already been compromised.  (You don’t have any service accounts with Domain Admin privileges logging into them, right?)


Use Proper Firewall Rules

Proper perimeter and host firewall rules help stop and contain the spread of worms.  While there were early reports that the initial attack vector was e-mail, these are unconfirmed.  It appears that the worm was able to spread via the 1.3 million Windows devices that have SMB (port 445) open to the Internet.  Once inside the perimeter, the worm was able to spread to any device that had port 445 open and did not have MS17-010 installed.
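
If you cannot block port 445 at the perimeter, a host firewall rule is a reasonable backstop.  Here is a minimal sketch using the built-in NetSecurity cmdlets; the rule name is made up, and you should scope it carefully before using anything like it, since blocking SMB outright also breaks legitimate file and print sharing:

```powershell
# Block inbound SMB (TCP 445) on the Public profile only (illustrative rule).
New-NetFirewallRule -DisplayName "Block inbound SMB 445" `
  -Direction Inbound -Protocol TCP -LocalPort 445 `
  -Profile Public -Action Block
```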

Turn off Unnecessary Services

Evaluate the services running in your desktop and server environment, and turn them off if they are no longer necessary.  SMB1 is still enabled by default, even in Windows 10.
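
For example, on Windows 8.1/Server 2012 R2 and later you can check and disable SMB1 with the built-in SMB cmdlets.  This is a sketch, so test it first, as some legacy devices (older NAS appliances, printers, and scanners) still speak only SMB1:

```powershell
# See whether the server-side SMB1 protocol is currently enabled...
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
# ...and turn it off if nothing in your environment still needs it.
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
# On Windows 10 you can also remove the SMB1 optional feature entirely:
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
```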


These types of attacks are going to be the new normal, as they are extremely lucrative for the organizations behind them.  Proper preparation is key, as boards are starting to hold both the CEO and the CIO responsible in the case of a breach.  And while you may have cyber-security insurance, it may not pay out if you are negligent by not patching, or by running an OS that stopped receiving security updates 3 years ago.  Be prepared for the next attack; you may not be as lucky next time.

Additional Layers of Defense to Consider

For those over-achievers, additional layers of defense can prove quite helpful in containing a breach.
1.    Office 365 Advanced Threat Protection – Protect against bad attachments
2.    Windows Defender Advanced Threat Protection – Post-breach response, isolate/quarantine infected machines
3.    OneDrive for Business – block known bad file types from syncing

Good luck out there.

Disaster Recovery: Asking the wrong question?

2017-07-27T00:00:58+00:00 May 11th, 2017|Azure, blog, Cloud, Disaster Recovery|


In my role as an Azure specialist, I get asked a lot of questions about Disaster Recovery.  IMHO, people almost always ask the wrong question.

Usually it goes something like this: “I need Disaster Recovery protection for my data center.  I have N VMs to protect.  I have heard that I can use Azure Site Recovery either to facilitate Disaster Recovery to my backup data center, or even to use Azure as my backup data center.”  That is true.  🙂

In a previous lifetime, I worked on a Disaster Recovery testing team for a major New York-based international bank.  We learned two major principles early on:

1. It is all about workloads, since different application workloads have different Disaster Recovery requirements.  Every workload has unique Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO); not all workloads are created equal.  For instance, email is a critical workload in the typical enterprise, and an outage of more than a few hours would affect business continuity significantly.  Other application workloads (such as Foreign Exchange Trading, Order Processing, Expense Reporting, etc.) would have more or less stringent RTO and RPO requirements.

2. It really is all about Business Continuity.  Disaster Recovery is a business case.  Providing perfect Disaster Recovery for all application workloads would be extremely expensive, and in many cases not cost effective; in effect, there is an exponential cost curve.  So it is all about risk management and cost/benefit.

So where does that leave us?

1. Evaluate your Disaster recovery requirements on a workload by workload basis.

2. Plan how to implement it considering the objective of Business Continuity, RTO and RPO.

3. Use Azure Site Recovery to make it happen.  🙂

This article originally posted 4/1/2016 at Bill’s other blog, Cloudy in Nashville.

Walk for Wishes 2017…

2017-07-27T00:00:58+00:00 May 8th, 2017|Announcements, blog|

We did it again!  As the 19th Annual Walk For Wishes® approached, fellow Coreteker Sarah Roland (a Walk veteran since 2011) assembled her annual “Sarah’s Dream Team” and put out the call for participation, as she has done every year since 2013.  And again this year, fellow Walk veterans Avi Moskovitz (since 2016) and I (since 2013) got our families involved, headed to the Detroit Zoo, and joined in the effort.

We are proud to have been a small (but important) part of the more than $500,000 (and counting) raised through the efforts of more than 6,500 walkers!  It was a really great crowd, and there were tons of interesting things to do and see along the way (it’s the Detroit Zoo, after all), supported by care and food volunteers throughout.  And given some threatening weather, we were amazed that we managed to get the whole day in with no rain and just a bit of cold.  That’s 5 years in a row of good walking weather; someone must be watching out for us…

We are truly thankful to the folks at Coretek who donated so generously to us and our team, both individually and as a company, helping us raise over $1,400 by walk time.  We hope you are as moved as we are by the work that the organization does and the good it brings to those who, in many cases, don’t have a whole lot of good in their lives at the moment their wish is granted.  For instance, here’s the moment that Wish kid Emily discovered at the Walk that she was heading to Walt Disney World® Resort in about a month!


And did I mention that you can still donate to our team!?!?  Just click here to go to Sarah’s Dream Team page, and follow the process to donate to the team or to any of us individually.

Oh, and by the way… we decided that we are on a mission to get more members of our team for 2018, so you may as well just plan to join now…  😉

See you then!

A Cloud is a Cloud is a Cloud?

2017-07-27T00:00:58+00:00 May 4th, 2017|Azure, blog, Cloud|

It never fails to amaze me that seemingly every vendor in the industry, every hosting company, and every virtualization or database vendor that puts something in one of the Public Clouds is quick to claim that “they have a Cloud”.  A while back, a term was even invented for this: “CloudWashing” (i.e., labeling everything you have as “Cloud”).

Let’s apply some common sense.  Back in 2011, the National Institute of Standards and Technology (NIST) produced a concise and clear definition of what it means for something to be a Cloud.  You can read about it here.  This is the standard against which all Cloud pretenders should be measured.  In case you don’t have time to read it, I will summarize.

The NIST model defines a Cloud as “enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”  (The emphasis is mine.)

It is composed of:

  • 5 Essential characteristics
  • 3 Service models
  • 4 Deployment models

The 5 essential characteristics are:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

The 3 Service Models are:

  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Infrastructure as a Service (IaaS)

The 4 Deployment Models are:

  • Public Cloud
  • Private Cloud
  • Community Cloud
  • Hybrid Cloud

Let’s take the definitions apart a bit and see what makes them tick:

The 5 Essential Characteristics

On-demand self-service: Being able to self-provision and de-provision resources as needed, without vendor involvement.

Broad network access: Resources are available over a network and accessed through standard protocols from all kinds of devices. For Public Clouds, these resources are typically available all over the world, with data centers that accommodate each country’s needs for data sovereignty and locality.

Resource pooling: Lots of resources that can be allocated from a large pool. Often the user does not know, or need to know, their exact location; although, in the case of Public Clouds like Microsoft Azure, locality is often under the user’s control. In many cases the pool of resources appears to be nearly infinite.

Rapid elasticity: The ability to scale up and down as needed at any time, in some cases automatically, based on policies set by the user.

Measured service: Providing the ability to track costs and allocate them.  Public Cloud resources are often funded using an Operating Expenditure (OpEx) rather than the Capital Expenditure (CapEx) accounting model used where hardware purchases are involved.

The 3 Service Models

In the case of Public Clouds, these service models are defined by who supports the various levels in the technology stack: the customer or the cloud vendor.

This Microsoft Azure diagram has been in existence, in one form or another, at least since the 1970s:


It illustrates who is responsible for which part of the stack for each service type: the vendor (in blue) and the customer (in black).

Software as a Service (SaaS) asks the customer to take no responsibility for the underlying hardware and software stack. In fact, the customer’s only responsibility is to use the service and pay for it.

Platform as a Service (PaaS) lets the vendor manage the lower layers of the technology stack, while the customer is responsible for the Application and Data layers.

Infrastructure as a Service (IaaS) corresponds most closely to the division of what the customer must manage in a typical data center and what the vendor is responsible for. With IaaS, the vendor takes care of everything from part of the operating system layer down to the bottom of the stack. Notice that in Azure the operating system layer is partly the vendor’s responsibility and partly the customer’s; the customer is still responsible for applying operating system patches, for instance. (This may differ slightly for other Public Clouds.)

I often include a slide in my Cloud presentations that addresses the question of what to use when. In my mind the decision tree is fairly obvious:

  • It’s SaaS until it isn’t
    • Then It’s PaaS until it isn’t
      • Then It’s IaaS
  • Or Hybrid with any of the above

If you can find a SaaS service that meets your objectives, use it. Let the vendor have all the stack support headaches. If you can’t, then consider using PaaS by either buying a 3rd party application or building your own.

Finally, if none of these approaches work, you can always take advantage of IaaS, since that most closely matches what you have in your own data center. Even there, however, the vendor will take care of a great deal of the plumbing. (As an aside, IaaS is often the approach of choice for a “lift and shift” migration of what you already have running in your data center up into the cloud.)

And yes, I know we haven’t discussed Hybrid yet, but patience, we will get there.

The 4 Deployment Models

A Public Cloud is a pool of computing resources offered by a vendor, typically, on a “pay as you go” basis. It supports Self-Provisioning by its customers as discussed above. It also often has a massive global presence and appears to have near-infinite capacity. In general, it is available to anyone.

Gartner defines a large number of Leadership Quadrants, but the ones most relevant to our discussion are those for Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). The Gartner Leadership Quadrant for IaaS includes just Amazon and Microsoft; the one for PaaS includes just Microsoft Azure and Salesforce. There are other, lesser Public Cloud vendors, including Google, IBM, and Rackspace.

One other point: unless a company can spend the many billions of dollars necessary to create a global data center presence, it is hard for it to be considered a Public Cloud leader.

A Private Cloud, on the other hand, is normally local to a single company and located on its own premises. If a customer creates a data center, or a complex of data centers, that conforms to the NIST definitions discussed herein, then it can rightly be called a “Private Cloud”. Be cautious here, however, and remember the term “CloudWashing” defined above: just because a customer has a large number of computers in a data center does not make it a Private Cloud, no matter how vehemently they insist on calling it one.

Although there is no requirement for the architectures of Public and Private Clouds to be the same, most companies agree that compatibility between them would be helpful to support CloudBursting: that is, the ability to move applications and data freely between the data center and the Public Cloud. (See the discussion of Hybrid Cloud below.)

A Community Cloud is a variation of a Public Cloud that restricts its clientele to a specific community of users. For instance, there are Community Clouds for Government and Education, as well as for other communities. Each of these may have different levels of security and compliance requirements or other unique characteristics. The NIST document does state that a Community Cloud may be operated for the community by one of the companies in it; however, it is typically operated by a Public Cloud vendor using a walled-off subset of its resources.

A Hybrid Cloud is typically formed by the integration of two or more Public and/or Private Clouds. In the typical case, a company utilizes both a Public Cloud and resources in its own data center structured as a Private Cloud, with strong networking integration between them.

A high degree of standardization between them is desirable so that resources can be distributed, load-balanced, or cloudburst across them. This split is often made for security and compliance reasons: a company may feel that some data and/or processing must remain behind its own firewall, while other data and/or processing can take advantage of a Public Cloud.

In the case of Microsoft technology there is one example where compatibility will be extremely high and advantageous. I expect the Private Cloud space to be dominated, at least in data centers utilizing Microsoft technology, by the new Azure Stack appliance coming out this summer from five initial vendors. Make no mistake about it: Azure Stack is Azure running in your own data center. In effect, Azure Stack appears to the Public Azure Cloud much like Just Another Azure Region (JAAR?). That high degree of compatibility will help facilitate the Hybrid Cloud architecture discussed above. I have already blogged about Azure Stack and why I feel it is so important, so I will not go into detail here; see this blog post: The Future of Azure is Azure Stack.

We should also distinguish between a true Hybrid Cloud and a Hybrid IT Infrastructure. In the case of the latter, the resources in the data center need not be in the form of a Private Cloud as discussed above. Microsoft is the clear leader in this space now because of its traditional enterprise data center presence and its leadership in converting itself into a Cloud Company.

So, don’t be fooled by Cloud Pretenders. Apply the NIST litmus test.

This article was originally posted 5/3/2017 at Bill’s other blog, Cloudy in Nashville.

Why use Cloud App Security when my firewall already does this?

2017-07-27T00:00:58+00:00 April 12th, 2017|blog, Cloud, Microsoft, Microsoft Cloud Solution Provider|

Microsoft’s Cloud App Security (CAS) is a new product that comes with Microsoft’s Enterprise Mobility + Security E5 line of products.  The solution is a cloud-based application built on Azure Active Directory, but it can also be used independently, although the dataset will not be as rich.

The idea is for customers to be able to gain deep insight into what apps their end-users are consuming, identify data drift and leakage, "sanction" or "unsanction" applications, and even generate a block script to block unsanctioned apps at the firewall level.  There are two paths to gathering this data:  Firewall Logs and Connected apps.


You can discover information by manually importing firewall logs, or even set up a connector VM which will gather the logs and upload them to CAS for you.  This is not much different from technologies offered by the firewall providers themselves and, in many cases, will not react as quickly as those vendor-provided solutions.  The connector, by default, uploads logs from the firewall every 20 minutes and imports them into CAS.
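Conceptually, the discovery side of this is log aggregation: take every firewall log line, attribute the traffic to a destination app, and total up the usage. A minimal sketch, assuming a simplified, hypothetical log format (real firewall exports vary by vendor):

```python
from collections import Counter

# Hypothetical, simplified firewall log lines: "timestamp user dest_host bytes"
LOG_LINES = [
    "2017-04-12T09:00:01 alice dropbox.com 52300",
    "2017-04-12T09:00:05 bob outlook.office365.com 1200",
    "2017-04-12T09:01:10 alice dropbox.com 98000",
]

def summarize_app_usage(lines):
    """Aggregate traffic volume per destination host -- roughly what a
    discovery tool does with imported firewall logs, minus the app
    categorization and risk scoring a product like CAS layers on top."""
    usage = Counter()
    for line in lines:
        _ts, _user, host, nbytes = line.split()
        usage[host] += int(nbytes)
    return usage

print(summarize_app_usage(LOG_LINES))
```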

What I believe sets Cloud App Security apart from the firewall-provider solutions is its integration with Connected apps.  A Connected app is an application for which CAS leverages APIs provided by the cloud provider.  Each provider has its own framework and limitations, so the functionality for each may depend on how far the provider has extended its API.

There are currently only a few Connected apps that CAS supports, but I’ve found that the biggest bang for your buck will be the Office 365 suite of applications.  This allows CAS to see usage of the standard suite of Office apps and your Azure AD-connected users.


With the data gathered from the Connected apps, you can see information on File usage, owner information, app name, Collaborators, and more.  You will be able to tell who is accessing what files and who those files have been shared with.  You can drill down on particular user activity and see all of the apps and traffic volumes for their particular usage.

The Cloud Discovery Dashboard provides a rich view of information from a graphical perspective including dashboard items like App Categories which highlight usage based on categories such as CRM, Collaboration, Accounting, Storage apps, and more.  Other items on the dashboard show top discovered apps, top users, and even a geographical map of usage based on where the apps are being used.


Through alerts you may be made aware of Risky IP addresses, Mass downloads, New Cloud app usage, and more.  If a particular user is a concern, you even have the ability to suspend that user’s usage of a particular connected app.  This adds a layer of security that a standard firewall report may not provide – especially if the user roams to another location off-premises where your firewall is not present.

Using the policies feature, I can set an alert, notify the user and CC their manager, or even suspend the user, based on the several configurable policies available to me via the console.  This allows me to mitigate threats as they happen, instead of waiting for a review of alerts or logs, possibly days or even weeks after the events occurred.
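The alert/notify/suspend escalation described above amounts to a threshold policy evaluated per user. Here is an illustrative sketch; the function name, action strings, and thresholds are all hypothetical, not the CAS API:

```python
def evaluate_policy(user, downloads_last_hour, threshold=100):
    """Return the actions a mass-download policy might trigger for a user.
    Escalates from alert + notification to suspension as activity grows.
    Names and thresholds are illustrative only."""
    actions = []
    if downloads_last_hour > threshold:
        actions.append(f"alert:mass-download:{user}")
        actions.append(f"notify:{user}+manager")
        if downloads_last_hour > threshold * 5:
            # Well past the threshold: suspend rather than just alert.
            actions.append(f"suspend:{user}")
    return actions

print(evaluate_policy("jdoe", downloads_last_hour=650))
```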

To summarize: with Cloud App Security, Microsoft is opening the door to rich integration with cloud-based apps and providing another avenue to secure your corporate data.  With its deep integration with Office 365, those that have E5 licensing should definitely take advantage of this product.  Even those who are interested but not licensed can subscribe to a trial version and use the firewall discovery solution to get an immediate view of what’s being used internally.  This will allow your organization to have that much-needed discussion on BYOD and the security risks that ultimately accompany an open-door policy.

If set up properly, adding Cloud App Security to your environment can greatly increase the level of security your organization has regarding the mobility of your data and users!



5 Tips for connecting your Azure space to On-Premises…

2017-07-27T00:00:58+00:00 March 26th, 2017|Azure, blog, Cloud, Microsoft Cloud Solution Provider|

Often, the easiest thing about using the Azure Cloud is getting started.  Whether it’s creating a VM, establishing a web service, etc., it’s usually as easy as a few clicks and you’ve got some “thing” running in a bubble.  That’s the easy part.

It’s what you do right after that can often be a challenge, since in many cases it involves inter-connectivity with your on-premises environment.  And at that point, whether it’s early on or after long, deliberate design work, you may want to sit down with your firewall/network team (who may never even have heard the word “Azure” before) and talk about exactly how to connect.

Please be mindful that the router/firewall people work in a different space, and may have a different approach to what you’re trying to accomplish.  They may prefer to use third-party firewall/tunnel capabilities with which they are already familiar, or utilize the off-the-shelf options that Microsoft provides.  Note: this article is all about the built-in Microsoft options; we’ll have a discussion about third-party items in a future article.

Specifically, when working with the native Azure connectivity options, the first thing you’ll want to do is point yourself and others at this URL, which provides most everything needed to get started:

…note that there are some great sub-pages there too, to take the conversation from “Why are we doing this” to “What is Azure” to “Let’s submit a ticket to increase our route limitation“.

But speaking as a server/cloud guy, I wanted to give you some simple but important tips you’ll need to know off the top of your head when speaking to your router people:

Tip #1
There are two types of Gateways for on-prem connections to the cloud: ExpressRoute and VPN.  ExpressRoute is awesome and preferred if you have that option.  If you don’t know what ExpressRoute is already, you probably can’t afford it or don’t need it — which leaves you with VPN.  The good news is that VPNs can be perfectly fine for an Enterprise if you set them up right and mind them well.

Tip #2
I’m mostly writing these VPN options down because I always forget them, but you need to know the definitions too:
“PolicyBased” used to be called “Static Routing”, and your router person knows it as IKEv1
“RouteBased” used to be called “Dynamic Routing”, and your router person knows it as IKEv2

Tip #3
PolicyBased can only be used with the “Basic” SKU, and only permits one tunnel and no transitive routing.  You probably do not want this except in the most simple of configurations.  Ironically, your router/firewall person will most likely configure your VPN this way if you don’t instruct them otherwise.  Watch out!

Tip #4
The firewall you have may not be supported.  If it’s not, one of two things applies: you may be forced into PolicyBased (read Tip #3), or, in many cases, it will work just fine even though it’s not supported.  But you might be on your own if you have a problem, so know what you’re getting into.

Tip #5
Please calculate the total number of routes and gateways and such that you’ll be permitted based on the SKUs you’ve chosen.  Make sure that your fanciful networking dreams will all come true when you finally get where you’re going.  Everything in Azure has a quota or limitation of some sort, and you can almost always get them raised from the low original limit, but some things just aren’t possible without changing SKUs.
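That calculation is worth writing down before you deploy. A tiny sketch of the idea, where the per-SKU limits are placeholders only (always confirm the current numbers against the Azure documentation for your region and gateway generation):

```python
# Placeholder per-SKU limits for illustration -- verify against Azure docs.
SKU_LIMITS = {
    "Basic":  {"s2s_tunnels": 10},
    "VpnGw1": {"s2s_tunnels": 30},
}

def fits_in_sku(sku, tunnels_needed):
    """True if the planned number of site-to-site tunnels fits the SKU's
    limit; False means it's time to change SKUs (or rethink the design)."""
    return tunnels_needed <= SKU_LIMITS[sku]["s2s_tunnels"]

print(fits_in_sku("Basic", 12))   # False
print(fits_in_sku("VpnGw1", 12))  # True
```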

Extra Bonus Tip
Look into the “Network Watcher” preview for validating some of your networking flow and security, and for an instant dashboard of the quotas (mentioned in Tip #5).  It’s only available in some locations right now, but it looks like it will be quite nice.

…and that’s just scratching the surface, but those are some of the things I run into out there, and I thought it might save you a minute… or a day… or more…

Good luck out there!
