Which Azure Plan is Right for You?

July 27th, 2017 | Azure, blog, Cloud, Microsoft, Microsoft Infrastructure, Microsoft Cloud Solution Provider, Office 365

As you start to explore the world of Microsoft Azure cloud services, you will quickly see that there are many options.  Let’s discuss the three types of Microsoft purchasing programs available to you.

#1 – Pay-As-You-Go Subscriptions

Pay-As-You-Go subscriptions are simple to set up and simple to use.  There are no minimum purchases or commitments; you pay for your consumption by credit card on a monthly basis, and you can cancel at any time.  This model is primarily for infrastructure environments that are set up for a temporary purpose.  It’s important to understand that organizations using this model pay full list price for consumption and do not have direct support from Microsoft.

#2 – Microsoft Enterprise Agreement (EA)

Microsoft Enterprise Agreements are commitment-based Microsoft Volume Licensing agreements with a contractual term of three years.  Enterprise Agreement customers can add Azure to their EA by making an upfront monetary commitment for Azure services.  That commitment is consumed throughout the year by using a combination of the wide variety of Microsoft cloud services, including Azure and Office 365.  It is paid annually in advance, with a quarterly true-up for overages; any unused commitment is still charged.  If you are a very large enterprise, the greatest advantage of an EA is having a direct relationship with a Microsoft sales team, and EAs also offer discounts based on your financial commitment.  While there are many pros to the EA approach, understanding and controlling the cost of consumption can be a challenge for customers within EAs.  I recently took over the management of our own EA, and can personally attest that it can be very complicated.

#3 – Cloud Solution Provider (CSP)

When using Microsoft cloud services through the Cloud Solution Provider (CSP) program, you work directly with a partner to design and implement a cloud solution that meets your unique needs.  Cloud Solution Providers support all Microsoft cloud services (i.e., Azure, Office 365, Enterprise Mobility Suite, and Dynamics CRM Online) through a single platform.  CSP is similar to a Pay-As-You-Go subscription in that there are no minimum purchases or commitments; you are invoiced monthly based on actual consumption (either via invoiced PO or credit card, your choice), and you can cancel at any time.  This significantly simplifies your Azure and Office 365 billing process!  CSP offers many advantages over Pay-As-You-Go subscriptions and Enterprise Agreements, and in most cases it can be a more cost-effective solution.

As a CSP, Coretek helps customers optimize their consumption cost by working with them to ensure they have the right Azure server types assigned to their workloads.  We also work with customers to shut down services that are minimally used after business hours.  As part of Coretek’s Managed Support, our team provides proactive maintenance, including monitoring and patching of your servers, to ensure your infrastructure is running in an optimal manner.  Coretek’s Azure Management Suite (AMS) Portal enables business users to understand where their consumption cost is going; it can display real-time consumption cost by department and project.  It also shows, in a simple graphical format, which Microsoft licenses are being utilized and to whom they are assigned.

Coretek Services – Improving End User Experience and IT Efficiency.

Microsoft Azure – Global. Trusted. Hybrid.  This is cloud on your terms.

Azure – What tags are we using again…?

July 7th, 2017 | Azure, blog, PowerShell

Have you wondered what tags are assigned to all your Azure VMs?  Do you not have ARM Policies in place to enforce your preferred tags yet?

I was in just such a situation the other day.  Just like in my previous post on quick Azure-related scripts, I was working with a customer who just wanted a quick utility to report on which VMs are properly tagged (or not) in their implementation, without having to fish around the Portal.  No ARM Policies there yet…  *yet*.
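(And incidentally, when you do get around to enforcing tags with ARM Policy, the gist is pretty small.  Here's a minimal sketch using the AzureRM cmdlets; the "CostCenter" tag name, the policy name, and the subscription ID are all made-up placeholders, so treat it as a starting point rather than the definitive policy:)

# Hypothetical sketch: deny creation of resources that lack a "CostCenter" tag.
# The tag name, policy name, and subscription ID below are made-up placeholders.
$PolicyJson = @"
{
  "if": { "field": "tags.CostCenter", "exists": "false" },
  "then": { "effect": "deny" }
}
"@
$Definition = New-AzureRmPolicyDefinition -Name "require-costcenter-tag" `
  -Description "Deny any resource missing a CostCenter tag" -Policy $PolicyJson
New-AzureRmPolicyAssignment -Name "require-costcenter-tag" `
  -PolicyDefinition $Definition -Scope "/subscriptions/<your-subscription-id>"

But policy enforcement is a project for another day; first, we just needed the report.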

So I whipped this together.  And much like that previous script, you just paste this into a PS1 file, set the subscription name in the variable, and then run it.

# GetVmTags - Jeremy Pavlov @ Coretek Services 
# Setting the subscription here
$MySubscriptionName = "My Subscription"
# Set some empty arrays
$vmobjs = @()
$vmTagsKeys = @()
$vmTagsValues = @()
$NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
if ($NeedToLogin -eq "Y")
{
  Login-AzureRmAccount
  Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
}
elseif ($NeedToLogin -eq "N")
{
  Write-Host "You must already be logged in then.  Fine. Continuing..."
}
else
{
  Write-Host ""
  Write-Host "You made an invalid choice.  Exiting..."
  exit
}
#
# Get the VMs (after the login check, so this call doesn't fail if you weren't logged in yet)...
$vms = Get-AzureRmVm
#
foreach ($vm in $vms)
{
    Write-Host ""
    $vmname = $vm.name
    $MyResourceGroup = $vm.ResourceGroupName
    Write-Host "Checking tags for VM $vmname... "
    Start-Sleep 1
    $vmTags = (Get-AzureRmVM -name $vmname -ResourceGroupName $MyResourceGroup).Tags
    $vmTagsCount = $vmTags.Count
    if ($vmTagsCount -gt 0)
    {
      $vmTagsKeys = $vmTags.Keys -split '[\r\n]'
      $vmTagsValues = $vmTags.Values -split '[\r\n]'
      for ($i=0; $i -lt $vmTagsCount; $i++)
      {
        $CurrentTagKey = $vmTagsKeys[$i]
        $CurrentTagValue = $vmTagsValues[$i]
        Write-Host -ForegroundColor Green "Key : Value -- $CurrentTagKey : $CurrentTagValue"
      }
    }
    else
    {
      Write-Host -ForegroundColor Yellow "No tags for $vmname"
    }
}

The results should look something like this, except hopefully a lot more serious and business-y:

Have fun with it, change it up, and let me know what you do with it…   I hope it helps.  Enjoy!
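One example of changing it up: if you'd rather have a spreadsheet than console output, a small variation (sketched here with a made-up output path) can collect the tags into objects and export them:

# Variation on the above: export every VM's tags to CSV instead of writing to the console.
# The output path is a made-up example -- change it to suit your environment.
$vms = Get-AzureRmVm
$TagReport = foreach ($vm in $vms)
{
  $vmTags = (Get-AzureRmVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName).Tags
  foreach ($key in $vmTags.Keys)
  {
    [pscustomobject]@{
      VMName        = $vm.Name
      ResourceGroup = $vm.ResourceGroupName
      TagKey        = $key
      TagValue      = $vmTags[$key]
    }
  }
}
$TagReport | Export-Csv -Path "C:\Temp\VmTagReport.csv" -NoTypeInformation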

Azure – Next Available IP Address…

June 15th, 2017 | Azure, blog, Cloud, PowerShell

The other day, I was working at a customer location with one of their IT admins named Jim, designing an Azure IaaS-based deployment.  During our session, he challenged me to make an easy way for him and his coworkers to see if a particular address is available, or find the next available “local” address in their various Azure VNets.

While addressing can be handled with ARM automation — or just dynamically by the nature of IaaS server creation — in an Enterprise it is usually necessary for a committee to document and review the build information before deployment.  As a result, Jim wanted to be able to detect and select the addresses he’d be using as part of build designs, and he wanted his coworkers to be able to do the same without him present.  So, this one is for Jim.

I wanted to write it to be user-friendly, with clear variables, easy to read, and easy to run.  …unlike so much of what you find these days…  😉

Copy the contents below to a file (if you wish, edit the dummy values for subscription, Resource Group, and VNet) and run it with PowerShell.  It will conduct a mild interview, asking about the necessary environment values (and whether you need to change them), and then asking for an address to validate.  If the address you specify is available, it tells you so; if it is not, it returns the next few available addresses from the subnet in which the address you entered resides.

# TestIpaddress - Jeremy Pavlov @ Coretek Services 
# You just want to know if an internal IP address is available in your Azure VNet/Subnet.
# This script will get you there.  
#
# Pre-reqs:
# You need a recent version of the AzureRm.Network module, which is 4.0.0 at this writing.  
# You can check with this command: 
#     Get-Command -Module azurerm.network
# …and by the way, I had to update my version and overwrite the old version with this command:
#     Install-Module AzureRM -AllowClobber
#
# Some things may need to be hard-coded for convenience...
$MySubscriptionName = "My Subscription"
$MyResourceGroup = "My Resource Group"
$MyVnet = "My VNet"
#
Start-Sleep 1
Write-Host ""
Write-Host "Here are the current settings:"
Write-Host "Current subscription: $MySubscriptionName"
Write-Host "Current Resource Group: $MyResourceGroup"
Write-Host "Current VNet: $MyVnet"
Write-Host ""
$ChangeValues = read-host "Do you wish to change these values? (Y/N)"
if ($ChangeValues -eq "Y")
{
  Write-Host ""
  $ChangeSub = read-host "Change subscription? (Y/N)"
  if ($ChangeSub -eq "Y")
  {
    $MySubscriptionName = Read-host "Enter subscription name "
  }
  Write-Host ""
  $ChangeRg = read-host "Change resource group? (Y/N)"
  if ($ChangeRg -eq "Y")
  {
    $MyResourceGroup = Read-host "Enter Resource group "
  }
  Write-Host ""
  $ChangeVnet = read-host "Change Vnet? (Y/N)"
  if ($ChangeVnet -eq "Y")
  {
    $MyVnet = Read-host "Enter VNet "
  }
}
#
try
{
  $MySubs = Get-AzureRmSubscription
}
catch
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Unable to retrieve subscriptions."
}
$MySubsName = $MySubs.name
Start-Sleep 1
Write-Host ""
if ($MySubsName -contains "$MySubscriptionName")
{
  Write-Host "You are logged in and have access to `"$MySubscriptionName`"..."
}
else
{
  Write-Host "It appears that you are not logged in."
  Write-Host ""
  $NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
  if ($NeedToLogin -eq "Y")
  {
    Login-AzureRmAccount
    Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
  }
  elseif ($NeedToLogin -eq "N")
  {
    Write-Host "You must already be logged in then.  Fine. Continuing..."
  }
  else
  {
    Write-Host ""
    Write-Host "You made an invalid choice.  Exiting..."
    exit
  }
}
#
Start-Sleep 1
Write-Host ""
Write-Host "We will now check to see if a given IP address is available from any subnet in VNet `"$MyVnet`" "
Write-Host "...and if it is not available, provide the next few available on that subnet."
Start-Sleep 1
Write-Host ""
$MyTestIpAddress = Read-Host "What address do you wish to test for availability?"
#
$MyNetwork = Get-AzureRmVirtualNetwork -name $MyVnet -ResourceGroupName $MyResourceGroup
$MyResults = Test-AzureRmPrivateIPAddressAvailability -VirtualNetwork $MyNetwork -IPAddress $MyTestIpAddress
$MyResultsAvailableIPAddresses = $MyResults.AvailableIPAddresses
$MyResultsAvailable = $MyResults.Available
#
Start-Sleep 1
if ($MyResultsAvailable -eq $False)
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Sorry, but $MyTestIpAddress is not available."
  Write-Host ""
  Write-Host -ForegroundColor Green "However, the following addresses are free to use:"
  Write-Host ""
  $MyResultsAvailableIPAddresses
}
else
{
  Write-Host ""
  Write-Host -ForegroundColor Green "Yes! $MyTestIpAddress is available."
}
Write-Host ""
Write-Host " ...Complete"

Now, if you know a better way to handle it, or have tips for improvement — or if you find a bug — I’d love to hear them (and so would Jim).  I hope it helps you out there…

Thanks, and enjoy!

The Future of Azure is Azure Stack!

May 18th, 2017 | Azure, blog, Cloud

[Diagrams: the components of Public Azure and of Private Azure (Azure Stack), from recent Microsoft public presentations]

I realize that the title above might be a bit controversial to some. In this blog post I will attempt to defend that position.

The two diagrams above, taken from recent Microsoft public presentations, symbolically represent the components of Public Azure and Private Azure (Azure Stack).  If you think they have a lot in common, you are right: Azure Stack is Azure running in your own data center.  Although not every Azure feature will be delivered as part of Azure Stack at its initial release (and some may never be delivered that way, because they require enormous scale beyond the reach of most companies), it is fair to say that the two are more alike than they are different.

Back in 2012 I wrote a blog post on building cloud-burstable, cloud-portable applications.  My thesis in that post was that customers want to be able to run their applications on local hardware in their data center, on resources provided by a cloud provider, or even on resources provided by more than one cloud provider; and that they would like a high degree of compatibility, allowing them to defer the choice of where to run an application and even change their mind as workload dictates.

That thesis is still true today.  Customers want to be able to run an application in their data center. If they run out of capacity in their data center then they would like to shift it to the cloud and later potentially shift it back to on-premises.

That blog post took an architectural approach using encapsulation and modularity of design to build applications that could run anywhere.

The Birth of Azure

A bit of additional perspective might be useful.  Back in 2007 I was working as an Architect for Microsoft, and I came across what would eventually become Azure (in fact, that was before it was even called Azure!).  Years earlier, I had worked on an experimental cloud project at Bell Labs called Net-1000.  At the time, AT&T was planning on turning every telephone central office into a data center providing compute power and data storage at the wall jack.  That project failed for various reasons, some technical and some political, as documented in the book The Slingshot Syndrome.  The main technical reason was that the computers of the day were minicomputers and mainframes, with the PC just emerging on the scene, so the technology that makes today’s cloud possible was not yet available.  In any case, I can say that I was present at the birth of Azure, and history has proven that attaching myself to Azure was a pretty good decision.

The Azure Appliance

What many do not know is that this is actually Microsoft’s third attempt at providing Azure in the data center.  Back in 2010, Microsoft announced the Azure Appliance, which was to be delivered by a small number of vendors.  It never materialized as a released product.

Azure Pack and the Cloud Platform System

Then came Windows Azure Pack and the Cloud Platform System in 2014, also delivered in appliance form by a small number of selected vendors.  Although it met with some success, is still available today, and will be supported going forward, its clear successor will be Azure Stack.  (While Azure Pack is an Azure-like emulator built on top of System Center and Windows Server, Azure Stack is real Azure running in your data center.)

It is because of this perspective that I can call Azure Stack Microsoft’s third attempt at Azure in the data center, and one that I believe will be very successful.  Third time’s a charm.

Azure Stack

The very first appearance of Azure Stack was in the form of a private preview, and later a public preview: “Azure Stack Technical Preview 1”.  During the preview it became clear that those attempting to install it were experiencing difficulties, many of them related to the use of hardware that did not match the recommended minimum specifications.

Since Azure Stack is so important to the future of Azure, Microsoft decided to release it in the form of an appliance, to be delivered by three vendors (HP, Dell & Lenovo) in the summer of 2017.  According to Microsoft, that does not mean that there will be no more technical previews, or that no one will be able to install it on their own hardware.  (It is generally expected that there will be additional Technical Previews, perhaps even one at the upcoming Microsoft Ignite conference later this month.)  It simply means that the first generation will be released in a controlled fashion, through appliances provided by those vendors, so that Microsoft and those vendors can ensure its early success.

You may not agree with Microsoft (or me), but I am 100% in agreement with that approach.  Azure Stack must succeed if Azure is to continue to succeed.

This article originally posted 9/20/2016 at Bill’s other blog, Cloudy in Nashville.

Disaster Recovery: Asking the wrong question?

May 11th, 2017 | Azure, blog, Cloud, Disaster Recovery


In my role as an Azure specialist, I get asked a lot of questions about Disaster Recovery.  IMHO, people almost always ask the wrong question.

Usually it goes something like this: “I need Disaster Recovery protection for my data center.  I have N VMs to protect.  I have heard that I can use Azure Site Recovery either to facilitate Disaster Recovery to my backup data center, or even to use Azure as my backup data center.”  That is true.

In a previous lifetime I worked on a Disaster Recovery testing team for a major New York-based international bank.  We learned two major principles early on:

1. It is all about workloads, since different application workloads have different Disaster Recovery requirements.  Every workload has unique Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO); not all workloads are created equal.  For instance, email is a critical workload in the typical enterprise, and an outage of more than a few hours would affect business continuity significantly.  Other application workloads (such as Foreign Exchange Trading, Order Processing, Expense Reporting, etc.) will have more or less stringent RTO and RPO requirements.

2. So it really is all about Business Continuity.  Disaster Recovery is a business case.  Providing perfect Disaster Recovery for all application workloads would be extremely expensive, and in many cases not cost-effective; in effect, there is an exponential cost curve.  So it is all about risk management and cost/benefit.

So where does that leave us?

1. Evaluate your Disaster recovery requirements on a workload by workload basis.

2. Plan how to implement it considering the objective of Business Continuity, RTO and RPO.

3. Use Azure Site Recovery to make it happen.

This article originally posted 4/1/2016 at Bill’s other blog, Cloudy in Nashville.

A Cloud is a Cloud is a Cloud?

May 4th, 2017 | Azure, blog, Cloud

It never fails to amaze me that seemingly every vendor in the industry, every hosting company, and every virtualization or database vendor that puts something in one of the Public Clouds is quick to claim that they “have a Cloud”.  A while back, a term was even invented for this: “CloudWashing” (i.e., naming everything that you have as “Cloud”).

Let’s apply some common sense.  Back in 2011, the National Institute of Standards and Technology (NIST) produced a concise and clear definition of what it means for something to be a Cloud.  You can read about it here.  This is the standard against which all Cloud pretenders should be measured.  In case you don’t have time to read it, I will summarize.

The NIST model defines a Cloud as “enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

It is composed of:

  • 5 Essential characteristics
  • 3 Service models
  • 4 Deployment models

The 5 essential characteristics are:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

The 3 Service Models are:

  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Infrastructure as a Service (IaaS)

The 4 Deployment Models are:

  • Public Cloud
  • Private Cloud
  • Community Cloud
  • Hybrid Cloud

Let’s take the definitions apart a bit and see what makes them tick:

The 5 Essential Characteristics

On-demand self-service: Being able to self-provision and de-provision resources as needed, without vendor involvement.

Broad network access: Resources are available over a network and accessed through standard protocols from all kinds of devices.  For Public Clouds, these resources are typically available all over the world, with data centers that accommodate each country’s needs for data sovereignty and locality.

Resource pooling: Lots of resources that can be allocated from a large pool.  Often the user does not know, or have to know, their exact location (although, in the case of Public Clouds like Microsoft Azure, locality is often under the user’s control).  In many cases the pool of resources appears to be nearly infinite.

Rapid elasticity: The ability to scale up and down as needed at any time, in some cases automatically, based on policies set by the user.

Measured service: Providing the ability to track costs and allocate them.  Public Cloud resources are often funded using an Operating Expenditure (OpEx) accounting model, rather than the Capital Expenditure (CapEx) model used where hardware purchases are involved.

The 3 Service Models

In the case of Public Clouds, these service models are defined by who supports the various levels in the technology stack: the customer or the cloud vendor.

This Microsoft Azure diagram has been in existence, in one form or another, at least since the 1970s:

[Diagram: the SaaS / PaaS / IaaS technology stack, showing which layers the vendor supports and which the customer supports]

It illustrates which parts of each service type are the responsibility of the vendor (in blue) and which are the responsibility of the customer (in black).

Software as a Service (SaaS) asks the customer to take no responsibility for the underlying hardware and software stack.  In fact, the customer’s only responsibility is to use the service and pay for it.

Platform as a Service (PaaS) lets the vendor manage the lower layers of the technology stack, while the customer is responsible for the Application and Data layers.

Infrastructure as a Service (IaaS) corresponds most closely to what the customer must manage in a typical data center.  With IaaS, the vendor takes care of everything from part of the operating system layer down to the bottom of the stack.  Notice that in Azure the operating system layer is partly the vendor’s responsibility and partly the customer’s; the customer is still responsible for applying operating system patches, for instance.  (This may differ slightly for other Public Clouds.)

I often include a slide in my Cloud presentations that addresses the question of what to use when. In my mind the decision tree is fairly obvious:

  • It’s SaaS until it isn’t
    • Then It’s PaaS until it isn’t
      • Then It’s IaaS
  • Or Hybrid with any of the above

If you can find a SaaS service that meets your objectives, use it. Let the vendor have all the stack support headaches. If you can’t, then consider using PaaS by either buying a 3rd party application or building your own.

Finally, if none of these approaches work, you can always take advantage of IaaS since that most closely matches what you have in your own data center. Even there, however, the vendor will take care of a great deal of the plumbing. (As an aside IaaS is often the approach of choice for taking a “lift and shift” approach to migrating what you already have running in your data center up into the cloud.)

And yes, I know we haven’t discussed Hybrid yet, but patience, we will get there.

The 4 Deployment Models

A Public Cloud is a pool of computing resources offered by a vendor, typically, on a “pay as you go” basis. It supports Self-Provisioning by its customers as discussed above. It also often has a massive global presence and appears to have near-infinite capacity. In general, it is available to anyone.

Gartner defines a large number of Leadership Quadrants, but the ones most relevant to our discussion are those for Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).  The Gartner Leadership Quadrant for IaaS includes just Amazon and Microsoft; the one for PaaS includes just Microsoft Azure and Salesforce.  There are other, lesser Public Cloud vendors, including Google, IBM, and Rackspace.

One other point: unless a company can spend the many billions of dollars necessary to create a global data center presence, it is hard for it to be considered a Public Cloud leader.

A Private Cloud, on the other hand, is normally local to a single company, and is normally located on its own premises.  If a customer creates a data center, or a complex of data centers, that conforms to the NIST definitions discussed herein, then it can rightly be called a “Private Cloud”.  Be cautious here, however, and remember the term “CloudWashing” defined above: just because a customer has a large number of computers in a data center does not make it a Private Cloud, no matter how vehemently they insist on calling it one.

Although there is no requirement for the architectures of Public and Private Clouds to be the same, most companies agree that compatibility between them would be helpful to support CloudBursting: that is, the ability to move applications and data freely between the data center and the Public Cloud.  (See the discussion on Hybrid Cloud below.)

A Community Cloud is a variation of a Public Cloud that restricts its clientele to a specific community of users.  For instance, there are Community Clouds for Government and Education, as well as for other communities; each of these may have different levels of security and compliance requirements, or other unique characteristics.  The NIST document does state that a Community Cloud may be operated for the community by one of the companies in it; however, it is typically operated by a Public Cloud vendor using a walled-off subset of its resources.

A Hybrid Cloud is typically formed by the integration of two or more Public and/or Private Clouds.  In the typical case, a company utilizes both a Public Cloud and resources in its own data center structured as a Private Cloud, with strong networking integration between them.

A high degree of standardization between them is desirable, in order to make it possible to distribute resources across them, or to load-balance or cloudburst resources between them.  This is often done for security and compliance reasons, where a company feels that some data and/or processing must remain behind its own firewall, while other data and/or processing can take advantage of a Public Cloud.

In the case of Microsoft technology, there is one example where compatibility will be extremely high and advantageous.  I expect the Private Cloud space to be dominated, at least in data centers utilizing Microsoft technology, by the new Azure Stack appliance coming out this summer from 5 initial vendors.  Make no mistake about it: Azure Stack is Azure running in your own data center.  In effect, Azure Stack is pretty much like Just Another Azure Region (JAAR?) to the Public Azure Cloud.  Having that high degree of compatibility will help facilitate the Hybrid Cloud architecture discussed above.  I have already blogged about Azure Stack and why I feel it is so important, so I will not go into detail here; see this blog post: The Future of Azure is Azure Stack.

We also should distinguish between a true Hybrid Cloud and a hybrid IT infrastructure.  In the case of the latter, the resources in the data center need not be in the form of a Private Cloud as discussed above.  Microsoft is the clear leader in this space now, because of its traditional enterprise data center presence and its leadership in converting itself into a cloud company.

So, don’t be fooled by Cloud Pretenders. Apply the NIST litmus test.

This article originally posted 5/3/2017 at Bill’s other blog, Cloudy in Nashville.

5 Tips for connecting your Azure space to On-Premises…

March 26th, 2017 | Azure, blog, Cloud, Microsoft Cloud Solution Provider

Often, the easiest thing about using the Azure Cloud is getting started.  Whether it’s creating a VM, establishing a web service, etc., it’s usually as easy as a few clicks and you’ve got some “thing” running in a bubble.  That’s the easy part.

It’s what you do right after that which can often be a challenge, since in many cases it involves inter-connectivity with your on-premises environment.  At that point, whether it’s early on or even after long, deliberate design work, you may want to sit down with your firewall/network team (who may never even have heard the word “Azure” before) and talk about exactly how to connect.

Please be mindful that the router/firewall people work in a different space, and may have a different approach to what you’re trying to accomplish.  They may prefer to use third-party firewall/tunnel capabilities with which they are already familiar, or to utilize the off-the-shelf options that Microsoft provides.  Note: this article is all about the built-in Microsoft options; we’ll have a discussion about third-party items in a future article.

Specifically when working with the native Azure connectivity options, the first thing you’ll want to do is point yourself and others at this URL, which provides most everything needed to get started:
https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-plan-design

…note that there are some great sub-pages there too, to take the conversation from “Why are we doing this” to “What is Azure” to “Let’s submit a ticket to increase our route limitation“.

But speaking as a server/cloud guy, I wanted to give you some simple but important tips you’ll need to know off the top of your head when speaking to your router people:

Tip #1
There are two types of Gateways for on-prem connections to the cloud: ExpressRoute and VPN.  ExpressRoute is awesome, and preferred if you have that option.  If you don’t already know what ExpressRoute is, you probably can’t afford it or don’t need it — which leaves you with VPN.  The good news is that VPNs can be perfectly fine for an Enterprise if you set them up right and mind them well.

Tip #2
I’m mostly writing these VPN options down because I always forget them, but you need to know the definitions too:
“PolicyBased” used to be called “Static Routing”, and your router person knows it as IKEv1.
“RouteBased” used to be called “Dynamic Routing”, and your router person knows it as IKEv2.
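And for reference, here's roughly what requesting the RouteBased flavor looks like with the AzureRM cmdlets.  This is a minimal sketch; the resource names, location, and SKU are made-up placeholders, and your network team will have opinions about all of them:

# Hypothetical sketch: create a RouteBased (IKEv2) VPN gateway with AzureRM cmdlets.
# All resource names, the location, and the SKU below are placeholder values.
$rg   = "MyResourceGroup"
$loc  = "eastus2"
$vnet = Get-AzureRmVirtualNetwork -Name "MyVnet" -ResourceGroupName $rg
$gwSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$pip  = New-AzureRmPublicIpAddress -Name "MyVpnGwPip" -ResourceGroupName $rg `
          -Location $loc -AllocationMethod Dynamic
$ipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwipconf" `
          -SubnetId $gwSubnet.Id -PublicIpAddressId $pip.Id
# -VpnType RouteBased is the "Dynamic Routing" / IKEv2 option discussed above.
New-AzureRmVirtualNetworkGateway -Name "MyVpnGw" -ResourceGroupName $rg -Location $loc `
  -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1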

Tip #3
PolicyBased can only be used with the “Basic” SKU, and only permits one tunnel and no transitive routing.  You probably do not want this except in the simplest of configurations.  Ironically, your router/firewall person will most likely configure your VPN this way if you don’t instruct them otherwise.  Watch out!

Tip #4
The firewall you have may not be supported.  If it’s not, that means one of two things: you may be forced into PolicyBased (read Tip #3), or, in many cases, it will work just fine even though it’s not supported.  But you might be on your own if you have a problem, so know what you’re getting into.

Tip #5
Please calculate the total number of routes, gateways, and such that you’ll be permitted based on the SKUs you’ve chosen, and make sure that your fanciful networking dreams will all come true when you finally get where you’re going.  Everything in Azure has a quota or limitation of some sort, and you can almost always get one raised from its low original limit, but some things just aren’t possible without changing SKUs.
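If you want a quick way to eyeball some of those quotas from PowerShell, here's a sketch using the compute-side usage cmdlet (the location is a placeholder; depending on your module versions, the networking quotas may need the Portal or a support ticket to review):

# Hypothetical sketch: show current usage vs. quota limits for compute resources in a region.
# The location value is a placeholder.
Get-AzureRmVMUsage -Location "eastus2" |
  Where-Object { $_.CurrentValue -gt 0 } |
  Sort-Object { $_.CurrentValue / $_.Limit } -Descending |
  Format-Table @{ n = "Quota"; e = { $_.Name.LocalizedValue } }, CurrentValue, Limit -AutoSize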

Extra Bonus Tip
Look into the “Network Watcher” preview for validating some of your networking flow and security, and for an instant dashboard of the quotas mentioned in Tip #5.  It’s only available in some locations right now, but it looks like it will be quite nice.

…and that’s just scratching the surface, but those are some of the things I run into out there, and I thought it might save you a minute… or a day… or more…

Good luck out there!

Coretek Services Named Microsoft Global Managed Partner

December 9th, 2016 | Azure, Microsoft, Microsoft Cloud Solution Provider, News

Farmington Hills, MI – December 9th, 2016 – Coretek Services, a global leader in managed hybrid IT solutions, has been named a Global Managed Partner by Microsoft Corporation. The prestigious distinction encompasses 20 companies worldwide, 6 of which are in the United States.

Coretek Services Managed Cloud for Azure utilizes the cloud platform and Coretek’s extensive suite of managed services, tools and experience to get customers up and running in their Azure environments quickly, securely, with less overhead and more predictability. Coretek’s managed cloud offering is one of the first of its kind to provide customers a single platform for infrastructure services on the Azure Platform.

“We are extending the value we deliver to our clients through our Managed Cloud for Azure,” said Ron Lisch, CEO of Coretek Services. “Through our services, we enable our clients to more effectively plan, build and run enterprise applications on Azure. Our collaboration with Microsoft further strengthens our leadership in offering the choice and control of managed cloud IT solutions.”

 

About Coretek Services

Coretek Services is a Systems Integration and IT Consulting company that delivers high-value, innovative solutions.  Coretek works with your team to custom-design an IT architecture based on each client’s unique requirements; the solution encompasses server and desktop virtualization, optimization of a virtual desktop environment, cloud desktop, mobile device management, infrastructure consulting, and project management.  Our goal is to help our clients achieve Project Success.  No exceptions.  For more information, visit coretekservices.com.

The Advantages of Working with a Microsoft Cloud Solution Provider (CSP)

October 2nd, 2016 | Azure, blog, Cloud, Microsoft, Microsoft Cloud Solution Provider

There are many cloud services platforms — and numerous cloud service providers — to assist your organization with the strategy, deployment, and management of your cloud initiative.  In this ever-growing landscape of cloud providers, how do you choose the partner that is best for your business?

We have identified the key attributes that will determine your cloud project’s success when selecting a cloud solution provider: experience, value, and fit.  Evaluating these three credentials of your cloud provider candidates will drive your cloud strategy, deployment, and management success rate.

Experience

First, you want a provider that has several cloud veterans who are constantly in touch with the state of the industry.  Coretek Services employs folks who are cloud product veterans in Azure and many of the other cloud technologies.  In fact, members of our team were instrumental in building the Azure cloud solution when they were employed at Microsoft.

Next, you want to know that your provider isn’t “cloud only” but also has experience in data center infrastructure, virtualization, mobility, security, and your specific business domain such as healthcare, manufacturing, and others.  Few cloud service providers can offer you this additional depth of experience.

Value

You want your provider to deliver value beyond just the cloud product itself.  This means that you want your new cloud partner to have significant relationships and partnerships with other technology vendors, as well as the necessary expertise in those platforms.

As a Microsoft Cloud Solution Provider (CSP), we have a significant value partnership in Azure.  We have relationships with the product development teams and input into the feature development process.  It allows us to represent cloud computing trends that our customers are experiencing to the cloud product development team.

While you get a great product in Azure because Microsoft is focused on delivering the very best, we are free to build value-added features for our customers.  For example, we can tailor automation to your business to make your cloud usage more efficient, such as decreasing services when your business is closed or increasing services when demand bursts to higher levels.  This allows you to control your costs and forecast your needs well in advance.
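(As a simple illustration of that kind of automation, a nightly scheduled job might stop any VM that carries an opt-in tag once the business day ends.  The "AutoShutdown" tag below is a made-up convention, and this is just a sketch of the idea:)

# Hypothetical sketch: stop every VM tagged AutoShutdown=Yes, e.g. from a nightly scheduled job.
# The "AutoShutdown" tag name is a made-up convention -- use whatever fits your environment.
$vms = Get-AzureRmVM
foreach ($vm in $vms)
{
  if ($vm.Tags["AutoShutdown"] -eq "Yes")
  {
    Write-Host "Stopping $($vm.Name)..."
    # Stop-AzureRmVM deallocates by default, so compute billing stops; -Force skips the prompt.
    Stop-AzureRmVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
  }
}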

Simply put, you get the best of both worlds: your organization receives the best that Azure can provide, along with the detailed focus on your IT business needs that Coretek Services delivers.

Priority and Fit

As a Microsoft CSP, we quickly identify technical problems and rapidly bring the right solutions and people to your assistance.  Coretek Services makes your organization’s needs a top priority, and we will fit into your business in the way that is most appropriate for you, providing professional services, managed services, or whatever mix you desire.

We believe in one thing: Customer Success!  No Exceptions!

Enterprise Best Practice does not necessarily equal Cloud Best Practice…

July 28th, 2016 | Azure, blog

This article might just be restating the obvious for some — but to put it bluntly, a “best-practice” Enterprise Active Directory (AD) design feature may not perfectly translate to a Cloud-based deployment scenario. Let me explain…

When Good Mappings Go Bad

Let’s imagine an enterprise that has done a good job of providing universal access to user Home Folders by using the AD Home Folder attributes on the user objects.  Very common indeed, and very well loved in most cases.  In a well-designed infrastructure, the users get access to the Home Folder from almost anywhere in the world, and from a variety of platforms including local, remote, and thin/terminal access.

On top of that, imagine further that the environment utilizes the individual logon-script attribute on user objects to determine group memberships, deliver printers, and maybe even deliver a mapping or two.  All of this is fine (though arguably cumbersome) in a high-speed environment where the network inter-connectivity is not rate-limited or rate-charged.

Now, however, let’s imagine being one of those users authenticating to an RDS/Terminal Server (session host) farm in a cloud-based domain instead of in the Enterprise.  Hmm.  Suddenly, different access and performance considerations appear during that logon process.  For instance, while the Home Folder server may be reachable from that RDS farm, the lookup and access of the file server might very well traverse a slow VPN pipe; and even if the pipe is fast, there may be a charge for egress data transfer, as is the case with Microsoft Azure.  Oh, and that logon script will definitely hit the Domain Controller looking for everything it needs to draw its conclusions, and in the end it may attempt to map you to things you cannot even reach.

Can you solve this problem by putting domain controllers in the cloud?  Well, part of it — if you use good AD Site and Subnet configuration.  But you can’t escape the fact that your enterprise user objects may attempt to reach beyond those controllers and into the infrastructure, to access what they must and time out on what they cannot (read: slow logon).

The GPO is your frienemy

And don’t even get me started on GPOs.  Yes, you know them, and you love them, and you use them to provide a rock-solid enterprise configuration for your users…  But what about those mandatory proxy registry settings that matter in the cloud?  What about those printer map settings?  What about those WMI evaluations?  The Item-Level Targeting?  And so on.

And then one day of course, there’s the one GPO setting that accidentally gets applied to those users that inexplicably wipes out their access to the application in the cloud-based RDS farm.

The bottom line is that again, things that may be prudent and reasonable in the Enterprise may be detrimental to the Cloud users’ experience.

So what can you do?

First, step back.  Ask yourself whether your user logon process is clean, lean, mean, and prudent for a Cloud-based experience.  It may very well be, but it likely is not.  If you find that you’ve been a good and dutiful Enterprise admin and used Active Directory to tightly configure that user, you might be faced with the need for a separate directory for your Cloud environment that is replicated, integrated, or federated.  And that, for some organizations, may very well mean re-thinking security models (or at least re-imagining the ones they have), evaluating provisioning, and so on, as part of a larger Cloud Strategy.

Or, if your situation permits, you might be able to take advantage of the soon-to-be-released Azure Active Directory Domain Services, as long as your design doesn’t run up against some of its limitations (I strongly recommend you read the FAQ and other documentation before deciding it’s right for you).
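Either way, a good first step is to inventory the logon baggage your users actually carry.  Here's a quick sketch, assuming the RSAT ActiveDirectory module is available; the OU and output path are made-up placeholders:

# Hypothetical sketch: inventory which AD users carry Home Folder and logon-script attributes.
# Requires the RSAT ActiveDirectory module; the SearchBase OU and output path are placeholders.
Import-Module ActiveDirectory
Get-ADUser -Filter * -SearchBase "OU=Staff,DC=example,DC=com" `
    -Properties HomeDirectory, HomeDrive, ScriptPath |
  Where-Object { $_.HomeDirectory -or $_.ScriptPath } |
  Select-Object SamAccountName, HomeDirectory, HomeDrive, ScriptPath |
  Export-Csv -Path "C:\Temp\LogonBaggage.csv" -NoTypeInformation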

Now you’ve heard what to watch out for, but the options you utilize going forward depend on what you are trying to achieve.  Good luck out there, and let us know if we can help…
