2018: The year you migrate to the cloud?

2018-01-23T23:52:54+00:00 January 24th, 2018|Azure, blog, Cloud, Disaster Recovery|

Welcome to 2018, where the rush to the cloud shows no sign of slowing down.  New Azure features are being released so quickly that the Azure marketplace seems to show new services every day.  I sometimes see new features arrive between button clicks!  It is probably safe to say that we are in the “majority adopters” phase of the traditional technology adoption curve for the cloud.

The big question for you is whether this is the year to migrate your data center assets to the cloud.

Lift and Shift vs. Application Modernization

The first thing you need to consider is whether you want to “lift and shift” your infrastructure or start an application modernization project to move your digital assets to the cloud.

Lift and Shift Methodology — This method consists of cloning your server infrastructure to the cloud and transferring the data to the new servers.  The new servers are cloned as virtual machines (VMs) operating as they did in your on-premises infrastructure.  The plan detailed below is going to focus on this methodology.

Application Modernization Methodology — This method requires you to refactor your applications for the cloud.  There are significant cost savings available long term using this method; however, the conversion cost will be much higher than a straight “lift and shift” to the cloud.

The “Lift and Shift” Plan

Azure provides several tools including Azure Migrate (public preview) and Azure Site Recovery (ASR) to execute the migration to the cloud, but you need to understand what you are trying to accomplish.  The following is a typical plan for one of these projects:

  • Assessment – Use tools to collect data in manual or automatic ways to perform a documented analysis.  Data that should be collected and analyzed include the server function, operating system, cores (or CPUs), memory, disk sizes, and network performance.
  • Selection and Planning – Once the assessment is complete, you can then select the servers or workloads that are good candidates for migration.  You should also identify any server issues such as unsupported operating systems, large disk sizes, or incompatible disk technologies.  For example, disk sizes over 4TB cannot be cloned using ASR as of this writing.  Items like available network bandwidth, maintenance windows, and business objectives need to be considered before publishing your plan.
  • Environment Preparation – With any on-premises migration to the cloud, network constraints may need to be implemented (such as QoS and firewall changes for future replication).  The target also needs to be created in Azure, consisting of the new network, storage, and virtual private networks.  If you are using ASR, an on-premises process server needs to be rolled out.  In this stage, I also recommend implementing a monitoring solution so you can watch the process server health and network bandwidth.
  • Replication – This is probably the longest and most tedious part of the entire project.  At this stage, the data is replicated using the selected migration tool.  As with any backup or disaster recovery solution, the longest part of synchronization is the initial data replication.  You do not want to replicate all the servers at once, because you could significantly affect your available network bandwidth, and that could make you unpopular.  When your end users come to find you with their torches and pitchforks, that’s very bad.  In the migrations I have managed, I have found that 2-3 initial replications at a time is the practical limit.  Once a server completes its initial replication, you can start others, because completed servers only need incremental changes replicated over time.
  • Cutover – You are almost there!  Now come the maintenance windows where you can migrate to Azure and then disable the original servers.  Depending on your project and business needs, you can either do it all at once, or break up the list based on workloads and other factors.  Doing the migrations in multiple, distinct sets reduces stress.

Here at Coretek we do many of these projects every year.  We can definitely help you with your migration to the Azure cloud!  Just give us a call.

Coretek Wins! – 2018 Citrix Innovation Award for Partners

2018-01-10T01:32:24+00:00 January 10th, 2018|Azure, blog, Citrix, Cloud, Microsoft, Microsoft Cloud Solution Provider|

We won the 2018 Citrix Innovation Award for Partners!!

Thanks so much to Wolverine Worldwide, all the people who voted for us, and of course all the awesome Coretek teammates who made it happen.  Much more to come!

See the details here:

https://www.citrix.com/go/innovation-award/partner-innovation.html

Amazing Video for 2018 Citrix Innovation Award Nomination…

2018-01-07T15:17:04+00:00 January 7th, 2018|Azure, blog, Citrix, Cloud, Microsoft Cloud Solution Provider|

As nominees for the 2018 Citrix Innovation Award, Coretek recently had a video crew come through the main office.  The crew was there to interview, get scenery footage, and generally get the “vibe” of how we do what we do.

In the video, we get an overview of the amazing work we did with Wolverine Worldwide, helping them solve quite a common modern problem in an uncommon and innovative way.  And it is also really cool to see my fellow Coretekers as “movie” extras!

Here is the completed video:

Please take a moment to watch the extremely cool video, and vote for Coretek for the 2018 Citrix Innovation Award Nomination at this link.  Then, if you like what you see, drop us a line and get Coretek and Citrix working to help you do what you do more efficiently!

Coretek’s Own 2018 Nutanix Technology Champions…

2017-12-21T13:38:53+00:00 December 21st, 2017|blog, Cloud, Nutanix|

We’re proud to share that our amazing fellow Coretekers Aaron Evans and Todd Geib have been nominated to the Nutanix Technology Champion (NTC) program for 2018!


The NTC is an award that recognizes Nutanix and web-scale experts for their ongoing and consistent contributions to the community and industry. But it’s more than just an award — NTC is a program that also provides nominees with unique opportunities to further expand their knowledge, amplify their brand, and ultimately help shape the future of web-scale IT.

For more information on the program, please click here to visit the NTC 2018 announcement page.

Aaron and Todd have a combined 5 years in the program.  Props to Aaron and Todd for continuing to be a driving force behind Coretek’s commitment to building Enterprise Clouds!

Coretek and Citrix: Delivering Confidence in the Cloud

2017-12-14T19:08:19+00:00 December 14th, 2017|Azure, blog, Citrix, Cloud, Home Page, Microsoft Cloud Solution Provider, Mobility, News, Virtualization|

FARMINGTON HILLS, MI – December 14, 2017 – UPDATE on Citrix Innovation Award 2018.  You’ll have your chance to vote for Coretek and the Americas, January 2-9.  Details to follow soon.  Be a part of the excitement!

Click here for the latest info and pictures for the 2018 Citrix Innovation Award…get ready to vote for Coretek early January!

 

Azure – Which Public and Private IPs are In Use and By Which VM…?

2017-08-03T03:01:59+00:00 August 10th, 2017|Azure, blog, Cloud, PowerShell|

In my recent thread of simple-but-handy Azure PowerShell tools, I realized one important thing was missing: A tool that grabbed all the public & private IP addresses in use by VMs (plus some additional info).

I searched around and found an old post by fellow Coreteker Will Anderson, where he’d solved the problem.  Unfortunately, PowerShell had changed since then, and his suggestion no longer worked.  Fortunately, however, Will Anderson is now on the other end of Skype from me, so after a quick explanation he gave me some advice on how to fix it within the new PowerShell behavior.  And of course, it would just be wrong not to post it back to the community…

So here is my script, with help from Will and from another blogger, Rhoderick Milne, whose posts provided some additional input.  When executed, this script writes some quick handy info to screen, as in this lab example:

…which might be enough for you if you just want a quick review, but then it also dumps my preferred info for each VM to a csv, as follows: “Subscription”, “Mode”, “Name”, “PublicIPAddress”, “PrivateIPAddress”, “ResourceGroupName”, “Location”, “VMSize”, “Status”, “OsDisk”, “DataDisksCount”, “AvailabilitySet”

Just paste the code below into a script file, change the subscription name in the variable to your choice, and execute.  If you don’t know which subscription name to use, you can always run “Get-AzureRmSubscription” after you run “Login-AzureRmAccount” and find it in the list.  So grab the code, hack at it, and let me know where you take it!

# Thanks to Will Anderson and Rhoderick Milne for the assistance.
#
# Get Date; Used only for output file name.
$Date = Get-Date
$NOW = $Date.ToString("yyyyMMddhhmm")
#
# Variables
$MySubscriptionName = "Windows Azure  MSDN - Visual Studio Premium"
$VmsOutFilePath = "C:\temp"
$VmsOutFile = "$VmsOutFilePath\VmList-$NOW.csv"
#
$NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
if ($NeedToLogin -eq "Y")
{
  Login-AzureRmAccount
  Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
}
elseif ($NeedToLogin -eq "N")
{
  Write-Host "You must already be logged in then.  Fine. Continuing..."
}
else
{
  Write-Host ""
  Write-Host "You made an invalid choice.  Exiting..."
  exit
}
#
$vms = Get-AzureRmVm 
$vmobjs = @()
foreach ($vm in $vms)
{
  #Write-Host ""
  $vmname = $vm.name
  Write-Host -NoNewline "For VM $vmname... "
  Start-Sleep 1
  $vmInfo = [pscustomobject]@{
      'Subscription'= $MySubscriptionName
      'Mode'='ARM'
      'Name'= $vm.Name
      'PublicIPAddress' = $null
      'PrivateIPAddress' = $null
      'ResourceGroupName' = $vm.ResourceGroupName
      'Location' = $vm.Location
      'VMSize' = $vm.HardwareProfile.VMSize
      'Status' = $null
      'OsDisk' = $vm.StorageProfile.OsDisk.Vhd.Uri
      'DataDisksCount' = $vm.StorageProfile.DataDisks.Count
      'AvailabilitySet' = $vm.AvailabilitySetReference.Id }
  $vmStatus = $vm | Get-AzureRmVM -Status
  $vmInfo.Status = $vmStatus.Statuses[1].DisplayStatus
  $vmInfoStatus = $vmStatus.Statuses[1].DisplayStatus
  Write-Host -NoNewline "Get status `("
  if ($vmInfoStatus -eq "VM deallocated")
  {
    Write-Host -ForegroundColor Magenta -NoNewline "$vmInfoStatus"
  }
  elseif ($vmInfoStatus -eq "VM stopped")
  {
    Write-Host -ForegroundColor Yellow -NoNewline "$vmInfoStatus"
  }
  elseif ($vmInfoStatus -eq "VM generalized")
  {
    Write-Host -ForegroundColor Gray -NoNewline "$vmInfoStatus"
  }
  else
  {
    Write-Host -ForegroundColor White -NoNewline "$vmInfoStatus"
  }
  Write-Host -NoNewline "`)... "
  $VMagain = (Get-AzureRmVm -ResourceGroupName $vm.ResourceGroupName -Name $vmname)
  $NifName = ($VMagain.NetworkProfile[0].NetworkInterfaces.Id).Split('/') | Select-Object -Last 1
  $MyInterface = (Get-AzureRmNetworkInterface -Name $NifName -ResourceGroupName $VMagain.ResourceGroupName).IpConfigurations
  $PrivIP = $MyInterface.privateipaddress
  $vmInfo.PrivateIPAddress = $PrivIP
  Write-Host -NoNewline "Getting Private IP `($PrivIP`)... "
  try
  {
    $PubIPName = (($MyInterface).PublicIPAddress.Id).split('/') | Select-Object -Last 1
    $vmPublicIpAddress = (Get-AzureRmPublicIpAddress -Name $PubIPName -ResourceGroupName $Vmagain.ResourceGroupName).IpAddress 
    Write-Host -NoNewline "Getting public IP `("
    Write-Host -ForegroundColor Cyan -NoNewline "$vmPublicIpAddress"
    Write-Host -NoNewline "`)... "
    $vmInfo.PublicIPAddress = $vmPublicIpAddress
  }
  catch
  {
    Write-Host -NoNewline "No public IP... "
  }
  Write-Host -NoNewline "Add server object to output array... "
  $vmobjs += $vmInfo
  Write-Host "Done."
}  
Write-Host "Writing to output file: $VmsOutFile"
$vmobjs | Export-Csv -NoTypeInformation -Path $VmsOutFile
Write-Host "...Complete!"



I hope that helps!

Which Azure Plan is Right for You?

2017-07-27T00:47:04+00:00 July 27th, 2017|Azure, blog, Cloud, Microsoft, Microsoft Infrastructure, Microsoft Cloud Solution Provider, Office 365|

As you start to explore the world of Microsoft Azure Cloud Services, you will see that there are many options.  Let’s discuss the three types of Microsoft purchasing programs available to you.

#1 – Pay-As-You-Go Subscriptions

Pay-As-You-Go subscriptions are simple to set up and simple to use.  There are no minimum purchases or commitments.  You pay for your consumption by credit card on a monthly basis, and you can cancel at any time.  This use-case is primarily for infrastructure environments that are set up for a temporary purpose.  It’s important to understand that organizations using this model pay full list price for consumption and do not have direct support from Microsoft.

#2 – Microsoft Enterprise Agreement (EA)

Microsoft Enterprise Agreements are commitment-based Microsoft Volume Licensing agreements with a contractual term of 3 years.  Enterprise Agreement customers can add Azure to their EA by making an upfront monetary commitment for Azure services.  That commitment is consumed throughout the year by using a combination of the wide variety of Microsoft cloud services, including Azure and Office 365.  This is paid annually in advance, with a true-up on a quarterly basis for overages.  Any unused licenses are still charged based on the commitment.  If you are a very large enterprise, the greatest advantage of an EA is having a direct relationship with a Microsoft sales team.  EAs also offer discounts based on your financial commitment.  And while there are many pros to the EA approach, understanding and controlling the cost of consumption can be a challenge for customers within EAs.  Personally, I recently took over the management of our EA and can attest that this can be very complicated.

#3 – Cloud Solution Provider (CSP)

When using Microsoft Cloud Services through the Cloud Solution Provider (CSP) program, you work directly with a partner to design and implement a cloud solution that meets your unique needs.  Cloud Solution Providers support all Microsoft Cloud Services (i.e., Azure, Office 365, Enterprise Mobility Suite, and Dynamics CRM Online) through a single platform.  CSP is similar to the Pay-As-You-Go subscription in that there are no minimum purchases or commitments.  Your consumption is invoiced monthly based on actual consumption (either via invoiced PO or credit card, your choice), and you can cancel at any time.  This will significantly simplify your Azure and Office 365 billing process!  CSP offers many advantages over Pay-As-You-Go Subscriptions and Enterprise Agreements, and in most cases can be a more cost-effective solution.

As a CSP, Coretek helps customers optimize their consumption cost by working with them to ensure they have the right Azure server types assigned to their workloads.  We also work with customers to shut down services when they are minimally used after business hours.  As part of Coretek’s Managed Support, our team provides proactive maintenance, including monitoring and patching of your servers, to ensure your infrastructure is running in an optimal manner.  Coretek’s Azure Management Suite (AMS) Portal enables business users to understand where their consumption cost is going.  The AMS portal can display real-time consumption cost based on departments and projects.  It also enables business users to see, in a simple graphical format, which Microsoft licenses are in use and to whom they are assigned.

Coretek Services – Improving End User Experience and IT Efficiency.

Microsoft Azure – Global. Trusted. Hybrid.  This is cloud on your terms.

Azure – Next Available IP Address…

2017-06-01T19:53:53+00:00 June 15th, 2017|Azure, blog, Cloud, PowerShell|

The other day, I was working at a customer location with one of their IT admins named Jim, designing an Azure IaaS-based deployment.  During our session, he challenged me to make an easy way for him and his coworkers to see if a particular address is available, or find the next available “local” address in their various Azure VNets.

While addressing can be handled with ARM automation (or just dynamically by the nature of IaaS server creation), in an Enterprise it is usually necessary to document and review the build information by a committee before deployment.  As a result, Jim wanted to be able to detect and select the addresses he’d be using as part of build designs, and he wanted his coworkers to be able to do the same without him present.  So, this one is for Jim.

I wanted to write it to be user-friendly, with clear variables, and to be easy to read and easy to run.  …unlike so much of what you find these days…  😉

Copy the contents below to a file (if you wish, edit the dummy values for subscription, Resource Group, and VNet) and run it with PowerShell.  It will conduct a mild interview, asking about the necessary environment values (and whether you need to change them), and then asking you for an address to validate as available.  If the address you specified is available, it tells you so; if it is not, it returns the next few available values in the subnet in which the address you entered resides.

# TestIpaddress - Jeremy Pavlov @ Coretek Services 
# You just want to know if an internal IP address is available in your Azure VNet/Subnet.
# This script will get you there.  
#
# Pre-reqs:
# You need a recent version of the AzureRm.Network module, which is 4.0.0 at this writing.  
# You can check with this command: 
#     Get-Command -Module azurerm.network
# …and by the way, I had to update my version and overwrite the old version with this command:
#     Install-Module AzureRM -AllowClobber
#
# Some things may need to be hard-coded for convenience...
$MySubscriptionName = "My Subscription"
$MyResourceGroup = "My Resource Group"
$MyVnet = "My VNet"
#
Start-Sleep 1
Write-Host ""
Write-Host "Here are the current settings:"
Write-Host "Current subscription: $MySubscriptionName"
Write-Host "Current Resource Group: $MyResourceGroup"
Write-Host "Current VNet: $MyVnet"
Write-Host ""
$ChangeValues = read-host "Do you wish to change these values? (Y/N)"
if ($ChangeValues -eq "Y")
{
  Write-Host ""
  $ChangeSub = read-host "Change subscription? (Y/N)"
  if ($ChangeSub -eq "Y")
  {
    $MySubscriptionName = Read-host "Enter subscription name "
  }
  Write-Host ""
  $ChangeRg = read-host "Change resource group? (Y/N)"
  if ($ChangeRg -eq "Y")
  {
    $MyResourceGroup = Read-host "Enter Resource group "
  }
  Write-Host ""
  $ChangeVnet = read-host "Change Vnet? (Y/N)"
  if ($ChangeVnet -eq "Y")
  {
    $MyVnet = Read-host "Enter VNet "
  }
}
#
try
{
  $MySubs = Get-AzureRmSubscription
}
catch
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Unable to retrieve subscriptions."
}
$MySubsName = $MySubs.name
Start-Sleep 1
Write-Host ""
if ($MySubsName -contains "$MySubscriptionName")
{
  Write-Host "You are logged in and have access to `"$MySubscriptionName`"..."
}
else
{
  Write-Host "It appears that you are not logged in."
  Write-Host ""
  $NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
  if ($NeedToLogin -eq "Y")
  {
    Login-AzureRmAccount
    Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
  }
  elseif ($NeedToLogin -eq "N")
  {
    Write-Host "You must already be logged in then.  Fine. Continuing..."
  }
  else
  {
    Write-Host ""
    Write-Host "You made an invalid choice.  Exiting..."
    exit
  }
}
#
Start-Sleep 1
Write-Host ""
Write-Host "We will now check to see if a given IP address is available from any subnet in VNet `"$MyVnet`" "
Write-Host "...and if it is not available, provide the next few available on that subnet."
Start-Sleep 1
Write-Host ""
$MyTestIpAddress = Read-Host "Which address do you wish to test for availability?"
#
$MyNetwork = Get-AzureRmVirtualNetwork -name $MyVnet -ResourceGroupName $MyResourceGroup
$MyResults = Test-AzureRmPrivateIPAddressAvailability -VirtualNetwork $MyNetwork -IPAddress $MyTestIpAddress
$MyResultsAvailableIPAddresses = $MyResults.AvailableIPAddresses
$MyResultsAvailable = $MyResults.Available
#
Start-Sleep 1
if ($MyResultsAvailable -eq $False)
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Sorry, but $MyTestIpAddress is not available."
  Write-Host ""
  Write-Host -ForegroundColor Green "However, the following addresses are free to use:"
  Write-Host ""
  $MyResultsAvailableIPAddresses
}
else
{
  Write-Host ""
  Write-Host -ForegroundColor Green "Yes! $MyTestIpAddress is available."
}
Write-Host ""
Write-Host " ...Complete"

Now, if you know a better way to handle it, or have tips for improvement — or if you find a bug — I’d love to hear them (and so would Jim).  I hope it helps you out there…

Thanks, and enjoy!

The Future of Azure is Azure Stack!

2017-07-27T00:00:55+00:00 May 18th, 2017|Azure, blog, Cloud|

[Diagrams from recent Microsoft presentations: the components of Public Azure and of Private Azure (Azure Stack)]

I realize that the title above might be a bit controversial to some. In this blog post I will attempt to defend that position.

The two diagrams above, taken from recent Microsoft public presentations, symbolically represent the components of Public Azure and Private Azure (Azure Stack).  If you think that they have a lot in common, you are right.  Azure Stack is Azure running in your own data center.  Although not every Azure feature will be delivered as part of Azure Stack at its initial release (and some may never be delivered that way because they require enormous scale beyond the reach of most companies), it is fair to say that they are more alike than they are different.

Back in 2012 I wrote a blog post on building cloud-burstable, cloud-portable applications.  My thesis in that post was that customers want to be able to run their applications on local hardware in their data center, on resources provided by a cloud provider, or even on resources provided by more than one cloud provider.  They would also like a high degree of compatibility that allows them to defer the choice of where to run an application, and even change their mind as workload dictates.

That thesis is still true today.  Customers want to be able to run an application in their data center; if they run out of capacity, they would like to shift it to the cloud, and later potentially shift it back on-premises.

That blog post took an architectural approach using encapsulation and modularity of design to build applications that could run anywhere.

The Birth of Azure

A bit of additional perspective might be useful.  Back in 2007, while working as an Architect for Microsoft, I came across what would eventually become Azure.  (In fact, that was before it was even called Azure!)  Years earlier, I had worked on an experimental cloud project at Bell Labs called Net-1000.  At the time, AT&T was planning on turning every telephone central office into a data center providing compute power and data storage at the wall jack.  That project failed for various reasons, some technical and some political, as documented in the book The Slingshot Syndrome.  The main technical reason was that the computers of the day were minicomputers and mainframes, with the PC just emerging on the scene, so the technology that makes today’s cloud possible was not yet available.  Anyway, I can say that I was present at the birth of Azure, and history has proven that attaching myself to Azure was a pretty good decision.

The Azure Appliance

What many do not know is that this is actually Microsoft’s third attempt at providing Azure in the data center.  Back in 2010, Microsoft announced the Azure Appliance, which was to be delivered by a small number of vendors.  It never materialized as a released product.

Azure Pack and the Cloud Platform System

Then came Windows Azure Pack and the Cloud Platform System in 2014, delivered, also in appliance form, by a small number of selected vendors.  Although it met with some success, is still available today, and will be supported going forward, its clear successor will be Azure Stack.  (While Azure Pack is an Azure-like emulator built on top of System Center and Windows Server, Azure Stack is real Azure running in your data center.)

Because of this perspective, I can say that Azure Stack is Microsoft’s third attempt at Azure in the data center, and one that I believe will be very successful.  Third time’s a charm!

Azure Stack

The very first appearance of Azure Stack was in the form of a private preview, and later a public preview: “Azure Stack Technical Preview 1”.  During the preview it became clear that those attempting to install it were experiencing difficulties, many of them related to the use of hardware that did not match the recommended minimum specifications.

Since Azure Stack is so important to the future of Azure, Microsoft decided to release it in the form of an appliance to be delivered by three vendors (HP, Dell & Lenovo) in the Summer of 2017.  According to Microsoft, that does not mean that there will be no more technical previews, or that no one will be able to install it on their own hardware.  (It is generally expected that there will be additional Technical Previews, perhaps even one at the upcoming Microsoft Ignite conference later this month.)  It simply means that the first generation will be released in controlled fashion through appliances provided by those vendors, so that Microsoft and those vendors can ensure its early success.

You may not agree with Microsoft (or me), but I am 100% in agreement with that approach.  Azure Stack must succeed if Azure is to continue to succeed.

This article originally posted 9/20/2016 at Bill’s other blog, Cloudy in Nashville.
