Coretek Services Announces Strategic Partnership with Huntzinger Management Group

February 6th, 2018 | Announcements, blog, Home Page, News

Partnership provides full range of IT consulting, solutions and services to the healthcare industry.

Farmington Hills, MI — Coretek Services (Coretek), an industry-leading IT professional services and consulting firm, announced today that the company has entered into a strategic partnership with Huntzinger Management Group (Huntzinger).  President and Chief Executive Officer Ron Lisch made the announcement.


The partnership with Huntzinger gives Coretek clients access to healthcare experts who help them leverage their IT investment through a range of Advisory, Implementation, and Staffing services designed to provide both immediate and sustained impact.  Complemented by Coretek’s professional services and infrastructure capabilities, the partnership will deliver a complete suite of IT solutions to the healthcare industry, offering clients a strategic roadmap focused on actionable plans and strategies that can be implemented to deliver transformational impact and results.

As an IT consulting company, systems integrator, and managed services provider, Coretek delivers high-value, innovative solutions that improve end-user experience and increase operational agility in the areas of virtualization, cloud solutions, and business continuity.  Huntzinger’s healthcare IT technical experts specialize in infrastructure and organizational assessment, operational efficiencies, and strategic roadmap development.  Both organizations foster a culture of innovation and outstanding customer service focused on project success – no exceptions!

Ron Lisch, Coretek President and CEO, said, “Coretek is excited to announce our partnership with Huntzinger, as it uniquely complements our healthcare focus and will provide added valuable resources for our clients.”

“As a strategic partner, Huntzinger helps healthcare organizations leverage their IT spend and implement and support technology, drawing on seasoned healthcare IT experts.  The relationship with Coretek enhances our access to resources specializing in infrastructure and cloud-based services,” said Nancy Ripari.


2018: The year you migrate to the cloud?

January 24th, 2018 | Azure, blog, Cloud, Disaster Recovery

Welcome to 2018, where the rush to the cloud shows no sign of slowing down.  New Azure features are being released so quickly that the Azure marketplace seems to show new services every day.  I sometimes see new features arrive between button clicks!  It is probably safe to say that we are in the “majority adopters” phase of the traditional technology adoption curve for the cloud.

The big question for you is whether this is the year to migrate your data center assets to the cloud.

Lift and Shift vs. Application Modernization

The first thing you need to consider is whether you want to “lift and shift” your infrastructure or start an application modernization project to move your digital assets to the cloud.

Lift and Shift Methodology — This method consists of cloning your server infrastructure to the cloud and transferring the data to the new servers.  The new servers are cloned as virtual machines (VMs) operating as they did in your on-premises infrastructure.  The plan detailed below is going to focus on this methodology.

Application Modernization Methodology — This method requires you to refactor your applications for the cloud.  There are significant cost savings available long term using this method; however, the conversion cost will be much higher than a straight “lift and shift” to the cloud.

The “Lift and Shift” Plan

Azure provides several tools including Azure Migrate (public preview) and Azure Site Recovery (ASR) to execute the migration to the cloud, but you need to understand what you are trying to accomplish.  The following is a typical plan for one of these projects:

  • Assessment – Use tools to collect data in manual or automatic ways to perform a documented analysis.  Data that should be collected and analyzed include the server function, operating system, cores (or CPUs), memory, disk sizes, and network performance.
  • Selection and Planning – Once the assessment is complete, you can then select the servers or workloads that are good candidates for migration.  You should also identify any server issues such as unsupported operating systems, large disk sizes, or incompatible disk technologies.  For example, disk sizes over 4TB cannot be cloned using ASR as of this writing.  Items like available network bandwidth, maintenance windows, and business objectives need to be considered before publishing your plan.
  • Environment Preparation – With any on-premises migration to the cloud, network constraints may need to be implemented (such as QoS and firewall changes for future replication).  The target environment also needs to be created in Azure, consisting of the new network, storage, and virtual private networks.  If you are using ASR, an on-premises process server needs to be rolled out.  In this stage, I also recommend implementing a monitoring solution so you can watch the process server health and network bandwidth.
  • Replication – This is probably the longest and most tedious part of the entire project.  At this stage, the data is replicated using the selected migration tool.  Like any backup or disaster recovery solution, the longest part of synchronization is the initial data replication.  You do not want to simply replicate all the servers at once, because you could significantly affect your available network bandwidth, and that could make you unpopular.  When your end users come to find you with their torches and pitchforks, that’s very bad.  In the migrations I have managed, I have found that 2-3 initial replications at a time is the practical limit.  Once a server completes its initial replication, you can add others, because completed servers only need incremental changes replicated over time.
  • Cutover – You are almost there!  Now come the maintenance windows where you can migrate to Azure and then disable the original server.  Depending on your project and business needs, you can either do it all at once or break up the list based on workloads and other factors.  Doing the migrations in multiple, distinct sets reduces stress.
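The Selection and Planning step above lends itself to simple scripting.  As a minimal sketch, here is one way to flag assessment records whose largest disk exceeds the ASR limit mentioned above; the server names and values are made-up stand-ins for your real assessment data:

```powershell
# Hypothetical assessment records -- in a real project these would come from
# your assessment tooling (the server names and sizes below are made up).
$asrMaxDiskGB = 4096   # ASR's ~4TB per-disk limit, as of the writing above
$inventory = @(
    [pscustomobject]@{ Name = 'web01'; OS = 'Windows Server 2012 R2'; LargestDiskGB = 512  }
    [pscustomobject]@{ Name = 'sql01'; OS = 'Windows Server 2008 R2'; LargestDiskGB = 6144 }
    [pscustomobject]@{ Name = 'app01'; OS = 'Windows Server 2016';    LargestDiskGB = 1024 }
)
# Servers over the limit cannot be cloned with ASR and need special planning.
$flagged = $inventory | Where-Object { $_.LargestDiskGB -gt $asrMaxDiskGB }
$flagged | ForEach-Object { Write-Host "Needs review: $($_.Name) ($($_.LargestDiskGB) GB)" }
```

The same pattern works for the other selection criteria (unsupported operating systems, incompatible disk technologies) by adding conditions to the `Where-Object` filter.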

Here at Coretek we do many of these projects every year.  We can definitely help you with your migration to the Azure cloud!  Just give us a call.

Coretek Wins! – 2018 Citrix Innovation Award for Partners

January 10th, 2018 | Azure, blog, Citrix, Cloud, Microsoft, Microsoft Cloud Solution Provider

We won the 2018 Citrix Innovation Award for Partners!!

Thanks so much to Wolverine Worldwide, all the people who voted for us, and of course all the awesome Coretek teammates who made it happen.  Much more to come!

See the details here:

https://www.citrix.com/go/innovation-award/partner-innovation.html

Amazing Video for 2018 Citrix Innovation Award Nomination…

January 7th, 2018 | Azure, blog, Citrix, Cloud, Microsoft Cloud Solution Provider

As nominees for the 2018 Citrix Innovation Award, Coretek recently had a video crew come through the main office.  The crew was there to interview, get scenery footage, and generally get the “vibe” of how we do what we do.

In the video, we get an overview of the amazing work we did with Wolverine Worldwide, helping them solve what is quite a common modern problem in an uncommon and innovative way.  And it is also really cool to see my fellow Coretekers as “movie” extras!

Here is the completed video:

Please take a moment to watch the extremely cool video, and vote for Coretek for the 2018 Citrix Innovation Award Nomination at this link.  Then, if you like what you see, drop us a line and get Coretek and Citrix working to help you do what you do more efficiently!

Coretek’s Own 2018 Nutanix Technology Champions…

December 21st, 2017 | blog, Cloud, Nutanix

We’re proud to share that our amazing fellow Coretekers Aaron Evans and Todd Geib have been nominated to the Nutanix Technology Champion (NTC) program for 2018!


The NTC is an award that recognizes Nutanix and web-scale experts for their ongoing and consistent contributions to the community and industry. But it’s more than just an award — NTC is a program that also provides nominees with unique opportunities to further expand their knowledge, amplify their brand, and ultimately help shape the future of web-scale IT.

For more information on the program, please click here to visit the NTC 2018 announcement page.

Aaron and Todd have a combined 5 years in the program.  Props to Aaron and Todd for continuing to be a driving force behind Coretek’s commitment to building Enterprise Clouds!

Coretek and Citrix: Delivering Confidence in the Cloud

December 14th, 2017 | Azure, blog, Citrix, Cloud, Home Page, Microsoft Cloud Solution Provider, Mobility, News, Virtualization

FARMINGTON HILLS, MI – December 14, 2017 – UPDATE on Citrix Innovation Award 2018.  You’ll have your chance to vote for Coretek and the Americas, January 2-9.  Details to follow soon.  Be a part of the excitement!

Click here for the latest info and pictures for the 2018 Citrix Innovation Award…get ready to vote for Coretek early January!


Office 365 Integration with SCCM…

November 20th, 2017 | Azure, blog, Configuration Manager, Microsoft Cloud Solution Provider, Office 365, System Center

Deploying Office or Office 365 has traditionally been a challenge in most corporate environments.  The file types have changed, components have been added and removed, the content size isn’t the most manageable, and the number of business processes that rely on the productivity suite requires close management of the deployment to ensure that work can continue once the newer version is deployed.

Microsoft System Center Configuration Manager (SCCM) — as of version 1602 — integrates with Office 365 to offer the ability to deploy the Office 365 productivity suite natively with SCCM.  The feature is called Office 365 Client Management and is found in the Software Library of the SCCM Console.  Here’s a snapshot of what it looks like:

On the left, you have your Office 365 folder with Office 365 Updates included.  In the folder view, you can see a summary of the number of O365 clients and their versions.  Notice that the scroll bar indicates there’s more to see…

The different sections can be summarized as such:

  1. Number of O365 Clients in total
  2. The breakdown and summarization of the different versions in the environment
  3. A button which will initiate a wizard to create an O365 client deployment package
  4. A chart indicating the number of systems running different languages of O365
  5. A button to create an Automatic Deployment Rule
  6. Another option to create client settings (These are standard SCCM Client settings, nothing special pertaining to O365)
  7. The number of systems configured to the different update Channels for Office 365 client management
  8. If ADRs were created, they would show in this section

I’ve had some great experiences working with the Office 365 Client Management integration in SCCM.  The ability to create a single package that supports multiple languages has cut my packaging time down to minutes.

In addition to providing a built-in package creation utility, SCCM also manages and services O365 packages moving forward.  The updates are all provided through SCCM’s native Software Update technology but are provided to you in a separate node in the console so that you can view only the updates pertaining to the 365 clients in your environment.  This makes it very easy to identify required and installed updates for your managed systems.

As for what’s needed to manage updates for O365 within SCCM:

Requirements for using Configuration Manager to manage Office 365 client updates

To enable Configuration Manager to manage Office 365 client updates, you need the following (summarized from link above):

  • System Center Configuration Manager, update 1602 or later
  • An Office 365 client – Office 365 ProPlus, Visio Pro for Office 365, Project Online Desktop Client, or Office 365 Business
  • Supported channel version for Office 365 client. For more details, see Version and build numbers of update channel releases for Office 365 clients
  • Windows Server Update Services (WSUS) 4.0  — You can’t use WSUS by itself to deploy these updates. You need to use WSUS in conjunction with Configuration Manager
  • On the computers that have the Office 365 client installed, the Office COM object is enabled
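The last requirement, the Office COM object, is easy to overlook.  As a sketch, you could wrap the check in a small function so it can be tested offline; note that the registry path and the `OfficeMgmtCOM` value name in the comment are my assumptions for a policy-managed client, so verify them against your own environment:

```powershell
# Sketch only: the policy path and 'OfficeMgmtCOM' value name referenced in the
# comment below are assumptions -- confirm them on your own clients first.
function Test-OfficeComEnabled {
    param([string]$Value)   # raw registry value, passed in so it can be tested offline
    return ($Value -eq 'True' -or $Value -eq '1')
}

# On a live client, you might feed the function like this (assumed path):
# $raw = (Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Office\16.0\Common\OfficeUpdate' `
#          -ErrorAction SilentlyContinue).OfficeMgmtCOM
# Test-OfficeComEnabled -Value $raw
```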

All in all, I have to say that I’m very impressed with the integration of Office 365 Client Management into SCCM.  SCCM has long been a very powerful tool, and adding the ability to manage the productivity suite natively ensures that admins in large environments can spend more time managing than packaging.

Good Job Microsoft!

Initial Domain Integration Discovery…

November 16th, 2017 | blog, Domain Integration, PowerShell, Scripting

If you intend to be involved in an Active Directory Domain integration with Quest Migration Manager for AD, there are some simple AD attribute discovery checks you should do long before you get serious about such things as user counts and remediation.  And especially if you’re going to perform an Enterprise multi-domain integration (many-to-one), it’s even more critical that you map out the attributes you will use for merging & matching the user objects across each domain relationship.

Attribute Analysis

Here at Coretek, we do a lot of Organizational and Enterprise Active Directory integrations, and many of them involve Quest Migration Manager for AD (MMAD).  Just today I was working with a customer to gather some of this early info, so I thought I’d post a note on some of these simple tests so you, too, can run them and see if your AD is in good shape to take on such a project.

The Quest Migration Manager for AD requires that you use a pair of Unicode String attributes for each domain relationship.  The default attributes used in a simple non-Exchange migration are “adminDescription” and “adminDisplayName”.  However, the more common scenarios I see involve Exchange and also multiple domains, requiring the use of other attributes such as “extensionAttribute14” & 15 and others.

The most common scenario I get involved with is where the users have already been created in the destination domain (due to an HR automation or other project), and the user objects from the source domain(s) will be merged, rather than created fresh.  In these cases I typically try to get the customer to check for these following critical things at a pre-project state — or as early as can be done — for the set of user objects that are to be part of the migration/integration:

  • Existing sidHistory — In most cases, existing sidHistory attributes on a user object are just a remnant of an old migration and may not matter.  However, if something like a previous Exchange migration was left incomplete, the sidHistory might be a critical part of the mailbox access for those users… and removing it without planning would be bad!  Tread carefully!
  • Existing extensionAttribute14, 15, etc. — These are the attributes that are commonly used in Enterprise AD migrations, and you’ll often find them still left-over from previous projects.  Those old project-based values don’t matter on their own; however I’ve also seen these attributes quite commonly used for other semi-hidden administrative items.  Why?  Because in Exchange environments, there’s a nifty GUI capability for editing these attributes, putting them at the fingertips of people that would otherwise leave them alone.  Again, make sure they are free and won’t be overwritten by anyone!

PowerShell Queries

So let’s check for those attributes.  Below are some simple ways to see if anything is populated in those critical attributes.

To return a simple list of all user distinguishedNames with “sidHistory” populated with something (command is all-one-line):

(Get-ADUser -Filter {sidHistory -like "*"} -SearchBase "ou=MyOweYou,dc=doemane,dc=lowcull").distinguishedName

…then of course, you can swap out extensionAttribute14 for others… and replace the “.distinguishedName” with others, and we could format the output differently, dump to a CSV, etc.  Here is a similar search, but now we’re formatting the output to a table for easier quick reading (command is all-one-line):

Get-ADUser -Filter {extensionAttribute14 -like "*"} -SearchBase "ou=MyOweYou,dc=doemane,dc=lowcull" -Properties sidHistory,extensionAttribute14 |ft -Property name,sidhistory,extensionattribute14

…or, to pull it all together into one command and search for all three of the attributes I mentioned, do this (command is all-one-line):

Get-ADUser -Filter {(extensionAttribute14 -like "*") -or (extensionAttribute15 -like "*") -or (sidHistory -like "*")} -SearchBase "dc=doemane,dc=lowcull" -Properties sidHistory,extensionAttribute14,extensionAttribute15 |ft -Property name,sidhistory,extensionattribute14,extensionattribute15

Of course, you’ll want to change out the specifics in the commands above to match your domain info and attribute discovery needs, but you get the idea.
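As mentioned above, you can also dump the results to a CSV for the project team.  Here is a sketch of one way to shape that report; the sample objects stand in for real Get-ADUser output so the multi-valued sidHistory handling is visible, and you would swap in your own query and -SearchBase:

```powershell
# Sample objects standing in for Get-ADUser results; replace the $users
# assignment with your real query, e.g.:
#   $users = Get-ADUser -Filter {sidHistory -like "*"} -SearchBase "dc=doemane,dc=lowcull" -Properties sidHistory
$users = @(
    [pscustomobject]@{ name = 'jdoe';   distinguishedName = 'CN=jdoe,OU=MyOweYou,DC=doemane,DC=lowcull';   sidHistory = @('S-1-5-21-111-222-333-1001') }
    [pscustomobject]@{ name = 'asmith'; distinguishedName = 'CN=asmith,OU=MyOweYou,DC=doemane,DC=lowcull'; sidHistory = @() }
)
$users |
    Where-Object { $_.sidHistory.Count -gt 0 } |
    # Join the multi-valued sidHistory into one cell so Export-Csv doesn't flatten it
    Select-Object name, distinguishedName, @{ n = 'sidHistory'; e = { $_.sidHistory -join ';' } } |
    Export-Csv -NoTypeInformation -Path ./sidHistory-report.csv
```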

I hope that helps get you closer to your domain integration…  And I hope you let us help you out!


Azure – Which Public and Private IPs are In Use and By Which VM…?

August 10th, 2017 | Azure, blog, Cloud, PowerShell

In my recent thread of simple-but-handy Azure PowerShell tools, I realized one important thing was missing: A tool that grabbed all the public & private IP addresses in use by VMs (plus some additional info).

I searched around and found an old post by fellow Coreteker Will Anderson, where he’d solved the problem.  Unfortunately, PowerShell had changed since then, and his suggestion didn’t work anymore.  Fortunately, Will Anderson is now on the other end of Skype from me, so after a quick explanation he gave me some advice on how to fix it within the new PowerShell behavior.  And of course, it would just be wrong not to post it back to the community…

So here is my script, with help from Will and another blogger Rhoderick Milne, where I found some additional input.  When executed, this script writes some quick handy info to screen, as in this lab example:

…which might be enough for you if you just want a quick review, but then it also dumps my preferred info for each VM to a csv, as follows: “Subscription”, “Mode”, “Name”, “PublicIPAddress”, “PrivateIPAddress”, “ResourceGroupName”, “Location”, “VMSize”, “Status”, “OsDisk”, “DataDisksCount”, “AvailabilitySet”

Just paste the code below into a script file, change the subscription name in the variable to your own, and execute.  If you don’t know which subscription name to use, you can always run “Get-AzureRmSubscription” after you run “Login-AzureRmAccount” and find it in the list.  So grab the code, hack at it, and let me know where you take it!

# Thanks to Will Anderson and Rhoderick Milne for the assistance.
#
# Get Date; Used only for output file name.
$Date = Get-Date
$NOW = $Date.ToString("yyyyMMddhhmm")
#
# Variables
$MySubscriptionName = "Windows Azure  MSDN - Visual Studio Premium"
$VmsOutFilePath = "C:\temp"
$VmsOutFile = "$VmsOutFilePath\VmList-$NOW.csv"
#
$NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
if ($NeedToLogin -eq "Y")
{
  Login-AzureRmAccount
  Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
}
elseif ($NeedToLogin -eq "N")
{
  Write-Host "You must already be logged in then.  Fine. Continuing..."
}
else
{
  Write-Host ""
  Write-Host "You made an invalid choice.  Exiting..."
  exit
}
#
$vms = Get-AzureRmVm 
$vmobjs = @()
foreach ($vm in $vms)
{
  #Write-Host ""
  $vmname = $vm.name
  Write-Host -NoNewline "For VM $vmname... "
  Start-Sleep 1
  $vmInfo = [pscustomobject]@{
      'Subscription'= $MySubscriptionName
      'Mode'='ARM'
      'Name'= $vm.Name
      'PublicIPAddress' = $null
      'PrivateIPAddress' = $null
      'ResourceGroupName' = $vm.ResourceGroupName
      'Location' = $vm.Location
      'VMSize' = $vm.HardwareProfile.VMSize
      'Status' = $null
      'OsDisk' = $vm.StorageProfile.OsDisk.Vhd.Uri
      'DataDisksCount' = $vm.StorageProfile.DataDisks.Count
      'AvailabilitySet' = $vm.AvailabilitySetReference.Id }
  $vmStatus = $vm | Get-AzureRmVM -Status
  $vmInfo.Status = $vmStatus.Statuses[1].DisplayStatus
  $vmInfoStatus = $vmStatus.Statuses[1].DisplayStatus
  Write-Host -NoNewline "Get status `("
  if ($vmInfoStatus -eq "VM deallocated")
  {
    Write-Host -ForegroundColor Magenta -NoNewline "$vmInfoStatus"
  }
  elseif ($vmInfoStatus -eq "VM stopped")
  {
    Write-Host -ForegroundColor Yellow -NoNewline "$vmInfoStatus"
  }
  elseif ($vmInfoStatus -eq "VM generalized")
  {
    Write-Host -ForegroundColor Gray -NoNewline "$vmInfoStatus"
  }
  else
  {
    Write-Host -ForegroundColor White -NoNewline "$vmInfoStatus"
  }
  Write-Host -NoNewline "`)... "
  $VMagain = (Get-AzureRmVm -ResourceGroupName $vm.ResourceGroupName -Name $vmname)
  $NifName = ($VMagain.NetworkProfile[0].NetworkInterfaces.Id).Split('/') | Select-Object -Last 1
  $MyInterface = (Get-AzureRmNetworkInterface -Name $NifName -ResourceGroupName $VMagain.ResourceGroupName).IpConfigurations
  $PrivIP = $MyInterface.privateipaddress
  $vmInfo.PrivateIPAddress = $PrivIP
  Write-Host -NoNewline "Getting Private IP `($PrivIP`)... "
  try
  {
    $PubIPName = (($MyInterface).PublicIPAddress.Id).split('/') | Select-Object -Last 1
    $vmPublicIpAddress = (Get-AzureRmPublicIpAddress -Name $PubIPName -ResourceGroupName $Vmagain.ResourceGroupName).IpAddress 
    Write-Host -NoNewline "Getting public IP `("
    Write-Host -ForegroundColor Cyan -NoNewline "$vmPublicIpAddress"
    Write-Host -NoNewline "`)... "
    $vmInfo.PublicIPAddress = $vmPublicIpAddress
  }
  catch
  {
    Write-Host -NoNewline "No public IP... "
  }
  Write-Host -NoNewline "Add server object to output array... "
  $vmobjs += $vmInfo
  Write-Host "Done."
}  
Write-Host "Writing to output file: $VmsOutFile"
$vmobjs | Export-Csv -NoTypeInformation -Path $VmsOutFile
Write-Host "...Complete!"



I hope that helps!

New Series – Using PowerShell, Azure Automation, and OMS

August 3rd, 2017 | Azure, blog

I’ve just recently begun releasing my new blog series on PowerShell.org – Using PowerShell, Azure Automation, and OMS.  This series is designed to help you become familiar with using Microsoft’s Operations Management Suite’s alerting capability to trigger runbooks in Azure Automation.  I highly encourage you to read through the series as it releases over the next couple of weeks.

Part I – Azure Automation Account Creation and Adding Modules – This article walks you through creating a new Azure Automation account and uploading your own custom modules to Azure for use in your on-prem and cloud-based solutions.

Part II – Configuring Azure Automation Runbooks And Understanding Webhook Data – Walks you through creating your first Azure Automation runbook and breaking down Operations Management Suite’s Webhook data.

Part III – Utilizing Webhook Data in Functions and Validate Results – Takes you through utilizing OMS Webhook data and passing it to your functions in Azure Automation to automatically remediate issues for you.

I’ll be updating the links here as the articles go live.  I hope you enjoy the series!

