The Future of Azure is Azure Stack!

May 18th, 2017 | Azure, blog, Cloud

[Diagrams: the components of Public Azure and of Azure Stack, from recent Microsoft public presentations]

I realize that the title above might be a bit controversial to some. In this blog post I will attempt to defend that position.

The two diagrams above, taken from recent Microsoft public presentations, symbolically represent the components of Public Azure and Private Azure (Azure Stack).  If you think that they have a lot in common, you are right.  Azure Stack is Azure running in your own data center.  Although not every Azure feature will be delivered as part of Azure Stack at its initial release (and some may never be delivered that way because they require enormous scale beyond the reach of most companies), it is fair to say that the two are more alike than they are different.

Back in 2012 I wrote a blog post on building cloud-burstable, cloud-portable applications.  My thesis in that post was that customers want to be able to run their applications on local hardware in their data center, on resources provided by a cloud provider, or even on resources provided by more than one cloud provider.  They would also like a high degree of compatibility that allows them to defer the choice of where to run an application, and even change their mind as workload dictates.

That thesis is still true today.  Customers want to be able to run an application in their data center. If they run out of capacity in their data center then they would like to shift it to the cloud and later potentially shift it back to on-premises.

That blog post took an architectural approach using encapsulation and modularity of design to build applications that could run anywhere.

The Birth of Azure

A bit of additional perspective might be useful.  Back in 2007 I was working as an Architect for Microsoft when I came across what would eventually become Azure. (In fact, that was before it was even called Azure!) Years before, I had worked on an experimental cloud project at Bell Labs called Net-1000. At the time, AT&T was planning on turning every telephone central office into a data center providing compute power and data storage at the wall jack.  That project failed for various reasons, some technical and some political, as documented in the book The Slingshot Syndrome. The main technical reason was that the computers of the day were minicomputers and mainframes, and the PC was just emerging on the scene.  So the technology that makes today’s cloud possible was not yet available.  Anyway, I can say that I was present at the birth of Azure.  History has proven that attaching myself to Azure was a pretty good decision.

The Azure Appliance

What many do not know is that this is actually Microsoft’s third attempt at providing Azure in the data center. Back in 2010 Microsoft announced the Azure Appliance, which was to be delivered by a small number of vendors. It never materialized as a released product.

Azure Pack and the Cloud Platform System

Then came Windows Azure Pack and the Cloud Platform System in 2014, delivered, also in appliance form, by a small number of selected vendors.  Although it met with some success, is still available today, and will be supported going forward, its clear successor will be Azure Stack.  (While Azure Pack is an Azure-like emulator built on top of System Center and Windows Server, Azure Stack is real Azure running in your data center.)

It is because of this perspective that I can describe Azure Stack as Microsoft’s third attempt at Azure in the data center, and one that I believe will be very successful. Third time’s a charm.

Azure Stack

The very first appearance of Azure Stack was in the form of a private preview, and later a public preview: “Azure Stack Technical Preview 1”.  During the preview it became clear that those attempting to install it were experiencing difficulties, many of them related to the use of hardware that did not match the recommended minimum specifications.

Since Azure Stack is so important to the future of Azure, Microsoft decided to release it in the form of an appliance to be delivered by three vendors (HP, Dell & Lenovo) in the summer of 2017.  According to Microsoft, that does not mean there will be no more technical previews, or that no one will be able to install it on their own hardware.  (It is generally expected that there will be additional Technical Previews, perhaps even one at the upcoming Microsoft Ignite conference later this month.) It simply means that the first generation will be released in a controlled fashion, through appliances provided by those vendors, so that Microsoft and those vendors can ensure its early success.

You may not agree with Microsoft (or me), but I am 100% in agreement with that approach.  Azure Stack must succeed if Azure is to continue to succeed.

This article originally posted 9/20/2016 at Bill’s other blog, Cloudy in Nashville.

How to protect against the next Ransomware Worm

May 15th, 2017 | blog, Ransomware, Security

Hopefully you were one of the prepared organizations that avoided the latest ransomware worm that made its way around the globe this past week.  This worm crippled dozens of companies and government entities, impacting over 230K computers in 150 countries.  Most of the infections were in Europe, Asia, and the Middle East, so if you did not get hit, you were either prepared or lucky.  This blog post will help you be prepared for when this happens again, so that you don’t have to rely on luck.

Patch everything you can, as quickly as you can

The exploit at the root of this ransomware worm was resolved by MS17-010, which was released in March of 2017, giving organizations more than enough time to download, test, pilot through UAT (User Acceptance Testing), and deploy to production.  While introducing new patches and changes to your environment carries the risk of breaking applications, there is far more risk in remaining unpatched – especially where security patches are concerned.  Allocate the proper resources to test and roll out patches as quickly as you can.
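If you want a quick way to spot-check a machine, a small PowerShell sketch along these lines can help (the KB numbers below are an illustrative, non-exhaustive sample, since the exact MS17-010 KB varies by OS version and may have been superseded by a later rollup):

```powershell
# Illustrative, non-exhaustive sample of MS17-010-related KB numbers.
# The exact KB depends on the OS version, and later rollups supersede these,
# so an empty result is a prompt to dig deeper, not proof of being unpatched.
$ms17010Kbs = @("KB4012212", "KB4012213", "KB4012214", "KB4012606", "KB4013429")

$installed = Get-HotFix | Where-Object { $ms17010Kbs -contains $_.HotFixID }

if ($installed) {
    Write-Output ("MS17-010-related update(s) found: " + ($installed.HotFixID -join ", "))
} else {
    Write-Warning "No update from the sample list was found. Verify this machine's patch level."
}
```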

Run the newest OS that you can

While the EternalBlue exploit that was patched by MS17-010 was applicable to every Windows OS, you were safe if you were running Windows 10, due to a security feature called ELAM (Early Launch Anti-Malware).  Many of the infected machines were running Windows XP or Server 2003, which did not get the MS17-010 patch (Microsoft released a patch for these OS variants after the infection; please patch if you still have these in your environment).  It is not possible to secure Windows XP or Server 2003.  If you insist on running them in your environment, assume that they are already breached and that any information stored on them has already been compromised (you don’t have any service accounts logging into them that have Domain Admin privileges, right?).
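If you are not sure whether any of those end-of-life machines are still lurking, a quick inventory sketch like the one below can surface them; it assumes the ActiveDirectory PowerShell module (RSAT) is available and that your computer objects are reasonably current:

```powershell
Import-Module ActiveDirectory

# List enabled, domain-joined computers still reporting an end-of-life OS.
Get-ADComputer -Filter 'Enabled -eq $true' -Properties OperatingSystem, LastLogonDate |
    Where-Object { $_.OperatingSystem -match 'Windows XP|Server 2003' } |
    Select-Object Name, OperatingSystem, LastLogonDate |
    Sort-Object LastLogonDate -Descending
```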

Firewall

Proper perimeter and host firewall rules help stop and contain the spread of worms.  While there were early reports that the initial attack vector was via e-mail, these are unconfirmed.  It appears that the worm was able to spread via the roughly 1.3 million Windows devices that have SMB (port 445) open to the Internet.  Once inside the perimeter, the worm was able to spread to any device that had port 445 open and did not have MS17-010 installed.
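As one hedged illustration of the host-side piece, a built-in Windows Firewall rule like the sketch below blocks inbound SMB on machines that have no business serving files.  Scope it carefully before rolling it out (via GPO or otherwise), because blocking port 445 on a file server or domain controller will break things:

```powershell
# Block inbound SMB (TCP 445) on hosts that should not be serving files.
# Do NOT apply this to file servers or domain controllers.
New-NetFirewallRule -DisplayName "Block inbound SMB (TCP 445)" `
    -Direction Inbound `
    -Protocol TCP `
    -LocalPort 445 `
    -Action Block `
    -Profile Any
```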

Turn off Unnecessary Services

Evaluate the services running in your desktop and server environment, and turn them off if they are no longer necessary.  SMB1 is still enabled by default, even in Windows 10.
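Here is a minimal sketch of turning SMB1 off on a modern Windows machine; test before deploying broadly, since some legacy devices (older NAS units, multifunction printers, and the like) still speak only SMB1, and the optional-feature removal may require a reboot:

```powershell
# Disable the SMB1 server component (Windows 8 / Server 2012 and later).
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# On Windows 8.1 / Windows 10 clients, remove the SMB1 optional feature entirely.
Disable-WindowsOptionalFeature -Online -FeatureName "SMB1Protocol" -NoRestart
```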

Conclusion

These types of attacks are going to be the new normal, as they are extremely lucrative for the organizations behind them.  Proper preparation is key, as boards are starting to hold both the CEO and the CIO responsible in the case of a breach.  While you may have cyber-security insurance, it may not pay out if you are negligent by not patching, or by running an OS that stopped receiving security updates three years ago.  I recommend being prepared for the next attack, as you may not be as lucky next time.

Additional Layers of Defense to Consider

For those over-achievers, additional layers of defense can prove quite helpful in containing a breach.
1.    Office 365 Advanced Threat Protection – Protect against bad attachments
2.    Windows Defender Advanced Threat Protection – Post-breach response, isolate/quarantine infected machines
3.    OneDrive for Business – block known bad file types from syncing

Good luck out there.

Disaster Recovery: Asking the wrong question?

May 11th, 2017 | Azure, blog, Cloud, Disaster Recovery


In my role as an Azure specialist I get asked a lot of questions about Disaster Recovery. IMHO, the people asking almost always ask the wrong question.

Usually it goes something like “I need Disaster Recovery protection for my data center. I have N VMs to protect. I have heard that I can use Azure Site Recovery either to facilitate Disaster Recovery to my backup data center, or even use Azure as my backup data center.” That is true.

In a previous lifetime I worked on a Disaster Recovery testing team for a major New York-based international bank. We learned two major principles early on:

1. It is all about workloads, since different application workloads have different Disaster Recovery requirements. Every workload has unique Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). Not all workloads are created equal.  For instance, email is a critical workload in the typical enterprise; an outage of more than a few hours would affect business continuity significantly.  Other application workloads (such as Foreign Exchange Trading, Order Processing, Expense Reporting, etc.) would have more or less stringent RTO and RPO requirements.

2. So it really is all about Business Continuity. Disaster Recovery is a business case. It is clear that providing perfect Disaster Recovery for all application workloads would be extremely expensive, and in many cases not cost-effective. In effect there is an exponential cost curve. So it is all about risk management and cost/benefit.

So where does that leave us?

1. Evaluate your Disaster recovery requirements on a workload by workload basis.

2. Plan how to implement it considering the objective of Business Continuity, RTO and RPO.

3. Use Azure Site Recovery to make it happen.

This article originally posted 4/1/2016 at Bill’s other blog, Cloudy in Nashville.

Walk for Wishes 2017…

May 8th, 2017 | Announcements, blog

We did it again!  As the 19th Annual Walk For Wishes® approached, fellow Coreteker Sarah Roland (a Walk veteran since 2011) assembled her annual “Sarah’s Dream Team” and put out the call for participation, as she has done every year since 2013.  And again this year, fellow Walk veterans Avi Moskovitz (since 2016) and I (since 2013) got our families involved, headed to the Detroit Zoo, and joined in the effort.

We are proud to have been a small — but important — part of the more than $500,000 (and counting) raised through the combined efforts of more than 6,500 walkers!  It was a really great crowd, and there were tons of interesting things to do and see along the way (it’s the Detroit Zoo, after all), supported by care and food volunteers throughout.  And given some threatening weather, we were amazed that we managed to get the whole day in with no rain and just a bit of cold.  That’s 5 years in a row of good walking weather; someone must be watching out for us…

We are truly thankful to the folks at Coretek who donated so generously to us and our team, both individually and as a company, helping us raise over $1,400 by walk time.  We hope you are as moved as we are by the work the organization does and the good that it brings to those who — in many cases — don’t have a whole lot of good in their lives at the moment their wish is granted.  For instance, here’s the moment that Wish kid Emily discovered at the Walk that she was heading to Walt Disney World® Resort in about a month!


And did I mention that you can still donate to our team!?!?  Just click here to go to Sarah’s Dream Team page and follow the process to donate to the team or to any of us individually.

Oh, and by the way… we decided that we are on a mission to get more members of our team for 2018, so you may as well just plan to join now…  😉

See you then!

A Cloud is a Cloud is a Cloud?

May 4th, 2017 | Azure, blog, Cloud

It never fails to amaze me that seemingly every vendor in the industry, every hosting company, and every virtualization or database vendor that puts something in one of the Public Clouds is quick to claim that “they have a Cloud”. A while back a term was even invented for this: “CloudWashing” (i.e., labeling everything that you have as “Cloud”).

Let’s apply some common sense. Back in 2011 the National Institute of Standards and Technology (NIST) produced a concise and clear definition of what it means for something to be a Cloud. You can read about it here. This is the standard against which all Cloud pretenders should be measured. In case you don’t have time to read it, I will summarize.

The NIST model defines a Cloud as “enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

It is composed of:

  • 5 Essential characteristics
  • 3 Service models
  • 4 Deployment models

The 5 essential characteristics are:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

The 3 Service Models are:

  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Infrastructure as a Service (IaaS)

The 4 Deployment Models are:

  • Public Cloud
  • Private Cloud
  • Community Cloud
  • Hybrid Cloud

Let’s take the definitions apart a bit and see what makes them tick:

The 5 Essential Characteristics

On-demand self-service: Being able to self-provision and de-provision resources as needed, without vendor involvement.

Broad network access: Resources are available over a network and accessed through standard protocols from all kinds of devices. For Public Clouds, these resources are typically available all over the world, with data centers that accommodate individual countries’ needs for data sovereignty and locality.

Resource pooling: Lots of resources that can be allocated from a large pool. Often the user does not know, or have to know, their exact location. Although, in the case of Public Clouds like Microsoft Azure, locality is often under the user’s control. In many cases the pool of resources appears to be nearly infinite.

Rapid elasticity: The ability to scale up and down as needed at any time, in some cases automatically, based on policies set by the user.

Measured service: Providing the ability to track costs and allocate them.  Public Cloud resources are often  funded using an Operating Expenditure (OpEx) rather than the Capital Expenditure (CapEx) accounting model used where hardware purchases are involved.

The 3 Service Models

In the case of Public Clouds these service models are defined by who supports various levels in the technology stack; the customer or the cloud vendor.

This Microsoft Azure diagram has been in existence, in one form or another, at least since the 1970s:

[Diagram: vendor vs. customer responsibilities across the SaaS, PaaS, and IaaS service models]

It illustrates which parts of each service type are the responsibility of the vendor (in blue) and which are the responsibility of the customer (in black).

Software as a Service (SaaS) asks the customer to take no responsibility for the underlying hardware and software stack. In fact, the customer’s only responsibility is to use the service and pay for it.

Platform as a Service (PaaS) lets the vendor manage the lower layers of the technology stack, while the customer is responsible for the Application and Data layers.

Infrastructure as a Service (IaaS) corresponds most closely to the division of labor in a typical data center between what the customer must manage and what the vendor is responsible for. With IaaS, the vendor takes care of everything from part of the operating system layer down to the bottom of the stack. Notice that in Azure the operating system layer is partly the vendor’s responsibility and partly the customer’s; the customer is still responsible for applying operating system patches, for instance. (This may differ slightly for other Public Clouds.)

I often include a slide in my Cloud presentations that addresses the question of what to use when. In my mind the decision tree is fairly obvious:

  • It’s SaaS until it isn’t
    • Then It’s PaaS until it isn’t
      • Then It’s IaaS
  • Or Hybrid with any of the above

If you can find a SaaS service that meets your objectives, use it. Let the vendor have all the stack support headaches. If you can’t, then consider using PaaS by either buying a 3rd party application or building your own.

Finally, if none of these approaches work, you can always take advantage of IaaS since that most closely matches what you have in your own data center. Even there, however, the vendor will take care of a great deal of the plumbing. (As an aside, IaaS is often the choice for a “lift and shift” migration of what you already have running in your data center up into the cloud.)

And yes, I know we haven’t discussed Hybrid yet, but patience, we will get there.

The 4 Deployment Models

A Public Cloud is a pool of computing resources offered by a vendor, typically, on a “pay as you go” basis. It supports Self-Provisioning by its customers as discussed above. It also often has a massive global presence and appears to have near-infinite capacity. In general, it is available to anyone.

Gartner publishes a large number of Magic Quadrants, but the ones most relevant to our discussion are those for Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). The Leaders quadrant of the Gartner Magic Quadrant for IaaS includes just Amazon and Microsoft. The one for PaaS includes just Microsoft Azure and Salesforce. There are other, lesser Public Cloud vendors, including Google, IBM and Rackspace.

One other point: Unless a company can spend the many billions of dollars necessary to create a global datacenter presence it is hard for them to be considered a Public Cloud leader.

A Private Cloud, on the other hand, is normally local to a single company and located on its own premises. If a customer creates a data center, or a complex of data centers, that conforms to the NIST definitions discussed herein, then it can rightly be called a “Private Cloud”. Be cautious here, however, and remember the term “CloudWashing” as defined above. Just because a customer has a large number of computers in a data center does not make it a Private Cloud, no matter how vehemently they insist on calling it one.

Although there is no requirement for the architecture of Public and Private Clouds to be the same, most companies agree that compatibility between them would be helpful to support CloudBursting; that is, the ability to move applications and data freely between the data center and the Public Cloud. (See the discussion on Hybrid Cloud below.)

A Community Cloud is a variation of a Public Cloud that restricts its clientele to a specific community of users. For instance, there are Community Clouds for Government and Education as well as for other communities. Each of these may have different levels of security and compliance requirements or other unique characteristics. The NIST document does state that it may be operated for the community by one of the organizations in it; however, it is typically operated by a Public Cloud vendor using a walled-off subset of its resources.

A Hybrid Cloud is typically formed by the integration of two or more Public and/or Private Clouds. In the typical case a company utilizes both a Public Cloud and resources in their own data center structured as a private cloud with strong networking integration between them.

A high degree of standardization between them is desirable in order to make it possible to distribute resources across them or to load balance or cloudburst resources between them. This is often done for security and compliance reasons where a company feels that some data and/or processing must remain behind their own firewall, while other data and/or processing can take advantage of a Public Cloud.

In the case of Microsoft technology there is one example where compatibility will be extremely high and advantageous. I expect the Private Cloud space to be dominated, at least in data centers utilizing Microsoft technology, by the new Azure Stack appliance coming out this summer from 5 initial vendors. Make no mistake about it: Azure Stack is Azure running in your own data center. In effect, Azure Stack is pretty much just another Azure region (JAAR?) to the Public Azure Cloud. Having that high degree of compatibility will help facilitate the Hybrid Cloud architecture discussed above. I have already blogged about Azure Stack and why I feel it is so important, so I will not go into detail here. See this blog post: The Future of Azure is Azure Stack.

We should also distinguish between a true Hybrid Cloud and a hybrid IT infrastructure. In the case of the latter, the resources in the data center need not be in the form of a Private Cloud as discussed above. Microsoft is the clear leader in this space now because of its traditional enterprise data center presence and its leadership in converting itself into a Cloud company.

So, don’t be fooled by Cloud Pretenders. Apply the NIST litmus test.

This article originally posted 5/3/2017 at Bill’s other blog, Cloudy in Nashville.

Why use Cloud App Security when my firewall already does this?

April 12th, 2017 | blog, Cloud, Microsoft, Microsoft Cloud Solution Provider

Microsoft’s Cloud App Security (CAS) is a new product feature that comes with Microsoft’s Enterprise Mobility + Security E5 line of products.  The solution is a cloud-based application model built on Azure Active Directory, but it can also be used independently, although the dataset will not be as rich.

The idea is for customers to gain deep insight into what apps their end-users are consuming, identify data drift and leakage, “sanction” or “unsanction” applications, and even generate a block script to block those unsanctioned apps at the firewall level.  There are two paths to gathering this data: firewall logs and Connected apps.

DISCOVER

You can discover information by manually importing firewall logs, or even by setting up a connector VM which will gather the logs and upload them to CAS for you.  This is not much different from the technologies offered by the firewall providers themselves and, in many cases, will not provide as quick a reaction as you’d receive from those vendor-provided solutions.  The connector, by default, uploads logs from the firewall every 20 minutes and imports them into CAS.

What I believe sets Cloud App Security apart from the firewall provider solutions is that it integrates with Connected apps.  A Connected app is an application for which CAS leverages APIs provided by the cloud provider.  Each provider has its own framework and limitations, so the functionality for each may depend on how much the provider has extended the API.

There are currently only a few Connected apps that CAS supports, but I’ve found that the biggest bang for your buck is the Office 365 suite of applications.  This allows CAS to see usage of the standard suite of Office apps and your Azure AD connected users.

INVESTIGATE

With the data gathered from the Connected apps, you can see information on File usage, owner information, app name, Collaborators, and more.  You will be able to tell who is accessing what files and who those files have been shared with.  You can drill down on particular user activity and see all of the apps and traffic volumes for their particular usage.

The Cloud Discovery Dashboard provides a rich view of information from a graphical perspective including dashboard items like App Categories which highlight usage based on categories such as CRM, Collaboration, Accounting, Storage apps, and more.  Other items on the dashboard show top discovered apps, top users, and even a geographical map of usage based on where the apps are being used.

CONTROL

Through alerts you may be made aware of Risky IP addresses, Mass downloads, New Cloud app usage, and more.  If a particular user is a concern, you even have the ability to suspend usage of a particular connected app for a particular user.  This adds a layer of security that a standard firewall report may not provide – especially if the user roams to another location off-premise where your firewall is not present.

Using the policies feature, I can set an alert, notify the user and CC their manager, or even suspend the user based on the several configuration policies that are available to me via the console.  This allows me to mitigate threats as they happen instead of waiting for a review of alerts or logs, possibly days or even weeks after the events occurred.

To summarize Microsoft’s Cloud App Security, I would say that Microsoft is opening the door to rich integration with cloud-based apps and providing another avenue to secure your corporate data.  With the deep integration of Office 365, those who have E5 licensing should definitely take advantage of this product.  Even those interested but not licensed can subscribe to a trial version and use the firewall discovery solution to get an immediate view of what’s being used internally.  This will allow your organization to have that much-needed discussion on BYOD and the security risks that ultimately accompany an open-door policy.

If set up properly, adding Cloud App Security to your environment can greatly increase the level of security your organization has regarding the mobility of your data and users!

For More information, review the following useful links:

https://www.microsoft.com/en-us/cloud-platform/cloud-app-security

https://docs.microsoft.com/en-us/cloud-app-security/enable-instant-visibility-protection-and-governance-actions-for-your-apps

https://github.com/Microsoft/CloudAppSecurityDocs/blob/master/CloudAppSecurityDocs/what-is-cloud-app-security.md

 

5 Tips for connecting your Azure space to On-Premises…

March 26th, 2017 | Azure, blog, Cloud, Microsoft Cloud Solution Provider

Often, the easiest thing about using the Azure Cloud is getting started.  Whether it’s creating a VM, establishing a web service, etc., it’s usually as easy as a few clicks and you’ve got some “thing” running in a bubble.  That’s the easy part.

It’s what you do right after that which can often be a challenge, since in many cases it involves inter-connectivity with your on-premises environment.  And at that point, whether it’s early on or even after long, deliberate design work, you may want to sit down with your firewall/network team (who may never even have heard the word “Azure” before) and talk about exactly how to connect.

Please be mindful that the router/firewall people work in a different space, and may have a different approach to what you’re trying to accomplish.  They may prefer to use third-party firewall/tunnel capabilities with which they are already familiar, or to utilize the off-the-shelf options that Microsoft provides.  Note: This article is all about the built-in Microsoft options; we’ll have a discussion about third-party items in a future article.

Specifically when working with the native Azure connectivity options, the first thing you’ll want to do is point yourself and others at this URL, which provides most everything needed to get started:
https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-plan-design

…note that there are some great sub-pages there too, to take the conversation from “Why are we doing this” to “What is Azure” to “Let’s submit a ticket to increase our route limitation“.

But speaking as a server/cloud guy, I wanted to give you some simple but important tips you’ll need to know off the top of your head when speaking to your router people:

Tip #1
There are two types of Gateways for on-prem connections to the cloud: ExpressRoute and VPN.  ExpressRoute is awesome and preferred if you have that option.  If you don’t know what ExpressRoute is already, you probably can’t afford it or don’t need it — which leaves you with VPN.  The good news is that VPNs can be perfectly fine for an Enterprise if you set them up right and mind them well.

Tip #2
I’m mostly writing these VPN options down because I always forget them, but you need to know the definitions too:
“PolicyBased” used to be called “Static Routing”, and your router person knows it as IKEv1
“RouteBased” used to be called “Dynamic Routing”, and your router person knows it as IKEv2

Tip #3
PolicyBased can only be used with “Basic” SKU, and only permits one tunnel and no transitive routing.  You probably do not want this except in the most simple of configurations.  Ironically, your router/firewall person will most likely configure your VPN this way if you don’t instruct them otherwise.  Watch out!
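One way to avoid that surprise is to be explicit about the VPN type when the gateway gets created.  Here’s a minimal sketch using the AzureRM PowerShell module of this era; the resource group, names, location, and SKU are all placeholders, and it assumes a virtual network with a “GatewaySubnet” already exists:

```powershell
# Placeholder names throughout; assumes "MyVNet" in "MyRG" already has a "GatewaySubnet".
$vnet   = Get-AzureRmVirtualNetwork -Name "MyVNet" -ResourceGroupName "MyRG"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

$pip = New-AzureRmPublicIpAddress -Name "MyVpnGwIp" -ResourceGroupName "MyRG" `
    -Location "eastus" -AllocationMethod Dynamic

$ipConfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwIpConfig" `
    -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# -VpnType RouteBased is the key setting (IKEv2); PolicyBased is what you get
# if someone builds a "Static Routing" / IKEv1 tunnel against a Basic gateway.
New-AzureRmVirtualNetworkGateway -Name "MyVpnGateway" -ResourceGroupName "MyRG" `
    -Location "eastus" -IpConfigurations $ipConfig `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
```

(Gateway creation can take a good 30 to 45 minutes, so don’t panic when the command sits there for a while.)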

Tip #4
The firewall you have may not be supported.  But even if it’s not, that means one of two things: you may be forced into PolicyBased (read Tip #3), or, in many cases, it will work just fine even though it’s not supported.  But you might be on your own if you have a problem, so know what you’re getting into.

Tip #5
Please calculate the total number of routes and gateways and such that you’ll be permitted based on the SKUs you’ve chosen.  Make sure that your fanciful networking dreams will all come true when you finally get where you’re going.  Everything in Azure has a quota or limitation of some sort, and you can almost always get them raised from the low original limit, but some things just aren’t possible without changing SKUs.

Extra Bonus Tip
Look into the “Network Watcher” preview for validating some of your networking flow and security, and for an instant dashboard of the quotas (mentioned in Tip #5).  It’s only available in some locations right now, but it looks like it will be quite nice.

…and that’s just scratching the surface, but those are some of the things I run into out there, and I thought it might save you a minute… or a day… or more…

Good luck out there!

Happy Holidays from Coretek Services

December 16th, 2016 | blog, Winter

“He’s over here!” Employee RJ Armstrong exclaimed, pointing out a window near the front of the banquet hall. All the children quickly raced to the window with excitement in their eyes.

“I don’t see him,” said his niece Ava. “Are you sure it was Santa?”

Coretek Services hosted their largest Christmas Party and A.I.R. Awards Celebration yet earlier this month; it was a true celebration of the year’s success and of the holidays with coworkers and their families.

 

A.I.R. represents Coretek’s values of:

Attitude – Having a servant’s heart

Integrity – We say we will do, and then we do it!

Relationships – Building relationships with clients, partners, our community, and most importantly – each other

The A.I.R. awards are nominated and voted on by Coretek employees, in the categories listed below. As is tradition, the previous year’s winners announced this year’s recipients.

 

 

Congratulations to:

Rookie of the Year – Resa Abbey

Operations Teammate of the Year – Dina Francis

Managed Solutions Teammate of the Year – Scott Ruoff

Consulting Services Teammate of the Year – Chris Barnes

PMO Teammate of the Year – Heather Spencer

Sales and Marketing Teammate of the Year – Jeff Bollock

The Most Improved – Brett Decker

Sales Win of the Year – Kettering Health Network – Jack Wilson & Voltaire Toledo

Road Warrior of the Year (The Pianki Award) – John Blickensdorf

Teammate of the Year – Chris Shalda

 

The night ended with Santa making an appearance! Merry Christmas and a Happy New Year to you and your loved ones.

Let’s slash through the hype and keep I.T. real

December 9th, 2016 | Announcements, blog

New Video Series

Hello TekTopics readers!  I am excited to announce we are launching a vlog: “Keeping I.T. Real”

In this new video series, we will slash through the hype and share critical and essential technologies, tools, and tips.  We will bring you only those which are real, worthy of your trust, and upon which you can build and grow your company.  As a trusted and reputable virtualization and cloud company with customers across the globe, Coretek has worked with all kinds of old and new technologies – some good, some bad, some ahead of their time, and some that need to be retired.  Be a part of the discussion of new ideas, trends, and technologies.

Your host, Mitch Howell, will be your guide through each topic.  Mitch will be introducing you to the key players, including guests within Coretek Services and our most valuable partners.  Be sure to stay tuned… and keep I.T. real!

Get to know the Host

Mitch Howell is a Client Executive with Coretek Services.  He came to Coretek in February of 2016, after spending 7+ years in Technical Sales, Sales Management & Solution Architecture within the Data Center Infrastructure industry.  Once Mitch identified the trend toward a Software Defined World, he naturally made the decision to join the growing Coretek team.  Coretek began as a nationally recognized Virtualization Solutions Integrator, but has evolved into an organization that is transforming the way we leverage Cloud technology today.


Community Service Project with Rebuilding Together

October 20th, 2016 | blog, Community Project, Giving Back

Keeping with the annual charitable tradition of our Community Service project for 2016, Coretek jumped in with our hearts and hands to renovate and rejuvenate the home of some local neighbors who do not have the financial resources or manpower to do it themselves.  Partnering with Rebuilding Together again for the second year in a row, Coretek visited the home of a veteran and his wife whose established home was in need of some major landscaping and yard work, along with some indoor repairs and organization.

The day’s weather loomed overcast and rainy, but that did not dissuade nearly 30 Coretek employees and their families from waking up early on a Saturday morning and driving from various parts of metro Detroit to put in their all.  It was a messy and muddy day — full of tree branches, carpentry, and unexpected project delays.  But these are the challenges that bring out the best in all of us!  And in the end, it was nothing but sunshine beaming from the happy home.

The homeowners were grateful and pleased with the work the Coretek team did, and we all were thankful to have the opportunity to give back to this deserving couple.

And we also want to thank the folks at Rebuilding Together for everything they bring to our Community Service projects!  To find out more about who they are, click here to learn more about Rebuilding Together and see if you or your organization can get involved.
