A Cloud is a Cloud is a Cloud?

May 4th, 2017 | Azure, blog, Cloud

It never fails to amaze me that seemingly every vendor in the industry, every hosting company, and every virtualization or database vendor that puts something in one of the Public Clouds is quick to claim that they “have a Cloud”. A while back someone even coined a term for this: “CloudWashing” (i.e., labeling everything you have as “Cloud”).

Let’s apply some common sense. Back in 2011 the National Institute of Standards and Technology (NIST) produced a concise and clear definition of what it means for something to be a Cloud. You can read about it here. This is the standard against which all Cloud pretenders should be measured. In case you don’t have time to read it, I will summarize.

The NIST model defines a Cloud as “enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

It is composed of:

  • 5 Essential Characteristics
  • 3 Service Models
  • 4 Deployment Models

The 5 essential characteristics are:

  • On-demand self-service
  • Broad network access
  • Resource pooling
  • Rapid elasticity
  • Measured service

The 3 Service Models are:

  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Infrastructure as a Service (IaaS)

The 4 Deployment Models are:

  • Public Cloud
  • Private Cloud
  • Community Cloud
  • Hybrid Cloud

Let’s take the definitions apart a bit and see what makes them tick:

The 5 Essential Characteristics

On-demand self-service: Being able to self-provision and de-provision resources as needed, without vendor involvement.
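To make that characteristic concrete, here is a minimal sketch of what self-provisioning reduces to: a single authenticated API call. It uses Azure’s public Resource Manager REST endpoint; the subscription ID and bearer token are placeholders you would supply yourself, and error handling is omitted for brevity.

```python
import requests

# Placeholders -- substitute your own subscription ID and an Azure AD token.
SUBSCRIPTION_ID = "<subscription-guid>"
TOKEN = "<bearer-token>"

# One authenticated PUT provisions a resource group in seconds: no ticket,
# no waiting on the vendor -- the essence of on-demand self-service.
url = ("https://management.azure.com/subscriptions/"
       f"{SUBSCRIPTION_ID}/resourceGroups/demo-rg?api-version=2016-09-01")
resp = requests.put(url,
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    json={"location": "eastus"})
print(resp.status_code, resp.json())
```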

Broad network access: Resources are available over a network and accessed through standard protocols from all kinds of devices. For Public Clouds, these resources are typically available all over the world, with data centers that accommodate each country’s needs for data sovereignty and locality.

Resource pooling: Lots of resources that can be allocated from a large pool. Often the user does not know, or need to know, their exact location, although in the case of Public Clouds like Microsoft Azure, locality is often under the user’s control. In many cases the pool of resources appears to be nearly infinite.

Rapid elasticity: The ability to scale up and down as needed at any time, in some cases automatically, based on policies set by the user.
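As an illustration (a toy model of my own, not any vendor’s actual API), an elasticity policy is conceptually just a rule that maps current load to an instance count within the bounds the user sets:

```python
def desired_instances(current, cpu_percent, min_count=2, max_count=10):
    """Scale out when hot, scale in when idle, clamped to the policy bounds."""
    if cpu_percent > 75:      # scale-out threshold set by the user
        current += 1
    elif cpu_percent < 25:    # scale-in threshold set by the user
        current -= 1
    return max(min_count, min(max_count, current))

print(desired_instances(current=4, cpu_percent=90))  # -> 5: one more instance
```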

Measured service: Providing the ability to track costs and allocate them. Public Cloud resources are often funded using an Operating Expenditure (OpEx) accounting model rather than the Capital Expenditure (CapEx) model used where hardware purchases are involved.

The 3 Service Models

In the case of Public Clouds, these service models are defined by who supports the various levels of the technology stack: the customer or the cloud vendor.

This Microsoft Azure diagram has been in existence, in one form or another, at least since the 1970s:

[Diagram: the Microsoft Azure view of vendor vs. customer responsibility across SaaS, PaaS, and IaaS]

It illustrates who is responsible for which part of each service type: the vendor (in blue) and the customer (in black).

Software as a Service (SaaS) asks the customer to take no responsibility for the underlying hardware and software stack. In fact, the customer’s only responsibility is to use the service and pay for it.

Platform as a Service (PaaS) lets the vendor manage the lower layers of the technology stack, while the customer is responsible for the Application and Data layers.

Infrastructure as a Service (IaaS) corresponds most closely to a typical data center in terms of what the customer must manage and what the vendor is responsible for. With IaaS, the vendor takes care of everything from part of the operating system layer down to the bottom of the stack. Notice that in Azure the operating system layer is partly the vendor’s responsibility, and partly the customer’s; the customer is still responsible for applying operating system patches, for instance. (This may differ slightly for other Public Clouds.)

I often include a slide in my Cloud presentations that addresses the question of what to use when. In my mind the decision tree is fairly obvious:

  • It’s SaaS until it isn’t
    • Then it’s PaaS until it isn’t
      • Then it’s IaaS
  • Or Hybrid with any of the above

If you can find a SaaS service that meets your objectives, use it, and let the vendor have all the stack-support headaches. If you can’t, then consider using PaaS, either by buying a third-party application or by building your own.

Finally, if none of these approaches work, you can always take advantage of IaaS since that most closely matches what you have in your own data center. Even there, however, the vendor will take care of a great deal of the plumbing. (As an aside IaaS is often the approach of choice for taking a “lift and shift” approach to migrating what you already have running in your data center up into the cloud.)

And yes, I know we haven’t discussed Hybrid yet, but patience, we will get there.

The 4 Deployment Models

A Public Cloud is a pool of computing resources offered by a vendor, typically, on a “pay as you go” basis. It supports Self-Provisioning by its customers as discussed above. It also often has a massive global presence and appears to have near-infinite capacity. In general, it is available to anyone.

Gartner publishes Magic Quadrants for a large number of markets, but the ones most relevant to our discussion are those for Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). The Leaders quadrant for IaaS includes just Amazon and Microsoft; the one for PaaS includes just Microsoft Azure and Salesforce. There are other, lesser Public Cloud vendors, including Google, IBM, and Rackspace.

One other point: unless a company can spend the many billions of dollars necessary to create a global data center presence, it is hard for it to be considered a Public Cloud leader.

A Private Cloud, on the other hand, is normally local to a single company and located on its own premises. If a customer creates a data center, or a complex of data centers, that conforms to the NIST definitions discussed herein, then it can rightly be called a “Private Cloud”. Be cautious here, however, and remember the term “CloudWashing” defined above: just because a customer has a large number of computers in a data center does not make it a Private Cloud, no matter how vehemently they insist on calling it one.

Although there is no requirement for the architectures of Public and Private Clouds to be the same, most companies agree that compatibility between them would be helpful to support CloudBursting: that is, the ability to move applications and data freely between the data center and the Public Cloud. (See the discussion on Hybrid Cloud below.)

A Community Cloud is a variation of a Public Cloud that restricts its clientele to a specific community of users. For instance, there are Community Clouds for Government and Education, as well as for other communities. Each of these may have different levels of security and compliance requirements, or other unique characteristics. The NIST document does state that a Community Cloud may be operated for the community by one of the companies in it; in practice, however, it is typically operated by a Public Cloud vendor using a walled-off subset of its resources.

A Hybrid Cloud is formed by the integration of two or more Public and/or Private Clouds. In the typical case, a company utilizes both a Public Cloud and resources in its own data center, structured as a Private Cloud, with strong networking integration between them.

A high degree of standardization between them is desirable in order to make it possible to distribute resources across them, or to load-balance or cloudburst between them. This is often done for security and compliance reasons, where a company feels that some data and/or processing must remain behind its own firewall, while other data and/or processing can take advantage of a Public Cloud.

In the case of Microsoft technology, there is one example where compatibility will be extremely high and advantageous. I expect the Private Cloud space to be dominated, at least in data centers utilizing Microsoft technology, by the new Azure Stack appliance coming out this summer from 5 initial vendors. Make no mistake about it: Azure Stack is Azure running in your own data center. In effect, Azure Stack is pretty much Just Another Azure Region (JAAR?) to the Public Azure Cloud. That high degree of compatibility will help facilitate the Hybrid Cloud architecture discussed above. I have already blogged about Azure Stack and why I feel it is so important, so I will not go into detail here. See this blog post: The Future of Azure is Azure Stack.

We also should distinguish between a true Hybrid Cloud and a Hybrid IT Infrastructure. In the case of the latter, the resources in the data center need not be in the form of a Private Cloud as discussed above. Microsoft is the clear leader in this space now because of its traditional enterprise data center presence and its leadership in converting itself into a Cloud company.

So, don’t be fooled by Cloud Pretenders. Apply the NIST litmus test.

This article was originally posted 5/3/2017 at Bill’s other blog, Cloudy in Nashville.

Why use Cloud App Security when my firewall already does this?

April 12th, 2017 | blog, Cloud, Microsoft, Microsoft Cloud Solution Provider

Microsoft’s Cloud App Security (CAS) is a new product that comes with Microsoft’s Enterprise Mobility + Security E5 line of products.  The solution is a cloud-based application model built on Azure Active Directory, but it can also be used independently, although the dataset will not be as rich.

The idea is for customers to be able to gain deep insight into which apps their end-users are consuming, identify data drift and leakage, “sanction” or “unsanction” applications, and even generate a block script to block the unsanctioned apps at the firewall level.  There are two paths to gathering this data: firewall logs and Connected apps.

DISCOVER

You can discover information by manually importing firewall logs, or even set up a connector VM which will gather the logs and upload them to CAS for you.  This is not much different from technologies offered by the firewall providers themselves and, in many cases, will not react as quickly as those vendor-provided solutions.  The connector, by default, uploads logs from the firewall every 20 minutes and imports them into CAS.

What I believe separates Cloud App Security from the firewall-provider solutions is that it integrates with Connected apps.  A Connected app is an application where CAS leverages APIs provided by the cloud provider.  Each provider has its own framework and limitations, so the functionality for each may depend on how much the provider has extended the API.

There are currently only a few Connected apps that CAS supports, but I’ve found that the biggest bang for your buck is the Office 365 suite of applications.  This allows CAS to see usage of the standard suite of Office apps and your Azure AD-connected users.

INVESTIGATE

With the data gathered from the Connected apps, you can see information on file usage, owner information, app names, collaborators, and more.  You will be able to tell who is accessing which files and with whom those files have been shared.  You can drill down on a particular user’s activity and see all of the apps and traffic volumes for their usage.

The Cloud Discovery Dashboard provides a rich graphical view of this information, including dashboard items like App Categories, which highlights usage in categories such as CRM, Collaboration, Accounting, Storage apps, and more.  Other items on the dashboard show top discovered apps, top users, and even a geographical map of usage based on where the apps are being used.

CONTROL

Through alerts, you may be made aware of risky IP addresses, mass downloads, new cloud app usage, and more.  If a particular user is a concern, you even have the ability to suspend that user’s use of a particular Connected app.  This adds a layer of security that a standard firewall report may not provide, especially if the user roams to another location off-premises where your firewall is not present.

Using the policies feature, I can set an alert, notify the user and CC their manager, or even suspend the user, based on the several configuration policies available to me via the console.  This allows me to mitigate threats as they happen, instead of waiting for a review of alerts or logs, possibly days or even weeks after the events occurred.
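For those who want to go beyond the console, the CAS portal also exposes a REST API.  Below is a rough sketch of pulling recent alerts for triage; the tenant URL and API token are placeholders for values you would generate in your own portal, and the exact endpoint and response shape should be checked against the current documentation.

```python
import requests

CAS_URL = "https://mytenant.portal.cloudappsecurity.com"  # placeholder tenant
API_TOKEN = "<api-token-from-the-cas-portal>"             # placeholder token

# Ask the portal for a page of alerts and print enough to triage them.
resp = requests.get(f"{CAS_URL}/api/v1/alerts/",
                    headers={"Authorization": f"Token {API_TOKEN}"},
                    params={"limit": 20})
resp.raise_for_status()
for alert in resp.json().get("data", []):
    print(alert.get("timestamp"), alert.get("title"))
```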

To summarize Microsoft’s Cloud App Security: Microsoft is opening the door to rich integration with cloud-based apps and providing another avenue to secure your corporate data.  With the deep integration of Office 365, those that have E5 licensing should definitely take advantage of this product.  Even those who are interested but not licensed can subscribe to a trial version and use the firewall discovery solution to get an immediate view of what’s being used internally.  This will allow your organization to have that much-needed discussion on BYOD and the security risks that ultimately partner with an open-door policy.

If set up properly, adding Cloud App Security to your environment can greatly increase the level of security your organization has regarding the mobility of your data and users!

For More information, review the following useful links:

https://www.microsoft.com/en-us/cloud-platform/cloud-app-security

https://docs.microsoft.com/en-us/cloud-app-security/enable-instant-visibility-protection-and-governance-actions-for-your-apps

https://github.com/Microsoft/CloudAppSecurityDocs/blob/master/CloudAppSecurityDocs/what-is-cloud-app-security.md

 

5 Tips for connecting your Azure space to On-Premises…

March 26th, 2017 | Azure, blog, Cloud, Microsoft Cloud Solution Provider

Often, the easiest thing about using the Azure Cloud is getting started.  Whether it’s creating a VM, establishing a web service, etc., it’s usually as easy as a few clicks and you’ve got some “thing” running in a bubble.  That’s the easy part.

It’s what you do right after that which can often be a challenge, since in many cases it involves inter-connectivity with your on-premises environment.  And at that point, whether it’s early on or after long-thought-out, deliberative designing, you may want to sit down with your firewall/network team (who may never even have heard the word “Azure” before) and talk about exactly how to connect.

Please be mindful that the router/firewall people work in a different space, and may have a different approach to what you’re trying to accomplish.  They may prefer to use third-party firewall/tunnel capabilities with which they are already familiar, or they may utilize the off-the-shelf options that Microsoft provides.  Note: this article is all about the built-in Microsoft options; we’ll have a discussion about third-party items in a future article.

Specifically, when working with the native Azure connectivity options, the first thing you’ll want to do is point yourself and others at this URL, which provides most everything needed to get started:
https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-plan-design

…note that there are some great sub-pages there too, to take the conversation from “Why are we doing this?” to “What is Azure?” to “Let’s submit a ticket to increase our route limitation”.

But speaking as a server/cloud guy, I wanted to give you some simple but important tips you’ll need to know off the top of your head when speaking to your router people:

Tip #1
There are two types of Gateways for on-prem connections to the cloud: ExpressRoute and VPN.  ExpressRoute is awesome and preferred if you have that option.  If you don’t already know what ExpressRoute is, you probably can’t afford it or don’t need it, which leaves you with VPN.  The good news is that VPNs can be perfectly fine for an Enterprise if you set them up right and mind them well.

Tip #2
I’m mostly writing these VPN options down because I always forget them, but you need to know the definitions too:
“PolicyBased” used to be called “Static Routing”, and your router person knows it as IKEv1.
“RouteBased” used to be called “Dynamic Routing”, and your router person knows it as IKEv2.

Tip #3
PolicyBased can only be used with the “Basic” SKU, and it permits only one tunnel and no transitive routing.  You probably do not want this except in the most simple of configurations.  Ironically, your router/firewall person will most likely configure your VPN this way if you don’t instruct them otherwise.  Watch out!

Tip #4
The firewall you have may not be supported.  If it’s not, that means one of two things: you may be forced into PolicyBased (read Tip #3), or, in many cases, it will work just fine even though it’s not supported.  But you might be on your own if you have a problem, so know what you’re getting into.

Tip #5
Please calculate the total number of routes, gateways, and such that you’ll be permitted based on the SKUs you’ve chosen, and make sure that your fanciful networking dreams will all come true when you finally get where you’re going.  Everything in Azure has a quota or limitation of some sort, and you can almost always get them raised from the low original limit, but some things just aren’t possible without changing SKUs.
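If you’d rather not tally those numbers by hand, here is a hedged sketch that pulls current network usage against quota from the Azure REST API; the subscription ID, token, and api-version are assumptions to adjust for your environment.

```python
import requests

SUBSCRIPTION_ID = "<subscription-guid>"  # placeholder
TOKEN = "<bearer-token>"                 # placeholder
LOCATION = "eastus"

# Microsoft.Network publishes per-location usage vs. limit for network objects.
url = ("https://management.azure.com/subscriptions/"
       f"{SUBSCRIPTION_ID}/providers/Microsoft.Network/locations/"
       f"{LOCATION}/usages?api-version=2017-03-01")
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
for usage in resp.json()["value"]:
    print(f"{usage['name']['localizedValue']}: "
          f"{usage['currentValue']} of {usage['limit']}")
```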

Extra Bonus Tip
Look into the “Network Watcher” preview for validating some of your networking flow and security, and for an instant dashboard of the quotas mentioned in Tip #5.  It’s only available in some locations right now, but it looks like it will be quite nice.

…and that’s just scratching the surface, but those are some of the things I run into out there, and I thought it might save you a minute… or a day… or more…

Good luck out there!

Happy Holidays from Coretek Services

December 16th, 2016 | blog, Winter

“He’s over here!” Employee RJ Armstrong exclaimed, pointing out a window near the front of the banquet hall. All the children quickly raced to the window with excitement in their eyes.

“I don’t see him,” said his niece Ava. “Are you sure it was Santa?”

Coretek Services hosted its largest Christmas Party and A.I.R. Awards Celebration yet earlier this month; it was a true celebration of the year’s success and of the holidays with coworkers and their families.

 

A.I.R. represents Coretek’s values of:

Attitude – Having a servant’s heart

Integrity – We say what we will do, and then we do it!

Relationships – Building relationships with clients, partners, our community, and most importantly – each other

The A.I.R. awards are nominated and voted on by Coretek employees, in the categories listed below. In keeping with tradition, the winners from the previous year announced this year’s recipients.

Congratulations to:

Rookie of the Year – Resa Abbey

Operations Teammate of the Year – Dina Francis

Managed Solutions Teammate of the Year – Scott Ruoff

Consulting Services Teammate of the Year – Chris Barnes

PMO Teammate of the Year – Heather Spencer

Sales and Marketing Teammate of the Year – Jeff Bollock

The Most Improved – Brett Decker

Sales Win of the Year – Kettering Health Network – Jack Wilson & Voltaire Toledo

Road Warrior of the Year (The Pianki Award) – John Blickensdorf

Teammate of the Year – Chris Shalda

 

The night ended with Santa making an appearance! Merry Christmas and a Happy New Year to you and your loved ones.

Let’s slash through the hype and keep I.T. real

December 9th, 2016 | Announcements, blog

New Video Series

Hello TekTopics readers!  I am excited to announce that we are launching a vlog: “Keeping I.T. Real.”

In this new video series, we will slash through the hype and share critical and essential technologies, tools, and tips.  We will bring you only those which are real, worthy of your trust, and upon which you can build and grow your company.  As a trusted and reputable virtualization and cloud company with customers across the globe, Coretek has worked with all kinds of old and new technologies – some good, some bad, some ahead of their time, and some that need to be retired.  Be a part of the discussion of new ideas, trends, and technologies.

Your host, Mitch Howell, will be your guide through each topic.  Mitch will introduce you to the key players, including guests from within Coretek Services and our most valuable partners.  Be sure to stay tuned… and keep I.T. real!

Get to know the Host

Mitch Howell is a Client Executive with Coretek Services.  He came to Coretek Services in February of 2016, after spending 7+ years in technical sales, sales management, and solution architecture within the data center infrastructure industry.  Once Mitch identified the trend toward a software-defined world, he naturally made the decision to join the growing Coretek team.  Coretek began as a nationally recognized virtualization solutions integrator, but has evolved into an organization that is transforming the way we leverage cloud technology today.

[Photo: Mitch Howell]

Community Service Project with Rebuilding Together

October 20th, 2016 | blog, Community Project, Giving Back

Keeping with the annual charitable tradition of our Community Service project, for 2016 Coretek jumped in with our hearts and hands to renovate and rejuvenate the home of some local neighbors who do not have the financial resources or manpower to do it themselves.  Partnering with Rebuilding Together again for the second year in a row, Coretek visited the home of a veteran and his wife, whose established home was in need of some major landscaping and yard work, along with some indoor repairs and organization.

The day’s weather loomed overcast and rainy, but that did not dissuade nearly 30 Coretek employees and their families from waking up early on a Saturday morning and driving from various parts of metro Detroit to put in their all.  It was a messy and muddy day — full of tree branches, carpentry, and unexpected project delays.  But these are the challenges that bring out the best in all of us!  And in the end, it was nothing but sunshine beaming from the happy home.

The homeowners were grateful and pleased with the work the Coretek team did, and we all were thankful to have the opportunity to give back to this deserving couple.

And we also want to thank the folks at Rebuilding Together for everything they bring to our Community Service projects!  To find out more about who they are, click here to learn more about Rebuilding Together and see if you or your organization can get involved.

The Advantages of Working with a Microsoft Cloud Solution Provider (CSP)

October 2nd, 2016 | Azure, blog, Cloud, Microsoft, Microsoft Cloud Solution Provider

There are many cloud services platforms — and numerous cloud service providers — to assist your organization with the strategy, deployment, and management of your cloud initiative.  In this ever-growing landscape of cloud providers, how do you choose the partner that is best for your business?

We have uncovered the key attributes that will determine your cloud project’s success when selecting a cloud solution provider: experience, value, and fit.  Evaluating these three credentials of your cloud provider candidates will drive the success of your cloud strategy, deployment, and management.

Experience

First, you want a provider that has several cloud veterans who are constantly in touch with the state of the industry.  Coretek Services employs folks who are cloud product veterans in Azure and many of the other cloud technologies.  In fact, members of our team were instrumental in building the Azure cloud solution while they were employed at Microsoft.

Next, you want to know that your provider isn’t “cloud only” but also has experience in data center infrastructure, virtualization, mobility, security, and your specific business domain, such as healthcare, manufacturing, and others.  Few cloud service providers can offer you this additional depth of experience.

Value

You want your provider to deliver value beyond just the cloud product itself.  This means that you want your new cloud partner to have significant relationships and partnerships with other technology vendors, as well as the necessary expertise in those platforms.

As a Microsoft Cloud Solution Provider (CSP), we have a significant value partnership in Azure.  We have relationships with the product development teams and input into the feature development process.  This allows us to convey the cloud computing trends our customers are experiencing to the cloud product development team.

While you get a great product in Azure because Microsoft is focused on delivering the very best, we are free to build value-added features for our customers.  For example, we can tailor automation to your business to make your cloud usage more efficient, such as decreasing services when your business is closed or increasing services when demand bursts to higher levels.  This allows you to control your costs and forecast your needs well in advance.
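As a simplified illustration of that kind of automation (a toy schedule of my own, with a placeholder scale_to() hook standing in for a real platform call), the logic can be as plain as mapping business hours to instance counts:

```python
from datetime import datetime

BUSINESS_HOURS = range(7, 19)  # 7:00 through 18:59 local time (assumed)
PEAK_COUNT, OFF_COUNT = 8, 2   # instance counts chosen for illustration

def scale_to(count):
    # Placeholder: call your cloud platform's scaling API here.
    print(f"scaling service to {count} instance(s)")

def apply_schedule(now=None):
    """Run peak capacity during weekday business hours, minimal otherwise."""
    now = now or datetime.now()
    is_weekday = now.weekday() < 5  # Monday through Friday
    scale_to(PEAK_COUNT if is_weekday and now.hour in BUSINESS_HOURS
             else OFF_COUNT)

apply_schedule()
```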

Simply put, you get the best of both worlds: your organization receives the best that Azure can provide, along with the detailed focus on your IT business needs that Coretek Services delivers.

Priority and Fit

As a Microsoft CSP, we quickly identify technical problems and rapidly bring the right solutions and people to your assistance.  Coretek Services makes your organization’s needs a top priority.  We will fit into your business in the way that is most appropriate for you, providing professional services, managed services, or whatever mix you desire.

We believe in one thing.  Customer Success!  No Exceptions!

Coretek Services Picnic 2016

August 24th, 2016 | blog, Summer

August 22, 2016 – Last weekend, Coretek Services hosted our annual Summer Picnic.  The event is always a fun opportunity for the Coretek family to enjoy the great summer weather before the kids go back to school.

The event was more than just a picnic with great food; it was a time for co-workers and families to have fun.  Attendees competed in a variety of summer games, including a water balloon toss, a three-legged race, an egg-and-spoon race, cornhole, and gaga pit.  There were also bouncy houses, paddle boats, putt-putt golf, and LIVE MUSIC from Coretek employees!

Thanks to everyone who attended and helped make the event happen – we look forward to next year’s event!  It’s great to be part of the Coretek family.

[Photo gallery: games, the gaga pit, cornhole, bouncy houses, a jam session, and more from the picnic]

 

Enterprise Best Practice does not necessarily equal Cloud Best Practice…

July 28th, 2016 | Azure, blog

This article might just be restating the obvious for some — but to put it bluntly, a “best-practice” Enterprise Active Directory (AD) design feature may not perfectly translate to a Cloud-based deployment scenario. Let me explain…

When Good Mappings Go Bad

Let’s imagine an enterprise that has done a good job of providing universal access to user Home Folders by using the AD Home Folder attributes on the user objects.  Very common indeed, and very well loved in most cases.  In a well-designed infrastructure, the users get access to the Home Folder from almost anywhere in the world, and from a variety of platforms including local, remote, and thin/terminal access.

On top of that, imagine further that the environment utilizes the individual logon-script attribute on the user objects to determine group memberships, deliver printers, and maybe even deliver a mapping or two.  All of this is fine (though arguably cumbersome) in a high-speed environment where the network inter-connectivity is not rate-limited or rate-charged.

Now, however, let’s imagine being one of those users authenticating to an RDS/Terminal Server (session host) farm in a cloud-based domain instead of in the Enterprise.  Hmm.  Suddenly, different access and performance considerations appear when walking through that logon process.  For instance, while that Home Folder server may be reachable from that RDS farm, the lookup and access of the file server might very well be across a slow VPN pipe; or even if it’s fast, there may be a charge for egress data transfer, as is the case with Microsoft Azure.  Oh, and that logon script will definitely hit the Domain Controller looking for everything it needs to draw its conclusions; and in the end, it may attempt to map you to things you cannot even reach.
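If you want to quantify that risk before your users feel it, a quick probe like the one below (my own illustrative snippet; the server name is a placeholder) times a TCP connect to the home-folder server’s SMB port from wherever the session hosts run.  Slow or failed connects here predict slow logons.

```python
import socket
import time

HOME_FOLDER_SERVER = "fileserver.corp.example.com"  # placeholder hostname

start = time.monotonic()
try:
    # SMB listens on TCP 445; connect time approximates what logon will feel.
    with socket.create_connection((HOME_FOLDER_SERVER, 445), timeout=5):
        print(f"SMB reachable in {(time.monotonic() - start) * 1000:.0f} ms")
except OSError as exc:
    print(f"SMB unreachable after "
          f"{(time.monotonic() - start) * 1000:.0f} ms: {exc}")
```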

Can you solve this problem by putting domain controllers in the cloud?  Well, part of it — if you use good AD Site and Subnet configuration.  But you can’t escape the fact that your enterprise user objects may attempt to reach beyond those controllers and into the infrastructure to access what they must, and time-out on what they cannot (read: slow logon).

The GPO is your frienemy

And don’t even get me started on GPOs.  Yes, you know them, and you love them, and you use them to provide a rock-solid enterprise configuration for your users…  But what about those mandatory proxy registry settings that matter in the cloud?  What about those printer map settings?  What about those WMI evaluations?  The Item-Level Targeting?  And so on.

And then one day of course, there’s the one GPO setting that accidentally gets applied to those users that inexplicably wipes out their access to the application in the cloud-based RDS farm.

The bottom line is that again, things that may be prudent and reasonable in the Enterprise may be detrimental to the Cloud users’ experience.

So what can you do?

First, step back.  Ask yourself if your user logon process is clean, lean, and prudent for a Cloud-based experience.  It may very well be, but it likely is not.  So if you find that you’ve been a good and dutiful Enterprise admin and used Active Directory to tightly configure that user, you might be faced with the need for a separate directory for your Cloud environment that is either replicated, integrated, or federated.  That, for some organizations, may very well mean having to re-think security models (or at least re-imagine the ones they have), evaluate provisioning, and so on, as part of a larger Cloud Strategy.

Or, if your situation permits, you might be able to take advantage of the soon-to-be-released Azure Active Directory Domain Services, as long as your design doesn’t run up against some of its limitations (I strongly recommend you read the FAQ and other documentation before deciding it’s right for you).

Now you’ve heard what to watch out for, but the options you utilize going forward depend on what you are trying to achieve.  Good luck out there, and let us know if we can help…

Hyper-V, Windows 10, and Insider Preview…

July 21st, 2016 | blog, Hyper-V, Microsoft

I am guilty of running Windows 10 with the Insider Preview “Fast Ring” in production as my day-to-day laptop.  I also maintain a lab of Hyper-V Virtual Machines (VMs) on my laptop that use shared virtual networking with the built-in interfaces, so I can have the equivalent of a NAT environment for my VMs.

Mind you, it’s really been great in almost every way, except that every time I get an update to the Windows 10 Insider Preview (and that is every few days lately), I have to re-configure my interface sharing and NAT so my VMs can reach the Internet.  So, I thought I’d whip up the steps for you, in case you face the same thing.

So first, after you notice that your VMs don’t have Internet access, and you remember that you got another Fast Ring update recently, you do this:

Open the Hyper-V Manager on the Windows 10 laptop, and click on “Virtual Switch Manager…” from the Actions area.

[Screenshot: Hyper-V Manager with “Virtual Switch Manager…” in the Actions pane]

Select the virtual switch to be fixed (in my case, the one named “Internal-NAT switch”), change it from Internal to Private, and apply.

[Screenshot: Virtual Switch Manager, changing the switch from Internal to Private]

You may notice that the Hyper-V interface disappears from the laptop’s interface list.  Select Internal again to change it back from Private, and click OK.  The Hyper-V interface reappears in the interface list.

[Screenshot: Virtual Switch Manager, changing the switch back to Internal]

Right-click on the WiFi interface (or whichever one you wish to share networking with the VMs), and choose Properties.  On the Sharing tab, ensure the box is checked for “Allow other network users…” and click the drop-down list under “Home networking connection:”.  Change it from “Select a private network connection” to the Hyper-V interface, and click OK.

[Screenshot: WiFi interface Properties, Sharing tab]

Note that the previous step has not *always* worked for me, though it usually does.  A couple of times, I’ve had to either a.) un-check the check box and save before re-enabling sharing, or in rare cases, b.) go into Device Manager and remove the WiFi interface, reboot, and return to re-enable sharing.  Anyway, if all goes well and you’ve re-enabled sharing, your VM pings will start going through as the networking gets reconnected.

[Screenshot: VM pings succeeding once networking is reconnected]

I’ve become quite used to doing this series of steps and have got it down to a quick few moments, but it always seems to catch me off-guard each time it happens.  I hope it helps you a bit!
