About Jason.Shaw


Managing Multiple Azure Subscriptions from PowerShell…

July 24th, 2014 | Uncategorized

Hi folks, Jason here again – this time with some Azure PowerShell goodness to share.

A while back I set up an Azure trial subscription.  Following Jeremy’s post last year, “How to manage Azure from PowerShell on your PC“, I was able to get PowerShell to connect to my free trial subscription, creatively named “Free Trial”.  Coretek was kind enough to provide me with an MSDN Premium license.  Since the MSDN Azure subscriptions get $99 a month in Azure credit, it was high time to switch over to that Azure account and leverage that credit!  This subscription was also creatively named… “Visual Studio Premium with MSDN”.

Once again I followed Jeremy’s steps to import the Azure Publish Settings file – this time for the new subscription.  I ran Get-AzureVM… but I wasn’t seeing any VMs for my MSDN subscription.  Take a look:


…In the above screen capture, XENAPP1 is a VM in my old, un-loved Free Trial subscription.  Running Get-AzureSubscription showed me that I did indeed have access to two subscriptions, as expected:


So that raised the obvious question… how do I connect to my VMs in my other subscription?  Well, that’s easy enough to do.  Just run the following cmdlet:

Select-AzureSubscription -SubscriptionName "Visual Studio Premium with MSDN"

…of course, change the subscription name to match your own.  NOTE: the subscription name is case-sensitive!

One more tip for you, Dear Reader.  If you close your Azure PowerShell window and come back, the session will revert to whatever subscription is the default; that will always be the first subscription you set up.  The fix is simply adding the -Default switch to the end of the above cmdlet.  Now that’ll be where your Azure cmdlets do their magic going forward.
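Putting the tips above together, here’s a quick sketch using the classic Azure (Service Management) cmdlets; the subscription name below is from my setup, so substitute your own:

```powershell
# List every subscription imported via the publish settings file,
# and see which one is currently the default
Get-AzureSubscription | Select-Object SubscriptionName, IsDefault

# Switch the current session to the MSDN subscription (name is case-sensitive)
Select-AzureSubscription -SubscriptionName "Visual Studio Premium with MSDN"

# Make it the default so future PowerShell sessions use it too
Select-AzureSubscription -SubscriptionName "Visual Studio Premium with MSDN" -Default

# Confirm the VMs from that subscription are now visible
Get-AzureVM
```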

Now when I run Get-AzureVM I get the VMs I am looking for:



Certificate Services Did Not Start on a Sub CA…

June 26th, 2014 | Uncategorized

Hi Internet friends!  I recently came across a two-tier PKI infrastructure (Enterprise-style, with offline root) that was set up for a customer who was transitioning to a brand new AD Forest.  The customer stated the CAs were working just fine when initially built, and had the validation documentation to prove it.

However, a couple months later, the customer was receiving errors when trying to generate a certificate from the subordinate CA for a new application infrastructure, and they asked for help.  They saw this error in Event Viewer:

Event ID 100: Active Directory Certificate Services did not start: Could not load or verify the current CA certificate.  <subCA certificate name>  The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE).


Odd that it worked at implementation, but suddenly stopped working.  It was time to check things out and review the implementation.

The first thing I noticed was that the Active Directory Certificate Services service (certsvc) would not start.  When I tried it from the Certification Authority GUI I saw the error message shown at left.

You’ll note that the GUI message is nearly identical to the Event ID 100 message.  I simply mention this so if you do see the message in the GUI, you don’t need to run to Event Viewer; you’ve got the same issue I experienced at the customer site.

Web searching was only slightly helpful and put me in the ballpark for the fix, but it didn’t bring me all the way home.  And if you simply want my quick steps on how I corrected the issue, here is what I did:

  1. Powered up the offline root CA
  2. Renewed the Subordinate CA certificate with the Root CA and re-installed it on the Subordinate CA
  3. Regenerated the Root CA CRL and copied it to the correct location on the Subordinate CA
  4. Started the AD CS service on CA1, the Subordinate CA.
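For the command-line inclined, the steps above can be sketched roughly like this on the Subordinate CA.  The file paths and names are examples from my lab, not your environment:

```powershell
# 1. (After renewing the Sub CA cert against the powered-up Root CA)
#    install the renewed Subordinate CA certificate:
certutil -installcert "C:\Temp\SubCA-Renewed.crt"

# 2. Copy the freshly published Root CA CRL into the CDP path served by IIS:
Copy-Item "C:\Temp\RootCA.crl" "C:\inetpub\wwwroot\pki\"

# 3. Start the certificate service:
Start-Service certsvc
```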


But if you want more detail on what I discovered, here’s the meat…

First off, the Root CA CRL was set to expire in just seven days.  That’s not going to be good: once the Root CA Certificate Revocation List (CRL) expires, the certsvc service cannot start.  The service depends on finding a valid Root CA CRL.  If that CRL is expired – or can’t be found – the service will not start.

Simply regenerating that and replacing it on the Subordinate CA – after adjusting the CRL Publication interval to 1 Year – was the first thing I tried, but it didn’t help.  This led me to investigate if the Subordinate CA could even find the Root CA CRL. 
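Two quick certutil checks would have surfaced this problem right away (the paths here are examples):

```powershell
# Verify the Sub CA certificate chain, fetching CRLs from the URLs in the cert;
# this surfaces CRYPT_E_REVOCATION_OFFLINE when the Root CRL is expired or unreachable
certutil -verify -urlfetch "C:\Temp\SubCA.cer"

# Dump the Root CA CRL to inspect its NextUpdate (expiry) time
certutil -dump "C:\inetpub\wwwroot\pki\RootCA.crl"
```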

I opened the Subordinate CA certificate, clicked the Details tab, and checked the value in the CRL Distribution Points entry, shown here at right.

That’s where the light bulbs started to get bright.  The server was named ca1.domain.com, but they had their PKI IIS set up with the URL pki.domain.com/pki.


I checked the CRL Distribution Point settings on the Root CA and the Subordinate CA.  Aha!  They did not match what was in the Subordinate CA cert.

Now take a look at the Root CA CRL CDP Settings in the Properties page at left.  You’ll see that this is set to the URL I mentioned before, pki.domain.com/pki – which is *not* what is in the Subordinate CA CRL Distribution Points property of the certificate!

The fix was now pretty apparent.  All I needed to do was renew the Subordinate CA certificate and replace it on CA1.

Take a look below at the CRL Distribution Point property on the new Subordinate CA cert and you’ll see it now matches what is in the Root CA settings.


From there it was just adjusting the Root CA CRL publication interval in the Certification Authority MMC by right clicking Revoked Certificates, selecting Properties, then setting it according to Microsoft Best Practices of 1 Year, as shown here.

Then, I right clicked on Revoked Certificates, selected All Tasks, Publish, and re-published the CRL and copied it to the /pki folder on the Subordinate CA Server.
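If you prefer to script it, the same interval change and re-publish can be done with certutil.  The CRLPeriod registry value names are the real AD CS ones; the destination path is an example:

```powershell
# Set the Root CA CRL publication interval to 1 year
# (equivalent to the Revoked Certificates > Properties GUI steps)
certutil -setreg CA\CRLPeriodUnits 1
certutil -setreg CA\CRLPeriod "Years"

# Restart the service so the new interval takes effect, then publish a new CRL
Restart-Service certsvc
certutil -crl

# Copy the new CRL to the /pki folder on the Subordinate CA server
Copy-Item "C:\Windows\System32\CertSrv\CertEnroll\RootCA.crl" "\\CA1\pki\"
```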

Voilà!  The services now started on CA1, the Subordinate CA, and everything worked as expected.

Now it’s true that this does mean that the Root CA must be powered up annually so the CRL can be regenerated and copied, but there is no getting around that. 

Yes, of course you can set the CRL publication interval to be longer than a year; but as per best practices, you should set that to expire in the least amount of time that is tolerable.  And in this customer’s case, the recommendation of each year was suitable and appropriate.


Firing up the Root CA and doing this annually isn’t a big deal in my world, hence that recommendation.

In the end, with the certificate service running and the CRLs fixed, the customer was able to mint the certificates necessary for the new application, and I made an appointment with the customer in my calendar for just shy of a year from now…

I hope that helps someone else out there!

The death of non-verifiable subject alternate names in certificates…

April 23rd, 2014 | Uncategorized

If you’re not aware, 3rd party certificate providers (Verisign, etc.) won’t be allowed to issue certs with non-verifiable Subject Alternative Names (SANs) or Subjects with so-called “internal” server names in the near future.  For example, server.local, servername, server.lan, etc. are *not* valid inclusions in certificates; basically, any name that is not verifiable against a public registrar.  Any 3rd party cert with an internal server name that you acquire now will be marked for expiry on November 1, 2015.  Existing certs will be revoked on October 1, 2016 – assuming the provider is following guidelines.
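As a rough illustration, here’s a tiny PowerShell heuristic for spotting names a public CA will now refuse.  This is a sketch of the idea, not the CA/Browser Forum’s actual validation logic:

```powershell
# Heuristic check: flag SAN entries that public CAs treat as "internal" names,
# i.e. single-label short names or reserved suffixes like .local/.lan
function Test-InternalName {
    param([string]$Name)
    if ($Name -notmatch '\.') { return $true }               # short name, e.g. MYWEBSERVER
    if ($Name -match '\.(local|lan|internal)$') { return $true }
    return $false
}

Test-InternalName "server.local"     # True  - not verifiable against a public registrar
Test-InternalName "www.contoso.com"  # False - publicly registrable name
```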

This can be problematic going forward when it comes to designing Active Directory services for new Forests, since best practices have historically been to try and avoid split-brain/split-horizon DNS, if possible.  Going forward, any new AD designs must include considerations for domains that end in .com or .net, in order to avoid this issue.

And at this very moment, I’m on an engagement that has a non-internet routable AD Forest name; so I am planning for this situation.

If you end up in this situation and you need SANs with internal server names (read: the FQDN of an Exchange Client Access Server), the good news is that DigiCert has a tool and some steps to follow to work around this.  For additional information, please see these great explanations from DigiCert:

Internal Server Names
New gTLDs
DigiCert Internal Name Tool
Redirect Internal Exchange Domains to use External Domains

Microsoft is writing their concerns and tips about this situation into some of their Knowledge Base articles as well (see the “More Information” section in this support guide).  Now hang on…  that doesn’t mean re-work your entire DNS and AD infrastructure.  Instead, they recommend creating an internal DNS zone that matches your external zone and setting up the appropriate Exchange records, while leaving your existing internal DNS namespace alone.  That’s one way to do things if you walk into *any* environment with a .local (and the like) internal DNS name.
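That recommendation can be sketched with the Windows Server DnsServer module; the zone name, record names, and IP address below are examples only:

```powershell
# Create an internal, AD-integrated copy of the external zone...
Add-DnsServerPrimaryZone -Name "contoso.com" -ReplicationScope "Forest"

# ...then add only the records that Exchange clients need to resolve internally
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "mail" -IPv4Address "10.0.0.25"
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "autodiscover" -IPv4Address "10.0.0.25"
```

Everything else in the external zone still resolves via public DNS, which is why the existing internal namespace can be left alone.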

For *new* environments, however, if the customer is using SomeCompanyDomain.com externally, then set up the new AD infrastructure as SomeCompanyDomain.net or SomeCompanyDomain.biz and get past this silliness.  Just add the names to the Subject Alternative Name on the cert, set up the internal Exchange URLs appropriately, and you’re done.

This change can have ramifications beyond Exchange, folks.  Short-names are also disallowed, so you can’t even put things like MYWEBSERVER or MYSHAREPOINTSITE as a SAN now.

Yes, this means I feel this is something *very* important for us to keep in mind as we work with AD, Exchange, SharePoint, etc.  You’re going to find certs that expire Nov 1, 2015… and that’s less than 18 months from this writing.  That’ll be here before you know it, and we all need to be prepared for these situations.

If you’re in charge of potentially-affected systems, consider yourself warned!  And if you’re someone who doesn’t have time in your busy schedule to handle this, let us know if we can help you…