Certificate Services Did Not Start on a Sub CA…

June 26th, 2014 | Uncategorized

Hi Internet friends!  I recently came across a two-tier PKI infrastructure (Enterprise-style, with offline root) that was set up for a customer who was transitioning to a brand new AD Forest.  The customer stated the CAs were working just fine when initially built, and had the validation documentation to prove it.

However, a couple months later, the customer was receiving errors when trying to generate a certificate from the subordinate CA for a new application infrastructure, and they asked for help.  They saw this error in Event Viewer:

Event ID 100: Active Directory Certificate Services did not start: Could not load or verify the current CA certificate.  <subCA certificate name>  The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE).

[Image: 1-CaError.png]

Odd that it worked at implementation, but suddenly stopped working.  It was time to check things out and review the implementation.

The first thing I noticed was that the Active Directory Certificate Services service (certsvc) would not start.  When I tried it from the Certification Authority GUI I saw the error message shown at left.

You’ll note that the GUI message is nearly identical to the Event ID 100 message.  I simply mention this so if you do see the message in the GUI, you don’t need to run to Event Viewer; you’ve got the same issue I experienced at the customer site.

Web searching was only slightly helpful and put me in the ballpark for the fix, but it didn’t bring me all the way home.  And if you simply want my quick steps on how I corrected the issue, here is what I did:

  1. Powered up the offline root CA
  2. Renewed the Subordinate CA certificate with the Root CA and re-installed it on the Subordinate CA
  3. Regenerated the Root CA CRL and copied it to the correct location on the Subordinate CA
  4. Started the AD CS service on CA1, the Subordinate CA.
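In case it’s useful, here is a rough command-line sketch of steps 2 through 4, run from an elevated prompt (the host names, RequestId, file names, and paths are illustrative assumptions, not the customer’s actual values; the whole thing can also be done through the Certification Authority MMC):

# On the Subordinate CA (CA1): generate a renewal request for the CA certificate.
# (Certification Authority MMC > All Tasks > Renew CA Certificate does the same thing.)
certutil -renewCert ReuseKeys

# Carry the resulting .req file to the offline Root CA, submit and issue it,
# then retrieve the signed certificate (the RequestId comes from the submit output).
certreq -submit C:\Transfer\CA1-Renewal.req
certutil -resubmit 2
certreq -retrieve 2 C:\Transfer\CA1-Renewal.crt

# Back on CA1: install the renewed Subordinate CA certificate.
certutil -installCert C:\Transfer\CA1-Renewal.crt

# On the Root CA: publish a fresh CRL, then copy it to the /pki folder on CA1.
certutil -CRL
Copy-Item C:\Windows\System32\CertSrv\CertEnroll\*.crl \\ca1\pki\

# Finally, start AD CS on the Subordinate CA.
net start certsvc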

[Image: 2-CertDeets.png]

But if you want more detail on what I discovered, here’s the meat…

First off, the Root CA CRL was set to expire in just seven days.  That’s not going to be good.  Once the Root CA Certificate Revocation List (CRL) expires, the certsvc service can no longer start: the service depends on finding a valid Root CA CRL, and if that CRL is expired – or can’t be found – the service will not start.

Simply regenerating that and replacing it on the Subordinate CA – after adjusting the CRL Publication interval to 1 Year – was the first thing I tried, but it didn’t help.  This led me to investigate whether the Subordinate CA could even find the Root CA CRL.

I opened the Subordinate CA certificate, clicked the Details tab, and checked the value in the CRL Distribution Points entry, shown here at right.

That’s where the light bulbs started to get bright.  The server was named ca1.domain.com, but they had their PKI IIS set up with the URL pki.domain.com/pki.
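A quick way to see the same mismatch without clicking through the GUI is to let certutil chase every AIA and CDP URL embedded in the certificate; here is a one-line sketch (the exported certificate file name is just an example):

# Export the Subordinate CA certificate to a file, then have certutil fetch and
# validate every AIA/CDP URL it contains; unreachable or expired CRL locations
# stand out plainly in the output.
certutil -verify -urlfetch C:\Temp\SubCA.cer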

[Image: 3-CaRootExtensions.png]

I checked the CRL Distribution Point settings on the Root CA and the Subordinate CA.  Ah ha!  They did not match what was in the Subordinate CA cert.

Now take a look at the Root CA CRL CDP Settings in the Properties page at left.  You’ll see that this is set to the URL I mentioned before, pki.domain.com/pki – which is *not* what is in the Subordinate CA CRL Distribution Points property of the certificate!

The fix was now pretty apparent.  All I needed to do was renew the Subordinate CA certificate and replace it on CA1.

Take a look below at the CRL Distribution Point property on the new Subordinate CA cert and you’ll see it now matches what is in the Root CA settings.

[Image: 4-CertDeets.png]

From there it was just a matter of adjusting the Root CA CRL publication interval in the Certification Authority MMC by right-clicking Revoked Certificates, selecting Properties, and then setting it to 1 Year per Microsoft best practices, as shown here.

Then I right-clicked Revoked Certificates, selected All Tasks, then Publish, to re-publish the CRL, and copied it to the /pki folder on the Subordinate CA server.
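For reference, the same interval change and re-publish can be scripted on the Root CA instead of using the MMC; a minimal sketch, assuming the 1-Year interval discussed here:

# Set the base CRL publication interval to 1 year on the Root CA...
certutil -setreg CA\CRLPeriodUnits 1
certutil -setreg CA\CRLPeriod "Years"

# ...restart AD CS so the new interval takes effect, then publish a new CRL.
net stop certsvc
net start certsvc
certutil -CRL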

Voilà!  The service now started on CA1, the Subordinate CA, and everything worked as expected.

Now it’s true that this does mean that the Root CA must be powered up annually so the CRL can be regenerated and copied, but there is no getting around that. 

Yes, of course you can set the CRL publication interval to be longer than a year; but per best practices, you should set it to expire in the shortest amount of time that is tolerable.  And in this customer’s case, the one-year recommendation was suitable and appropriate.

[Image: 5-RevokedPubParams.png]

Firing up the Root CA and doing this annually isn’t a big deal in my world, hence that recommendation.

In the end, with the certificate service running and the CRLs fixed, the customer was able to mint the certificates necessary for the new application, and I made an appointment with the customer in my calendar for just shy of a year from now…

I hope that helps someone else out there!

XP EOS D-9… And Counting…

March 30th, 2014 | Uncategorized

It’s Monday.  The last day of March.  Forget the fact that tomorrow is April Fool’s Day.  The Windows XP “End of Service” date is now only 9 days away! 

Before reading on, it might be a good idea to reference my previous posts, “XP Elimination — The looming crush…” and “XP EOS M-9… And Counting…”.

Now that you’ve caught up on those previous articles, let’s spend a moment catching up on our 3 semi-fictitious companies and see how they are doing.

Organization “A” – What, me worry?

For our fictitious Organization “A”, things are actually getting better – at a price, that is.  You see, they realized they had no hope of making the deadline, and decided to throw buckets of money at the problem.  They brought in consultants, vendors, and staff leaders, and locked them in a room with a blank checkbook.  The 20,000 XP machines are rapidly becoming 7000 or so machines and dropping.  It’s “getting done”, but in a very “machine gun” style that doesn’t lend itself well to on-going management or future enhancements and upgrades.  This just means that after this checkbook is empty, they’ll be setting up for Round 2, in preparation for the next generation.  If they had gotten underway earlier, they could have had at least *some* of the tools and infrastructure in place to carry forward…  But, no…

Organization “B” – Nope.  We don’t wanna.

Well, Organization “B” is now partially integrated into Organization “C”.  They’re not going to make the deadline, I’m afraid; but because Organization “C” is so ruthlessly efficient, they are at least documenting the environment, planning the on-going integration, and actually deploying some elements of the extended infrastructure.  They still have most of the 40,000 XP workstations to get to, but the future looks better.  They’ve got fingers crossed that no calamity will befall them over the next few months as they catch up, and they are considering an alternate (expensive) support strategy.  It’s about the best they can do, given their previous situation.

Organization “C” – The best-laid plans…

For the original Organization “C” side of the C/B acquisition, things actually look pretty good.  Across the infrastructure, there are still a few thousand XP machines – but most of these are documented and/or isolated, or about to be replaced shortly.  It’s down to the wire!  But we can finally say that they’ve booked their project completion dinner party reservations.  Congratulations!  But keep at it with Organization B… 

Organization “D” – Yeah?  So?

Since my last post, I met Organization “D”.   Nice folks; smart too.  But they simply cannot afford to care about the deadline.   After a string of financial hardships, org changes, and so forth, they are only now making enough headway to think they’ll survive.  As a result of that hard past, they are just now putting their heads up for air and exploring options for how to get from “here” to “there”.  They are numb from the scars of the economy, and they don’t see this XP EOS challenge any differently than the past challenges; they will run headlong into it, and take the blows.  They will come out the other end, but only with more scars.

Honestly, some of these stories are heartbreaking, while others are inspiring.  And mind you, I worked through the 2000 bubble like many of you, so you’d think I’d be less moved by the trials that these folks are going through.  But this one is different.  It didn’t make the news the same way (at least in the build-up), but it hits real folks where it hurts.  And we at Coretek are doing our absolute best to help those that we can, as quickly and efficiently — and as prudently — as we can.

So good luck, hang in there, and we’ll all be watching the clock tick down to the final day.  And we’ll see you on the other side of the XP EOS…


How to fix Ownerships and Inheritance on NTFS file systems, Pt. 2…

September 18th, 2013 | Uncategorized

For background on this post, make sure to see Part 1, here

Picking up where we left off in Part 1, recall that I had a set of folders that were copied with original source permissions, and as a result, had broken permission “inheritance”.  The copy also brought over the various ownerships, which I was seeking to replace with the local “Administrators” group, according to standard practice of the customer.

So in Part 1, I had written a DOS batch script to loop through the folders and 1.) force down ownership in the new folder sub-structures (in order to follow company standard and be able to seize control of them), and 2.) re-apply inheritance of the administrative permissions from the folders above.  While I’d like to think I accomplished this swimmingly with my batch file, the truth is that it really needs to be in PowerShell in order to satisfy the needs of the team (future-proofing, portability, and such).

Let’s step through the PowerShell version now; but watch carefully because even though it is the same in principle as the DOS batch script in Part 1 (same enough to be worthy of comparison), it actually is a tad different in a few of the steps.

First, we clear the screen, set some variables, and clean up any files from the last run:

clear
$StartDate = date
Write-Host "Setting variables..."
$PARENTDRIVE = "S:"
$PARENTFOLDER = "Apps"
$PARENTPATH = "$PARENTDRIVE\$PARENTFOLDER"
$LOCALPATH = "c:\temp"
$TAKEOWNLOG = "$LOCALPATH\takeown-FixOwnerships.log"
$ICACLSLOG = "$LOCALPATH\icacls-FixInheritance.log"
$ICACLSSUMMARYLOG = "$LOCALPATH\icacls-FixInheritance-Summary.log"
Write-Host "Cleaning up log files from previous run..."
del $TAKEOWNLOG
del $ICACLSLOG

Now, with this PowerShell version of the script, I don’t need to write my lists out to a temp file for later processing (like I did with DOS in Part 1); I just dump it all into objects in memory.  Far more efficient!  Like so:

Write-Host "Creating Listing..."
$FolderListing = Get-ChildItem $PARENTPATH

Then, we hit the real work part of the script, where we loop through the folder listing, descend through, and fix all the ownerships and inheritance.  This part is almost the same as the DOS batch script, except that it’s more… um… awesomer:

Write-Host "Process listing..."
foreach ($FolderItem in $FolderListing)
{
  Write-Host "Fixing ownership: $PARENTPATH$FolderItem"
  Write-Output "Fixing ownership: $PARENTPATH$FolderItem" | Out-File $TAKEOWNLOG -append -encoding Default
  takeown /f $PARENTPATH$FolderItem /R /A /D Y  | Out-File $TAKEOWNLOG -append -encoding Default
  Write-Output "" | Out-File $TAKEOWNLOG -append -encoding Default
  #
  Write-Host "Fixing inheritance: $PARENTPATH$FolderItem"
  Write-Output "" | Out-File $ICACLSLOG -append -encoding Default
  Write-Output "Fixing inheritance: $PARENTPATH$FolderItem" | Out-File $ICACLSLOG -append -encoding Default
  ICACLS $PARENTPATH$FolderItem /inheritance:e | Out-File $ICACLSLOG -append -encoding Default
}

And finally, we create a summary log and create some closing comments before cleaning up:

Write-Host ""
Write-Host "Creating summary log..."
Select-String -pattern "Failed" $ICACLSLOG | Out-File $ICACLSSUMMARYLOG -encoding Default
Write-Host ""
$EndDate = date
Write-Host "Started: $StartDate"
Write-Host "Ended:   $EndDate"
Write-Host ""
Write-Host "...Complete!"

So that’s about it, really.  Note that I’m using the -encoding Default flag at each Out-File so the file gets written in ANSI (on my system) instead of Unicode.  I don’t know about you, but I can’t stand dealing with Unicode files for text parsing and such.

And in case you prefer it in one contiguous list, here ya go:

clear
$StartDate = date
Write-Host "Setting variables..."
$PARENTDRIVE = "S:"
$PARENTFOLDER = "Apps"
$PARENTPATH = "$PARENTDRIVE\$PARENTFOLDER"
$LOCALPATH = "c:\temp"
$TAKEOWNLOG = "$LOCALPATH\takeown-FixOwnerships.log"
$ICACLSLOG = "$LOCALPATH\icacls-FixInheritance.log"
$ICACLSSUMMARYLOG = "$LOCALPATH\icacls-FixInheritance-Summary.log"
Write-Host "Cleaning up log files from previous run..."
del $TAKEOWNLOG
del $ICACLSLOG
Write-Host "Creating Listing..."
$FolderListing = Get-ChildItem $PARENTPATH
Write-Host "Process listing..."
foreach ($FolderItem in $FolderListing)
{
  Write-Host "Fixing ownership: $PARENTPATH$FolderItem"
  Write-Output "Fixing ownership: $PARENTPATH$FolderItem" | Out-File $TAKEOWNLOG -append -encoding Default
  takeown /f $PARENTPATH$FolderItem /R /A /D Y  | Out-File $TAKEOWNLOG -append -encoding Default
  Write-Output "" | Out-File $TAKEOWNLOG -append -encoding Default
  #
  Write-Host "Fixing inheritance: $PARENTPATH$FolderItem"
  Write-Output "" | Out-File $ICACLSLOG -append -encoding Default
  Write-Output "Fixing inheritance: $PARENTPATH$FolderItem" | Out-File $ICACLSLOG -append -encoding Default
  ICACLS $PARENTPATH$FolderItem /inheritance:e | Out-File $ICACLSLOG -append -encoding Default
}
Write-Host ""
Write-Host "Creating summary log..."
Select-String -pattern "Failed" $ICACLSLOG | Out-File $ICACLSSUMMARYLOG -encoding Default
Write-Host ""
$EndDate = date
Write-Host "Started: $StartDate"
Write-Host "Ended:   $EndDate"
Write-Host ""
Write-Host "...Complete!"

Enjoy!


How to Manage Azure from PowerShell on your PC…

September 11th, 2013 | Uncategorized

If you use Azure day-to-day like I do…  and you use PowerShell day-to-day like I do…  then it’s time to put them together like chocolate and peanut butter!  What I mean is, let’s use the power of PowerShell to easily manage your Azure services.

I’ll assume that if you’re still reading this, you have an Azure account (if you don’t, you can get a free trial), and you have a Windows 7 or higher PC or server on which to run PowerShell. 

Install the Azure PowerShell modules

Go to the Azure download page, and at the bottom left, download the “Windows Azure PowerShell” bits and install.  Here’s the direct link to the bits, as of this writing.  It’s just a few clicks and a few minutes to let the web-installer do its thing.

Once Azure PowerShell is installed, hit Start and type PowerShell to see that you now have another option for PowerShell, called “Windows Azure PowerShell”;  click it! 

[Image: 1-StartAppsAzurePowerShell]

Configure the “publishsettings” file

Next we need to link your Azure account with your PowerShell session.  We do this by getting your “publishsettings” file from Azure, and stuffing it into PowerShell.

Run the command: Get-AzurePublishSettingsFile

[Image: 2-GetFile]

…This will launch a browser and you will be prompted to authenticate to Azure (if not already).  You will be prompted for download choices, and you should save the file to a local folder; something like the c:\temp\Apps\Azure that I use in the following example.

Next, we import the settings file with the following command: Import-AzurePublishSettingsFile

[Image: 3-ImportFile]

…of course it depends on what you named the file when you saved it, but this is a standard name format.
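Put together, the whole publishsettings dance is just a couple of lines; a sketch, with the folder and file name being whatever you chose during the download:

# Download the publishsettings file (this opens a browser for Azure authentication)...
Get-AzurePublishSettingsFile

# ...then import it so PowerShell can use the management certificate and subscription info.
Import-AzurePublishSettingsFile "C:\temp\Apps\Azure\MySubscription.publishsettings"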

Finally, delete the “publishsettings” file — it contains a management certificate file that shouldn’t be left lying around once imported.

[Image: 4-del]

…and that’s about it!  You are now linked to your Azure account and can control your world by command.  Let’s start by taking a look at some of the relevant commands:
get-command *azure*

Mmmm…  Those look like fun commands!

 

Kick the tires

You know, as long as we have an active session, let’s see how I last left my testing lab with a Get-AzureVm command:

[Image: 5-GetVm]

…Hmmm…  It looks like I left my Windows Server 2012 R2 “preview” VM shut off.  Let’s start it up with a Start-AzureVM command, specifying the VM name as well as the Service name:

[Image: 6-StartVm]

Well, that was fun, but now lunch is over and it’s time to shut down the lab “preview” machine again.  But I just want to shut down the VM for later use, not to de-provision the VM and have to re-create it later.  So, I’ll use the Stop-AzureVM command with the -StayProvisioned flag.
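If you prefer those three commands as plain text rather than screen shots, a sketch looks roughly like this (the cloud service and VM names are made up for illustration):

# List the VMs in the subscription, with their current power state.
Get-AzureVM

# Start the lab VM.
Start-AzureVM -ServiceName "MyLabService" -Name "Lab-2012R2-Preview"

# Shut it down again, but keep it provisioned so it can be started later without rebuilding.
Stop-AzureVM -ServiceName "MyLabService" -Name "Lab-2012R2-Preview" -StayProvisioned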

[Image: 7-StopVm]

…and so on, and so on.  Now that we’ve got you all set up and have stepped through some basic commands, you should be well on your way to chocolate and peanut butter goodness!

For more detail, make sure to see the Azure PowerShell “Get Started” tutorial:
http://msdn.microsoft.com/en-us/library/windowsazure/jj554332.aspx

And for even more detail, view the Azure PowerShell Cmdlet reference guide:
http://msdn.microsoft.com/en-us/library/jj152841.aspx

Now *you* go have some fun!


How to fix Ownerships and Inheritance on NTFS file systems, Pt. 1…

August 28th, 2013 | Uncategorized

Following up on Matt’s post last year, and my post about a similar-but-different situation…  I’m just taking Matt’s original notions to the next level, and making the script he described.

It all started after we ran into a situation where someone was migrating some data from one Windows server to another along with the NTFS permissions (using a 3rd-party copy tool).  It was soon discovered that all the permission inheritance was blocked at the point of the copy, and all the ownerships were set to the original owners (not desired in this case).  The company policy dictated that the administrative permissions from the parent folder must flow down, and that all ownerships for files and folders be set to the Administrators group.

So, I needed to whip up a little script to fix the permission inheritance for all the new subfolders on the destination under one parent folder, and reset the ownerships to the Administrators group for each subfolder as well.

One important criterion I had to work around was that the script only touch the migrated subfolders, so I wouldn’t affect the parent folders under which these new folders were copied (which actually had *intentionally* blocked inheritance, and might possibly have intentional alternate owners in some situations).

Well, since all folders ended up in the same destination parent, I naturally wrote a simple little DOS batch script to fix these.  Why DOS batch, you say?  I don’t know, honestly.  Sometimes when I’m thinking through a Linux/Unix scripting situation, I start thinking of it in bash, and change to Perl when I realize it’s too complex.  And similarly in Windows-based environments I start thinking in DOS batch, and then switch to PowerShell when it gets too complex.  Just an old habit that’s hard to break, I guess.

So, walking through the main sections of the script…

First, we do initial setup, set variables for output files and such, and delete the files from the last run of the script (since it would likely be run repeatedly as a maintenance tool):

@ECHO OFF
@cls
1>&2 Echo Setting variables...
SET PARENTDRIVE=S:
SET PARENTFOLDER=Apps
SET PARENTPATH=%PARENTDRIVE%\%PARENTFOLDER%
SET LOCALPATH=c:\temp\
SET FOLDERLIST=FolderList.txt
SET TAKEOWNLOG=takeown-FixOwnerships.log
SET ICACLSLOG=icacls-FixInheritance.log
SET ICACLSSUMMARYLOG=icacls-FixInheritance-Summary.log
1>&2 Echo Cleaning up log files from previous run...
@del %LOCALPATH%%FOLDERLIST%
@del %LOCALPATH%%TAKEOWNLOG%
@del %LOCALPATH%%ICACLSLOG%
@del %LOCALPATH%%ICACLSSUMMARYLOG%

Then, we create a listing file of the top-level folders, and store them in a file (with quotes):

1>&2 Echo Creating Listing...
@FOR /F "tokens=*" %%a in ('"dir %PARENTPATH% /A:D /B"') do @echo "%%a">>%LOCALPATH%%FOLDERLIST%

Now we change to the destination folder, and begin processing.  This is really the meat of the script.  This is where we use takeown to set Administrators group ownership on each folder hierarchy, and ICACLS to force permission inheritance (for administrator permissions from the parent):

1>&2 Echo Changing to %PARENTDRIVE% and %PARENTFOLDER%...
%PARENTDRIVE%
CD %PARENTFOLDER%
1>&2 Echo Process listing...
@For /F "tokens=*" %%Q in (%LOCALPATH%%FOLDERLIST%) Do @(
1>&2 Echo Fixing ownership: %%Q
@takeown /f %%Q /R /A /D Y >> %LOCALPATH%%TAKEOWNLOG%
1>&2 Echo Fixing inheritance: %%Q
ECHO Analyzing: %%Q >> %LOCALPATH%%ICACLSLOG%
ICACLS %%Q /inheritance:e /T /C >> %LOCALPATH%%ICACLSLOG%
)

And here is where we wrap it up.  I provide a brief summary report, and say goodbye:

1>&2 Echo Creating summary log...
FindStr /I /C:"failed" /C:"analyzing" %LOCALPATH%%ICACLSLOG% > %LOCALPATH%%ICACLSSUMMARYLOG%
1>&2 Echo ...Complete!

Now here is the whole script again, in one piece:

@ECHO OFF
@cls
1>&2 Echo Setting variables...
SET PARENTDRIVE=S:
SET PARENTFOLDER=Apps
SET PARENTPATH=%PARENTDRIVE%\%PARENTFOLDER%
SET LOCALPATH=c:\temp\
SET FOLDERLIST=FolderList.txt
SET TAKEOWNLOG=takeown-FixOwnerships.log
SET ICACLSLOG=icacls-FixInheritance.log
SET ICACLSSUMMARYLOG=icacls-FixInheritance-Summary.log
1>&2 Echo Cleaning up log files from previous run...
@del %LOCALPATH%%FOLDERLIST%
@del %LOCALPATH%%TAKEOWNLOG%
@del %LOCALPATH%%ICACLSLOG%
@del %LOCALPATH%%ICACLSSUMMARYLOG%
1>&2 Echo Creating Listing...
@FOR /F "tokens=*" %%a in ('"dir %PARENTPATH% /A:D /B"') do @echo "%%a">>%LOCALPATH%%FOLDERLIST%
1>&2 Echo Changing to %PARENTDRIVE% and %PARENTFOLDER%...
%PARENTDRIVE%
CD %PARENTFOLDER%
1>&2 Echo Process listing...
@For /F "tokens=*" %%Q in (%LOCALPATH%%FOLDERLIST%) Do @(
1>&2 Echo Fixing ownership: %%Q
@takeown /f %%Q /R /A /D Y >> %LOCALPATH%%TAKEOWNLOG%
1>&2 Echo Fixing inheritance: %%Q
ECHO Analyzing: %%Q >> %LOCALPATH%%ICACLSLOG%
ICACLS %%Q /inheritance:e /T /C >> %LOCALPATH%%ICACLSLOG%
)
1>&2 Echo Creating summary log...
FindStr /I /C:"failed" /C:"analyzing" %LOCALPATH%%ICACLSLOG% > %LOCALPATH%%ICACLSSUMMARYLOG%
1>&2 Echo ...Complete!

But you know what?  That’s not good enough for Matt and me.  Nope, Matt basically insisted that it had to be done in PowerShell after all.  So of course I did that too, and I’ll put that up next week.  Why PowerShell you say?  I don’t know… Why not?

😉


XP EOS M-9… And Counting…

July 17th, 2013 | Uncategorized

The Windows XP “End of Service” date is now only 9 months away!  Well, we’re actually just beyond 9-month mark now, but you get the point.

Before reading on, it might be a good idea to reference my post from last month, “XP Elimination — The looming crush…”.

If you think it’s ridiculous or hilarious that anyone should be concerned about migrating off of XP at this point, then you probably work in a small-to-medium sized company.  You might even be able to consider upgrading all the workstations by yourself (or with a buddy), or maybe you’ve just replaced all the computers with modern devices with updated OS’s.  Easy-peasy.

But many *large* enterprise company/organizations are watching the clock (or should be) for that looming April 8th deadline, for a variety of reasons.  And this is what I really wanted to touch on today — the fact that steering the massive enterprise can be like steering the largest ship in an ocean, but there are other factors to consider in the metaphorical ocean as well.  Like icebergs…  Like other, older ships that require rescuing… 

Okay, I’ve worn out the metaphor, so let’s start discussing some specifics.  Let’s look at the examples of three, ahem, *fictitious* large organizations that have arrived at three different XP situations.

Organization “A” – What, me worry?

For our fictitious Organization “A”, things are smooth sailing.  Or so they think.  They’ve got only 20,000 XP machines, and they’ve set up a test pilot bed of about 50 Windows 7 machines, and it’s going well.  Well, *that* part’s going well.  What they will soon realize is that their back-end infrastructure isn’t prepared (in design nor scale) for the type of load that their Win7 deployment strategy calls for — and they have only just begun to prepare their applications for re-packaging.  But they aren’t worried.  Well, not as much as they should be, anyway.

Organization “B” –  Nope.  We don’t wanna.

Organization “B” doesn’t have a plan.  It’s not that they don’t have a clue, it’s just that they mostly don’t care.  They have 40,000 workstations, a bunch of old servers, and so on, in a complicated, aging infrastructure.  You see, things don’t really look good for the business end of the company in this age of consolidation, and most folks think they’ll be acquired anyway.  So XP is fine for now.  I guess.  Whatever.

Organization “C” – The best-laid plans…

For Organization “C”, they really have been doing it right.  They jumped in front of the project, and designed/prepared/deployed a sturdy, modern back-end infrastructure.  They rallied the troops, started the application re-packaging very early on, and devised a “just-in-time” strategy to manage application-to-user/workstation tracking and roll out the workstations right behind the infrastructure and apps.  The working schedule seems to indicate that all of their 50,000 workstations should be upgraded/re-deployed right around the April 8th deadline.  Whew!  It looks like they’re going to make it!  Until…  Uh-oh…  Did we mention that Organization “C” just acquired Organization “B”?

While these are hypothetical scenarios, I will be re-visiting these imaginary companies over the next few months as we approach the XP EOS date, discussing some of the finer points of their challenges along the way…  Let’s wish them all luck, shall we? 

😉


PowerShell – Detect Group Membership Type…

March 6th, 2013 | Uncategorized

It should come as no surprise that adherence to naming conventions and good Active Directory (AD) Organizational Unit (OU) structure are things that can make an Enterprise Administrator’s life much easier. 

Take, for example, the situation of having a naming convention for group objects in AD that dictates a single-letter suffix of either a “C” (to indicate a group of computer objects) or a “U” (for a group of user objects).  In this case, a group might be named something like, “Detroit Application Data U”, or “Chicago Printers Floor2 C”.  And with intentions such as these — and human beings being what they are — it’s inevitable that some users will end up in computer groups, and vice versa.

So how do we check for this messiness?  With PowerShell, of course…

We create a script that will accept an array of our AD OUs (or group-specific OUs, if you’re lucky), loop through them, grab all the groups and their memberships, and do a validation to make sure the members are of the correct class (note that I could fill up pages with lines of code for this, depending on your specifics, so I’ll just stick with the main conceptual points).  Let’s dive into the code snippets!

First, add your OUs into an array, and set the other variables.  Of course, you might not be able to just scrape a level with PowerShell and grab all your OUs…  Oh, but you *do* have a perfectly-regulated AD hierarchy, don’t you?  Whether it’s perfect or not, AD structure goes a long way here; and my examples show how convenient it is if you have all your groups in a standard ou=GROUPS structure or some other predictable layout.

$OUs = @("Detroit", "Chicago", "Los Angeles")
$MyDomain = "dc=MyDomain,dc=org"

Then you start to loop and grab all the groups in an OU:

foreach ($OU in $OUs)
{
  #... skipped a few more lines of code here...
# Here we get the list of our groups for the loop
$OuGroupNames = Get-ADObject -Filter {(ObjectClass -eq "group") -and ((name -like "* U") -or (name -like "* C"))} -SearchBase "ou=Groups,ou=$OU,$MyDomain"

And, now that you have the groups, you can start to evaluate each group like this:

  foreach ($OuGroup in $OuGroupNames)
  {
    #...skip more code.. What are we skipping here? Oh, validations, error-checking, and stuff...
    # we need the group name and DN
    $OuGroupName = $OuGroup.Name
    $OuGroupDn = $OuGroup.DistinguishedName

Now, we can truly check the object membership type!

    # If it is a user group...
    if ($OuGroupName -like "* U")
    {
      $MemberList = Get-ADGroupMember -Identity "$OuGroupName"
      foreach ($Member in $MemberList)
      {
        # ...it had better be a user...
        if ($Member.ObjectClass -like "computer")
        {
          #...or we kick out an error to the report!

And so on.  Of course, you’d do the converse of the snippet above for a user-type object in a “C” group.  By the way, this can lead to all kinds of other error detection too; in fact, the main reason I couldn’t show all my code is that I ended up adding checks for empty groups, groups with members from external OUs, and so on.  Because basically, once you have the group attributes and its membership list in hand, you may as well do some validation while you’re there…
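For completeness, a sketch of that converse check might look like the following (again trimmed and hedged; the report file name is just an example):

    # If it is a computer group...
    if ($OuGroupName -like "* C")
    {
      $MemberList = Get-ADGroupMember -Identity "$OuGroupName"
      foreach ($Member in $MemberList)
      {
        # ...it had better be a computer...
        if ($Member.ObjectClass -like "user")
        {
          # ...or we kick out an error to the report!
          Write-Output "User object in computer group: $($Member.Name) in $OuGroupDn" |
            Out-File "c:\temp\GroupTypeErrors.log" -Append -Encoding Default
        }
      }
    }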

So have fun with it, and see where it leads you…  And make sure to drop me a line if you need any help putting the whole thing together.

🙂


AD Attributes – LastLogon vs. LastLogonTimeStamp…

February 27th, 2013 | Uncategorized

A little while back I was working at an enterprise that has many locations across the United States.  I had a list of 30 usernames (from one specific out-of-state location), plus a couple of brand-new test accounts, and I wanted to report on their “last logon times” from the Active Directory domain.  I put together a quick PowerShell script to loop through each user and report on the “lastLogon” time, and I had (what I thought were) my results in no time.  Here is a snippet of the code:

Get-ADUser $UserNameToSearch | Get-ADObject -Properties lastLogon

First, I opened up an RDP session to a domain workstation that I had access to at the out-of-state location I was working with at the time, and I logged in with my test user.  I waited a few minutes for the replication to occur back to my location’s domain controller, and then I ran the script from my local workstation.  Surprisingly, there were no results reported for that test account, but there were current, up-to-date results for 95% of the users on my list who are said to work at that same location.

So I logged out, logged back in, waited, and ran the script again.  Still, no “logon time” results for my test user account.

Very interesting….

I did a few minutes of research on the “lastLogon” attribute and then discovered I was searching the wrong attribute, per Microsoft’s MSDN Attribute Library, which states: “This attribute is not replicated and is maintained separately on each domain controller in the domain. To get an accurate value for the user’s last logon in the domain, the Last-Logon attribute for the user must be retrieved from every domain controller in the domain. The largest value that is retrieved is the true last logon time for that user.”  But since the enterprise I was working with had more domain controllers than I could count on one hand, I chose to find a simple alternative.
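For reference, the approach that note describes (ask every domain controller and keep the largest value) would look roughly like this; a sketch, with “jsmith” as a made-up sample account:

# Query lastLogon from each domain controller and keep the largest (most recent) value.
Import-Module ActiveDirectory

$UserNameToSearch = "jsmith"
$Latest = 0
foreach ($Dc in (Get-ADDomainController -Filter *))
{
    $Value = (Get-ADUser $UserNameToSearch -Server $Dc.HostName -Properties lastLogon).lastLogon
    if ($Value -gt $Latest) { $Latest = $Value }
}
[DateTime]::FromFileTime($Latest)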

After a few more minutes of research I then found the attribute that is replicated across all domain controllers, the “lastLogonTimeStamp” attribute.  I updated my script to:

Get-ADUser $UserNameToSearch | Get-ADObject -Properties lastLogonTimeStamp

I then had the results that I expected and I carried on with my day.
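And if you want the raw lastLogonTimeStamp value turned into a readable date while looping a whole list of accounts, here is a minimal sketch (the input file name is an assumption; one sAMAccountName per line):

# Loop a list of usernames and report each account's replicated last-logon value as a readable date.
Import-Module ActiveDirectory

Get-Content "C:\temp\UserList.txt" | ForEach-Object {
    $User = Get-ADUser $_ -Properties lastLogonTimeStamp
    New-Object PSObject -Property @{
        User          = $User.SamAccountName
        LastLogonTime = [DateTime]::FromFileTime($User.lastLogonTimeStamp)
    }
}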

Hopefully this experience will save you time and effort!


DFS Replication Validation Script…

December 12th, 2012 | Uncategorized

The other day, while at the enterprise-level customer with whom I’m currently working, I ran into a situation where I needed to validate that certain parts of a DFS hierarchy were properly being replicated across the customer’s AD domain controllers.  As the administrators applied normal, routine DFS changes, the changes sometimes didn’t replicate properly across the enterprise — causing some segments of the DFS structure to not be visible or available. 

Apparently, the DFS problem was a result of using VMware guests as AD DCs.  I understand (from the customer) that a Microsoft hotfix is in the last stages of testing (at the time of this writing) and will be available for release “soon.”   It seemed that even though the DCs in question did not synchronize time with the ESX host upon which they reside, there is a default behavior in VMware Tools that assigns the host time value to the guest — at least up until the “do not sync” routine is processed during startup, after which the guest is then allowed to find its own time.  During this brief time window, the DFS Namespace service sometimes finishes assembling its DFS target list and can find itself behind in time, relative to the links it has been given by the PDC Emulator (PDCE); those links make no sense to it, and it removes them from its listing.  And as a result, people can’t find their mapped drives or browse some of the DFS tree.  (Note: I cannot take credit for this timing behavior investigation and results; and while I’d love to credit the folks who are due, I’m not permitted to.)  The customer remedied the situation with a temporary fix, but the real fix is the aforementioned up-coming patch.

Anyway, while the symptoms were being analyzed, I was working on other things and needed to work around the issue as much as possible while the solution was being chased.  So, I whipped up a simple little DOS script to go out and validate the top-levels of the DFS hierarchy across all domain controllers that carry them, in order to find out what would or wouldn’t be properly resolved.

For what it’s worth, I thought I’d pass the script along to you.  Here it is:

 

@SETLOCAL ENABLEDELAYEDEXPANSION
@set AdDomain=MyAdDomain.local
@set DirQuantity=17
@set DestPath=h:DcList.txt
@REM This requires elevated credentials, otherwise will fail...
@ipconfig /flushdns
@REM First we build the input file...
@nslookup %AdDomain% |findstr [0-9].*\.[0-9].*|findstr /V /C:"Address: " > %DestPath%
@ECHO As of 20121212, there should be %DirQuantity% DFS dirs on each server (actual, plus the "." and ".." items).
@REM Now loop through the input file and check the DFS at the destination...
@For /F "tokens=*" %%Q in (%DestPath%) Do @(
  @set MYDC=%%Q
  @set MYDC=!MYDC:Addresses: =!
  for /f "tokens=* delims=" %%A in ('dir /A:D \\!MYDC!\Corp ^|findstr /C:"Dir(s)"') do @set MYDIR=%%A
  for /f "tokens=* delims= " %%G in ("!MYDIR!") do @set MYDIR=%%G
  @REM Option A: Use this line if you wish to see all DFS sources:
  @ECHO For: !MYDC! !MYDIR:~0,9!
  @REM Option B: Use this line if you wish to see only those in violation
  @REM (note: there's a space and tab separator for spacing alignment):
  @REM @ECHO For: !MYDC! !MYDIR:~0,9! |findstr /V /C:"%DirQuantity% Dir(s)"
)

What it does:

The script builds a domain controller list in a static, external file, then iterates through the list, attempting to quantify the available DFS path branches against a numeric count that you supply in another variable.  I provided two different “ends” to the script (one of them commented out), in order to give you a couple different ways to present the results.  Make sure to “set” the variables in the first few lines, to your locally-relevant information; especially the number of *expected* DFS hierarchies.

Of course, I wanted to write it to do more, but I pretty much ran up against the limits of what I *should* do in a DOS script.  I’ll make another version in PowerShell some day that iterates down the hierarchy and validates the entire structure, instead of just the top level… 

…Unless you beat me to it…  😉
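If you do beat me to it, a rough top-level-only starting point in PowerShell might look something like this (the DFS root share name “Corp” and the expected folder count are assumptions carried over from the batch version):

# Resolve every DC behind the domain name, then count the top-level folders each one
# serves for the DFS root and report it, one line per DC.
$AdDomain    = "MyAdDomain.local"
$DirQuantity = 15   # expected top-level DFS folders (no "." or ".." counted here)

foreach ($Ip in [System.Net.Dns]::GetHostAddresses($AdDomain))
{
    $Dc    = $Ip.IPAddressToString
    $Count = @(Get-ChildItem "\\$Dc\Corp" -ErrorAction SilentlyContinue |
               Where-Object { $_.PSIsContainer }).Count
    "For: $Dc  $Count of $DirQuantity Dir(s)"
}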

There you go; enjoy!

🙂


Don’t Rename Vendor-Provided .MSI Files…

December 5th, 2012 | Uncategorized

When packaging an application for deployment in the enterprise, you must also identify its dependencies — additional software required for the application to successfully install and function.  Oftentimes these dependencies are redistributable run-times for Microsoft Visual Studio.  These redistributables are so common, they are often packaged separately and “chained” to the dependent application.

This method of installing dependencies usually works pretty well; the deployment tool determines whether to install the package, based on its previous installation history.  If, however, the dependency was installed outside the deployment tool’s domain — by an application a user downloaded, for example — you may encounter errors when the dependency is re-run; this could fail your entire application package chain.

Fortunately, many (but not all — always test!) Microsoft redistributables, like “Visual C++ 2008 SP1 Redistributable Package (x86)“, are authored so that they can install over an existing install — without actually modifying anything or going into “maintenance mode”.  The screen shot below illustrates that the Windows Installer service runs the .MSI package, verifies that it’s already installed with the same product code, package code, component IDs, etc., and simply exits without modifying the existing install.  This can be a packager’s saving grace in an unpredictable enterprise environment.

 

[Image: Wise Compatibility Key]

 

Recently, however, I came across an issue with my philosophy of simply letting the redistributable re-run over an existing install. 

A package I had developed started failing.  After checking the logs, I noticed the failure occurred during the install of the dependent runtime  “Visual C++ 2008 SP1 Redistributable Package (x86)”.  The runtime install was exiting with a Windows Installer general failure code of “1603”. 

A look at the detailed installation log shows a more confusing error: “Error 1316.  A network error occurred while attempting to read from the file: C:\Users\popper1\Desktop\vcredist_x86\vc_red_x86.msi”.

With some help from my co-worker (Windows Installer guru Urb Bernier) we were able to find the issue: the .MSI that originally installed the Visual C++ runtime was extracted from the vendor’s .EXE bootstrap and…RENAMED!  A grievous offense in the application packaging world!  Well, that may be a bit dramatic, but it certainly violates all “best practices”.  When extracted from the vendor-provided setup file “vcredist_x86.exe”, the .MSI is named “vc_red.msi”.  Perhaps the packager renamed the file in order to distinguish the 32-bit setup file from the 64-bit setup?

The error “1316.  A network error occurred while attempting to read from the file: C:\Users\popper1\Desktop\vcredist_x86\vc_red_x86.msi” is Windows Installer’s way of saying it can’t access the file at that path; the “cached” path to the .MSI that originally installed the Visual C++ runtime.

You see, the issue is that you can rename the .MSI, install it successfully, re-run it successfully, and uninstall it successfully; provided you do all of these actions using the renamed .MSI.  If, however, the actual vendor install runs on that same machine, whether from the bootstrap .exe or from the extracted .MSI, it will exit with a Windows Installer general error code “1603”.
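If you want to see what package name and source path Windows Installer has cached for each product (in other words, what it will go looking for when the vendor setup re-runs), a quick sketch like the following will show it; the registry location is the standard per-machine Installer source list:

# List each installed product's recorded package name and last-used source path,
# which is what Windows Installer tries to resolve on a re-run or repair.
Get-ChildItem "HKLM:\SOFTWARE\Classes\Installer\Products" | ForEach-Object {
    $Product    = Get-ItemProperty $_.PSPath
    $SourceList = Get-ItemProperty "$($_.PSPath)\SourceList" -ErrorAction SilentlyContinue
    New-Object PSObject -Property @{
        Product        = $Product.ProductName
        PackageName    = $SourceList.PackageName
        LastUsedSource = $SourceList.LastUsedSource
    }
}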

Packaging applications can sometimes be a frustrating task; because no matter how much forethought and care you put into your package, it can always be thwarted.  To be fair, I guess the same could be said for just about any other job.  

However, I hope this example illustrates why you should not rename vendor-provided .MSI files!
