Office 365 and Bing Maps – Issue and Fix

July 21st, 2017 | blog, Office 365

The Bing Maps add-in for Office 365 changes the message body of received emails that contain addresses

Recently, I noticed that most of the emails I open and read prompt me to save them because the body has changed.  The message text is “The body of the message <subject> has been changed.  Want to save your changes to this message?”

The issue is that I’ve not changed any part of the message.  I’ve simply opened it and then closed it.  After further investigation, I’ve found that the Bing Maps add-in is modifying the body of the message by replacing any address with a link.

To avoid this behavior, and the annoying prompt for every email you open that contains an address, simply open Outlook, navigate to File > Manage Add-ins, log in with your Office 365 account, and disable the Bing Maps add-in.  This may take a few minutes to take effect, but a restart should not be required.
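For admins who would rather turn the add-in off for a mailbox from PowerShell, a rough sketch using the Exchange Online Get-App / Disable-App cmdlets might look like the following; the mailbox address is a placeholder, and the add-in's display name may vary by tenant.

# Sketch only: disable the Bing Maps add-in for one mailbox via Exchange Online PowerShell.
# Assumes a remote PowerShell session to Exchange Online is already established.
$mailbox = "user@contoso.com"   # placeholder mailbox
# Find the Bing Maps app installed for that mailbox...
$bingMaps = Get-App -Mailbox $mailbox | Where-Object { $_.DisplayName -eq "Bing Maps" }
# ...and disable it.
Disable-App -Identity $bingMaps.AppId -Mailbox $mailbox -Confirm:$false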

This applies to at least version 1706 (Build 8229.2086) of the Microsoft Office 365 release.  I’ve read that this may happen with some older versions as well, but I have not tested them.


Azure – What tags are we using again…?

July 7th, 2017 | Azure, blog, PowerShell

Have you wondered what tags are assigned to all your Azure VMs?  Do you not have ARM Policies in place to enforce your preferred tags yet?

I was in just such a situation the other day.  Just like in my previous post on quick Azure-related scripts, I was working with a customer who just wanted a quick utility to report on which VMs are properly tagged (or not) in their implementation, without having to fish around in the Portal.  No ARM Policies there yet…  *yet*.

So I whipped this together.  And much like that previous script, you just paste this into a PS1 file, set the subscription name in the variable, and then run it.

# GetVmTags - Jeremy Pavlov @ Coretek Services 
# Setting the subscription here
$MySubscriptionName = "My Subscription"
# Set some empty arrays
$vmobjs = @()
$vmTagsKeys = @()
$vmTagsValues = @()
#
$NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
if ($NeedToLogin -eq "Y")
{
  Login-AzureRmAccount
  Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
}
elseif ($NeedToLogin -eq "N")
{
  Write-Host "You must already be logged in then.  Fine. Continuing..."
}
else
{
  Write-Host ""
  Write-Host "You made an invalid choice.  Exiting..."
  exit
}
#
# Get the VMs...
$vms = Get-AzureRmVm
#
foreach ($vm in $vms)
{
    Write-Host ""
    $vmname = $vm.name
    $MyResourceGroup = $vm.ResourceGroupName
    Write-Host "Checking tags for VM $vmname... "
    Start-Sleep 1
    $vmTags = (Get-AzureRmVM -name $vmname -ResourceGroupName $MyResourceGroup).Tags
    $vmTagsCount = $vmTags.Count
    if ($vmTagsCount -gt 0)
    {
      $vmTagsKeys = $vmTags.Keys -split '[\r\n]'
      $vmTagsValues = $vmTags.Values -split '[\r\n]'
      for ($i=0; $i -lt $vmTagsCount; $i++)
      {
        $CurrentTagKey = $vmTagsKeys[$i]
        $CurrentTagValue = $vmTagsValues[$i]
        Write-Host -ForegroundColor Green "Key : Value -- $CurrentTagKey : $CurrentTagValue"
      }
    }
    else
    {
      Write-Host -ForegroundColor Yellow "No tags for $vmname"
    }
}

The results should look something like this, except hopefully a lot more serious and business-y:
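For illustration, a run against a couple of hypothetical VMs (the names and tags here are made up) would print something like:

Checking tags for VM web-vm-01...
Key : Value -- CostCenter : 12345
Key : Value -- Environment : Prod

Checking tags for VM test-vm-07...
No tags for test-vm-07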

Have fun with it, change it up, and let me know what you do with it…   I hope it helps.  Enjoy!

Rolling out the Red Carpet… literally!

June 22nd, 2017 | blog, Headquarters

On June 8, Coretek Services hosted our New Headquarters Open House, complete with TekTalks, networking, and plenty of food and drink. A big thank you to all who helped celebrate with us, as well as our partner sponsors who made the Open House possible! In case you missed them, watch the highlight video below or read the Open House blog post.

A fun highlight of the event was the Red Carpet. Below are photos taken as people stepped up to the red carpet throughout the event to have their five seconds of fame in the spotlight. Thank you to Robert Lovelace for the photography!


Thank you to our Partner Sponsors:


CORETEK SERVICES LAUNCHES NEW HEADQUARTERS WITH INTERACTIVE OPEN HOUSE

June 22nd, 2017 | blog, Headquarters

Technical. Modern. Collaborative.

These three words describe Coretek Services’ gorgeous new headquarters, located just west of downtown Farmington at 34900 Grand River Avenue, Farmington Hills, MI.

On June 8, Coretek showcased the new headquarters with an Open House attended by over 300 guests from around the nation, including clients, partners, friends, family, and members of our community. We cannot express enough gratitude to everyone who was able to make it, as well as to the sponsors who made it happen.

The highlight of the event was a Ribbon Cutting Ceremony, featuring speeches from Ron Lisch, CEO of Coretek Services; Drew Coleman, Business Manager of the Michigan Economic Development Corporation; Ken Massey, Mayor of Farmington Hills; and Mary Martin, Executive Director of the Greater Farmington Area Chamber of Commerce; as well as a State Tribute by State Representative Christine Grieg.

Periodic TEK talks focusing on solutions to IT business problems were presented throughout the day, including: Azure Management Suite; VDI: Clinical Workflows and Cloud Desktops; Post-Breach Response with Win10; Hyperconvergence with Hybrid Cloud; Transforming Outdated Devices; User-Experience Analytics; and Optimized Managed Infrastructure.

Additionally, the Open House featured a red carpet (see photos here), networking, food and drinks!

Ron Lisch had a vision to create a headquarters that is a tech center for the Metro Detroit area. He envisioned a place where employees, clients, prospective clients, and partners can come together to collaborate on creative solutions hands-on in a lab. The headquarters would foster the Coretek philosophy of bringing great people together to do great work.

Ron’s vision came to full fruition in May 2017, when the Coretek family officially moved into the new headquarters. Inside the new HQ is an innovation center, which showcases the technology solutions Coretek offers and allows visitors to test them hands-on. Additionally, there is a large events center where user-group meetings, training sessions, and other events will be held; a company fitness center for employee use; and a collaboration center that gives employees and visitors a space to work together, network, or relax.

The modern space is bright, with the latest technology implemented throughout and canvas prints of pictures from around our community displayed in the conference rooms. The innovative HQ truly is a hub of technology and collaboration for the Metro Detroit area.

We will continue to host upcoming events at our new headquarters and are excited to welcome even more guests in the future to collaborate in the new work space.

Some photographs of the new space and Open House are below to give you a sneak peek at what we have in store for you when you visit! Thank you to Resa Abbey for the photography!

Ron Lisch – CEO of Coretek Services

Innovation Center – 360 degree view


Clint Adkins providing a demo of Coretek Services Cloud Solutions


Thank you to our Partner Sponsors:

Azure – Next Available IP Address…

June 15th, 2017 | Azure, blog, Cloud, PowerShell

The other day, I was working at a customer location with one of their IT admins named Jim, designing an Azure IaaS-based deployment.  During our session, he challenged me to make an easy way for him and his coworkers to see if a particular address is available, or find the next available “local” address in their various Azure VNets.

While addressing can be handled with ARM automation — or just dynamically by the nature of IaaS server creation — in an enterprise it is usually necessary for a committee to document and review the build information before deployment.  As a result, Jim wanted to be able to detect and select the addresses he’d be using as part of his build designs, and he wanted his coworkers to be able to do the same without him present.  So, this one is for Jim.

I wanted to write it to be user-friendly, with clear variables, and to be easy to read and easy to run.  …unlike so much of what you find these days…  😉

Copy the contents below to a file (if you wish, edit the dummy values for subscription, Resource Group, and VNet) and run it with PowerShell.  It will do a mild interview of you, asking about the necessary environment values (and whether you need to change them), and then asking you for an address to validate as available.  If the address you specified is available, it tells you so; if it is not, it returns the next few available addresses in the subnet in which the address you entered resides.

# TestIpaddress - Jeremy Pavlov @ Coretek Services 
# You just want to know if an internal IP address is available in your Azure VNet/Subnet.
# This script will get you there.  
#
# Pre-reqs:
# You need a recent version of the AzureRm.Network module, which is 4.0.0 as of this writing.  
# You can check with this command: 
#     Get-Command -Module azurerm.network
# …and by the way, I had to update my version and overwrite the old version with this command:
#     Install-Module AzureRM -AllowClobber
#
# Some things may need to be hard-coded for convenience...
$MySubscriptionName = "My Subscription"
$MyResourceGroup = "My Resource Group"
$MyVnet = "My VNet"
#
Start-Sleep 1
Write-Host ""
Write-Host "Here are the current settings:"
Write-Host "Current subscription: $MySubscriptionName"
Write-Host "Current Resource Group: $MyResourceGroup"
Write-Host "Current VNet: $MyVnet"
Write-Host ""
$ChangeValues = read-host "Do you wish to change these values? (Y/N)"
if ($ChangeValues -eq "Y")
{
  Write-Host ""
  $ChangeSub = read-host "Change subscription? (Y/N)"
  if ($ChangeSub -eq "Y")
  {
    $MySubscriptionName = Read-host "Enter subscription name "
  }
  Write-Host ""
  $ChangeRg = read-host "Change resource group? (Y/N)"
  if ($ChangeRg -eq "Y")
  {
    $MyResourceGroup = Read-host "Enter Resource group "
  }
  Write-Host ""
  $ChangeVnet = read-host "Change Vnet? (Y/N)"
  if ($ChangeVnet -eq "Y")
  {
    $MyVnet = Read-host "Enter VNet "
  }
}
#
try
{
  $MySubs = Get-AzureRmSubscription
}
catch
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Unable to retrieve subscriptions."
}
$MySubsName = $MySubs.name
Start-Sleep 1
Write-Host ""
if ($MySubsName -contains "$MySubscriptionName")
{
  Write-Host "You are logged in and have access to `"$MySubscriptionName`"..."
}
else
{
  Write-Host "It appears that you are not logged in."
  Write-Host ""
  $NeedToLogin = Read-Host "Do you need to log in to Azure? (Y/N)"
  if ($NeedToLogin -eq "Y")
  {
    Login-AzureRmAccount
    Select-AzureRmSubscription -SubscriptionName $MySubscriptionName
  }
  elseif ($NeedToLogin -eq "N")
  {
    Write-Host "You must already be logged in then.  Fine. Continuing..."
  }
  else
  {
    Write-Host ""
    Write-Host "You made an invalid choice.  Exiting..."
    exit
  }
}
#
Start-Sleep 1
Write-Host ""
Write-Host "We will now check to see if a given IP address is available from any subnet in VNet `"$MyVnet`" "
Write-Host "...and if it is not available, provide the next few available on that subnet."
Start-Sleep 1
Write-Host ""
$MyTestIpAddress = Read-Host "What address do you wish to test for availability?"
#
$MyNetwork = Get-AzureRmVirtualNetwork -name $MyVnet -ResourceGroupName $MyResourceGroup
$MyResults = Test-AzureRmPrivateIPAddressAvailability -VirtualNetwork $MyNetwork -IPAddress $MyTestIpAddress
$MyResultsAvailableIPAddresses = $MyResults.AvailableIPAddresses
$MyResultsAvailable = $MyResults.Available
#
Start-Sleep 1
if ($MyResultsAvailable -eq $False)
{
  Write-Host ""
  Write-Host -ForegroundColor Yellow "Sorry, but $MyTestIpAddress is not available."
  Write-Host ""
  Write-Host -ForegroundColor Green "However, the following addresses are free to use:"
  Write-Host ""
  $MyResultsAvailableIPAddresses
}
else
{
  Write-Host ""
  Write-Host -ForegroundColor Green "Yes! $MyTestIpAddress is available."
}
Write-Host ""
Write-Host " ...Complete"
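For reference, the tail end of a run against a hypothetical VNet might look something like this (the addresses shown are made up):

What address do you wish to test for availability?: 10.0.1.4

Sorry, but 10.0.1.4 is not available.

However, the following addresses are free to use:
10.0.1.7
10.0.1.8
10.0.1.9
10.0.1.10
10.0.1.11

 ...Complete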

Now, if you know a better way to handle it, or have tips for improvement — or if you find a bug — I’d love to hear them (and so would Jim).  I hope it helps you out there…

Thanks, and enjoy!

Mobile Application Management with Intune

June 2nd, 2017 | blog, Intune, Mobility

Mobile Application Management (MAM) is not a new feature.  However, Microsoft is always improving its MAM capabilities, and today Intune supports multiple operating systems on mobile devices.  This is not an easy feat, since Microsoft is bound by the APIs that these other platforms, such as iOS and Android, offer.  These non-Microsoft operating systems are the most prevalent on mobile devices today, and with greater access to corporate data comes a greater risk of data leakage.

Policy

We’ve all used application policies across Microsoft’s wide range of applications for many years.  For example:

  • GPOs control where icons are, where data is saved, what drives are mapped, etc.
  • Configuration Manager is used to push software out to authorized users and remove applications from those who are not authorized
  • Active Directory provides a way to secure data on the network with Groups and Users

…And while Microsoft released Intune quite a few years back, I’ve only recently become a real fan, since I started using Mobile Application Management without enrollment.  Let’s take a quick look at how MAM allows you to offer access to corporate data without compromising too much of the flexibility that users enjoy by choosing their own device platform and bringing their own devices to work.

BYOD

There’s nothing new about the concept of “Bring Your Own Device” (BYOD); it’s been around for quite some time.   Users bring their own devices and use them for daily business whenever a cell phone is needed.  Traditionally, users would log on to a segmented Wi-Fi network that has no access to the corporate network.  This allowed IT admins to avoid managing additional network access to company resources while providing an open network for these devices as well as for guests visiting their offices.  However, with many companies moving data and apps to “the Cloud”, the focus is no longer on segmenting networks, and is instead on protecting the data.

Traditional office apps like Word, Excel, and PowerPoint have been available on mobile devices for quite some time now too, but they commonly required sending the documents to your phone and then opening them.  With Office 365, SharePoint Online, and OneDrive, these apps now have access to a massive amount of your corporate data.  Without protecting this data when it is accessed on a mobile device, a user could download sensitive company information to their mobile device unencrypted and unprotected from prying eyes.  This is where I think Mobile Application Management really starts to come into play.

A Real-World Example

Intune’s Mobile Application Management provides the capabilities to protect your sensitive information on the device, wherever that device is, whether it is in a hotel half-way across the world, left behind in a taxi cab, or picked from the pocket of your CEO.  The device may be compromised but the data is secure.  This is due to the way application management protects the data on the device.  Let me provide you with an example:

Bob is the CEO of an organization that provides financial information to customers across the financial markets.  The details of those finances could make or break a company’s stock profile if they were leaked.  Bob uses an iPhone to read emails and open documents while riding the subway in New York City.  During a busy morning, he’s shuffling to make it to his next appointment and accidentally drops his phone while exiting the train.

Because of a rich set of policies that Bob’s admin has configured with MAM, the data Bob accesses is not allowed to be stored on the device, and after 5 unsuccessful attempts to unlock the phone, the corporate apps and data would be wiped.  Even if someone were to guess the PIN on Bob’s phone, they would still have to guess his credentials, which are required to open any of the company apps that Bob uses.  It’s important to understand that:

  • The data is not on the device
  • There’s a high probability that whoever finds the phone would trigger an automatic wipe by guessing the PIN wrong 5 times
  • By the time Bob realizes he’s lost his phone, a quick call to his IT department lets the admin send a remote wipe request to his device and receive a confirmation of success (a rough sketch of this kind of policy follows below)
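For a sense of how such protections are expressed, here is a rough, hypothetical sketch of creating an iOS app protection policy through the Microsoft Graph API from PowerShell.  The policy name and token handling are assumptions, not the configuration from the example above, and the property names shown are only illustrative of the kinds of app-protection settings discussed here:

# Hypothetical sketch only: create an iOS app protection (MAM) policy via Microsoft Graph.
# Assumes $token already holds a valid OAuth access token with Intune app-management rights.
$policy = @{
    "@odata.type"     = "#microsoft.graph.iosManagedAppProtection"
    displayName       = "Contoso MAM - iOS"   # hypothetical policy name
    pinRequired       = $true                 # require an app-level PIN
    maximumPinRetries = 5                     # wipe app data after 5 bad PIN attempts
    saveAsBlocked     = $true                 # block saving corporate data to unmanaged locations
    dataBackupBlocked = $true                 # keep corporate data out of device backups
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
  -Uri "https://graph.microsoft.com/v1.0/deviceAppManagement/iosManagedAppProtections" `
  -Headers @{ Authorization = "Bearer $token" } `
  -ContentType "application/json" `
  -Body $policy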

That was just one example and there are many more features that MAM can enable to protect your data.

Bringing MAM Home

Mobile Application Management is easy to enable and deploy to your users.  With proper communication and process, your company data will be secured.  Don’t wait for one of your end-users to accidentally leak sensitive information that could make or break your organization’s reputation.  Identify those who are using mobile devices and protect them sooner rather than later.

The Future of Azure is Azure Stack!

May 18th, 2017 | Azure, blog, Cloud

[Diagram: components of Public Azure]

[Diagram: components of Private Azure (Azure Stack)]

I realize that the title above might be a bit controversial to some. In this blog post I will attempt to defend that position.

The two diagrams above, taken from recent Microsoft public presentations, symbolically represent the components of Public Azure and Private Azure (Azure Stack).  If you think that they have a lot in common, you are right.  Azure Stack is Azure running in your own data center.  Although not every Azure feature will be delivered as part of Azure Stack at its initial release (and some may never be delivered that way because they require enormous scale beyond the reach of most companies), it is fair to say that they are more alike than they are different.

Back in 2012 I wrote a blog post on building cloud-burstable, cloud-portable applications.  My thesis in that post was that customers want to be able to run their applications on local hardware in their data center, on resources provided by a cloud provider, or even on resources provided by more than one cloud provider.  They would also like a high degree of compatibility that would allow them to defer the choice of where to run an application and even change their mind as workload dictates.

That thesis is still true today.  Customers want to be able to run an application in their data center. If they run out of capacity in their data center then they would like to shift it to the cloud and later potentially shift it back to on-premises.

That blog post took an architectural approach using encapsulation and modularity of design to build applications that could run anywhere.

The Birth of Azure

A bit of additional perspective might be useful.  Back in 2007 I was working as an Architect for Microsoft when I came across what would eventually become Azure. (In fact, that was before it was even called Azure!) Years before, I had worked on an experimental cloud project at Bell Labs called Net-1000. At the time, AT&T was planning on turning every telephone central office into a data center providing compute power and data storage at the wall jack.  That project failed for various reasons, some technical and some political, as documented in the book The Slingshot Syndrome. The main technical reason was that the computers of the day were minicomputers and mainframes, and the PC was just emerging on the scene.  So the technology that makes today’s cloud possible was not yet available.  Anyway, I can say that I was present at the birth of Azure.  History has proven that attaching myself to Azure was a pretty good decision. 🙂

The Azure Appliance

What many do not know is that this is actually Microsoft’s third attempt at providing Azure in the data center. Back in 2010, Microsoft announced the Azure Appliance, which was to be delivered by a small number of vendors. It never materialized as a released product.

Azure Pack and the Cloud Platform System

Then came Windows Azure Pack and the Cloud Platform System in 2014, to be delivered, also in appliance form, by a small number of selected vendors.  Although it met with some success, is still available today, and will be supported going forward, its clear successor will be Azure Stack.   (While Azure Pack is an Azure-like emulator built on top of System Center and Windows Server, Azure Stack is real Azure running in your data center.)

Because of this perspective, I can discuss how Azure Stack is Microsoft’s third attempt at Azure in the data center, and one that I believe will be very successful. Third time’s a charm. 🙂

Azure Stack

The very first appearance of Azure Stack was in the form of a private preview, and later a public preview: “Azure Stack Technical Preview 1”.  During the preview it became clear that those attempting to install it were experiencing difficulties, many of them related to the use of hardware that did not match the recommended minimum specifications.

Since Azure Stack is so important to the future of Azure, Microsoft decided to release it in the form of an appliance to be delivered by three vendors (HP, Dell & Lenovo) in the summer of 2017.  According to Microsoft, that does not mean that there will be no more technical previews, or that no one will be able to install it on their own hardware.   (It is generally expected that there will be additional Technical Previews, perhaps even one at the upcoming Microsoft Ignite conference later this month.) It simply means that the first generation will be released in controlled fashion through appliances provided by those vendors, so that Microsoft and those vendors can ensure its early success.

You may not agree with Microsoft (or me), but I am 100% in agreement with that approach.  Azure Stack must succeed if Azure is to continue to succeed.

This article originally posted 9/20/2016 at Bill’s other blog, Cloudy in Nashville.

How to protect against the next Ransomware Worm

May 15th, 2017 | blog, Ransomware, Security

Hopefully you were one of the prepared organizations that avoided the latest ransomware worm that made its way around the globe this past week.  This worm crippled dozens of companies and government entities, impacting over 230K computers in 150 countries.  Most of the infections were in Europe, Asia, and the Middle East, so if you did not get hit, you were either prepared or lucky.  This blog post will help you be prepared for when this happens again, so that you don’t have to rely on luck.

Patch everything you can, as quick as you can

The exploit at the root of this ransomware worm was resolved by MS17-010, which was released in March of 2017, giving organizations more than enough time to download, test, pilot through UAT (User Acceptance Testing), and deploy to production.  While introducing new patches and changes to your environment carries a risk of breaking applications, there is far more risk in remaining unpatched, especially for security-specific patches.  Allocate the proper resources to test and roll out patches as quickly as you can.
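As a quick spot check, you can ask a machine whether a given update is installed.  The KB number below is just an example, since the update that delivers MS17-010 differs by OS version; check the bulletin for the one that applies to the system you are checking:

# Sketch: check whether a specific update is installed on the local machine.
# KB4012212 is one example (the March 2017 security-only update for some OS versions);
# substitute the KB number from the MS17-010 bulletin that matches your OS.
Get-HotFix -Id "KB4012212" -ErrorAction SilentlyContinue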

Run the newest OS that you can

While the EternalBlue exploit that was patched by MS17-010 was applicable to every Windows OS, you were safe if you were running Windows 10 due to a security feature called ELAM (Early Launch Anti-Malware).  Many of the infected machines were running Windows XP or Server 2003, which did not get the MS17-010 patch (Microsoft released a patch for these OS variants after the infection; please patch if you still have them in your environment).  It is not possible to secure Windows XP or Server 2003.  If you insist on running them in your environment, assume that they are already breached and that any information stored on them has already been compromised (you don’t have any service accounts with Domain Admin privileges logging into them, right?).

Firewall

Proper perimeter and host firewall rules help stop and contain the spread of worms.  While there were early reports that the initial attack vector was via e-mail, these are unconfirmed.  It appears that the worm was able to spread via the roughly 1.3 million Windows devices that have SMB (port 445) open to the Internet.  Once inside the perimeter, the worm was able to spread to any device that had port 445 open and did not have MS17-010 installed.
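If you need to contain an outbreak in a hurry, a host firewall rule blocking inbound SMB is one blunt but effective lever.  A minimal sketch using the built-in Windows Firewall cmdlets follows; apply it with care, since it also breaks legitimate file and print sharing to that host:

# Sketch: block inbound SMB (TCP 445) on a Windows host with the built-in firewall.
# Note that this also blocks legitimate file and print sharing to the machine.
New-NetFirewallRule -DisplayName "Block inbound SMB (worm containment)" `
    -Direction Inbound -Protocol TCP -LocalPort 445 -Action Block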

Turn off Unnecessary Services

Evaluate the services running in your desktop and server environment, and turn them off if they are no longer necessary.  SMB1 is still enabled by default, even in Windows 10.
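For example, SMBv1 can be turned off with a couple of built-in commands.  A minimal sketch for a modern Windows client or server is below; test first, since anything that still depends on SMBv1 will break:

# Sketch: disable the SMBv1 server component on Windows 8.1/10 and Server 2012 R2 or later.
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
# On Windows 10 you can also remove the optional SMBv1 feature entirely.
Disable-WindowsOptionalFeature -Online -FeatureName "SMB1Protocol" -NoRestart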

Conclusion

These types of attacks are going to be the new normal, as they are extremely lucrative for the organizations behind them.  Proper preparation is key, as boards are starting to hold both the CEO and CIO responsible in the case of a breach.  While you may have cyber-security insurance, it may not pay out if you were negligent by not patching or by running an OS that stopped receiving security updates 3 years ago.  I would recommend being prepared for the next attack, as you may not be as lucky next time.

Additional Layers of Defense to Consider

For those over-achievers, additional layers of defense can prove quite helpful in containing a breach.
1.    Office 365 Advanced Threat Protection – Protect against bad attachments
2.    Windows Defender Advanced Threat Protection – Post-breach response, isolate/quarantine infected machines
3.    OneDrive for Business – block known bad file types from syncing

Good luck out there.

Disaster Recovery: Asking the wrong question?

May 11th, 2017 | Azure, blog, Cloud, Disaster Recovery


In my role as an Azure specialist I get asked a lot of questions about Disaster Recovery. IMHO they almost always ask the wrong question.

Usually it goes something like “I need Disaster Recovery protection for my data center. I have N VMs to protect. I have heard that I can use Azure Site Recovery either to facilitate Disaster Recovery to my backup data center, or even use Azure as my backup data center.” That is true. 🙂

In a previous lifetime I worked on a Disaster Recovery testing team for a major New York-based international bank. We learned two major principles early on:

1. It is all about workloads, since different application workloads have different Disaster Recovery requirements. Every workload has unique Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). Not all workloads are created equal.  For instance, email is a critical workload in the typical enterprise; an outage of more than a few hours would affect business continuity significantly.  Other application workloads (such as Foreign Exchange Trading, Order Processing, Expense Reporting, etc.) would have more or less stringent RTO and RPO requirements.

2. So it really is all about Business Continuity. Disaster Recovery is a business case. It is clear that providing perfect Disaster Recovery for all application workloads would be extremely expensive, and in many cases not cost-effective. In effect there is an exponential cost curve. So it is all about risk management and cost/benefit.

So where does that leave us?

1. Evaluate your Disaster recovery requirements on a workload by workload basis.

2. Plan how to implement it considering the objective of Business Continuity, RTO and RPO.

3. Use Azure Site Recovery to make it happen. 🙂

This article originally posted 4/1/2016 at Bill’s other blog, Cloudy in Nashville.

Walk for Wishes 2017…

May 8th, 2017 | Announcements, blog

We did it again!  As the 19th Annual Walk For Wishes® approached, fellow Coreteker Sarah Roland (a Walk veteran since 2011) set up her annual “Sarah’s Dream Team” and put out the call for participation, as she has done every year since 2013.  And again this year, fellow Walk veterans Avi Moskovitz (since 2016) and I (since 2013) got our families involved, headed along to the Detroit Zoo, and joined in the effort.

We are proud to have been a small — but important — part of the more than $500,000 (and counting) raised through the efforts of more than 6,500 walkers combined!  It was a really great crowd, and there were tons of interesting things to do and see along the way (it’s the Detroit Zoo, after all), supported by care and food volunteers throughout.   And given some threatening weather, we were amazed that we managed to get the whole day in with no rain and just a bit of cold.  That’s 5 years in a row of good walking weather; someone must be watching out for us…

We are truly thankful to the folks at Coretek who donated so generously to us and our team, both individually and as a company, helping us raise over $1400 by walk time.  We hope you are as moved as we are by the work the organization does and the good it brings to those who — in many cases — don’t have a whole lot of good in their lives at the moment the wish is granted to them.  For instance, here’s the moment that Wish kid Emily discovered at the Walk that she was heading to Walt Disney World® Resort in about a month!


And did I mention that you can still donate to our team!?!?  Just click here to go to Sarah’s Dream Team page and follow the process to donate to the team or to any of us individually.

Oh, and by the way… we decided that we are on a mission to get more members of our team for 2018, so you may as well just plan to join now…  😉

See you then!
