Visual Studio Online + VSCode = Easy PowerShell Source Control

In my daily work at CTGlobal we all work a lot with PowerShell, so we have a big need to share code with each other and to have some version control in place when we make changes. For the last few months I have therefore been playing around with Visual Studio Code and Git repositories in Visual Studio Online to solve this.

Pre-reqs

To follow along you need Visual Studio Code with Git installed on your local machine, and access to a project in Visual Studio Online (VSTS).

To create a new code repository in VSTS, go to the project you want the repository to reside in and select the Code tab. Click the repository drop-down and click New repository. In the box that appears, enter a name for the new repository and select the Git type. You now have a new repository.

To clone the repository to your local computer and work with the code click Clone in the right corner and copy the URL.

Open Visual Studio Code, press F1 to open the command palette, and type Git: Clone.

You will then be asked to enter the URL you just copied, and then the parent path where the local copy of the repository will be placed.

The repository will then be cloned to your local computer and you can start adding files to it. When you have added files or made changes that you want to save to the repository, you have to commit them. You do this by going to the Source Control tab on the left in Visual Studio Code, reviewing the changes and clicking Commit.

The changes are then committed to the local copy of the repository. To sync the changes back to Visual Studio Online, press F1 to open the command palette again, type Git: Sync and press Enter. The changes are now pushed back to the remote repository and your code is under source control.
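If you prefer, the same clone, commit and sync flow can also be run from a PowerShell prompt with the Git command line. The URL and repository name below are placeholders for the values you copied from the Clone dialog:

# Clone the repository using the URL copied from the Clone dialog (placeholder URL)
git clone https://myaccount.visualstudio.com/DefaultCollection/_git/MyScripts
Set-Location .\MyScripts

# Stage and commit local changes
git add .
git commit -m "Add new PowerShell scripts"

# Git: Sync in Visual Studio Code roughly corresponds to a pull followed by a push
git pull
git push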

Happy source controlling!

Optimizing Cisco UCS Management Pack for larger (any) environments

Some months ago Cisco released a brand new management pack for the UCS hardware platform. This update was more or less a rewrite of the entire solution, including switching all alarms from monitor based to rule based. This was done to minimize the number of discoveries needed, and therefore the number of objects created by the management pack. Previously every little memory module and fan was discovered as an object, which could have quite a performance impact, since a large UCS environment can contain many fans and memory modules.

Since everything is rule based now, we lose the auto-resolve feature normally provided by monitors. To solve this, Cisco created a rule that runs on a simple schedule to close and update alerts as needed, based on the event entries on the server running the Cisco UCS monitoring service. But the first release (4.0.1) had a problem: it only checked for alerts with resolution state New (0), and I guess most production environments use resolution states to route alerts to the correct teams or track the progress of an alert, so once you changed the resolution state on your alerts they would never be updated or closed. This problem is solved in version 4.1.1, but now a new problem appears. Cisco chose to use “Where-Object {$_.ResolutionState -ne 255}” to find all the non-closed alerts. Using this method in our large environment, it takes 10-12 seconds to find all the open alerts, and to make it even worse the command runs for every event collected by the rule “Cisco.Ucs.Watcher.UCSFault.Event.Collection.Rule” for the selected interval, meaning that in our case the script would never complete and ended in a timeout.

To resolve all this I found all the elements needed for the rule “Cisco.Ucs.Library.UpdateAndCloseAlert.Rule” in the original Cisco Management pack and created my own fix for this problem.

Instead of

$scomActiveAlerts = Get-SCOMAlert | Where-Object {$_.ResolutionState -ne 255}

I changed it to

$ActiveAlerts = Get-SCOMAlert -ResolutionState (0..254)

This simple change gives the same result but in 0.5 seconds instead of 10-12. And instead of running the command for each collected event, I changed the script to run it once (just before the ForEach), save the result in a variable $ActiveAlerts and use that inside the ForEach instead.
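Here is a sketch of that restructuring. The surrounding event handling is only hinted at; only the placement of the Get-SCOMAlert call reflects the actual change, and the $events variable stands in for the events collected by the rule:

# Query the open alerts once, just before the loop, instead of once per event
$ActiveAlerts = Get-SCOMAlert -ResolutionState (0..254)

foreach ($event in $events)
{
    # Work against the cached $ActiveAlerts collection here;
    # the alert update/close logic from the original rule is unchanged
}

If you want to verify the timing difference in your own environment, Measure-Command is an easy way to compare the two queries (this assumes the OperationsManager module is loaded and you are connected to a management group; results will vary with the number of alerts):

$slow = Measure-Command { Get-SCOMAlert | Where-Object {$_.ResolutionState -ne 255} }
$fast = Measure-Command { Get-SCOMAlert -ResolutionState (0..254) }
"Where-Object filter: {0:N1} seconds" -f $slow.TotalSeconds
"-ResolutionState   : {0:N1} seconds" -f $fast.TotalSeconds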

This just shows how easily you can improve (or degrade) the performance of your PowerShell scripts.

I have uploaded my fixed management pack to GitHub; if you import it, remember to disable the original rule.

GitHub Download

MPTool: Automate Windows Service Monitoring

Monitoring the running state of a Windows service is a common request at my work, and therefore an obvious thing to automate to win back some free time. Thanks to the MPTool PowerShell module this can easily be done with four lines of PowerShell.

First you need a management pack to place it in; either use an existing one or create a new one. In this example we will place all resources (class, discovery and monitor) in the same management pack; for more complex scenarios you would usually split them into separate management packs.

New-MPToolManagementPack -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-ManagementPackDisplayName "CM MyApp Service Monitoring" `
-ManagementPackDescription "Contains monitor for MyApp Windows service Monitor" 

Once you have a management pack to place the monitor in, you need a class for the specific Windows service. You don’t need to add any properties for basic service monitoring, since a Windows service name cannot exist more than once on a Windows computer.

New-MPToolLocalApplicationClass -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-ClassName "CM.MyApp.WindowsService" `
-ClassDisplayName "CM MyApp Windows Service" `
-ClassDescription "CM MyApp Windows Service - Created with MPTool" 

We now have a class for our Windows service, and we can then go ahead and create the discovery to find where the service exists. All Windows services have a key under HKLM:SYSTEM\CurrentControlSet\Services\[NameOfTheService], so this can be used to discover the service. Note that this should be the service name and not the display name.

New-MPToolFilteredRegistryDiscovery -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-DiscoveryName "CM.MyApp.WindowsService.Discovery" `
-TargetClassName "Microsoft.Windows.Computer" `
-RegistryPath "SYSTEM\CurrentControlSet\Services\MyApp\" `
-DiscoveryClassName "CM.MyApp.WindowsService" `
-IntervalSeconds 300 
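If you are unsure of the exact service name, a quick check on a machine running the application will confirm both the service and the registry key; MyApp is just the example name used throughout this post:

# The service name is the key name under Services, not the display name
Get-Service -Name "MyApp" | Select-Object Name, DisplayName, Status
Test-Path "HKLM:\SYSTEM\CurrentControlSet\Services\MyApp"   # should return True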

Now the final step is to create a monitor for the discovered instances of the Windows service. Here you simply need to target the newly created class, select the health state to use when the monitor triggers (Error or Warning), and finally specify the name of the Windows service whose state to monitor.

New-MPToolWindowsServiceMonitor -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-TargetClassName "CM.MyApp.WindowsService" `
-UnhealthyState "Error" `
-ServiceName "MyApp" 

Of course now we want to put it all together in a script that we can use for self-service scenarios! I have created one example and placed it on my GitHub for MPTool.
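As a rough illustration of what such a self-service script could look like, here is a minimal sketch that simply wraps the four calls above. The parameter names are my own and the actual example on GitHub may differ:

param(
    [Parameter(Mandatory=$true)][string]$ManagementServerFQDN,
    [Parameter(Mandatory=$true)][string]$AppName,       # e.g. "MyApp"
    [Parameter(Mandatory=$true)][string]$ServiceName,   # service name, not display name
    [ValidateSet("Error","Warning")][string]$UnhealthyState = "Error"
)

$mpName = "CM.$AppName.ServiceMonitoring"

New-MPToolManagementPack -ManagementServerFQDN $ManagementServerFQDN `
    -ManagementPackName $mpName `
    -ManagementPackDisplayName "CM $AppName Service Monitoring" `
    -ManagementPackDescription "Contains monitor for the $AppName Windows service"

New-MPToolLocalApplicationClass -ManagementServerFQDN $ManagementServerFQDN `
    -ManagementPackName $mpName `
    -ClassName "CM.$AppName.WindowsService" `
    -ClassDisplayName "CM $AppName Windows Service" `
    -ClassDescription "CM $AppName Windows Service - Created with MPTool"

New-MPToolFilteredRegistryDiscovery -ManagementServerFQDN $ManagementServerFQDN `
    -ManagementPackName $mpName `
    -DiscoveryName "CM.$AppName.WindowsService.Discovery" `
    -TargetClassName "Microsoft.Windows.Computer" `
    -RegistryPath "SYSTEM\CurrentControlSet\Services\$ServiceName\" `
    -DiscoveryClassName "CM.$AppName.WindowsService" `
    -IntervalSeconds 300

New-MPToolWindowsServiceMonitor -ManagementServerFQDN $ManagementServerFQDN `
    -ManagementPackName $mpName `
    -TargetClassName "CM.$AppName.WindowsService" `
    -UnhealthyState $UnhealthyState `
    -ServiceName $ServiceName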

Happy automating!