Continuous Delivery WebApps with ARM Templates, Part 2

Previous: Continuous Delivery WebApps with ARM Templates, Part 1

It has been some busy months, and this second post is therefore a bit delayed, but I finally got a moment to finish it, so here we go!

In the previous post we created and tested the continuous delivery pipeline for the Azure resources using an ARM template. With the pipeline for the Azure resources in place, we are now ready to create the pipeline that deploys the application.


First, grab the latest version of the ARM template and the Build and Release definition files from my GitHub repo at https://github.com/AndreasSobczyk/Continuous-Delivery-WebApps-Demo, and add them to the ARM template repository in your VSTS project.
Once you have the latest version in your repository, trigger a build and release of the Azure resources using the pipeline created in the previous blog post.

Now we can start the pipeline creation for the application.
In VSTS go to Build and Release and select Builds, click the import button in the top-right, browse to the local copy of the folder, and import DotNetAppSqlDb_App – CI.json. This imports the build definition for the application.
In the build definition, on Process select Hosted VS2017 as the agent queue, then go to Get sources and ensure that the selected repository is the one that contains the application and not the template. That is it for the build definition.
As in the previous post, if you want Continuous Integration for the application build, go to the Triggers tab and enable it.
When everything is done, save & queue the build definition. This triggers a build of the application to validate that the build definition works and that the application code is valid.


Now that the application build is done, it is time to deploy it to Azure with the release pipeline.
Go to Releases and import the release definition by clicking the + in the top-left corner and selecting Import release definition, then browse to the BuildAndRelease Files folder and import the file DotNetAppSqlDb_App – CD.json.


The import should give you a release pipeline looking like the picture below. As this is only the shell of the pipeline, we need to add some information; I have again added numbers to make it easier to identify where to click.
First we need to add the artifact to deploy: click the +Add artifact box at 1 and select the build definition for the application (DotNetAppSqlDb_App – CI) created previously, and ensure Default version is set to Latest.


Next, click the lightning icon at 2 and ensure that After release is selected, so the deployment to Dev is triggered when a release is started. Again, you can also add pre-deployment approvers if you want someone to review the deployment first.
Now click 1 phase, 3 tasks at 3 to configure the Dev deployment. Select the Run on agent phase and set Agent queue to Hosted VS2017.
Go to the Deploy Azure App Service step and select the same Azure subscription as in the release definition for the ARM template, then enter the App Service name and resource group for the development environment; if you use the same parameters as I do it is DotNetAppSqlDb-dev for both fields. Also enter the deployment slot name, which defaults to Staging.

Next, go to the PowerShell: Test URL step. It contains a PowerShell script that validates that the newly deployed code responds with HTTP code 200; no additional configuration is needed. A minimal sketch of such a check is shown below.
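The exact script ships with the imported release definition, but a minimal sketch of such a check could look like the code below (the URL is an assumption based on the app and slot names used here, not taken from the definition).

# Minimal sketch of an HTTP 200 check against an assumed staging slot URL
$Url = "https://dotnetappsqldb-dev-staging.azurewebsites.net"

Try{
    $Response = Invoke-WebRequest -Uri $Url -UseBasicParsing
    If($Response.StatusCode -eq 200){
        Write-Output "Site responded with HTTP 200"
    }
    Else{
        Throw "Unexpected status code: $($Response.StatusCode)"
    }
}
Catch{
    # Invoke-WebRequest throws on non-2xx responses, so failures end up here
    Throw "URL test failed: $($_.Exception.Message)"
}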
The last step in the Dev deployment is Swap slot to Dev. Again, select the same Azure subscription as in the release definition for the ARM template, and enter the App Service name and resource group for the development environment. For the source slot, select Staging again.
This swaps the staging slot with the production slot of the development App Service.
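Outside the pipeline you can do the same swap with the AzureRM PowerShell module; a rough equivalent of the Swap slot task, using the names from this walkthrough, would be:

# Rough equivalent of the Swap slot to Dev task (adjust names to your own environment)
Switch-AzureRmWebAppSlot -ResourceGroupName "DotNetAppSqlDb-dev" `
-Name "DotNetAppSqlDb-dev" `
-SourceSlotName "Staging" `
-DestinationSlotName "Production"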
The Dev deployment is now configured and we can move on to the User Acceptance Testing deployment. To switch to the UAT deployment, click Tasks in the top menu and select UAT.


Also for UAT, select the agent phase and set Agent queue to Hosted VS2017.
Go to the Deploy Azure App Service step and select the same Azure subscription as before, but enter the App Service name and resource group for the production environment; this is DotNetAppSqlDb-prd if you use the same names as I do. For the slot, select UAT.
Again there is a PowerShell step that validates the response from the application after deployment.

Now only the Production deployment is left to configure. Click Tasks in the top menu and select Production.
Again, select the Run on agent phase and set Agent queue to Hosted VS2017.
Select the Swap slot to Prd step and again select the same Azure subscription as before, but enter the App Service name and resource group for the production environment. For the source slot, enter the name of the previously selected deployment slot, UAT.

Everything in the release pipeline is now configured; click Save and then Release to trigger a deployment.

Again, if you want continuous deployment to trigger the release automatically after each build, click the lightning icon on the artifact and set it to Enabled. With CD enabled you can trigger the build instead, and that will trigger the release when the build is done.


With that we now have the entire build and deploy cycle automated in VSTS!
If both CI and CD are enabled, a new build and release is triggered every time new code is committed to the repositories.
With everything defined in code, you can also easily roll back to a previous version, deploy more instances, or deploy to new regions.

Next up will be using this same pattern to deploy to Azure Stack, but more on that next year.

Continuous Delivery WebApps with ARM Templates, Part 1

The buzzwords these days are all about DevOps, Everything as Code, and Continuous Delivery, but how do you actually do it? And why should you do it? Hopefully this post will help you get started, and by the end of it provide you with a complete working scenario. So let's get started!

First, let me describe the scenario. This case deploys a simple to-do list .NET web app backed by an Azure SQL Database and monitored with Application Insights.

All code needed for this is provided throughout the article, so don't worry, you don't need to know anything about .NET to test it.

To get that working we need to deploy the following Azure resources: an Azure SQL Server with a database, and an App Service plan with an App Service including a deployment slot and a connection string. We also want the App Service to automatically scale according to load and to send email alerts for some common errors. Finally, we want to deploy an Application Insights instance to monitor it all. And then we want to duplicate it all into separate Development and Production environments, but hey... that shouldn't take long since we are using templates, right?


We want to use Visual Studio Online build and release management to control the entire deployment of both the Azure resources and the web application in two fully automated flows, from code commit to production. This includes some simple automated tests and some manual approvals for moving between deployment stages.


In this first post I will show you how to create the pipeline for the Azure Resources.

The first thing we need to do is create a Visual Studio Online project to store and manage our solution in. Go to the Azure portal (https://portal.azure.com) and provision a resource of the type Team Project, just like you create any other resource in Azure. If you don't already have a Visual Studio Online account it will ask you to create one; for Version Control select Git and leave the rest at the defaults. Currently the VSTS provider in Azure is in preview and you might see it load forever; just go directly to the URL of your newly created account instead and you will see it is working.


Once you have created the VSTS project you need to create two repositories within it: one for the ARM template used to deploy the Azure resources, and one for the actual web application code. The reason for splitting the ARM template and the application code into two separate repositories is that in most cases I see two different people working on each of the components: a developer team codes the application, while a cloud engineer or DevOps engineer creates the ARM template needed to provision the Azure resources. It also makes it possible to differentiate permissions and branch policies.


When the repositories are created it is time to add some code to them. The application we are using in this case is a sample from Microsoft; they provide a lot of sample applications for all sorts of things, so it is a good place to go if you need some inspiration or something to test with: https://github.com/Azure-Samples/dotnet-sqldb-tutorial.

For the ARM template, I have already created one that deploys the Azure resources described at the start; you can grab it here: https://github.com/AndreasSobczyk/Continuous-Delivery-WebApps-Demo.
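If you are curious what the build and release steps will actually do with the template, they roughly correspond to validating and then deploying it with the AzureRM module. Here is a minimal local sketch; the file names and the location are assumptions, so adjust them to the files in the repo and your own region.

# Rough local equivalent of the Validate Template build step and the release deployment
New-AzureRmResourceGroup -Name "DotNetAppSqlDb-dev" -Location "West Europe" -Force

# Validate the template (what the build step does)
Test-AzureRmResourceGroupDeployment -ResourceGroupName "DotNetAppSqlDb-dev" `
-TemplateFile .\azuredeploy.json `
-TemplateParameterFile .\azuredeploy.parameters.json

# Deploy the template (what the release step does)
New-AzureRmResourceGroupDeployment -ResourceGroupName "DotNetAppSqlDb-dev" `
-TemplateFile .\azuredeploy.json `
-TemplateParameterFile .\azuredeploy.parameters.json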

To add the files to the repositories, you can use a Git client like Visual Studio Code to push the files into them. See Visual Studio Online + VSCode = Easy PowerShell Source Control for how to do it.

With all the source code for the solution in place, we can now start creating the build and release pipeline. The definition files for the build and release pipeline are placed in the folder 'BuildAndRelease Files' in the same project as the ARM templates.
In VSTS go to Build and Release and select Builds, click the import button in the top-right, browse to the local copy of the folder, and import DotNetAppSqlDb_Template – CI.json. This imports the build definition for the ARM template.
In the build definition, select Process and choose Hosted VS2017 as the agent queue, then go to Get sources and ensure the selected repository is the one for the ARM template. Next, select the Validate Template step, select the Azure subscription to deploy to (you may need to authorize VSTS against the subscription), enter a name for the resource group or select an existing one (if it does not exist it will be created), and select a location. You should use the same resource group for the build as you use for the Dev environment.
If you want Continuous Integration, go to the Triggers tab and enable it. When everything is done, save & queue the build definition; this triggers a build of the ARM template to validate that everything is working.


With the ARM template build done, we can create the release definition to deploy it: go to Build and Release and select Releases. If you have no release definitions yet, you have to create a new empty one to be able to import; just click + New definition, select Empty process, and save it. You can now go back to Releases and import the definition by clicking the + in the top-left corner and selecting Import release definition, then browse to the BuildAndRelease Files folder and import the file DotNetAppSqlDb_Template – CD.json.


This should import a release pipeline looking like the picture below. As this is only the shell of the pipeline, we need to add some information; I have added numbers to make it easier to identify where to click.
First we need to add the artifact to deploy: click the +Add artifact box at 1 and select the build definition created previously, and ensure Default version is set to Latest.


Next, click the lightning icon at 2 and select After release, so the deployment to Dev is triggered when a release is started. You can also add pre-deployment approvers if you want someone to review the deployment first.
Now click 1 phase, 1 task at 3 to configure the Dev deployment. Select the agent phase and set Agent queue to Hosted VS2017. Go to Azure Deployment: DotNetAppSqlDb-tst and select the same Azure subscription, resource group, and location as in the build definition.
The Dev deployment is now configured and we can move on to the Production deployment. To switch to the Production deployment, click Tasks in the top menu and select Production.


For Production as well, select the agent phase and set Agent queue to Hosted VS2017. Go to Azure Deployment: DotNetAppSqlDb-prd and select the same Azure subscription and location, but enter a different resource group for the production workloads; I use DotNetAppSqlDb-dev for dev and DotNetAppSqlDb-prd for production.
Everything in the release pipeline is now configured; click Save and then Release to trigger a deployment. If you want continuous deployment to trigger the release automatically after each build, click the lightning icon on the artifact and set it to Enabled. With CD enabled you can trigger the build instead, and that will trigger the release when the build is done.


When the release has finished deploying both environments, go into your Azure subscription and verify that you have two resource groups looking like this.
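You can also do a quick check from PowerShell instead of the portal; a small sketch, assuming the resource group names used in this walkthrough:

# Quick check that both environments exist (names from this walkthrough)
Get-AzureRmResourceGroup -Name "DotNetAppSqlDb-dev"
Get-AzureRmResourceGroup -Name "DotNetAppSqlDb-prd"

# List the web apps in the dev resource group
Get-AzureRmWebApp -ResourceGroupName "DotNetAppSqlDb-dev"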


For testing, if you want to save some money you can delete both resource groups and redeploy everything at any time by triggering the build and release.

That is it for now! Next time we are going to create the build and release pipeline for the actual application.

Visual Studio Online + VSCode = Easy PowerShell Source Control

In my daily work at CTGlobal we all work a lot with PowerShell, and therefore we have a big need to share code with each other and to have some version control in place when making changes. So for the last few months I have been playing around a lot with Visual Studio Code and Git repositories in Visual Studio Online to solve this.

Pre-reqs

To create a new code repository in VSTS, go to the project you want the repository to reside in and select the Code tab. Click the repository drop-down and click New repository. A box appears; enter the name for the new repository and select the Git type. You now have a new repository.


To clone the repository to your local computer and work with the code, click Clone in the right corner and copy the URL.


Open Visual Studio Code, press F1 to open the command palette, and type Git: Clone.

You will then be asked to enter the URL you just copied, and then the parent path where the local copy of the repository will be placed.


The repository will then be cloned to your local computer and you can start adding files to it. When you have added some files or made changes and want to save them to the repository, you have to commit them. You commit changes by going to the Source Control tab on the left in Visual Studio Code, reviewing the changes, and clicking Commit.


The changes are then committed to the local copy of the repository. To sync the changes back into Visual Studio Online, press F1 to open the command palette again, type Git: Sync, and press Enter. The changes are now pushed back to the master repository and your code is under source control.
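If you prefer the command line over the Source Control tab, the same clone, commit, and sync flow can be done with plain git commands from a PowerShell prompt; the repository URL below is just an example.

# Example of the same flow with git commands (example URL, use your own repository)
git clone https://myaccount.visualstudio.com/MyProject/_git/MyRepository
cd .\MyRepository

# ...add or change some files...

git add .
git commit -m "Added new PowerShell scripts"
git push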

Happy source controlling!

Optimizing Cisco UCS Management Pack for larger (any) environments

Some months ago Cisco released a brand new management pack for the UCS hardware platform. This update was more or less a rewrite of the entire solution, including switching all alarms from monitor based to rule based. This was done to minimize the number of discoveries needed, and therefore the objects created by the management pack; this could have quite a performance impact, since every little memory module and fan was discovered as an object, and there can be many fans and memory modules in a large UCS environment.

Since it is all rule based now, we would lose the auto-resolve feature usually provided by monitors. To solve this, Cisco created a rule that runs on a simple schedule to close and update alerts as needed, based on the event entries on the server running the Cisco UCS monitoring service. But with the first release (4.0.1) there was a problem: it was only checking for alerts with resolution state New (0), and I guess in most production environments many are using resolution states to route alerts to the correct teams or track the progress of the alert, so when you changed the resolution state on your alerts they would never be updated or closed. This problem is solved in version 4.1.1, but now a new problem appears! Cisco chose to use "Where-Object {$_.ResolutionState -ne 255}" to find all the non-closed alerts. Using this method in our large environment it takes 10-12 seconds to find all the open alerts, and to make it even worse this command runs for every event collected by the rule "Cisco.Ucs.Watcher.UCSFault.Event.Collection.Rule" in the selected interval, meaning that in our case the script would never complete and would end in a timeout.

To resolve all this I found all the elements needed for the rule “Cisco.Ucs.Library.UpdateAndCloseAlert.Rule” in the original Cisco Management pack and created my own fix for this problem.

Instead of

$scomActiveAlerts = Get-SCOMAlert | Where-Object {$_.ResolutionState -ne 255}

I changed it to

$ActiveAlerts = Get-SCOMAlert -ResolutionState (0..254)

This simple change gives the same result, but in 0.5 seconds instead of 10-12. And instead of running the command for each collected event, I changed it to run once per script execution (just before the ForEach), save the result in the variable $ActiveAlerts, and use that inside the ForEach instead.
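In simplified form the change looks something like this; the event loop is just a placeholder to show the pattern, not the actual rule script.

# Before: the alert lookup ran once per collected event
# foreach($Event in $Events){
#     $scomActiveAlerts = Get-SCOMAlert | Where-Object {$_.ResolutionState -ne 255}
#     # match the event against the alerts and update/close as needed
# }

# After: look up the open alerts once, before the loop, and reuse the result
$ActiveAlerts = Get-SCOMAlert -ResolutionState (0..254)
foreach($Event in $Events){
    # match the event against $ActiveAlerts and update/close as needed
}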

This just shows how easily you can improve (or decrease) the performance of your PowerShell scripts.

I uploaded my fixed management pack to GitHub; if you import it, remember to disable the original rule.

GitHub Download

 

 

MPTool: Automate Windows Service Monitoring

Monitoring the running state of a Windows service is a common request at my work, and therefore an obvious thing to automate to win back some free time. Thanks to the MPTool PowerShell module, this can easily be done with four lines of PowerShell.

First you need a management pack to place it in; either use an existing one or create a new one. In this example we will place all resources (class, discovery, and monitor) in the same management pack; for more complex solutions you would usually split them into separate management packs.

New-MPToolManagementPack -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-ManagementPackDisplayName "CM MyApp Service Monitoring" `
-ManagementPackDescription "Contains monitor for MyApp Windows service Monitor" 

Once you have a management pack to place the monitor in, you need a class for the specific Windows service. You don't need to add any properties for basic service monitoring, since the Windows service name cannot exist more than once on a Windows computer.

New-MPToolLocalApplicationClass -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-ClassName "CM.MyApp.WindowsService" `
-ClassDisplayName "CM MyApp Windows Service" `
-ClassDescription "CM MyApp Windows Service - Created with MPTool" 

We now have a class for our Windows service, and we can then go ahead and create the discovery to find where the service exists. All Windows services have a key under HKLM:\SYSTEM\CurrentControlSet\Services\[NameOfTheService], so this can be used to discover the service. Note that this should be the service name and not the display name.

New-MPToolFilteredRegistryDiscovery -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-DiscoveryName "CM.MyApp.WindowsService.Discovery" `
-TargetClassName "Microsoft.Windows.Computer" `
-RegistryPath "SYSTEM\CurrentControlSet\Services\MyApp\" `
-DiscoveryClassName "CM.MyApp.WindowsService" `
-IntervalSeconds 300 
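If you want to verify locally that a given server would be picked up by the discovery, you can simply check for the same registry key it looks for:

# Returns True if the service key exists, meaning the discovery would find this computer
Test-Path "HKLM:\SYSTEM\CurrentControlSet\Services\MyApp"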

Now the final step is to create a monitor for the discovered instances of the Windows service. Here you simply need to target the newly created class, select the state to set if the monitor is triggered (Error or Warning), and finally define the name of the Windows service to monitor.

New-MPToolWindowsServiceMonitor -ManagementServerFQDN scomms01.cloudmechanic.net `
-ManagementPackName "CM.MyApp.ServiceMonitoring" `
-TargetClassName "CM.MyApp.WindowsService" `
-UnhealthyState "Error" `
-ServiceName "MyApp" 

Of course, now we want to put it all together in a script that we can use for self-service scenarios! I have created one example and placed it on my GitHub for MPTool.
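The script on GitHub is the real example, but a rough outline of such a wrapper, simply parameterizing the four calls above, could look like this (the parameter names are my own, not part of MPTool):

# Rough outline of a self-service wrapper around the four MPTool calls above
param(
    [Parameter(Mandatory=$true)][string]$ManagementServerFQDN,
    [Parameter(Mandatory=$true)][string]$AppName,       # e.g. "MyApp"
    [Parameter(Mandatory=$true)][string]$ServiceName,   # the Windows service name, e.g. "MyApp"
    [ValidateSet("Error","Warning")][string]$UnhealthyState = "Error"
)

$ManagementPackName = "CM.$AppName.ServiceMonitoring"
$ClassName = "CM.$AppName.WindowsService"

New-MPToolManagementPack -ManagementServerFQDN $ManagementServerFQDN `
-ManagementPackName $ManagementPackName `
-ManagementPackDisplayName "CM $AppName Service Monitoring" `
-ManagementPackDescription "Contains monitor for $AppName Windows service"

New-MPToolLocalApplicationClass -ManagementServerFQDN $ManagementServerFQDN `
-ManagementPackName $ManagementPackName `
-ClassName $ClassName `
-ClassDisplayName "CM $AppName Windows Service" `
-ClassDescription "CM $AppName Windows Service - Created with MPTool"

New-MPToolFilteredRegistryDiscovery -ManagementServerFQDN $ManagementServerFQDN `
-ManagementPackName $ManagementPackName `
-DiscoveryName "$ClassName.Discovery" `
-TargetClassName "Microsoft.Windows.Computer" `
-RegistryPath "SYSTEM\CurrentControlSet\Services\$ServiceName\" `
-DiscoveryClassName $ClassName `
-IntervalSeconds 300

New-MPToolWindowsServiceMonitor -ManagementServerFQDN $ManagementServerFQDN `
-ManagementPackName $ManagementPackName `
-TargetClassName $ClassName `
-UnhealthyState $UnhealthyState `
-ServiceName $ServiceName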

Happy automating!

 

Orchestrator PowerShell Template

I recently did a webinar together with Cireson on CMDB, automation, and self-service using System Center Service Manager, System Center Orchestrator, and the Cireson Portal. In the webinar I showed how I always use only the Run .Net Script activity in all our runbooks, together with the same PowerShell "template" to invoke scripts. The reason for only using PowerShell scripts instead of the Orchestrator activities is that you need to add a lot of activities, which can get quite messy, to achieve the same thing you can do with a few lines of PowerShell. Secondly, you will quickly realize that the Orchestrator activities are pretty limited, and finally, if you at some point want to move to Azure Automation or SMA, you can just copy and reuse all your scripts.

The template below is made to invoke the script on a targeted computer and to collect any errors from the Invoke-Command. When you run a script block through Invoke-Command, failures inside the script will not be returned back by themselves, so that is why the Try, Catch, Finally is there. Also, if you want to return some data from your script, you can add it to $return in the Finally block and split it out afterwards; below you can see an example of returning output from Get-NetIPAddress.

#Credentials and computer name where you wish to execute your script.
$secpasswd = ConvertTo-SecureString "PASSWORD" -AsPlainText -Force
$Creds = New-Object System.Management.Automation.PSCredential ("USERNAME", $secpasswd)

$Computer = "Server01.domain.local"

$return = Invoke-Command -ComputerName $Computer -Credential $Creds -ScriptBlock {
    #Ensures that the script will fail on error
    $ErrorActionPreference = "Stop"

    Try{
        #CODE HERE, MUST NOT RETURN ANY OUTPUT, Note: Use "| Out-Null" if a cmdlet returns output

        #Example
        $IPAddress = Get-NetIPAddress
        #Example

        #If all code executed successfully then $Status will be Success
        $Status = "Success"
    }
    Catch{
        #On script failure $Status will be Failed and the error message stored in $ErrorMessage
        $ErrorMessage = $_.Exception.Message
        $Status = "Failed"
    }
    Finally{
        #Will always return $Status and $ErrorMessage, plus any extra data such as $IPAddress
        $return = @($Status, $ErrorMessage, $IPAddress)
        $return
    }
}

#Get return values
$Status = $return.Get(0)
$ErrorMessage = $return.Get(1)
$IPAddress = $return.Get(2)

You then need to create a published data item for each variable you want to return from the activity. I always publish $Status and $ErrorMessage as a minimum, to keep some sort of consistency.


Then make a new Run .Net Script activity below it, with code that throws the error message from your script in the runbook.
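The content of that second activity is basically just a throw based on the published data from the first activity, something along these lines (the text in braces is replaced with Orchestrator's published data subscription when you build the runbook):

# Second Run .Net Script activity: fail the runbook with the error from the first activity
$ErrorMessage = '{ErrorMessage from the previous activity}'
Throw $ErrorMessage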


And make a link that is triggered if $Status is not equal to Success.


You could also add an Invoke Runbook activity after the error activity if you have a runbook that adds something to an error log of some sort; in my case I use the Analyst log in Service Manager.


Here is the webinar recording

MPTool, Automate and Simplify Management Pack Development

This is something I have been looking forward to for a long time: the first public release of the PowerShell module MPTool! It is a module for automating and simplifying OpsMgr management pack development that I have been working on for almost a year together with Martin Dyrlund Arnoldi.

Back at the end of 2014 we started introducing automation and self-service into our company, and we spent most of 2015 automating common tasks around our daily server provisioning and de-provisioning work. At the end of 2015 we looked into automation for SCOM, as we couldn't keep up with the work the way we were doing it at that point. We needed a way to automate our management pack creation to be able to create them faster and in a more standardized way, and so we came across the awesome PowerShell module OpsMgrExtended from Tao Yang. We started playing around with Tao's module and quickly figured out that the concept of such a module was exactly what we needed, but also that we required the ability to create more advanced monitoring solutions, so we decided to build our own PowerShell module, and the result is this: MPTool.

MPTool is created with the goal that if you know PowerShell, you should pretty easily be able to create management packs. It contains a lot of built-in logic in each function: it automatically adds management pack references if needed, for a PowerShell discovery you only need to create a PowerShell array with the discovery data and the function itself will create the SCOM-specific code needed, and so on.

We have now used it internally with great success for more than 6 months and are ready to share it with the community. I hope you will all find it as useful as we do!

Over the next weeks I will post some more articles showing how you can use this module in different examples.

Please report any bugs or trouble you hit so we can improve where needed; also, if you have any ideas for future releases we will take them in and evaluate them.

Detailed documentation is available at GitHub

Download Link

GitHub Repository

16 cmdlets are available now, and a few more are already in testing and will be added soon:

New-MPToolManagementPackAlias
Get-MPToolManagementPackReferenceAlias
New-MPToolManagementPack
New-MPToolOverrideManagementPack
Add-MPToolManagementPackReference
New-MPToolApplicationComponentClass
New-MPToolComputerRoleClass
New-MPToolLocalApplicationClass
New-MPToolClass
New-MPToolWindowsEventAlertRule
New-MPToolFilteredRegistryDiscovery
New-MPToolPSDiscovery
New-MPToolPSStateMonitor
New-MPToolWindowsServiceMonitor
New-MPToolDependencyMonitor
New-MPToolHostingRelationship