Category Archives: CI/CD

ALM For Power Platform/CDS In Azure-Devops/VSTS Using Power Platform Build Tools – Part 2

In the previous post, I showed you how to set up Azure DevOps and install the Power Platform Build Tools. In this blog post, I will show you how we can export a solution from the Dynamics 365 CE instance and commit it to VSTS/Azure DevOps.

Below is the VSTS build pipeline, which will include the following steps.

How to set up a VSTS Build Definition

In the build pipeline, we will export the CRM solution, unpack it using the Power Platform Build Tools, and store the solution file in Artifacts. You can create a build definition directly from Visual Studio Online (VSTS/Azure DevOps) or from within Visual Studio. First, I will show you how to create a build definition from within Visual Studio: navigate to the Builds tab in Team Explorer:

Once there, you can click New Build Definition to be taken directly to Visual Studio Online. This is where you would start if you had decided to create the build definition directly from Visual Studio Online instead of starting in Visual Studio.

On the dialog box that pops up in the browser, we’ll select Visual Studio as our build template, but you can see there are other templates for use, such as Xamarin for Android or iOS and the Universal Windows Platform. The default settings for your build definition should be correct for the most part, but you will need to check the Continuous Integration checkbox. Here is what they look like for this example:

Because this is a simple example and we don’t need the additional flexibility a private queue provides, we can leave the default Hosted option selected in the Default agent queue field. See the Azure DevOps documentation for more information on the restrictions of the Hosted pool.

 
You can see the checkbox for CI at the bottom of the dialog. When it is enabled, Visual Studio Online will execute the build definition for each check-in. The build definition also determines whether the built code is published to an environment. Since we want to continually build and deploy to our web environment, we will check this box.
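For comparison, in a YAML pipeline the same continuous-integration behavior is expressed with a trigger section instead of a checkbox (the branch name is an example):

```yaml
trigger:
  branches:
    include:
    - master
```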

We can create the build definition from Azure DevOps too by following the steps below.

Navigate to Pipelines -> Builds -> Click New Pipeline

Click the Visual Designer, which lets you build the pipeline using the GUI.

Select the Team Project, Repository, and Branch, and click the Continue button. For demo purposes, I have named the build definition ALM – PowerApps and Dynamics 365-CI.

In the next step, select an empty job as shown below:

After selecting an empty job, you can see the empty Agent Job; select the Agent Pool as Hosted.

Next, we need to configure the VSTS build definition as shown in the diagram below.

1. PowerPlatform Tools Installer

Every pipeline that uses the Power Platform Build Tools must install them as its first step. This ensures the tools are actually available on the agent.

2. PowerPlatform Publish Customizations

As a second step, we publish all customizations. In this step, you only need to choose your connection.

Please refer to my previous blog or video to understand how to create the service connection for the Power Platform Build Tools.

Blog post Reference: https://d365dotblog.com/2020/06/01/alm-how-to-create-service-connection-for-the-powerapps-build-tools/

Video Reference: https://d365dotblog.com/2020/06/16/alm-power-platform-tips-tricks-create-service-connection-for-the-powerapps-build-tools-in-azure-devops-vsts/

3. PowerPlatform Export Solution

Next, we export the solution from the Power Platform source instance using the “Export Solution” task.

Note: Please update the solution name and solution output file in the Power Platform Export Solution task.

Solution Name: $(SolutionName)

Solution Output File: $(Build.ArtifactStagingDirectory)\$(SolutionName).zip

4. PowerPlatform Unpack Solution

Next, we unpack the solution using the Power Platform Unpack Solution task, as below:

Solution Input File should be the same as the output in the last step. In our case:

$(Build.ArtifactStagingDirectory)\$(SolutionName).zip

Target Folder to Unpack Solution should be the folder where you would like to store your unpacked solution in the repo. In our case, we will have a folder in the root of the repo named after the solution.

$(Build.SourcesDirectory)\$(SolutionName)
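For reference, the four Power Platform Build Tools steps above translate roughly into the following YAML (task and input names are taken from the Power Platform Build Tools extension and may differ between versions; the service connection name is a placeholder):

```yaml
steps:
# 1. Install the Power Platform Build Tools on the agent
- task: PowerPlatformToolInstaller@0

# 2. Publish all customizations in the source environment
- task: PowerPlatformPublishCustomizations@0
  inputs:
    PowerPlatformEnvironment: 'My-Service-Connection'   # placeholder name

# 3. Export the solution to the artifact staging folder
- task: PowerPlatformExportSolution@0
  inputs:
    PowerPlatformEnvironment: 'My-Service-Connection'
    SolutionName: '$(SolutionName)'
    SolutionOutputFile: '$(Build.ArtifactStagingDirectory)\$(SolutionName).zip'

# 4. Unpack the exported solution into the repo checkout so it can be committed
- task: PowerPlatformUnpackSolution@0
  inputs:
    SolutionInputFile: '$(Build.ArtifactStagingDirectory)\$(SolutionName).zip'
    SolutionTargetFolder: '$(Build.SourcesDirectory)\$(SolutionName)'
```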

5 – Commit solution to repo

The final step is to add the extracted solution to our repo. To do this, we add a standard “Command Line” step and paste the following code into the “Script” field:

echo commit all changes
git config user.email "<email>"
git config user.name "Automatic Build"
git checkout master
git add --all
git commit -m "solution init"
echo push code to new repo
git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push origin master

You must replace “<email>” with the email of the user you would like to use to push your changes.
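If you build this pipeline in YAML rather than the classic editor, the commit step might look like the sketch below; setting persistCredentials on the checkout keeps the fetched credentials available, so the explicit access-token header is not needed (the email and branch are placeholders):

```yaml
steps:
- checkout: self
  persistCredentials: true      # keep credentials so the script can push

- script: |
    git config user.email "build@example.com"
    git config user.name "Automatic Build"
    git checkout master
    git add --all
    git commit -m "solution init"
    git push origin master
  displayName: 'Commit solution to repo'
```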

6 – General Pipeline configuration

Below are some of the general configurations you need to enable in this pipeline.

For the agent, we need to allow scripts to use the OAuth token. If this is not configured, our command-line script will not be able to connect to the repo and push our solution. The configuration should look like this:

In our PowerPlatform Export Solution step, we used a variable called “SolutionName”. Make sure the solution name is updated before running the VSTS build pipeline. Now you can test the pipeline by running it, either via “Queue” if you are still in edit mode or via the “Run pipeline” button.

In my next blog, we will see how to pack the solution and deploy it in the Target instance using Power Platform Build Tools.

If you are interested in this topic and would like to do some further self-study, I encourage you to check out my blog on this.

ALM – Power Platform – Tips & Tricks – Create Service Connection for the PowerApps Build Tools in Azure DevOps/VSTS

Here is my next video on “Tips & Tricks in ALM with PowerApps Build Tools”.

Please subscribe to my channel.

In this video, I have explained how to Create Service Connection for the PowerApps Build Tools in Azure DevOps/VSTS.

Here is the link for my course on Power Platform – ALM Fundamentals https://powerspark.newzenler.com/

ALM – Power Platform – Copy Azure DevOps Build/Release to New Project

Here is my next video on “Tips & Tricks in ALM with PowerApps Build Tools”.

Please subscribe to my channel.

In this video, I have explained how to Copy Azure DevOps Build/Release Pipeline to New Project.

Here is the link for my course on Power Platform – ALM Fundamentals https://powerspark.newzenler.com/

ALM- How to create Service Connection for the PowerApps Build Tools

In this blog, we will see how to create the service connection for PowerApps Build Tools.

Pre-requisite: Azure DevOps

You need a service endpoint for the source and target environments that you want to export the solution from and import it to, for example https://powerappsbuildtools.crm.dynamics.com. Service endpoints can be defined under Service Connections -> Generic Service Connection in Project Settings.

Navigate to Project Settings -> Service connections -> Click on New service connection.

Select Generic as the service connection type.

Provide the below details for the generic service connection as follows:

  • Username – Username for connecting to the endpoint
  • Server URL – e.g., https://powerappsbuildtools.crm.dynamics.com
  • Password – Password/token key for connecting to the endpoint
  • Service connection name – Give the service connection a meaningful name
  • Description (optional)
  • Security – Grant access permission to all pipelines – Enable

NOTE: MFA is not yet supported. Microsoft may add MFA support for the service connection in an upcoming release.

Once the above step is done, you will see your service connection in the list. You could add multiple other connections to various other environments.

Once the connection is established, you will be able to see this service connection in the PowerApps Build tools task like exporting and importing the solution as follows:

I hope this helps. If you have any challenges in creating the service connection, please feel free to reach out to me; I am always happy to assist with your queries.

If you are interested in this topic and would like to do some further self-study, I encourage you to check out my blog on this.

ALM – Power Platform – Tips & Tricks -1.Using secrets from Azure Key Vault in a pipeline

I have started my new series of videos, “Tips & Tricks in ALM with PowerApps Build Tools”, starting this week.

Please subscribe to my channel.

In this video, I have explained how to use secrets from Azure Key Vault in a VSTS build & release definition.

Here is the link for Power Platform – ALM Fundamentals https://powerspark.newzenler.com/

Powershell scripts to export and import the Dynamics 365 CE solution

In this blog, we will see how to export and import the Dynamics 365 CE solution using the PowerShell script.

In my PowerShell script, I am using Microsoft.Xrm.Data.Powershell.

InstallRequiredModule

This function checks whether the Microsoft.Xrm.Data.Powershell module is present on your machine and installs it if it is not.

EstablishCRMConnection

This function will establish the connection to the CRM by passing the Dynamics 365 CE instance URL, username and password.

Before running the PowerShell scripts please make sure you have updated the below values.

$solutionName ="BuildAutomation"
$SolutionFilePath="C:\Users\DBALASU\Source\Workspaces\BuildAutomation\BuildAutomation.Solutions\BuildAutomation"
$CRMSourceUserName=""
$CRMSourcePassword=""
$CRMDestinationUserName=""
$CRMDestinationPassword=""
$CRMSourceUrl="https://instancename.crm.dynamics.com"
$CRMDestinationUrl="https://instancename.crm5.dynamics.com"
  1. $SolutionName – Name of the solution you want to export from the source instance
  2. $SolutionFilePath – Desired folder path for the exported solution
  3. $CRMSourceUserName – Username of the Dynamics 365 CE source instance
  4. $CRMDestinationUserName – Username of the Dynamics 365 CE destination instance
  5. $CRMDestinationPassword – Password of the Dynamics 365 CE destination instance
  6. $CRMSourcePassword – Password of the Dynamics 365 CE source instance
  7. $CRMSourceUrl – URL of the Dynamics 365 CE source instance
  8. $CRMDestinationUrl – URL of the Dynamics 365 CE destination instance

After updating the values in the script below, you can copy and paste it into Windows PowerShell and run it. It will establish a connection to the Dynamics 365 CE source instance, export the named solution, and import it into the Dynamics 365 CE destination instance.

$solutionName ="BuildAutomation"
$SolutionFilePath="C:\Users\DBALASU\Source\Workspaces\BuildAutomation\BuildAutomation.Solutions\BuildAutomation"
$CRMSourceUserName=""
$CRMSourcePassword=""
$CRMDestinationUserName=""
$CRMDestinationPassword=""
$CRMSourceUrl="https://instancename.crm.dynamics.com"
$CRMDestinationUrl="https://instancename.crm5.dynamics.com"

Set-StrictMode -Version Latest

function InstallRequiredModule {
    # Check for the Microsoft.Xrm.Data.Powershell module and install it if missing
    Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass -Force
    $moduleName = "Microsoft.Xrm.Data.Powershell"
    $moduleVersion = "2.7.2"
    if (!(Get-Module -ListAvailable -Name $moduleName)) {
        Write-Host "Module not found, installing version $moduleVersion now"
        Install-Module -Name $moduleName -MinimumVersion $moduleVersion -Force
    }
    else {
        Write-Host "Module found"
    }
}

function EstablishCRMConnection {
    param(
        [string]$CRMUserName,
        [string]$CRMSecPasswd,
        [string]$crmUrl)
    # Do not echo the password to the log
    Write-Host "UserId: $CRMUserName CrmUrl: $crmUrl"
    $CRMSecPasswdString = ConvertTo-SecureString -String $CRMSecPasswd -AsPlainText -Force
    Write-Host "Creating credentials"
    $Credentials = New-Object System.Management.Automation.PSCredential ($CRMUserName, $CRMSecPasswdString)
    Write-Host "Credentials object created"
    Write-Host "Establishing CRM connection next"
    $crm = Connect-CrmOnline -Credential $Credentials -ServerUrl $crmUrl
    Write-Host "CRM connection established"
    return $crm
}

InstallRequiredModule

#Update Source CRM instance details below:
Write-Host "Going to create source connection"
$CrmSourceConnectionString = EstablishCRMConnection -CRMUserName "$CRMSourceUserName" -CRMSecPasswd "$CRMSourcePassword" -crmUrl "$CRMSourceUrl"
Write-Host "Source connection created"
Set-CrmConnectionTimeout -conn $CrmSourceConnectionString -TimeoutInSeconds 1000

Write-Host "Going to create destination connection"
$CrmSourceDestinationString = EstablishCRMConnection -CRMUserName "$CRMDestinationUserName" -CRMSecPasswd "$CRMDestinationPassword" -crmUrl "$CRMDestinationUrl"
Write-Host "Destination connection created"
Set-CrmConnectionTimeout -conn $CrmSourceDestinationString -TimeoutInSeconds 1000

Write-Host "Publishing Customizations in source environment"
Publish-CrmAllCustomization -conn $CrmSourceConnectionString
Write-Host "Publishing Completed in source environment."

Write-Host "Exporting Solution"
Export-CrmSolution -conn $CrmSourceConnectionString -SolutionName "$solutionName" -SolutionFilePath "$SolutionFilePath" -SolutionZipFileName "$solutionName.zip" 
Write-host "Solution Exported."

Write-host "Importing Solution"
Import-CrmSolution -conn $CrmSourceDestinationString -SolutionFilePath "$SolutionFilePath\$solutionName.zip"
Write-host "Solution Imported"

Write-Host "Publishing Customizations in destination environment"
Publish-CrmAllCustomization -conn $CrmSourceDestinationString
Write-Host "Publishing Completed in destination environment"

I hope this helps.

If you are interested in this topic and would like to do some further self-study I encourage you to check out my blog on this.

CI/CD & Test Automation for Dynamics 365 in Azure DevOps/VSTS -Part 5 – Master Data Deployment

In my previous blog, I wrote about how to set up a gated check-in. In this blog, we will see how to move master data from the source to the target instance using our CI/CD pipeline.

Generally, we use the configuration migration tool to move the master data across multiple environments and organizations. Configuration data is used to define custom functionality in model-driven apps in Dynamics 365, such as Dynamics 365 Sales and Customer Service, and is typically stored in custom entities. Configuration data is different from end-user data (account, contacts, and so on). A typical example of configuration data is what you define in the Unified Service Desk for Dynamics 365 to configure a customized call center agent application. The Unified Service Desk entities, along with the configuration data that is stored in the entities, define an agent application.

Note: Disable plug-ins before exporting data and then re-enable them on the target system after the import is complete for all the entities or selected entities.

Master data/Configuration data deployment

  • Define the schema of the source data to be exported: The schema file (.xml) contains information about the data that you want to export, such as the entities, attributes, relationships, definition of the uniqueness of the data, and whether the plug-ins should be disabled before exporting the data.

  • Use the schema to export data: Use the schema file to export the data into a .zip file that contains the data and the schema of the exported data.

  • Import the exported data: Use the exported data (.zip file) to import into the target environment. The data import is done in multiple passes to first import the foundation data while queuing up the dependent data, and then import the dependent data in the subsequent passes to handle any data dependencies or linkages.
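To give an idea of the shape of such a schema file, here is an abbreviated, illustrative example (the real file should always be generated with the Configuration Migration tool, and it contains more attributes than shown here):

```xml
<entities>
  <!-- one entry per entity whose data should travel with the deployment -->
  <entity name="account" displayname="Account"
          primaryidfield="accountid" primarynamefield="name"
          disableplugins="true">
    <fields>
      <field name="accountid" displayname="Account" type="guid" />
      <field name="name" displayname="Account Name" type="string" />
    </fields>
  </entity>
</entities>
```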

Instead of doing this manually, we are going to automate the above process using Azure DevOps.

Pre-requisites

  • Please make sure the latest configuration.xml (generated using the DataMigrationUtility.exe tool) is generated and placed in the desired input location.
  • Please make sure the variables (correct connection string, CRM username, CRM password) are updated in both the VSTS build and release definitions.

VSTS Build Definition

We have to create a separate build definition for moving the master data from the source to the target instance. Once the solution deployment is done, the build definition below should trigger.

In my previous blog, I have explained how to create a new build definition. Please refer to that for creating a new build definition.

What it will do – It will connect to the source instance, export the master data using the configuration.xml, and push it to the artifacts repository.

In this Build Definition we have used the following MSCRM Build Tools tasks:

  • MSCRM Tool Installer – Installs the Dynamics 365 tools required by all of the tasks
  • MSCRM Export config migration data – Exports data from a CRM instance using a Configuration Migration schema file (How to prepare configuration schema file).

You have to update the connection string variable name and select the configuration.xml input location.

  • Publish build artifacts – Publish build artifacts to Azure Pipelines

Release Definition

Once the above definition succeeds, this release definition will trigger automatically and perform the following tasks:

  • MSCRM Tool Installer – Installs the Dynamics 365 tools required by all of the tasks
  • MSCRM Import config migration data – Import data exported using Configuration Migration Tool into a CRM instance

You have to update the connection string variable name and select the exported data zip from the artifacts repository(drop).

This will import the master data/configuration data into our Dynamics Online sandbox instance.

In my next blog, we will see how to integrate the unit testing framework with the VSTS Build definition.

If you are interested in this topic and would like to do some further self-study I encourage you to check out my blog on this.

CI/CD & Test Automation for Dynamics 365 in Azure DevOps/VSTS -Part 4 – Gated Check-in

In my previous blog, I wrote about how to set up a VSTS release definition. In this blog, I am going to explain gated check-in, but before heading into it, we must know why we need gated check-in.

Gated check-in helps to restrict developers from checking in a broken code into a source control system and thus helps to avoid blocking your team. With gated check-in when check-in is initiated by a developer, it will build the project and will check-in the code only if the build is successful. Gated check-in is suitable for projects whose overall build time is less than a few minutes.

For C#, we have used StyleCop and FxCop; for JavaScript and jQuery, we have used JSHint.

We use StyleCop for custom code such as Plugins, Workflows, Actions, and Web API components.

Consider a group of developers working together; it is rare that each one writes code in exactly the same way.

More often than not, one isn’t better than the other and it’s just a matter of taste. In a team or in a single project, it’s more important to be consistent than it is to choose the right style.

Agreeing on a style can be hard enough, but enforcing it shouldn’t be something you do manually. It will be tedious and error-prone.

StyleCop is a tool that can automate this. Let’s have a look at how to set it up.

What is StyleCop?

StyleCop analyzes C# source code to enforce a set of style and consistency rules.

StyleCop used to be a Visual Studio plugin and is now also available as a NuGet package. You can still use it in Visual Studio 2017, 2019, etc.

Installing StyleCop

To add StyleCop to your project, right-click your project in Visual Studio’s Solution Explorer, and choose “Manage NuGet Packages…”:


Search for “StyleCop.Analyzers” and install the latest stable version:


Once it is installed, build the project solution. You might get StyleCop warnings like the below:

  • Add XML comments
  • Generate an XML documentation file (this can be set in the project properties)
  • Add a file header (e.g., copyright information)
  • Put the “using” statements inside the “namespace” block
  • Put braces on a new line
  • Add an empty line between the two method definitions (Output2 and Output3)

Setup

The first step in integrating StyleCop into an MSBuild system is to obtain the default StyleCop MSBuild targets file. To do so, run the StyleCop installer, and select the MSBuild files option on the Custom Setup page. This will install the StyleCop MSBuild files into the {Program Files}\MSBuild\StyleCop folder.

Adding the Import Tag

Once the StyleCop MSBuild files are installed, the next step is to import the StyleCop targets file into your C# projects. This is done by adding an Import tag to each C# project file.

For example, to integrate StyleCop to the project SampleProject, open the project file SampleProject.csproj within your favorite text editor. Scroll down to the bottom of the file and add a new tag to import the StyleCop.targets file. This import tag should be added just below the import of Microsoft.CSharp.targets:

Code

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  ...Contents Removed...
  <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
  <Import Project="$(ProgramFiles)\MSBuild\StyleCop\v4.4\StyleCop.targets" />
  ...Contents Removed...
</Project>

Save the modified .csproj file. The next time you build this project either within Visual Studio or on the command line, StyleCop will run automatically against all of the C# source files within the project.

Build Warnings Vs Errors

By default, StyleCop violations will show up as build warnings. To turn StyleCop violations into build errors, the flag StyleCopTreatErrorsAsWarnings must be set to false. This flag can be set as an environment variable on the machine, or within the build environment command window. Setting the flag this way will cause StyleCop violations to appear as build errors automatically for all projects where StyleCop build integration is enabled.

Alternately, this flag can be set within the project file for a particular project. Open the .csproj file for your project again, and find the first PropertyGroup section within the file. Add a new tag to set the StyleCopTreatErrorsAsWarnings flag to false. For example:

Code

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProductVersion>8.0.50727</ProductVersion>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>{4B4DB6AA-A021-4F95-92B7-B88B5B360228}</ProjectGuid>
    <OutputType>WinExe</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>SampleProject</RootNamespace>
    <AssemblyName>SampleProject</AssemblyName>
    <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>
  </PropertyGroup>
  ...
</Project>

The configuration described above will suffice to enable StyleCop build integration on an individual development machine. However, development teams working within a well-defined development environment can set up the build integration in a more global way, so that each developer does not have to manually install StyleCop on their machine.

To do this, copy all of the files from {Program Files}\MSBuild\StyleCop into a custom folder within your build environment, and check all of these files into your source control system. Next, define an environment variable within your development environment which points to the location of the StyleCop targets file. For example:

set StyleCopTargets=%enlistmentroot%\ExternalTools\StyleCop\v4.4\StyleCop.targets

With this configuration in place, it is simply a matter of adding the following import tag to each .csproj file within your development environment:

Code

<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
<Import Project="$(StyleCopTargets)" />

StyleCop will automatically run each time this project is built, no matter which developer is building the project. There is no need for each developer to install StyleCop manually, since the StyleCop binaries are checked directly into your source control system and are centrally integrated into your build environment.

What is CodeAnalysis?

To integrate Code Analysis into the build, unload and edit the project file and add the following tags. Note that paths might differ depending on your solution configuration.

<ItemGroup>
    <CodeAnalysisDictionary Include="$(SolutionDir)\CodeAnalysisDictionary.xml" />
</ItemGroup>
<Import Project="$(SolutionDir)\ExternalDlls\StyleCop 4.7\StyleCop.targets" /> 

Code Analysis configuration

Configure the project’s Debug configuration to use the Code Analysis rules in the solution root, and do the same for the Release configuration. The only difference is that in Debug mode Code Analysis should not run, because it slows down the build. We keep CA running in Release to get an error report from continuous integration and to allow easily turning CA on by switching the solution from Debug to Release.
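A minimal sketch of what that looks like in the .csproj, assuming the classic RunCodeAnalysis MSBuild property (the ruleset path is a placeholder):

```xml
<!-- Code Analysis off in Debug to keep local builds fast -->
<PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
  <RunCodeAnalysis>false</RunCodeAnalysis>
</PropertyGroup>
<!-- Code Analysis on in Release so CI reports violations -->
<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <CodeAnalysisRuleSet>$(SolutionDir)\CodeAnalysisRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
```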

Once the above step is done, please commit the project solution files in Azure DevOps/VSTS repository.

How to enable gated check-in in a VSTS build definition

Go to Build definition -> Triggers-> you can see the gated check-in as follows:

Check the gated check-in checkbox. Now gated check-in is enabled for this particular build definition.

In my next blog, we will see how to move the master data from source to target instance using our CI/CD pipeline.

If you are interested in this topic and would like to do some further self-study I encourage you to check out my blog on this.

CI/CD & Test Automation for Dynamics 365 in Azure DevOps/VSTS- Part 3 – Release Definition

In my previous blog, I wrote about how to set up a VSTS build definition. This blog continues on that by setting up a VSTS release pipeline in Azure DevOps/VSTS. I will assume you have QA, UAT, and production environments. The package (CRM solutions) from the build automation blog will be deployed to these environments. It will be set up in a very basic way; after that, I will elaborate on other options you can include in your Azure DevOps pipeline.

Dynamics CRM CICD process

The below diagram illustrates the basic flow of the Dynamics 365 CE Workflow process.

In this part, we are going to see the above-highlighted one in detail.

Pre-requisites

  • Visual Studio Team Services (VSTS)/ Azure DevOps
  • Dynamics 365 Build Tools by Wael Hamze

Setting up the variables for the connection to CRM

You can create variable groups in VSTS. This is useful for variables that are related. You will use this to create a group of credentials for your development environment. Later, if you implement automated deployment, you can store the credentials for other environments there too.

In VSTS go to “Build and Release” and select “Library”. Here you can create variable groups.

Next click on “+ Variable Group”. This will take you to a form where you can create a variable group.

Now give your variable group a name. I will assume you have a test, UAT, and production environment. The package from the build automation blog will be deployed to these environments, so we will create three connection strings, one each for the test, UAT, and production environments.

  • Connection string – AuthType=$(AuthType);Username=$(Username);Password=$(Password);Url=$(Url)
  • URL- Enter your instance URL.
  • Username – Enter the username of your instance
  • Password – Enter the password of your instance
  • Authtype – Office365

Below is a sample connection string for your reference:

AuthType=Office365;Username=jsmith@contoso.onmicrosoft.com; Password=passcode;Url=https://contoso.crm.dynamics.com

Reference Link https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/mt608573(v=crm.8)?redirectedfrom=MSDN
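If you later move to YAML pipelines, linking the variable group and composing the same connection string looks like this (the group name CRM-QA is hypothetical):

```yaml
variables:
- group: 'CRM-QA'    # hypothetical variable group holding AuthType, Username, Password, Url
- name: CrmConnectionString
  value: 'AuthType=$(AuthType);Username=$(Username);Password=$(Password);Url=$(Url)'
```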

VSTS Release Definition

This release definition will import our packed solution file and then publish the customizations to our Dynamics 365 CE Online sandbox instance(QA, UAT & Production environment).

Once the build definition succeeds, the VSTS release definition will trigger automatically and perform the following steps:

You can create a release definition directly from Visual Studio Online (VSTS/Azure DevOps). To create a new pipeline, go to Pipelines and then Releases. After that, click New and then New release pipeline.

You will get asked to select a template. Those templates are really useful for setting up deployments to Azure. For Dynamics CRM deployments there is no template, so select an empty job.

Next, you will get asked to provide a stage name and stage owner. A stage is an environment, so in our case, we name it “QA”. The owner can be left at default (your account). After you did that you can press the ‘X’ on the top right of the stage tab to close it.

Now you will see the pipeline as below.

  1.  You can edit the name of the pipeline and provide the name of your choice. Below that we can see a navigation bar with different tabs.
  2. Artifacts: An artifact is usually the output of a VSTS build definition, but there are other options, like an Azure Container Registry. If you click on “Add an artifact” you can see all the options available. In this blog, we will only use the default, which is a build output.
  3.  Next is the stages. Here you can copy and add new stages for QA, UAT and production environment.

Adding an artifact

To add an artifact, click on the “Add an artifact” button. Now select the correct details: for Project, select your current team project; for Source, select the build pipeline you created based on the previous blog. Set Default version to “Default”. Finally, provide the source alias name and click “Add” to finish.

Now, we need to add the task for stage QA. Click on stage “QA”. It will navigate you to the task page as follows:

Now you will be in the task editor. Here you can add tasks similar to the VSTS build definition task editor.

To add a Task click on the + icon next to “Agent Job”. You will get a list of tasks. Search for the “MSCRM Tool installer” and add the task.

Next, add the MSCRM Import solution task and click on the task and add the details like below:

Display Name: Provide the name of your choice(Example: Import Solution to QA)
CRM Connection String: Provide the QA connection string variable which we have created earlier.
Solution File: Use the three dots at the right corner to select the correct solution (it will show the artifacts of the last successful build, so make sure your solution is there).
Checkboxes: According to your needs you can select the below checkboxes

Then next add the MSCRM Publish Customization task as follows:
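As a rough YAML equivalent of the three release tasks above (the task identifiers and version numbers below are illustrative; verify the exact names exposed by the Dynamics 365 Build Tools extension in your organization):

```yaml
steps:
- task: MSCRMToolInstaller@12          # installs the Dynamics 365 tooling on the agent

- task: MSCRMImportSolution@12
  inputs:
    crmConnectionString: '$(CrmConnectionString)'   # from the linked QA variable group
    solutionFile: '$(System.DefaultWorkingDirectory)/drop/$(SolutionName).zip'

- task: MSCRMPublishCustomizations@12
  inputs:
    crmConnectionString: '$(CrmConnectionString)'
```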

Setting up the environments

Now that we have set up the QA environment, we need to set up the same for the UAT and production environment as well. Click on “Pipeline” on the navigation bar to go back to the overview. The UAT environment is the same as the QA environment, just different variables (remember we added those in the library). An easy way to create that environment is to clone the QA environment. To do that, hover over the QA test environment and click the icon with the 2 papers as shown below:

It will create a “Copy of QA” stage and you see it connected to the QA stage.  Click on it, change the name to “UAT ” and close the tab by pressing the ‘X’ button. Now you have the UAT stage too. The fact that the 2 environments are connected, means that the UAT deployment will automatically start when the QA stage is successfully deployed. Similarly, we need to clone and update the values for the production environment too.

Connecting variables

One more thing we need to do is to connect the variables we created at the start of the stages. You do that by clicking ‘Variables’ in the navigation bar and then “Variable Groups”

After that click on the “Link variable group”. Select the variable group and click on Link button as shown below:

VSTS release definition setup is done. Now, we will quickly explore the advanced options in the VSTS release definition.

Advanced Options

Triggers

Triggers define when a stage will be deployed. You can open it by clicking the lightning icon next to a stage.

There are 3 options you can select:

After Release

This means the stage will be deployed right after the release is created. This is used to automatically start the first stage in the pipeline after a release is created. In build pipelines, you can set to automatically create a release when the build is completed. If your pipeline has these options enabled, then a successful build will automatically deploy to the first stage.

After Stage

This means your stage will deploy whenever 1 or more stages successfully deployed. If you select multiple stages, then every stage needs to complete successfully in order to start this stage. Optionally you can select the checkbox to also deploy if the previous stages are “partially succeeded” instead of just “succeeded”

Manual Only

This means the deployment of the stage has to be manually started.

Approvals

In addition to triggers, you can set approvals. You can configure them by clicking the person icon next to a stage. There are 2 types.

Pre-deployment approvals

Somebody needs to approve that a stage will be deployed. This approval will trigger when the deployment of a stage is about to start. Either via an automatic trigger or a manual start. Approving the deployment results in the start of the deployment of that stage. Rejecting sets the status of the stage to “not deployed”.

Post-deployment approvals

Somebody needs to approve that the deployment of a stage is successful. This approval will trigger after the last deployment task is successfully completed. Approving this results in a succeeded deployment and rejecting it in a failed deployment.

For both options, you can select the approvers and a timeout before it automatically rejects. Also, you can set that if a user manually starts deployment of a stage, that user will not be able to approve the stage. Finally, you can select that approval is skipped when the previous stage was approved by somebody who is an approver of this stage.

Scheduling

Sometimes you may want to start releases on a specific day/time. For that, you can set a schedule. There are 2 options for scheduling resulting in different behavior.

Release Trigger Scheduling

You can set this schedule by clicking the scheduling button below the artifacts. A new release will be created at specific times that are configured. To also deploy to the first stage, make sure that at least one stage has the trigger “After Release”, otherwise it will create a new release but it won’t deploy anything. Also keep in mind a new release will be created, even if there is no new artifact available.

Stage Schedules

You can set this schedule by selecting the pre-deployment conditions and enabling the ‘Schedule’ option. Here you can define 1 schedule of when to deploy this stage.

I hope you now have a clear picture of how to create the VSTS release definition.

In my next blog, we will see how to enable the gated check-in.

If you are interested in this topic and would like to do some further self-study I encourage you to check out my blog on this.

 
