
Terraform on Microsoft Azure - Part 7: Continuous Deployment using Azure Pipeline

This blog post is part of a series about using Terraform on Microsoft Azure. In the previous article, I explained how to use Docker and Azure Pipelines to continuously integrate and test Terraform infrastructure modules and deployments. In this new blog post, I will discuss using Azure Pipelines to actually deploy the infrastructure continuously.

Note: this blog post series comes with a reference implementation hosted on my GitHub. Do not hesitate to check it out to go deeper into the details, fork it, contribute, open issues… :)

So far, I’ve discussed the different modules that compose the reference architecture independently of one another. The idea of this post is to explain how to create a deployment pipeline that deploys the whole infrastructure.

Azure multi-stage pipelines

To achieve this goal, I have used the new YAML-based Azure multi-stage pipelines:

Azure multi-stage pipeline

Azure multi-stage pipelines allow you to specify the whole deployment pipeline in a YAML file that lives inside the source code repository and evolves with your infrastructure. This is sometimes referred to as pipeline as code. If you are not familiar with Azure DevOps YAML pipelines, I strongly recommend that you give them a try before continuing to read this article. It’s really awesome!

Note: at the time I am writing this article, multi-stage pipelines are still in preview and need to be enabled on your Azure DevOps account / organization. Check out this documentation for more information.

The main repository of the Terraform on Azure reference implementation contains the definition of what we want to deploy using Azure Pipelines. Each directory contains the Terraform code used to deploy one of the three modules: common, core networking, and Azure Kubernetes Service.

I’ve chosen to deploy each module in a separate stage. In Azure Pipelines, a stage is a way to organize the different jobs and tasks to be executed. One advantage of splitting the deployment into different stages is that you can use conditions to execute or skip a given stage, depending on what you want to deploy. For example, a condition can skip deployment steps that have already been done.
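To illustrate, here is a minimal sketch of how a stage can be chained after the common module and guarded by a condition. The stage name and the skipNetworking variable are assumptions for the example, not part of the reference implementation:

```yaml
# Hypothetical sketch: run the networking stage only after the common
# module stage succeeds, and allow skipping it via a pipeline variable.
- stage: CoreNetworkingModule
  displayName: Core Networking Module
  dependsOn: CommonModule
  condition: and(succeeded(), ne(variables['skipNetworking'], 'true'))
```

Setting skipNetworking to 'true' when queuing the pipeline would then bypass this stage entirely.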

Deploy a Terraform module as a pipeline stage

Because we are using Terraform, we repeat the exact same steps to deploy any module to Azure:

- install the SSH key used by the deployment
- install the desired version of Terraform
- run terraform init with the Azure remote state backend configuration
- run terraform validate
- run terraform apply

Each step maps to a task in the YAML pipeline:

- stage: CommonModule
  displayName: Common Module
  jobs:
  # Common Module
  - job: CommonModule
    displayName: Deploy the Terraform Common module
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: InstallSSHKey@0
      displayName: 'Install an SSH key'
      inputs:
        knownHostsEntry: $(sshKnownHostsEntry)
        sshPublicKey: $(sshPublicKey)
        sshKeySecureFile: $(sshKeySecureFileName)
    - task: TerraformInstaller@0
      displayName: 'Use Terraform $(terraformVersion)'
      inputs:
        terraformVersion: $(terraformVersion)
    - task: TerraformCLI@0
      displayName: 'terraform init'
      inputs:
        command: init
        workingDirectory: '$(System.DefaultWorkingDirectory)/tf/common'
        backendType: azurerm
        backendServiceArm: $(azureSubscriptionServiceConnectionName)
        ensureBackend: true
        backendAzureRmResourceGroupLocation: $(location)
        backendAzureRmResourceGroupName: $(tfStateResourceGroupName)
        backendAzureRmStorageAccountName: $(tfStateAzureStorageAccountName)
        backendAzureRmStorageAccountSku: $(tfStateAzureStorageAccountSku)
        backendAzureRmContainerName: $(tfStateContainerName)
        backendAzureRmKey: 'common.tfstate'
    - task: TerraformCLI@0
      displayName: 'terraform validate'
      inputs:
        command: validate
        workingDirectory: '$(System.DefaultWorkingDirectory)/tf/common'
    - task: TerraformCLI@0
      displayName: 'terraform apply'
      inputs:
        command: apply
        workingDirectory: '$(System.DefaultWorkingDirectory)/tf/common'
        environmentServiceName: $(azureSubscriptionServiceConnectionName)
        commandOptions: '-auto-approve -var location=$(location) -var tenant_id=$(tenantId)'

Note: I have used this Azure DevOps task from the marketplace to be able to work with Terraform inside the pipeline. As a prerequisite, you need to install it in your Azure DevOps organization.

This task also handles Terraform remote state management inside the pipeline: you specify the Azure storage account to use to store the state:

backendAzureRmResourceGroupLocation: $(location)
backendAzureRmResourceGroupName: $(tfStateResourceGroupName)
backendAzureRmStorageAccountName: $(tfStateAzureStorageAccountName)
backendAzureRmStorageAccountSku: $(tfStateAzureStorageAccountSku)
backendAzureRmContainerName: $(tfStateContainerName)
backendAzureRmKey: 'common.tfstate'

The complete Azure Pipeline is available here.

Azure DevOps pipelines and variables

Azure DevOps allows you to use different kinds of variables inside a pipeline, referenced with the $(VARIABLE_NAME) notation.

For example, they can be defined directly in the YAML file:

variables:
  vmImageName: 'ubuntu-latest'
  terraformVersion: 0.12.3
  azureSubscriptionServiceConnectionName: 'jucoriol'
  tfStateResourceGroupName: 'terraform-ref-fr-rg'
  tfStateAzureStorageAccountSku: 'Standard_LRS'
  tfStateAzureStorageAccountName: 'tfstate201910'
  tfStateContainerName: 'tfstate-ref'

They can also be defined from the Azure DevOps portal, which is useful for secret values:

Azure Pipeline Variables

Note: for more information about variables, check this page of the documentation.
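Another option, a minimal sketch here, is to keep secrets in a variable group defined in the Azure DevOps Library and mix it with inline variables. The group name terraform-secrets is an assumption for the example:

```yaml
# Hypothetical sketch: pull secret values from a variable group
# (defined in the Azure DevOps Library) alongside inline variables.
variables:
- group: terraform-secrets   # assumed variable group name
- name: vmImageName
  value: 'ubuntu-latest'
- name: terraformVersion
  value: 0.12.3
```

Variables coming from the group are referenced with the same $(VARIABLE_NAME) notation as inline ones.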

Triggering the pipeline

The pipeline also contains the definition of the trigger that determines whether the infrastructure is deployed as soon as a new commit lands on a given branch. In this case, I’ve configured the pipeline to be triggered manually only (i.e. not to be triggered automatically :-)):

trigger: none

Note: check this page of the documentation to learn more about Azure Pipelines triggers.
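As an alternative sketch, if you did want continuous deployment on every commit, a branch trigger with a path filter could look like this. The branch and path values are assumptions matching the repository layout described above:

```yaml
# Hypothetical sketch: trigger automatically on commits to master,
# but only when files under the tf/ directory change.
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - tf
```

Path filters like this one help avoid redeploying the infrastructure when only documentation or unrelated files change.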

Running the pipeline

Once the YAML pipeline has been defined, you can run it from the Azure DevOps portal, or let a trigger run it for you.

Run Azure Pipeline

That’s all! Having your pipeline defined in YAML right inside your code repository makes it really easy to evolve the infrastructure continuously.

Conclusion

In this blog post, I explained how to implement continuous infrastructure deployment using Terraform and Azure Pipelines. It concludes (for now :-)) this series about using Terraform on Microsoft Azure. I tried to explain the best practices you can implement in terms of Terraform source code organization and modularization, remote state management, continuous integration and testing, and finally, in this post, continuous deployment.

I really hope that you have enjoyed reading it and that it helps you implement infrastructure as code best practices in your projects!


Cheers!


Any question about this post? Feel free to drop a comment below or contact me on Twitter @jcorioland