I was recently writing a .NET console app as a test harness for a web API, and I wanted to do some load testing. For this I wanted to (a) run from the cloud so the upload speed of my local internet connection would not be a limiting factor and (b) run multiple instances of my test harness in parallel.

It's actually very easy to achieve this by containerizing your .NET app and running it in Azure Container Instances, so in this post I'll explain the steps I took.

Containerizing your .NET console app

Visual Studio makes it super easy to containerize a .NET console app. Simply right-click the project and select Add | Docker Support, and it will automatically create a basic Dockerfile for you. You'll also be able to run and debug the Docker container locally, which was useful for me to ensure that my file path worked as expected in a Linux environment. (Note that you will need Docker Desktop or Rancher Desktop and WSL2 configured to run Linux containers on Windows.)

To build the container image, I used the docker build command from the folder containing my .sln file:

$IMAGE_NAME = "mytestapp"
docker build -t "$($IMAGE_NAME):1.0" -f src/mytestapp/Dockerfile .

Making it configurable

My test application needed a lot of configurable settings, some of which were secrets. I wanted to be able to use dotnet user-secrets for local running (to avoid needing to check any secrets into source control), and environment variables in production. There are a few ways you can set this up, but here's what I ended up using.

IConfiguration config = new ConfigurationBuilder()
    .SetBasePath(AppDomain.CurrentDomain.BaseDirectory)
    .AddJsonFile("appsettings.json")
    .AddEnvironmentVariables()
    .AddUserSecrets<Program>()
    .Build();

This meant I could load all my settings from appsettings.json and override them via environment variables or user secrets.

var loadTestOptions = config.Get<LoadTestOptions>();
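
For local runs, the secret values themselves came from dotnet user-secrets, while in the container the same keys are supplied as environment variables. Here's a rough sketch of both, using the ClientSecret and NumberOfThreads settings that appear later in this post (the actual values shown are just placeholders):

# Local development: keep secrets out of source control with dotnet user-secrets
# (run these from the project folder)
dotnet user-secrets init
dotnet user-secrets set "ClientSecret" "my-local-secret"

# When testing the container image locally, override settings with environment variables instead
docker run --rm -e "ClientSecret=my-local-secret" -e "NumberOfThreads=2" "$($IMAGE_NAME):1.0"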

Uploading to an Azure Container Registry

When using Azure Container Instances, it's recommended to store your images in an Azure Container Registry, which you can create with the az acr create command.
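
I won't show the full registry creation here, but a minimal example looks something like this (the resource group name and Basic SKU are placeholders I've assumed). Note the --admin-enabled flag, which we'll rely on shortly to fetch credentials:

$ACR_RES_GROUP = "myacr-resgrp"
az acr create --name "myacr" --resource-group $ACR_RES_GROUP `
    --sku Basic --admin-enabled true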

Then we just need to tag our container image correctly with docker tag, log in with az acr login, and use docker push to push our image to the ACR.

$ACR_SERVER = "myacr"
docker tag "$($IMAGE_NAME):1.0" "$ACR_SERVER.azurecr.io/$($IMAGE_NAME):1.0"
az acr login -n $ACR_SERVER
docker push "$ACR_SERVER.azurecr.io/$($IMAGE_NAME):1.0"

Running an Azure Container Instance

To enable our Azure Container Instance to pull the container image from the ACR, we will need some credentials. This is possible because I created my ACR with the --admin-enabled flag set. Note that there is a better way to configure this using a service principal (sketched below), but for this test I just used the admin credentials, which I can fetch like this:

$ACR_SUBSCRIPTION = "539a3d8c-68c8-4f4b-9cae-f10e0d73dcdd"
$ACR_PASSWORD = az acr credential show --name $ACR_SERVER `
    --query "passwords[0].value" --subscription $ACR_SUBSCRIPTION -o tsv
$ACR_USERNAME = az acr credential show --name $ACR_SERVER `
    --query "username" --subscription $ACR_SUBSCRIPTION -o tsv

And I need a resource group to keep my Azure Container Instances in. Note that if you're doing performance testing, it makes sense to keep all the Azure resources in the same region.

$RES_GROUP = "myacitest"
$LOCATION = "westeurope"
az group create -n $RES_GROUP -l $LOCATION

And now we are ready to create a container instance, which we can do with the az container create command. There are lots of options you can configure. Here I'm setting the restart policy to "Never" as I want my containerized test app to run once and then stop.

I am also using -e to set some environment variables, which can be used to override the default values in appsettings.json. This allows me to pass in secrets. Note that if you are supplying multiple environment variables, you should provide them space-separated after a single -e flag rather than repeating the -e flag, which results in only one of them being set.

$CONTAINER_INSTANCE = "mytestapp"
az container create --resource-group $RES_GROUP --name $CONTAINER_INSTANCE `
    --image "$ACR_SERVER.azurecr.io/$($IMAGE_NAME):1.0" `
    --registry-username $ACR_USERNAME --registry-password $ACR_PASSWORD `
    --restart-policy Never -e "ClientSecret=$MY_SECRET" "NumberOfThreads=6"

Viewing container output

My test app simply wrote its output to the console, so to find out how my test run went I simply needed to view the container logs. The az container logs command allows us to do this.

az container logs -n "$CONTAINER_INSTANCE" -g $RES_GROUP
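
Because the restart policy is Never, it can also be handy to check whether the container has finished before reading the logs. A small sketch, reusing the variables from above, is to query the container group's state (it typically reports Running while in progress and Succeeded or Stopped once done):

az container show -n "$CONTAINER_INSTANCE" -g $RES_GROUP --query "instanceView.state" -o tsv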

Bonus - mounting a file share

For one of my tests I needed to mount an Azure File Share to my container instance. Assuming you already have a storage account with a file share created, we need to gather the details of that storage account and its access key:

$STORAGE_ACCOUNT_NAME = "mystorageaccount"
$STORAGE_RESOURCE_GROUP = "mystorageaccount-resgrp"
$FILE_SHARE_NAME = "myfilesharename"
$STORAGE_ACCOUNT_KEY = az storage account keys list `
    --account-name $STORAGE_ACCOUNT_NAME `
    --resource-group $STORAGE_RESOURCE_GROUP `
    --query "[0].value" -o tsv

And then with a few additional parameters to az container create we can mount that file share to the desired location in our container instance.

$VOLUME_MOUNT_PATH = "/mnt/upload"
az container create --resource-group $RES_GROUP --name $CONTAINER_INSTANCE `
    --image "$ACR_SERVER.azurecr.io/$($IMAGE_NAME):1.0" `
    --registry-username $ACR_USERNAME --registry-password $ACR_PASSWORD `
    --restart-policy Never -e "ClientSecret=$MY_SECRET" "NumberOfThreads=6" `
    --azure-file-volume-account-name $STORAGE_ACCOUNT_NAME `
    --azure-file-volume-account-key $STORAGE_ACCOUNT_KEY `
    --azure-file-volume-share-name $FILE_SHARE_NAME `
    --azure-file-volume-mount-path $VOLUME_MOUNT_PATH

Running multiple containers in parallel

Part of the reason for wanting to use container instances was that I wanted to run several in parallel for a load test. Although you could just create the container instances one after the other with multiple PowerShell commands, it does take a minute or so for the az container create command to complete, so your first instances might start running several minutes before the final ones do.

This gave me the opportunity to learn about the Start-Job command in PowerShell, which enables us to run the az container create command for each instance in parallel. In this example I am creating three container instances. Note that one gotcha I ran into was that the -ScriptBlock cannot see PowerShell variables defined outside it, so I worked around this by passing them in as parameters to the script block.

for ($i = 1; $i -le 3; $i++) {
    Start-Job -ScriptBlock { param($RES_GROUP, $CONTAINER_INSTANCE_NAME, $ACR_SERVER, $IMAGE_NAME, $ACR_USERNAME, $ACR_PASSWORD, $MY_SECRET)
        az container create --resource-group $RES_GROUP --name $CONTAINER_INSTANCE_NAME `
            --image "$ACR_SERVER.azurecr.io/$($IMAGE_NAME):1.0" `
            --registry-username $ACR_USERNAME --registry-password $ACR_PASSWORD `
            --restart-policy Never `
            -e "ClientSecret=$MY_SECRET" "NumberOfThreads=6" } `
        -Arg "$RES_GROUP", "$CONTAINER_INSTANCE-$i", "$ACR_SERVER", $IMAGE_NAME, $ACR_USERNAME, $ACR_PASSWORD, $MY_SECRET
}

You can check on the progress of the jobs with the Get-Job command, which will give you a list of jobs, each with an id, and whether it is still running or has completed.

Get-Job

To see the result of a job (e.g. if the az container create command failed for some reason), you can use Receive-Job and pass in the id of the job (which you can find with Get-Job).

# see the result of a completed job
Receive-Job 6

And you can clean up completed jobs with Remove-Job (or delete them individually by id). Here I'm removing all completed jobs:

Remove-Job -State Completed

Cleaning up Azure resources

We can delete individual container instances with the az container delete command, or just delete them all in one go with az group delete on the resource group containing them all.

az container delete --resource-group $RES_GROUP --name $CONTAINER_INSTANCE
az group delete -n $RES_GROUP

Summary

Azure Container Instances are brilliant for running simple workloads in the cloud, and it's very straightforward to containerize your .NET application and use the Azure CLI to automate the whole process. Azure Container Instances are very quick to spin up and have a serverless pricing model, meaning you pay only for the time they run, which is great for use cases like mine where the container only needs to run for a few minutes.

Want to learn more about how easy it is to get up and running with Azure Container Instances? Be sure to check out my Pluralsight course Azure Container Instances: Getting Started.