
Waiting for external events

Azure Durable Functions makes it really easy to wait for an event from an external system with the DurableOrchestrationContext.WaitForExternalEvent method. A common use case is waiting for manual approval, but it's also very useful for calling any external system that has its own bespoke way of reporting completion (e.g. a webhook). That completion message can then be passed on to the Durable Functions orchestration with DurableOrchestrationClient.RaiseEventAsync.

It's also possible to time out while waiting for external events. This is especially important when waiting for human interaction, where you might never get a response, but it's also useful when integrating with slow or misconfigured third-party systems, where a response may not come back quickly enough.

I've blogged before about how you can wait for external events with a timeout, and in fact the technique I showed in that article has now been baked into the framework: the WaitForExternalEvent method now offers additional overloads that take a timeout, which greatly simplifies your code.
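
For example, a single approval with a timeout can now be as simple as something like this (a minimal sketch: I'm using the overload that throws a TimeoutException when no event arrives in time; there's also an overload that returns a default value instead):

try
{
    // wait up to 72 hours for an "Approval" event
    var approved = await ctx.WaitForExternalEvent<bool>("Approval", TimeSpan.FromHours(72));
    return approved ? "Approved" : "Rejected";
}
catch (TimeoutException)
{
    // no approval event arrived within the timeout
    return "Timed out";
}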

Awaiting multiple external events

In this post I want to consider a slightly more complex scenario. Let's suppose that we want to wait for approval from at least three people before we can proceed with a workflow, but there are five people who are able to provide approval. And we'd also like to time out if we don't get the required number of approvals within a certain timeframe so we can take a mitigating action.

The basic approach we are going to use is to create a single timeout task with DurableOrchestrationContext.CreateTimer, and then use WaitForExternalEvent to receive the approval events. Now, it would be possible to create a bunch of WaitForExternalEvent tasks at the same time, one for each required approval, so that when they have all completed, we've got the required number of approvals. However, I decided to take a slightly different approach, which allows for the scenario where a single approver accidentally provides more than one approval response.

So I have a loop in which I use Task.WhenAny to see what finishes first - the timeout task, or the WaitForExternalEvent task. If we receive an event, we update a HashSet of all the people who have approved so far, and if the number of approvers reaches the threshold then we can proceed. But if the timeout task wins, or one of the approvers rejects the message, then we exit the loop. If we receive an approval but haven't yet reached the threshold, then we simply loop back round and start another WaitForExternalEvent task.

Here's the code for my orchestrator function.

public static async Task<string> GetApprovalOrchestrator([OrchestrationTrigger]
            DurableOrchestrationContextBase ctx, ILogger log)
{
    var approvalConfig = ctx.GetInput<ApprovalConfig>();
    string result;
    var expireAt = ctx.CurrentUtcDateTime.AddMinutes(approvalConfig.TimeoutMinutes);
    for (var n = 0; n < approvalConfig.ApproverCount; n++)
    {
        // todo: send a message to each approver
        if (!ctx.IsReplaying) log.LogInformation($"Requesting approval from Approver {n + 1}");
    }

    var cts = new CancellationTokenSource();
    var timerTask = ctx.CreateTimer(expireAt, cts.Token);

    var approvers = new HashSet<string>();
    while (true) // slightly dangerous - we could count iterations and abort if we go round a very high number of times
    {
        var externalEventTask = ctx.WaitForExternalEvent<ApprovalResult>(ApprovalResultEventName);
        var completed = await Task.WhenAny(timerTask, externalEventTask);
        if (completed == timerTask)
        {
            result = $"Timed out with {approvers.Count} approvals so far";
            if (!ctx.IsReplaying) log.LogWarning(result);
            break; // end orchestration - we timed out
        }
        else if (completed == externalEventTask)
        {
            var approver = externalEventTask.Result.Approver;
            if (externalEventTask.Result.Approved)
            {
                approvers.Add(approver);
                if (!ctx.IsReplaying) log.LogInformation($"Approval received from {approver}");
                if (approvers.Count >= approvalConfig.RequiredApprovals)
                {
                    result = $"Approved ({approvers.Count} approvals received)";
                    if (!ctx.IsReplaying) log.LogInformation(result);
                    break;
                }
            }
            else
            {
                result = $"Rejected by {approver}";
                if (!ctx.IsReplaying) log.LogWarning(result);
                break;
            }
        }
        else
        {
            throw new InvalidOperationException("Unexpected result from Task.WhenAny");
        }
    }
    cts.Cancel(); // cancel the durable timer so the orchestration can complete
    return result;
}
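
For reference, here's roughly what the ApprovalConfig and ApprovalResult classes look like - a minimal sketch with just the properties the orchestrator uses (my sample app on GitHub has the real definitions):

public class ApprovalConfig
{
    public int ApproverCount { get; set; }      // how many approvers are asked
    public int RequiredApprovals { get; set; }  // how many approvals we need to proceed
    public int TimeoutMinutes { get; set; }     // how long to wait before giving up
}

public class ApprovalResult
{
    public string Approver { get; set; }
    public bool Approved { get; set; }
}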

Is it safe?

There are two potential issues with the orchestrator I showed above.

First, you'll notice that I have a while(true) loop in my orchestrator, which is potentially dangerous, as it could allow the event-sourcing history that Durable Functions uses to grow very large. But that's highly unlikely to happen in this particular scenario, as it's only possible if the same approver kept submitting endless approvals - something we could easily protect against in other ways. In my demo app, the approvers use an HTTP-triggered function to send their approval response to the workflow, so I could block repeat approvals at that level, before they ever reach the orchestrator (there's a sketch of one way to do that below).

Here's the function I use to pass on the approval to the workflow:

[FunctionName("SubmitApproval")]
public static async Task<IActionResult> SubmitApproval(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "SubmitApproval/{id}")] HttpRequest req,
    [OrchestrationClient] DurableOrchestrationClientBase client, string id, ILogger log)
{
    log.LogInformation("Passing on an approval result.");

    
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    var approvalResult = JsonConvert.DeserializeObject<ApprovalResult>(requestBody);
    if (string.IsNullOrEmpty(approvalResult.Approver))
        return new BadRequestObjectResult("Invalid Approval Result");
    if (string.IsNullOrEmpty(id))
        return new BadRequestObjectResult("Invalid Orchestration id");

    await client.RaiseEventAsync(id, ApprovalResultEventName, approvalResult);

    var status = await client.GetStatusAsync(id, false, false);
    return new OkObjectResult(status);
}
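
If I did want to block repeat approvals at this level, one option (just a sketch - this isn't in my sample app) would be for the orchestrator to publish the set of approvers so far with SetCustomStatus, and for SubmitApproval to check that before raising the event:

// in the orchestrator, after adding to the approvers HashSet:
ctx.SetCustomStatus(approvers);

// in SubmitApproval, before calling RaiseEventAsync:
var currentStatus = await client.GetStatusAsync(id, false, false);
var approversSoFar = currentStatus?.CustomStatus?.ToObject<HashSet<string>>();
if (approversSoFar != null && approversSoFar.Contains(approvalResult.Approver))
    return new BadRequestObjectResult("This approver has already responded");

Bear in mind this is only best-effort - the custom status is updated as the orchestrator processes events, so a rapid duplicate could still slip through, and the HashSet in the orchestrator remains the source of truth.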

The second issue is that Durable Functions used to have some race conditions where external events could be dropped in certain scenarios, making code like this risky. But the recent v1.8.0 release of Durable Functions resolved these outstanding issues, giving us confidence that all the external events sent to our orchestration will be safely received by our orchestrator function.

Try it out

I've uploaded my sample application to GitHub, so feel free to check it out. You can easily configure how many approvers are asked for approval and how many actual approvals are required before proceeding, as well as the timeout duration. The readme provides PowerShell instructions for testing the workflow.

Summary

Durable Functions not only makes implementing a "wait for external event" pattern with a timeout really straightforward, but is also flexible enough to let us wait in parallel for multiple events to be received before proceeding. The demo app I created shows one way of achieving this.

Want to learn more about how easy it is to get up and running with Durable Functions? Be sure to check out my Pluralsight course Azure Durable Functions Fundamentals.


With Azure App Service, you can host multiple "Web Apps" in a single "App Service Plan". The App Service Plan governs how much you pay. There are multiple pricing tiers, allowing you to host your websites on more powerful VMs, but you can also scale out your App Service Plan to multiple servers.

When you scale out an App Service Plan, each Web App hosted on the plan is replicated across all the servers. For example, if you had three Web Apps, and an App Service Plan with three instances, then all three of your Web Apps would be running on each instance, and traffic would be load balanced between them:

[Image: App Service Multiple Instances]

But what if you have different scaling requirements for each web app? Maybe one web app only needs a single instance, while another needs multiple instances. That way, you could make more efficient use of the instances in your App Service Plan by running only as many instances of each website as you actually need.

Per-App Scaling

This is possible thanks to the "per-app scaling" feature of App Service, which is available on the Standard pricing tier and above, and enables "high density hosting".

In this post I'll show you how to use Azure PowerShell to configure a simple scenario where we have three web apps (creatively named App 1, App 2 and App 3), and we want one, two, and three instances of them respectively, with our App Service Plan scaled out to three instances. So we'd like the web apps to be arranged something like this:

[Image: App Service Multiple Instances]

Creating an App Service Plan

First, we need to create our App Service Plan and enable per-app scaling. I'm going to use the new Azure PowerShell Az module to configure this. Normally, I like to use the Azure CLI, but we're still waiting for it to support per-app scaling, so this was a good opportunity for me to try the Az module for the first time.

Just like with the Azure CLI, if we're using the PowerShell Az module for the first time, we need to log into Azure with Connect-AzAccount and make sure we're using the correct subscription with Set-AzContext:

# Get logged into Azure
Connect-AzAccount

# make sure we've selected the right subscription
Set-AzContext -SubscriptionName "My Subscription"

Next, we'll create a resource group for our App Service Plan with New-AzResourceGroup, and then create the App Service Plan itself with New-AzAppServicePlan, choosing the Standard pricing tier, setting up three workers, and enabling per site scaling with the -PerSiteScaling flag:

$ResourceGroup = "HighDensityTest"
$Location = "westeurope"
New-AzResourceGroup -Name $ResourceGroup -Location $Location

$AppServicePlan = "HighDensityTest"
New-AzAppServicePlan -ResourceGroupName $ResourceGroup -Name $AppServicePlan `
                            -Location $Location `
                            -Tier Standard -WorkerSize Small `
                            -NumberofWorkers 3 -PerSiteScaling $true

Create Web Apps

To help me test, I built a very simple ASP.NET Core website with a single Razor page that reads an app setting called "AppName" and also shows the machine name of the server instance that responded to the request:

@page
@using Microsoft.Extensions.Configuration
@inject IConfiguration Configuration

<h1>Web App: @Configuration["AppName"]</h1>
<h2>Served by @Environment.MachineName</h2>

I then created the following PowerShell function to create and configure a web app for high-density hosting. It first creates the web app with New-AzWebApp, then publishes the application code with Publish-AzWebApp (the zip was created by running dotnet publish on my ASP.NET Core app and zipping up the publish folder). Next, it uses Get-AzWebApp to get the details of that web app and updates SiteConfig.NumberOfWorkers to the desired number of instances for this web app. It also adds a new application setting containing the site name, which my web app displays when the page loads. Finally, it writes those settings back to the web app with Set-AzWebApp.

function New-HighDensityWebApp {
    param( [string]$ResourceGroupName,
           [string]$AppServicePlanName,
           [string]$WebAppName,
           [int]$NumberOfWorkers,
           [string]$ArchivePath)

    # Create the web app on the existing App Service Plan
    New-AzWebApp -ResourceGroupName $ResourceGroupName -AppServicePlan $AppServicePlanName `
        -Name $WebAppName

    # Deploy the zipped application code
    Publish-AzWebApp -ArchivePath $ArchivePath -ResourceGroupName $ResourceGroupName -Name $WebAppName -Force

    # Get the app we want to configure to use "PerSiteScaling"
    $newapp = Get-AzWebApp -ResourceGroupName $ResourceGroupName -Name $WebAppName

    # Modify the NumberOfWorkers setting to the desired value
    $newapp.SiteConfig.NumberOfWorkers = $NumberOfWorkers
    $newapp.SiteConfig.AppSettings.Add( [Microsoft.Azure.Management.WebSites.Models.NameValuePair]::new("AppName",$WebAppName))

    # Post the updated app back to Azure
    Set-AzWebApp $newapp
}

Now, with this function in place, I can easily create my three web apps, each with a different number of workers:

$ArchivePath = "publish.zip"
New-HighDensityWebApp -ResourceGroupName $ResourceGroup -AppServicePlanName $AppServicePlan `
                      -WebAppName "mheath-hd-1" -NumberOfWorkers 1 -ArchivePath $ArchivePath
New-HighDensityWebApp -ResourceGroupName $ResourceGroup -AppServicePlanName $AppServicePlan `
                      -WebAppName "mheath-hd-2" -NumberOfWorkers 2 -ArchivePath $ArchivePath
New-HighDensityWebApp -ResourceGroupName $ResourceGroup -AppServicePlanName $AppServicePlan `
                      -WebAppName "mheath-hd-3" -NumberOfWorkers 3 -ArchivePath $ArchivePath

Testing it out

To test it out, let's make a request to my first web application that should have one instance:

(iwr "https://mheath-hd-1.azurewebsites.net/").content

No matter how many times I issue the command, I should always see the same instance name in the response:

<h1>Web App: mheath-hd-1</h1>
<h2>Served by RD0003FF55813B</h2>

However, if I do the same for the web app configured for three workers:

(iwr "https://mheath-hd-3.azurewebsites.net/").content

Then I'll see it cycling through each of the three instances of my web app as I make requests:

<h1>Web App: mheath-hd-3</h1>
<h2>Served by RD0003FF8F5D22</h2>

...
<h1>Web App: mheath-hd-3</h1>
<h2>Served by RD0003FF55813B</h2>

...
<h1>Web App: mheath-hd-3</h1>
<h2>Served by RD0003FF8F4B8F</h2>

That's great - it's working!
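
Incidentally, an easy way to fire off a batch of requests and watch the instance name change is a simple PowerShell loop:

1..6 | ForEach-Object { (iwr "https://mheath-hd-3.azurewebsites.net/").Content }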

Limitations

So it's really nice and easy to configure per-app scaling. But there are a few limitations to be aware of.

First, the Azure Portal doesn't have any UI for viewing or configuring these settings, so you'll need to script this with PowerShell or ARM templates.
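
For example, to check what a web app is currently set to, you can read back the same SiteConfig property we configured earlier:

# see how many workers this web app is currently requesting
$app = Get-AzWebApp -ResourceGroupName $ResourceGroup -Name "mheath-hd-3"
$app.SiteConfig.NumberOfWorkers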

Second, there is no control over the scheduling of web apps onto individual nodes like you would have with a container orchestrator such as Kubernetes, where you have concepts like "affinity" and "anti-affinity". For example, if I have four web apps: A, B, C and D, and two server instances, and I'd like A and B to be hosted together on one instance, and C and D hosted together on the other, there is no way to request that.

I also tried asking for four workers for a particular web app on my three-instance App Service Plan. It didn't error, but it seemed that only two servers were hosting my web app. Maybe it put two instances of the web app on two of the three servers, but I had no easy way of telling whether that was the case.

Summary

Per-app scaling is a welcome addition to Azure App Service that could help you make more efficient use of an App Service Plan (which could be very valuable for expensive hosting plans like an App Service Environment). But there is still a need for improved tooling and visibility, and the feature currently lacks the flexibility to control exactly how your web apps are distributed across the server instances.

The PowerShell script and sample web app code are available on GitHub at markheath/app-service-per-app-scaling.



I just got back from Microsoft Ignite the Tour London, where I spent most of my time talking with attendees about the various ways you can run containers in Azure.

[Image: Ignite the Tour London]

Many people I spoke to were already on the journey to containerizing their apps, and some were already using Kubernetes. But there were also a lot of people who were very new to the concepts of Docker and Kubernetes, and who were simply interested in finding out whether they needed to containerize at all, and what the best way to run their containers on Azure would be.

So in this post, I want to summarize some of my thoughts on whether you should consider containerizing your Azure applications.

What are your pain points?

The first, and most important, question is: "What are your pain points?" Docker and Kubernetes offer solutions to many common problems, but if you don't have those problems, then you may not need to transition to containers right now.

To determine whether containers would bring value to your scenario, consider questions like: Is it difficult to build my application? Maybe your app requires a lot of custom SDKs and tools to be installed on developer and build machines. If so, containerizing the build environment with techniques like multi-stage builds can be a great way to simplify building the code and ensure you get consistent output.
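
For example, a multi-stage Dockerfile for an ASP.NET Core app might look something like this (a minimal sketch - the image tags and project name are illustrative):

# build stage - uses the full SDK image, so build machines need nothing but Docker
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# final stage - only the lightweight runtime image, with no SDK or build tools
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]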

Is it difficult to run my application locally? Many cloud applications pose a real problem for development teams who need to run multiple services - maybe several microservices, plus third-party applications like RabbitMQ or Elasticsearch - just to be able to test the application locally. By containerizing your application, you can make it much simpler to run locally, by defining a simple Docker Compose file.
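
A minimal Compose file for that kind of setup might look something like this (a sketch - the service names, ports and versions are illustrative):

version: '3'
services:
  myapp:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - rabbitmq
      - elasticsearch
  rabbitmq:
    image: rabbitmq:3-management
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0

With that in place, a single docker-compose up brings the whole stack up locally.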

Is it difficult to deploy my application? Does deploying your application involve running complex setup scripts to get a Virtual Machine configured in just the right way to run the app? Even if you've automated that with tools like Puppet or Chef, containerization often proves a much simpler way of ensuring that your application's dependencies are met, and gives you greater flexibility in where you run that application.

Is it difficult to orchestrate my application? If you have built a microservices application, that brings a whole host of new challenges. All the microservices need to be able to communicate with each other; you want to monitor their health, scale them independently, and perform more advanced deployment patterns like blue-green swaps or rolling upgrades. Rolling your own solutions to these problems is complex and error-prone, which is why container orchestration platforms like Kubernetes have become so popular in recent years - they provide solutions to many of the operational challenges that microservices bring.

Is it difficult to modernize my application? I had quite a lot of people asking me whether they should rewrite their .NET Framework apps in .NET Core or just containerize them. My answer was that containerizing them would very likely be much quicker. Of course, the resulting containers would be Windows containers, which places some constraints on which services can host them, but one of the great benefits of containerization is that once you've taken the trouble to create a Dockerfile for your legacy app (which ordinarily shouldn't take more than a day or two), you've opened the door to all the other benefits that containers offer.

My app is running in a VM

If your application is currently running in a Virtual Machine in Azure, then there are a couple of ways in which containerization might help you. The first we've already mentioned - it removes the need for any prerequisites other than Docker itself to be installed on the host machine. So all the complexity of configuring a special snowflake VM that knows how to run your app is taken away; it can now run on any VM with Docker installed, and could easily be moved in the future to run on a Kubernetes cluster.

A second key benefit is density. A virtual machine has a fixed monthly cost whether it's working hard or sitting idle. So if your application spends a lot of its time idle, there can be a strong temptation to host additional applications on the same VM to reduce costs. But then we have to install the dependencies for both applications on the same server, with the potential for conflicts between them. By containerizing applications, it becomes relatively trivial to deploy several containers to the same host VM without worrying about their dependencies interfering with one another.

My app is a website running on App Service

One very common scenario I encountered at Ignite was people running web applications on Azure App Service. App Service is a very flexible hosting platform that supports the majority of the most popular web programming frameworks including Node.js, PHP, Java and of course ASP.NET and ASP.NET Core. You don't need to containerize your application to run it on App Service.

However, App Service does support containerized web apps with what's known as Web App for Containers. This supports both Linux containers and (currently in preview) Windows containers. It means you can easily run already-containerized apps (like WordPress, for example), use frameworks that aren't directly supported by App Service, or simply take greater control over exactly which version of a framework you want to use.

Many of the people I spoke to assumed that if they were running say a regular .NET Framework ASP.NET application on App Service, then they ought to containerize it. But my advice was that actually if you're happy on App Service, and the features it offers (which are particularly great for running publicly facing websites and APIs), then there isn't a huge need to containerize. In fact, if you did containerize your .NET Framework ASP.NET app, you'd end up with a Windows container, which is currently significantly more expensive to host on App Service (requiring the Premium Container (Windows) Plan) than running the same application directly as a regular Web App.

So you might not want to containerize in this scenario. One key reason you would consider doing so is when you've got multiple web apps that all form part of a larger microservices application - maybe some are "front-end" and others "back-end" services. In that scenario, it's likely that before too long you'll want some of the more advanced orchestration capabilities that a platform like Kubernetes offers. So you might want to start creating Dockerfiles for all your web apps, even if you continue to host them on App Service for now.

My app is serverless running on Azure Functions

Another question I was asked at Ignite was whether it would be a good idea to run Azure Functions in a container. That's certainly possible: Microsoft provides a base image with the Azure Functions runtime installed, meaning you just need to add your Function App code.
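
The Dockerfile can be very short indeed - something like this (a sketch: the exact base image and tag depend on your runtime version and language, and func init --docker from the Azure Functions Core Tools can generate one for you):

FROM mcr.microsoft.com/azure-functions/dotnet:2.0
# copy the published Function App into the folder the runtime serves from
COPY ./bin/Release/netstandard2.0/publish /home/site/wwwroot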

However, this is another scenario in which containers might not make sense. If you deploy your Function App in a container, you lose the benefits of the consumption pricing plan, where you pay only for the duration your functions run and you get automatic scale-out. By containerizing your function app, you'd need to keep it running yourself and write your own logic to scale out to multiple instances of the container.

So containerized Function Apps probably only make sense if you want to run outside Azure (although in that case you may also need to avoid integrating with other Azure services like App Insights or Storage Accounts), or maybe if you have an AKS cluster and want everything to run inside it for consistency or security reasons.

What might make more sense is implementing a single Azure Function as a container, maybe because it's long-running or has very specific dependencies. That's not currently possible, but I've been experimenting with creating ACI containers from Azure Functions, and hopefully I'll have something to share on that front soon.

What about my databases?

One question that came up very frequently at Ignite was how to run a SQL Server database in a container. It is possible to run databases in containers, provided you understand that they should write their data to a volume, so that the data's lifetime is not tied to that particular container instance.
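
For example, running SQL Server for Linux in a container with its data on a named Docker volume might look something like this (a sketch - the password and image tag are illustrative):

docker run -d --name sql1 \
    -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Your_Passw0rd!" \
    -p 1433:1433 \
    -v sqldata:/var/opt/mssql \
    mcr.microsoft.com/mssql/server:2017-latest

Because the data lives on the sqldata volume rather than in the container's writable layer, you can remove and recreate the container without losing the databases.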

But personally, if I were running a containerized application in Azure and needed a SQL Server database, I would just use Azure SQL Database, which gives me a "Platform as a Service" database experience. And there are multiple PaaS database offerings in Azure, including Cosmos DB, Azure Database for PostgreSQL, and Azure Database for MySQL. So there is no need to containerize your database in production.

You might, however, want to use containerized databases for development and test environments, as this can be a way of keeping costs down, as well as allowing you to spin up container images pre-populated with test data. And that's another benefit of containerizing your applications - it becomes very easy to support different configurations for the different environments into which you deploy.

What if my app requires Windows?

Another question that came up very frequently was about Windows container support on Azure Kubernetes Service. If you have legacy Windows apps (e.g. regular .NET Framework apps), they can't run inside Linux containers, so you need a platform that can host Windows containers. Currently, Azure Container Instances and Azure Service Fabric support Windows containers, with Web App for Containers offering preview support. But unfortunately, Azure Kubernetes Service (AKS) still doesn't offer direct Windows support (unless you count adding Azure Container Instances as a virtual node with the Virtual Kubelet).

This isn't due to a limitation in Kubernetes - in fact, you can already configure a Kubernetes cluster yourself to include Windows nodes - but we're still waiting for AKS to support this. I'm afraid I don't have any insider information about when this might be coming, but I know it's being worked on, and I'm hoping it's something we see this year.

Summary

The question "should I containerize my cloud application" doesn't have a straightforward yes or no answer, but it depends on what pain points you are experiencing, and what technologies you are currently using. Having said that, there's a good reason why Docker and Kubernetes are exploding in popularity - the problems they solve are ones that are common to many software projects, particularly distributed cloud applications.

If you'd like to learn more about running containers on Azure, I've created an introductory course on Pluralsight (available to everyone for free as part of Microsoft Learn), called Deploying and Managing Containers on Azure.

I can also highly recommend a great book "Docker on Windows" by my friend Elton Stoneman, which provides excellent guidance for containerizing your legacy Windows applications.

Want to learn more about how easy it is to get up and running with containers on Azure? Be sure to check out my Pluralsight course Microsoft Azure Developer: Deploy and Manage Containers.