Like many .NET developers I’ve been keeping an eye on .NET Standard, but so far haven’t had much cause to use it for my own projects. My NAudio open source library is heavily dependent on lots of Windows desktop APIs, so there isn’t much incentive to port it to .NET Standard. However, another of my open source audio projects, NLayer, a fully managed MP3 decoder, is an ideal candidate. If I could create a .NET Standard version of it, it could be used from .NET Core, UWP, Xamarin and Mono.

The first step was to move to VS 2017 and replace the NLayer csproj file with one that would build as a .NET Standard package. The new csproj file format is delightfully simple, as it no longer requires us to specify each source file individually. I went for .NET Standard 1.3 and told it to auto-create NuGet packages for me, another nice capability of VS 2017:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.3</TargetFramework>
    <GeneratePackageOnBuild>True</GeneratePackageOnBuild>
  </PropertyGroup>
</Project>

Then I attempted to compile. There were a few minor errors to fix: Thread.Sleep wasn’t available, so I switched to Task.Delay instead, and Stream.Close needed to be replaced with Stream.Dispose. But those changes aside, it was relatively painless.
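
The kind of change involved looks something like this (an illustrative sketch rather than the actual NLayer code):

using System.IO;
using System.Threading.Tasks;

static class NetStandard13Substitutions
{
    // Stream.Close isn't part of the .NET Standard 1.3 surface area, but Dispose does the same job
    static void CloseStream(Stream stream)
    {
        stream.Dispose(); // was stream.Close()
    }

    // Thread.Sleep isn't available either, so block on Task.Delay instead
    static void Pause()
    {
        Task.Delay(500).Wait(); // was Thread.Sleep(500)
    }
}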

Next I wanted to use the new .NET Standard version of NLayer within my NLayer.NAudioSupport project. This project references both NLayer and NAudio, and is itself a .NET 3.5 library. Unfortunately, when I tried to build I was told that a .NET 3.5 project cannot reference a .NET Standard 1.3 library. Now because NAudio is .NET 3.5, it wasn’t an option to convert NLayer.NAudioSupport to .NET Standard, so I needed another solution.

I consulted the compatibility matrix, which made it clear that I needed to be on at least .NET 4.6 to be able to reference a .NET Standard 1.3 project. So I changed NLayer.NAudioSupport to target .NET 4.6 and, sure enough, everything compiled and worked.

However, it seemed a shame that I was now forcing a major .NET version upgrade on all users of NLayer.NAudioSupport. NAudio is used a lot by companies who lag a long way behind the latest versions of .NET. So is there any way to keep support for .NET 3.5 for those who want it, in addition to supporting .NET Standard 1.3?

Well, we can multi-target .NET frameworks, and this is very easily done in the new csproj syntax. Instead of a TargetFramework node, we use a TargetFrameworks node containing a semicolon-separated list of frameworks. So I just added net35:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard1.3;net35</TargetFrameworks>
    <GeneratePackageOnBuild>True</GeneratePackageOnBuild>
  </PropertyGroup>
</Project>

Now when we build, it creates two assemblies – a .NET Standard library and a .NET 3.5 one. And the auto-generated NuGet package contains them both. But what happens if the same code won’t compile for both targets? In our case, for the .NET 3.5 build we needed to revert back to Thread.Sleep again.

We can do this by taking advantage of conditional compilation symbols, which in our case will be NETSTANDARD1_3 or NET35. This allows me to use whichever API is available on the target platform:

#if NET35 
    System.Threading.Thread.Sleep(500);
#else
    System.Threading.Tasks.Task.Delay(500).Wait();
#endif

And with that, I now have a NuGet package containing versions of NLayer that can be used on a huge range of .NET platforms. If you’re the maintainer of an open source library and you’ve been ignoring .NET Standard so far, maybe it’s time for another look.

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.

In serverless architectures, it’s quite common to use a file storage service like Azure Blob Storage or Amazon S3 instead of a traditional web server. One of the main attractions of this approach is that it can work out a lot cheaper: you pay only for the data you store and transfer, and there are no fixed monthly fees.

To get started with this approach, we need to create a storage account, copy our static web content into a container, and make sure that container is marked as public.

In this post, I’ll show how that can be done with a mixture of PowerShell and the AzCopy utility.

The first task is to create ourselves a storage account:

# Step 1 - get connected and pick the subscription we are working with
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "MySubscription"

# Step 2 - create a resource group in our preferred location
$resourceGroupName = "MyResourceGroup"
$location = "northeurope"

New-AzureRmResourceGroup -Name $resourceGroupName -Location $location

# Step 3 - create a storage account and put it into our resource group
$storageAccountName = "mytempstorage" # has to be unique
New-AzureRmStorageAccount -ResourceGroupName $resourceGroupName -AccountName $storageAccountName -Location $location -Type "Standard_ZRS"

# Step 4 - get hold of the storage key, we'll need that to call AzCopy
$storageKeys = Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName
$key = $storageKeys.value[0]

Now, we want to copy our static web content into a container in our storage account. There are PowerShell commands that will let us do this file by file, but a super easy way is to use the AzCopy utility, which you need to download and install first.

Next we need to specify the source folder containing our static web content, the destination address in blob storage, and the access key for writing to that container. We need some flags as well: /S to recurse through folders, /Y to suppress the confirmation prompt when overwriting, and /SetContentType to make sure the MIME types of our HTML, JavaScript and CSS are set to sensible values instead of just application/octet-stream.

$azCopy = "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe"
$websiteFolder = "D:\Code\MyApp\wwwroot\"
$containerName = "web"

& $azCopy /Source:$websiteFolder /Dest:https://$storageAccountName.blob.core.windows.net/$containerName/ /DestKey:$key /S /Y /SetContentType

You might think we’re done, but we do need to ensure our container is set to “blob” mode so that its blobs are publicly accessible without the need for SAS tokens. We can do this with Set-AzureStorageContainerAcl, but that command works on the “current” storage account, so first we need to call Set-AzureRmCurrentStorageAccount to specify what the current storage account is.

Set-AzureRmCurrentStorageAccount -StorageAccountName $storageAccountName -ResourceGroupName $resourceGroupName
Set-AzureStorageContainerAcl -Name $containerName -Permission Blob

Now we launch our website, and we should see it running in the browser, downloading its assets directly from our blob storage container:

Start-Process -FilePath "https://$storageAccountName.blob.core.windows.net/$containerName/index.html"

The next step you’d probably want to take is to configure a custom domain that points to this container. Unfortunately, Azure Blob Storage doesn’t directly support this (at least not if we want to use HTTPS), but there are a couple of workarounds: one is to use Azure Functions Proxies, and the other is to use Azure CDN. Both add a small additional cost, but it’s still a serverless “pay only for what you use” pricing model, so it should still work out more cost effective than hosting on a traditional web server.

Hopefully this tutorial gives you a way to get started automating the upload of your SPA to blob storage. There are plenty of alternative ways of achieving the same thing, but you may find this to be a quick and easy way to get started with blob storage hosting of your static web content.

When you use queues, messages are read off in the order they are placed into the queue. This means that if there are 1000 messages in your queue, and now you want to send another message that is top priority, there’s no easy way to force it to the front of the queue.

The solution to this problem is to use “priority queues”. This allows high priority messages to get serviced immediately, irrespective of how many low priority messages are waiting.

There are a few different options for how to implement priority queues in Azure Service Bus. We can choose how we partition the messages into priorities: either by using multiple queues, or by using multiple subscriptions on a topic. And we can also choose how we read from the queues: either with multiple simultaneous listeners, or with a round-robin technique.

Sending Technique 1: Multiple Queues

A very simple way to achieve priority queues is to have two (or more) queues. One queue is for high priority messages, and the other for low priority. Whenever you send a message, you pick which queue to send it to. So this technique assumes that the code sending the message knows whether it should be high priority or not, and also knows how many priority queues there are.

In this simple code sample, we send three messages to the low priority queue and two to the high. We need two queue clients and to know the names of both queues to achieve this:

var clientHigh = QueueClient.CreateFromConnectionString(connectionString, "HighPriorityQueue");
var clientLow = QueueClient.CreateFromConnectionString(connectionString, "LowPriorityQueue");
clientLow.Send(new BrokeredMessage("Low 1"));
clientLow.Send(new BrokeredMessage("Low 2"));
clientLow.Send(new BrokeredMessage("Low 3"));
clientHigh.Send(new BrokeredMessage("High 1"));
clientHigh.Send(new BrokeredMessage("High 2"));

Sending Technique 2: One Topic with Multiple Subscriptions

An alternative approach is to make use of Azure Service Bus topics and subscriptions. With this approach, the messages are all sent to the same topic. But a piece of metadata is included with the message that can be used to partition the messages into high and low priorities.

So in this case we need a bit more setup. We’ll need a method to send messages with a Priority property attached:

void SendMessage(string body, TopicClient client, int priority)
{
    var message = new BrokeredMessage(body);
    message.Properties["Priority"] = priority;    
    client.Send(message);
}

And this allows us to send messages with priorities attached. We’ll send a few with priority 1, and a couple with priority 10:

var topicClient = TopicClient.CreateFromConnectionString(connectionString, "MyTopic");

SendMessage("Low 1", topicClient, 1);
SendMessage("Low 2", topicClient, 1);
SendMessage("Low 3", topicClient, 1);
SendMessage("High 1", topicClient, 10);
SendMessage("High 2", topicClient, 10);

But for this to work, we also need to have pre-created some subscriptions that are set up to filter based on the Priority property. Here’s a helper method to ensure a subscription exists and has a single rule set on it:

SubscriptionClient CreateFilteredSub(string topicName, string subscriptionName, RuleDescription rule)
{
    if (!namespaceManager.SubscriptionExists(topicName, subscriptionName))
    {
        namespaceManager.CreateSubscription(topicName, subscriptionName);
    }
    var rules = namespaceManager.GetRules(topicName, subscriptionName);
    var subClient = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscriptionName);
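    // remove any existing rules (including the $Default rule) so that only the rule we supply applies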
    foreach (var ruleName in rules.Select(r => r.Name))
    {
        subClient.RemoveRule(ruleName);
    }
    subClient.AddRule(rule);
    return subClient;
}

Now we can use this method to create our two filtered subscriptions, one for messages whose priority is >= 5, and one for those whose priority is < 5:

var subHigh = CreateFilteredSub("MyTopic", "HighPrioritySub", new RuleDescription("High", new SqlFilter("Priority >= 5")));
var subLow = CreateFilteredSub("MyTopic", "LowPrioritySub", new RuleDescription("Low", new SqlFilter("Priority < 5")));

Note that you must take care that your filters result in every message going to one or the other of the subscriptions. It would be possible if you weren’t careful with your filter clauses to lose messages or to double-process them.

So this technique is more work to set up, but it removes knowledge of how many priority levels there are from the sending code. You could partition the priorities into more subscriptions, or change how priorities are determined (for example, by using different message metadata), without necessarily having to change the code that sends the messages.

Receiving Technique 1: Simultaneous Listeners

We’ve seen how to partition our messages into high and low priority queues or subscriptions, but how do we go about receiving those messages and processing them?

Well, the easiest approach by far is simply to listen on both queues (or subscriptions) simultaneously. For example, one thread listens on the high priority queue and works through that, while another thread listens on the low priority queue. You could assign more threads, or even separate machines, to service each queue, using the “competing consumers” pattern.
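
A minimal sketch of this, using the same older Microsoft.ServiceBus.Messaging API as the samples above, might look like the following (HandleMessage stands in for your own processing code):

var clientHigh = QueueClient.CreateFromConnectionString(connectionString, "HighPriorityQueue");
var clientLow = QueueClient.CreateFromConnectionString(connectionString, "LowPriorityQueue");

// each client gets its own message pump, so high and low priority messages
// are processed in parallel, independently of each other
var options = new OnMessageOptions { AutoComplete = true, MaxConcurrentCalls = 1 };
clientHigh.OnMessage(m => HandleMessage(m), options);
clientLow.OnMessage(m => HandleMessage(m), options);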

The advantage of this approach is that conceptually it’s very simple. The disadvantage is that if both high and low priority queues are full, we’ll be simultaneously doing some high and some low priority work. That might be fine, but if there could be database contention introduced by the low priority work, you might prefer that all high priority messages are handled first, before doing any low priority work.

Receiving Technique 2: Round Robin Listening

So the second technique is simply to check the high priority queue for messages, and if there are any, process them. Once the high priority queue is empty, check the low priority queue for a message and process it. Then go back and check the high priority queue again.

Here’s a very simplistic implementation for two QueueClients (it would be exactly the same for two SubscriptionClients if you were using a topic):

void PollForMessages(QueueClient clientHigh, QueueClient clientLow)
{
    bool gotAMessage = false;
    do
    {
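        // if we got a message last time round, use a short timeout so we quickly fall
        // through to check the low priority queue; when idle, a single long-polling
        // 30 second wait keeps the number of billable requests down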
        var priorityMessage = clientHigh.Receive(gotAMessage ? TimeSpan.FromSeconds(1) : TimeSpan.FromSeconds(30));
        if (priorityMessage != null)
        {
            gotAMessage = true;
            HandleMessage(priorityMessage);
        }
        else
        {
            var secondaryMessage = clientLow.Receive(TimeSpan.FromSeconds(1));
            if (secondaryMessage == null)
            {
                gotAMessage = false;
            }
            else
            {
                HandleMessage(secondaryMessage);
                gotAMessage = true;
            }
        }
    } while (true);
}

The only complex thing here is that I’m trying to change the timeouts on calls to Receive to avoid spending too much money if both queues are empty for prolonged periods. With Azure Service Bus you pay (a very small amount) for every call you make, so checking both queues every second might get expensive.

No doubt this algorithm could be improved on, and it would need to be fine-tuned for the specific needs of your application, but it does show that it’s not too hard to set up listeners that are guaranteed to process all available high priority messages before they begin working on the low priority ones.

Last week I wrote about so-called “best practices”, and one coding style that’s often promoted as a “best practice” is to “prefer convention over configuration”.

What exactly does this mean? Well the idea is that wherever possible we attempt to remove the need to explicitly configure things, and instead rely on sensible (but overridable) defaults.

A good example of this approach is ASP.NET. If I want to serve a page at /products then I create a class called ProductsController that inherits from Controller and add a method with the signature public ActionResult Index(). And the ASP.NET framework simply uses reflection to work out that any HTTP GET requests coming into that URL should create an instance of ProductsController and call the Index method. I don’t need to add a line of configuration somewhere to explicitly specify that this should happen.
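
In code, the convention amounts to nothing more than this (a minimal sketch):

public class ProductsController : Controller
{
    // by convention, GET /products routes here with no explicit configuration
    public ActionResult Index()
    {
        return View();
    }
}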

There are a lot of advantages to this convention over configuration approach. First of all, it makes it very easy to add new components, simply by copying examples already present in the code. If I need a new orders controller, I can easily see how to do it by looking at the other controllers. This can make it very easy for people new to the project to extend it.

This is also an excellent example of the “open closed principle” in action. A convention over configuration approach means that you can add a new feature without having to change any existing code at all. You simply add a new class that implements a certain interface or is named in a particular way. This has the side benefit of eliminating merge conflicts in classes or files that contain a lot of configuration, which usually experience a lot of “code churn”.

Convention over configuration is also used commonly with setting up message or command handlers. Simply implement the IHandle<T> interface, and some reflection code behind the scenes will discover your handler and wire it up appropriately. Again this makes a developer’s job very easy – need to add a new message or command handler? Just follow the pattern of the other ones.
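
For example, a handler might look something like this (the IHandle<T> interface is whatever your framework defines; OrderPlaced and the handler class here are hypothetical):

public class OrderPlaced
{
    public int OrderId { get; set; }
}

// no registration code needed: reflection-based scanning discovers this class
// because it implements IHandle<T>, and wires it up to OrderPlaced messages
public class OrderPlacedHandler : IHandle<OrderPlaced>
{
    public void Handle(OrderPlaced message)
    {
        Console.WriteLine($"Processing order {message.OrderId}");
    }
}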

So it would seem that “convention over configuration” makes a lot of sense. But are there any drawbacks?

One criticism is that this kind of approach can seem like “magic”, making it hard for new starters on a project to understand how it works. Often the IDE’s “find all references” feature will return nothing when these conventions are being used, because reflection is typically used at run-time to discover the methods to be called. It can leave developers wondering “how on earth does this even work?”

Generally speaking, the more ubiquitous a convention is, the easier it will be for developers to learn and understand. The conventions in a framework like ASP.NET make sense because they are used over and over again – meaning the time invested in learning them is well spent. But beware of creating lots of conventions that only get used in one or two places. This introduces unnecessary additional learning for developers with minimal benefit.

A particularly painful point can be the mechanism by which you override the “sensible defaults” of the convention. How is a developer supposed to know how to do that? In ASP.NET there are attributes that can be used to override the route used by an action on a controller. That’s fine, because ASP.NET is well documented, but if it’s a bespoke convention you’ve invented for your project, you’ll need to make sure all the information about how your convention works is readily available.
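
For example, in ASP.NET MVC you can opt a single action out of the routing convention with an attribute (a hypothetical route; attribute routing needs to be enabled via MapMvcAttributeRoutes in MVC 5):

public class ProductsController : Controller
{
    // overrides the conventional /products route for this action only
    [Route("catalogue/all-products")]
    public ActionResult Index()
    {
        return View();
    }
}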

Another disadvantage is that conventions can sometimes produce unexpected results. A good example: because ASP.NET uses reflection to find all classes that inherit from a base Controller class, if someone happens to reference an assembly from another project that also contains some controllers, you might find you’re exposing new endpoints that you weren’t intending to. This happened once on a project I worked on and opened up a serious security hole. It’s the reason why some developers prefer the competing “best practice” of “explicit is better than implicit”. So whenever you use conventions, try to build in protections against these types of unintended consequences.

So, should you adopt “convention over configuration” for your own frameworks? Well, it comes back to the question of what problem you are trying to avoid. If it’s about eliminating repetitive and redundant configuration code, then it only makes sense to introduce a convention that is going to be applied many times. If it’s only used once or twice, it may not be worth it.

As I said in my “best practices” post, there isn’t one clear right way to approach every problem in software development. Conventions remove some problems, but introduce others. So you need to understand the trade-offs in order to make a decision about what makes sense for you. Used judiciously, conventions can help developers fall into the “pit of success” – it should be easier to get it right than to get it wrong.

Let me know in the comments how you’ve got on with custom conventions in your own frameworks. Did they make things better? Or did every new developer complain that they couldn’t understand how the code works?

There’s no such thing as a “best practice”. At least in software development. I’ve read countless articles on the “best practices” for database design, software architecture, deployment, API design, security, etc and it’s pretty clear that (a) no one can agree on what the best practices actually are, and (b) last year’s “best practice” frequently turns into this year’s “antipattern”.

I can however understand the motivation for wanting to define “best practices”. We know that there are a lot of pitfalls in programming. It’s frighteningly easy to shoot yourself in the foot. It makes sense to know in advance of starting out, how best to avoid making those mistakes. We’re also in such a rapidly moving industry, that we’re frequently stepping out into uncharted territory. Many of the tools, frameworks and technologies I’m using today I knew nothing about just five years ago. How can I be expected to know what the best way to use them is? I frequently find myself googling for “technology X best practices”.

So-called “best practices” emerge not by being definitively proved to be the “best” way of doing something, but simply by virtue of being better than another way. We tried approach A and it went horribly wrong after two weeks, so we tried approach B and got further. Now approach B is the “best practice”. But before long we’re in a real mess again, and now we’re declaring that approach C is the “best practice”.

A better name for “best practices” would be “better practices”. They emerge as a way of avoiding a particular pitfall. And because of this, it’s very unhelpful to present a set of “best practices” without also explaining what problem each practice is intended to protect us from.

When we understand what problem a particular best practice is attempting to save us from, it allows us to make an informed decision on whether or not that “best practice” is relevant in our case. Maybe the problem it protects us from is a performance issue at massive scale. That may not be something that needs to concern us on a project that will only ever deal with small amounts of data.

You might declare that a best practice is to “create a NuGet package for every shared assembly”. Or to “only use immutable classes”. Or that “no code must be written without writing a failing test for it first”. These might be excellent pieces of guidance that can greatly improve your software. But blindly followed, without understanding the reasoning behind them, they could actually make your codebase worse.

Most “best practices” are effective in saving you from a particular type of problem. But often they simply trade off one type of problem for another. Consider a monolithic architecture versus a distributed architecture. Both present very different problems and challenges to overcome. You need to decide which problems you can live with, and which you want to avoid at all costs.

In summary, even though I once created a Pluralsight course with “best practices” in the title, I don’t really think “best practices” exist. At best they are a way of helping you avoid some common pitfalls. But don’t blindly apply them all. Understand what they are protecting you from, and you will be able to make an informed decision about whether they apply to your project. You may even be able to come up with “better practices” of your own that meet the specific needs and constraints of your project.