
The Azure Functions team recently asked everyone to check the versions of their Function Apps in preparation for the forthcoming 2.0 release.

Basically, you need to check the app settings for each function app and see whether the FUNCTIONS_EXTENSION_VERSION setting has any value other than ~1.

Now you can do this fairly easily in the portal, but if, like me, you have a lot of function apps, you might be wondering if there is a way to automate this. And, thanks to the Azure CLI (about which I recently posted a short introductory video), you can do just that.

The main two commands we need are az functionapp list to list all function apps and az functionapp config appsettings list to list all the app settings for a function app.

However, the output of az functionapp list is a very large JSON array as it contains all kinds of information about each application. And I only need to know the name and resource group of each function app.

So I can use the --query switch, which allows me to narrow down the output to just the data I want using a powerful query language called JMESPath.

In this case I’ve got a JSON array of objects, each of which has a name and resourceGroup property, so I can trim the output down to just the properties I’m interested in with the following command: az functionapp list --query "[].{Name:name,Group:resourceGroup}"

This produces the following output (showing some function apps I created for my Building Serverless Applications in Azure course):

[
  {
    "Group": "TestDeploy1",
    "Name": "whosplayingdeploy-zq2obg"
  },
  {
    "Group": "WhosPlayingAdmin",
    "Name": "whosplayingadminfuncs"
  },
  {
    "Group": "WhosPlayingFuncs",
    "Name": "whosplayingfuncs"
  }
]

Now I could manually use the values here to get the settings with commands like az functionapp config appsettings list -g WhosPlayingAdmin -n whosplayingadminfuncs, but wouldn’t it be nice if I could automate this and loop through the output?

Well, if we can parse the JSON we can do that, but my bash skills are somewhat limited, so I wanted to find an easier way. We can make the output of az functionapp list much easier to parse by switching to tab-separated output with the --output tsv switch.

Now when we run it, we get the following output:

PS C:\Users\markh> az functionapp list --query "[].{Name:name,Group:resourceGroup}" --output tsv
whosplayingadminfuncs    WhosPlayingAdmin
whosplayingfuncs    WhosPlayingFuncs
whosplayingdeploy-zq2obg    TestDeploy1

By the way, another format you should know about if you’re using the Azure CLI is the --output table format, which gives you a nicely formatted table like this:

PS C:\Users\markh> az functionapp list --query "[].{Name:name,Group:resourceGroup}" --output table
Name                      Group
------------------------  ----------------
whosplayingadminfuncs     WhosPlayingAdmin
whosplayingdeploy-zq2obg  TestDeploy1
whosplayingfuncs          WhosPlayingFuncs

But the tab-separated output is what we want, because it allows us to parse each line into $name and $resourceGroup variables and get the app settings for that app with the az functionapp config appsettings list -n $name -g $resourceGroup command.
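As a quick illustration of why the tab-separated format is so convenient (using a hard-coded line rather than real az output), bash’s read builtin splits each line into variables because the default IFS includes tabs:

```shell
# Hypothetical stand-in for one line of `az functionapp list -o tsv` output;
# `read` splits on the default IFS (spaces and tabs), so each column lands
# in its own variable.
printf 'whosplayingfuncs\tWhosPlayingFuncs\n' |
while read -r name resourceGroup; do
    echo "name=$name group=$resourceGroup"
done
```

This prints name=whosplayingfuncs group=WhosPlayingFuncs, which is exactly the pattern the full script below relies on.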

But when we run this, we have the same problem: too much output. Again the --query flag comes to the rescue. We can use it to narrow the output down to just the setting we are interested in:

PS C:\Users\markh> az functionapp config appsettings list -n $name -g $resourceGroup --query "[?name=='FUNCTIONS_EXTENSION_VERSION']"
[
  {
    "name": "FUNCTIONS_EXTENSION_VERSION",
    "slotSetting": false,
    "value": "~1"
  }
]

And we can even pick out just the value of that setting with another change to the JMESPath query, again combined with tab-separated output:

PS C:\Users\markh> az functionapp config appsettings list -n $name -g $resourceGroup --query "[?name=='FUNCTIONS_EXTENSION_VERSION'].value" --output tsv
~1

So we have all the bits in place for automating checking the versions of all the function apps in our subscription. I decided to do this with the Windows Subsystem for Linux (WSL or Bash on Windows), since the Azure CLI is designed to be used from bash (although you are perfectly free to use it from PowerShell too if you prefer). Instructions for installing the WSL can be found here and you then use apt-get to install the CLI as described here.

With that in place, we can write a simple bash script to loop through all the function apps, query for the value of the FUNCTIONS_EXTENSION_VERSION app setting and print them out. My bash skills are very basic, so do let me know in the comments if there are better ways to accomplish this:

#!/bin/bash
az functionapp list --query "[].{Name: name,Group: resourceGroup}" -o tsv |
while read -r name resourceGroup; do
   version=$(az functionapp config appsettings list \
      -n "$name" -g "$resourceGroup" \
      --query "[?name=='FUNCTIONS_EXTENSION_VERSION'].value" -o tsv)
   printf 'N:%s R:%s V:%s\n' "$name" "$resourceGroup" "$version"
done

When we run this in our bash shell, we see the following output for this subscription. Looks like all my settings had the right value anyway:

$ ./funcappversion.sh
N:whosplayingfuncs R:WhosPlayingFuncs V:~1
N:whosplayingdeploy-zq2obg R:TestDeploy1 V:~1
N:whosplayingadminfuncs R:WhosPlayingAdmin V:~1

But my point in sharing this post wasn’t really to show you how to check function app versions, but to highlight just how powerful the --query switch and the JMESPath query language are, especially combined with the --output tsv switch to get tab-separated output. The JMESPath website has plenty of great examples, and I found it surprisingly quick to learn the syntax I needed for my queries.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.


If you’re like me, you may have ignored the Azure CLI. I had invested a fair amount of time in learning the Azure PowerShell cmdlets, and there didn’t seem to be much benefit in learning another command-line approach.

But I gave it a try a while back, and was really impressed. It’s got a very simple, easily discoverable syntax, and it has the benefit of being cross platform. Another great thing about it is that there is an Azure CLI console built right into the Azure Portal, so you don’t even have to install it to give it a try.

In this short video I show the basics of using it both from the command line and the Azure portal.


Today, someone asked how they could play segments of audio from a WAV file. So for example, they wanted to play a 5 second segment of audio that started 1 minute into the source file, and then a 30 second segment that started 10 seconds into the source file, and so on.

There are lots of ways you could tackle this. One approach (which I don’t recommend) is to have some kind of timer that detects where playback is up to in a file, and jumps to a new position once you’ve gone past the end of the currently playing segment. The reason I don’t recommend this is that it’s horribly inaccurate. With NAudio, it’s actually possible to get accuracy right down to the sample level, so the instant we reach the end of one segment, we jump seamlessly to the next.

Let’s see how we can do this.

The key is to create our own custom IWaveProvider. In NAudio an IWaveProvider is a simple interface that provides audio. You just need to implement the Read method to fill a new buffer of sound, and the WaveFormat property to indicate the format of the audio provided by the Read method. When you reach the end of the audio, Read should return 0.

IWaveProvider has no concept of current “position” or overall “length” – you can implement WaveStream if you need those. But for this example, I’m assuming that we know in advance what “segments” we want to play and we just need to play them each through once.

So let me first show you the code for the SegmentPlayer which is our custom IWaveProvider, and then I’ll explain how it works and how to use it.

using System;
using System.Collections.Generic;
using NAudio.Wave;

class SegmentPlayer : IWaveProvider
{
    private readonly WaveStream sourceStream;
    private readonly List<Tuple<int, int>> segments = new List<Tuple<int, int>>();
    private int segmentIndex = -1;
    
    public SegmentPlayer(WaveStream sourceStream)
    {
        this.sourceStream = sourceStream;        
    }
    
    public WaveFormat WaveFormat => sourceStream.WaveFormat;
    
    public void AddSegment(TimeSpan start, TimeSpan duration)
    {
        if (start + duration > sourceStream.TotalTime)
            throw new ArgumentOutOfRangeException(nameof(duration), "Segment goes beyond the end of the input");
        segments.Add(Tuple.Create(TimeSpanToOffset(start),TimeSpanToOffset(duration)));        
    }
    
    public int TimeSpanToOffset(TimeSpan ts)
    {
        var bytes = (int)(WaveFormat.AverageBytesPerSecond * ts.TotalSeconds);
        bytes -= (bytes%WaveFormat.BlockAlign);
        return bytes;
    }
    
    public int Read(byte[] buffer, int offset, int count)
    {
        int bytesRead = 0;
        while (bytesRead < count && segmentIndex < segments.Count)
        {
            if (segmentIndex < 0) SelectNewSegment();
            if (segmentIndex >= segments.Count) break; // no segments to play
            var fromThisSegment = ReadFromCurrentSegment(buffer, offset + bytesRead, count - bytesRead);
            if (fromThisSegment == 0) SelectNewSegment();
            bytesRead += fromThisSegment;
        }
        return bytesRead;
    }
    
    private int ReadFromCurrentSegment(byte[] buffer, int offset, int count)
    {
        var (segmentStart, segmentLength) = segments[segmentIndex];
        var bytesAvailable = (int)(segmentStart + segmentLength - sourceStream.Position);
        var bytesRequired = Math.Min(bytesAvailable, count);
        return sourceStream.Read(buffer, offset, bytesRequired);
    }
    
    private void SelectNewSegment()
    {
        segmentIndex++;
        // stop repositioning once we run out of segments; Read's loop condition
        // then terminates and we return fewer bytes than were asked for
        if (segmentIndex < segments.Count)
            sourceStream.Position = segments[segmentIndex].Item1;
    }
}

The first thing to notice is that we need a WaveStream to be passed in as our input. This is because although our SegmentPlayer won’t support repositioning, we do need to be able to reposition within the source file to get the audio for each segment. Since WaveFileReader and Mp3FileReader both derive from WaveStream, you could use a WAV or MP3 file as the source of the audio.

Now of course, you could dispense with the WaveStream altogether and just pass in a byte array of audio for each segment. That would perform better at the cost of potentially using a lot of memory if the segments are long.

The next thing to point out is that we have a list of “segments”, which are tuples containing the start position (in bytes) and duration (also in bytes) of each segment of audio within the source file. We have an AddSegment method that allows you to more conveniently specify these segments in terms of their start time and duration as TimeSpan instances. Notice in the TimeSpanToOffset method that we are very careful to respect the BlockAlign of the source file, to ensure we always seek to the start of a sample frame.
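The block-align arithmetic can be sketched independently of NAudio. This is purely an illustration of the rounding, not NAudio code; the numbers assume 44.1 kHz 16-bit stereo audio, where AverageBytesPerSecond is 176400 and BlockAlign (the size of one sample frame) is 4 bytes:

```python
def timespan_to_offset(seconds, avg_bytes_per_second, block_align):
    """Convert a time to a byte offset, snapped to the start of a sample frame."""
    offset = int(avg_bytes_per_second * seconds)
    # round down to a multiple of the block align so we never start
    # reading part-way through a sample frame
    return offset - (offset % block_align)

# 1.5 seconds into 44.1 kHz 16-bit stereo audio
print(timespan_to_offset(1.5, 176400, 4))  # 264600, a multiple of 4
```

Without the rounding step, an offset that landed mid-frame would make every subsequent sample be misinterpreted, which typically plays back as loud noise.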

The bulk of the work is done in the Read method. We’re asked for a certain number of bytes of audio (count) to be provided. So we read as many as we can from the current segment, and if we still need some more, we move to the next segment. Moving to the next segment requires a reposition within the source file. Only when we reach the end of the segment list do we return less than count bytes of audio from our Read method.

Now this is a very quick and simple implementation. We could improve it in several ways such as caching the audio in each segment to avoid seeking on disk, or by upgrading it to be a WaveStream and allow repositioning. With a bit of multi-threading care we could even support dynamically adding new segments while you are playing. But I hope that this serves as a good example of how by implementing a custom IWaveProvider you can powerfully extend the capabilities of NAudio.

Let’s wrap up by seeing how to use the SegmentPlayer. In this example our source audio is an MP3 file. We set up four segments that we want to be played back to back and use WaveOutEvent to play them. We could have used WaveFileWriter.CreateWaveFile instead had we wanted to write the output to a WAV file rather than playing it.

using (var source = new Mp3FileReader("example.mp3"))
using (var player = new WaveOutEvent())
{
    var segmentPlayer = new SegmentPlayer(source);
    segmentPlayer.AddSegment(TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(5));
    segmentPlayer.AddSegment(TimeSpan.FromSeconds(20), TimeSpan.FromSeconds(10));
    segmentPlayer.AddSegment(TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(15));
    segmentPlayer.AddSegment(TimeSpan.FromSeconds(25), TimeSpan.FromSeconds(5));
    player.Init(segmentPlayer);
    player.Play();
    while(player.PlaybackState == PlaybackState.Playing)
        Thread.Sleep(1000);
}

The duration of the audio will be exactly 35 seconds in this instance, as each segment plays immediately after the previous one ends.

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.


Recently, Microsoft announced that the Azure Functions tooling for Visual Studio 2017 was available. It makes it really easy to create new function apps, test and debug your functions locally, and publish directly from within Visual Studio, and it makes use of the “precompiled functions” capability of Azure Functions.

I thought that since this tooling appeared too late to feature in either of my Pluralsight courses on Azure Functions, I’d make a quick video showing off how the tooling works, and also what actually ends up on disk when you create these precompiled functions.

Of course, you don’t have to use Visual Studio if you don’t want to. You’re still free to use a combination of a text editor like Visual Studio Code and the Azure Functions CLI to build and debug your function apps.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.


Every few months someone asks how you can “normalize” an audio file with NAudio. And I’m usually quite reluctant to answer, because often the person asking doesn’t understand the limitations of normalizing. They simply assume it is an automatic way to make a quiet audio file louder.

So what is normalizing an audio file? It’s simply amplifying every sample in the file by the largest amount possible without causing any clipping. That’s great if the entire file is quiet, but if there’s just one sample in there that already has the maximum value, then you’re stuck. Normalization won’t do anything.

But having acknowledged that it doesn’t help in every situation, how can we implement it for the files it is suitable for? Well, we start by examining every sample value and picking out the loudest. That’s most easily done by letting NAudio convert the samples into floating point in the range +/- 1.0, which the AudioFileReader class does automatically for us.

What we’ll do is read out batches of floating point samples using the Read method and find the largest value. We’ll use Math.Abs as the maximum peak might be a negative rather than a positive peak:

var inPath = @"E:\Audio\wav\input.wav";
float max = 0;

using (var reader = new AudioFileReader(inPath))
{
    // find the max peak
    float[] buffer = new float[reader.WaveFormat.SampleRate];
    int read;
    do
    {
        read = reader.Read(buffer, 0, buffer.Length);
        for (int n = 0; n < read; n++)
        {
            var abs = Math.Abs(buffer[n]);
            if (abs > max) max = abs;
        }
    } while (read > 0);
    Console.WriteLine($"Max sample value: {max}");
}

So that finds us the maximum value. How can we use it to normalize the file? Well, first of all, if the max value is 0 (silence) or already greater than 1, normalization isn’t possible, and if it’s exactly 1 there’s no headroom to gain. Otherwise, we can multiply each sample by (1 / max) to get the maximum possible amplification without clipping.
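The gain calculation itself is tiny. Here’s a hypothetical sketch of the arithmetic on a handful of toy float samples (this is just the maths, not NAudio code):

```python
# toy samples in the +/- 1.0 range
samples = [0.1, -0.5, 0.25]

peak = max(abs(s) for s in samples)       # 0.5: the loudest sample
gain = 1.0 / peak                         # 2.0: the biggest boost with no clipping
normalized = [s * gain for s in samples]

print(normalized)  # the loudest sample now sits at exactly -1.0
```

Applying any gain larger than 1/peak would push the loudest sample beyond the +/- 1.0 range and cause clipping; this is exactly the multiplication that setting AudioFileReader’s Volume property performs for us below.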

AudioFileReader has a handy Volume property that we can use to amplify the samples as they are read out, and since we’ve just read the whole way through, we need to jump back to the beginning by setting Position = 0. Then we can use the convenience method WaveFileWriter.CreateWaveFile16 to write the amplified audio back to a 16 bit WAV file.

Here’s the entire normalization example:

var inPath = @"E:\Audio\wav\input.wav";
var outPath = @"E:\Audio\wav\normalized.wav";
float max = 0;

using (var reader = new AudioFileReader(inPath))
{
    // find the max peak
    float[] buffer = new float[reader.WaveFormat.SampleRate];
    int read;
    do
    {
        read = reader.Read(buffer, 0, buffer.Length);
        for (int n = 0; n < read; n++)
        {
            var abs = Math.Abs(buffer[n]);
            if (abs > max) max = abs;
        }
    } while (read > 0);
    Console.WriteLine($"Max sample value: {max}");

    if (max == 0 || max > 1.0f)
        throw new InvalidOperationException("File cannot be normalized");

    // rewind and amplify
    reader.Position = 0;
    reader.Volume = 1.0f / max;

    // write out to a new WAV file
    WaveFileWriter.CreateWaveFile16(outPath, reader);
}

Now, as I said at the beginning, normalization is only good for files that are consistently quiet throughout. But what if your files do have the occasional loud bit? In that case, what you want is a compressor or limiter effect. Compression makes the loudest bits quieter, which means that afterwards you will be able to boost the whole file without clipping. It’s certainly possible to implement a compressor in NAudio, although there isn’t a built-in one, so I’ll cover how to make one in a future post.

Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.