
A few months ago, Microsoft announced plans to shut down CodePlex. The site will be moving into read-only mode by October.

I was an early adopter of CodePlex and had 14 projects hosted there. It had been obvious for several years that CodePlex was slowly dying, and it’s been over two years since I moved NAudio over to GitHub, a move that made a lot of sense.

But I needed to make a decision about the other projects I had on CodePlex. One of them, Skype Voice Changer, was in fact my greatest ever hit, having been downloaded over 3 million times (and went on to have an interesting journey of its own)! Many of my other projects were simple audio utilities or games, and I also made quite a few custom Silverlight controls. Obviously most of these projects are effectively obsolete now.

However, CodePlex kindly provided a way to easily import your projects into GitHub, and it wasn’t even too hard to bring along the documentation/wiki as well. So I’ve imported most of them into GitHub and turned the documentation into markdown files. Despite the fact that these are mostly dead/done projects, they do contain a bunch of XAML and audio-related code snippets that may still be of benefit to someone in the future, so I’m glad they can live on at GitHub.

So the only thing that remains is to say thank you to everyone who worked on CodePlex. The decision to shut it down makes sense, but I am grateful for the service it provided me over the past decade. A few things I’m thankful for…

  • When it launched, it was the best and easiest place to host .NET open source projects (I’d previously failed miserably to host an early version of NAudio on SourceForge with CVS)
  • It introduced me to distributed version control thanks to adding Mercurial support
  • I made connections with several outstanding open source developers
  • The Developer Media ads were a small but welcome financial return on the time spent creating open source software
  • ClickOnce hosting and the ability to embed Silverlight apps in your docs were nice touches that other project hosting sites didn’t offer
  • Providing an easy mechanism to migrate my source code and docs away to GitHub (hopefully the discussions and issues will be exportable soon too).

So goodbye CodePlex, and thanks for being part of my journey as an open source developer.


Azure App Service allows you to configure an external Git repository from which it can pull down code. This works for both Web Apps and Function Apps, and can be configured as part of your ARM template.

But once you’ve configured the location of the external repository, simply pushing new commits to that external git repository won’t cause your website/function app to automatically update.

Instead you need to go into the Deployments section of the portal and click the Sync button.


When you click sync, it pulls down any new commits from the external git repository, and deploys them for you.

But how can we automate this process? We don’t want to have to manually go into the portal as part of our deployment. Here are two ways I found…

Method 1 – PowerShell

The first technique uses Azure PowerShell. I’ll assume you’re already logged in and have the correct subscription selected. You can use these commands to do that if you’re new to Azure PowerShell.

Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "My Sub"

Now, so long as we know the name of our WebSite/Function app and the resource group it is in, we can request a sync with the following command (thanks to David Ebbo):

Invoke-AzureRmResourceAction -ResourceGroupName "MyResourceGroup" -ResourceType Microsoft.Web/sites -ResourceName "MyFuncApp" -Action sync -ApiVersion 2015-08-01 -Force -Verbose

Method 2 – Azure CLI

I’ve only just started using the Azure CLI, but I like what I’ve seen so far. It’s nice and simple to work with, and you can easily explore the available commands just by typing az.

Just like with PowerShell we do need to make sure we’re logged in and have the correct subscription selected, which we can do with:

az login
az account set -s "My Sub"

And now to request a sync, we can call the sync action for our deployment source. Here’s the syntax for a function app:

az functionapp deployment source sync -g MyResGroup -n MyFunctionApp

If it’s a web app instead, it’s pretty much the same:

az webapp deployment source sync -g MyResGroup -n MyWebApp

Method 3 – App Service Continuous Deployment Alternatives

There are several other ways to tackle this problem, by integrating with VSTS, setting up webhooks, or using local git repositories, but the techniques described above will be useful if you picked the external git repository option.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.


I’ve released an update to NAudio as it’s been a while since the last one. This one is mostly bug fixes and a few very minor features:

  • MidiFile can take an input stream
  • WaveOut device -1 can be selected allowing hot swapping between devices
  • AsioOut exposes FramesPerBuffer (see the sketch below)
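
If you want to see what the new bits look like in code, here’s a minimal C# sketch. Treat it as illustrative rather than definitive: I’m assuming the new MidiFile overload takes a Stream plus the usual strict-checking flag, and that FramesPerBuffer is a simple property on AsioOut, so check the NAudio source for the exact signatures.

using System;
using System.IO;
using NAudio.Midi;
using NAudio.Wave;

class NAudioNewBits
{
    static void Main()
    {
        // Load a MIDI file from any Stream (assumed overload: Stream plus strict-checking flag)
        using (var stream = File.OpenRead("example.mid"))
        {
            var midiFile = new MidiFile(stream, false);
            Console.WriteLine($"MIDI tracks: {midiFile.Tracks}");
        }

        // Device -1 is the wave mapper, so playback follows the default output device
        // even when the user hot swaps to a different one
        using (var waveOut = new WaveOut { DeviceNumber = -1 })
        {
            // waveOut.Init(myWaveProvider); waveOut.Play();
        }

        // FramesPerBuffer reports the ASIO driver's preferred buffer size
        using (var asioOut = new AsioOut())
        {
            Console.WriteLine($"ASIO frames per buffer: {asioOut.FramesPerBuffer}");
        }
    }
}
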
Want to get up to speed with the fundamental principles of digital audio and how to go about writing audio applications with NAudio? Be sure to check out my Pluralsight courses, Digital Audio Fundamentals and Audio Programming with NAudio.


One of the most daunting things for developers new to Git is understanding how they can undo commit mistakes. And the git reset command is a powerful tool to help with that. But it can seem confusing. What does the magical git reset --hard HEAD~1 invocation that you may have seen on Stack Overflow actually do? And what is the difference between a “hard” and a “soft” reset?

In this video, I demonstrate three situations in which the git reset command can help. The video goes into more detail, but the three scenarios I discuss are:

1. Throwing away a junk commit. If you’ve just committed to your local repository and now you decide that you want to throw that away, you can use git reset --hard HEAD~1 to jump the current branch back to the previous commit (that’s what HEAD~1 means – go back one commit from the current position), and reset the working directory back to the state it was in at the time of that commit (that’s what the --hard flag means).

2. Committing too early. If you’ve just committed but realize that you should have changed just one or two things first, you can use git reset without the --hard flag: git reset HEAD~1 puts the current branch back where it was but leaves your working directory untouched. That way you can make a few final tweaks before re-performing the commit.

3. Committing on the wrong branch. Sometimes you make a commit and then realise that you’d intended to create a new branch for that commit, rather than working directly on the master branch. Getting out of this situation is nice and easy. First, keep your new commit safe by creating a new branch that points at it, for example git branch feature1. At this point we’re still on the master branch, so we can point it back to the previous commit with git reset --hard HEAD~1. Then you can check out your new feature branch and continue working on it.

Note: There is one important caveat with these git reset tips – if you’ve already pushed to the server, things can get tricky. Others may have pulled and built on top of commits you want to throw away. In these situations it’s usually better to use git revert to create a new commit that undoes the mistake. That’s generally a lot safer.

By the way, the tools I use in the video are posh-git and GitViz.


Azure Functions comes with three levels of authorization. Anonymous means anyone can call your function, Function means only someone with the function key can call it, and Admin means only someone with the admin key can call it.
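
To make that concrete, here’s a minimal sketch of where the authorization level is specified, assuming the precompiled C# class library model (the function name is just an example; for C# script functions the equivalent is the authLevel setting in function.json):

using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SecuredFunction
{
    [FunctionName("SecuredFunction")]
    public static HttpResponseMessage Run(
        // AuthorizationLevel.Anonymous, Function or Admin controls which key (if any)
        // the caller must supply, via the "code" query string parameter or the
        // x-functions-key header
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestMessage req)
    {
        return req.CreateResponse(HttpStatusCode.OK, "Hello from a key-protected function");
    }
}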

What this means is that to secure our Azure functions we must pre-share the secret key with the client. Once they have that key they can call the function, passing it as an HTTP header or query parameter. But this approach is not great for serverless single page apps. We don’t want to entrust the secret to a client app where it might get compromised, and we don’t have an easy way of getting it to the client app anyway.

Instead, it would be better if the users of our single page application could log in to an authorization server, and for that to issue us with access tokens (preferably JSON Web Tokens) that we can pass to our functions. The functions can then check that the access token is valid, and proceed to accept or deny the request. And this is of course the way OAuth 2.0 works.

However, at the moment there isn’t an easy way to enable verification of access tokens in Azure Functions. You could make your function anonymous and then write the verification yourself, but it’s generally not a good idea to implement your own security.

I wanted to see if there was an easier way.

And it turns out that there is a feature in App Service called “Easy Auth”. Azure Function apps support this by virtue of being built on top of App Service.

Easy Auth is an on-off switch. If you turn it on for your App Service, then every incoming HTTP request must be authorized. If you’re not authorized, you’ll get redirected to log in at the authorization server.

Easy Auth supports several identity providers, including Facebook, Google, Twitter, Microsoft and Azure Active Directory. You can only pick one though (however if the one you pick is Azure AD B2C, then that can support additional social identity providers).

The downside of using Easy Auth is that your whole site requires login. You can’t have any pages that can be viewed without needing to provide credentials.

But the benefit of this approach is that it provides a relatively simple way to get things secured. And if we combine it with Azure AD B2C, we can allow users to self sign-up for our application and support things like password resets, two factor auth, email verification and so on. This is a great “serverless” approach to authentication, delegating to a third party service and keeping our own application code simple.

So I set myself the challenge of integrating a simple SPA that calls through to an Azure Functions back-end with AD B2C. I can’t promise this is the only or best way to do this, but here are the steps I took to get it working.

Step 1 – Create an Azure AD B2C Tenant

First of all you’ll need to create an Azure AD B2C tenant. This can be done through the portal, and detailed instructions are available here, so I won’t repeat them. You’ll need to make sure you associate it with a subscription.

Step 2 – Create a Sign Up Or Sign In Policy

Next we need a sign-up or sign-in policy. This allows people to sign in, but also to self register for your application. If you want to control users yourself then you’d just need a sign-in policy. You can create one of these policies in the portal in the settings for your AD B2C tenant.


The policy lets us select one or more identity providers. Generally you’ll want to enable basic email and password logins, but you can also add Facebook, Twitter, Google etc., so this is a great option if you want to support multiple social logins (learn how to configure them here).


You can specify “sign-up” attributes, which are the pieces of information you require from a new user signing up. That might just be their full name and country, but could also include some custom attributes that you define.

You can choose which claims will be included within the access tokens (JWTs), which can make it easier for your application to get hold of useful user info, such as the display name, without needing to look it up separately.

You can turn on multi-factor authentication, which gives an excellent second level of security for users who have a verified phone number.

And you can also customize the UI of the login page. This is important, because by default your users will log in at a login.microsoftonline.com page that doesn’t look like it has anything to do with your app.


Step 3 – Create an AD B2C Application

Finally you need to create a new application in AD B2C to represent the application you will be protecting. Give it a name, select that you want to include a web app, and then provide a Reply URL.


The reply URL is a special URL that includes the name of your function app. So if your app is called myfuncapp, the reply URL will be https://myfuncapp.azurewebsites.net/.auth/login/aad/callback

Once you save this application, it will be given an application id (which is a GUID). That’s important as you’ll need it to set up Easy Auth, which is the next step.

Step 4 – Set Up Azure Functions Proxies

The way we’re going to make our single page app magically work with our back end functions is for both the static content and the functions to be served up by our function app. And we can do that by using function proxies. I actually wrote an article about how to use proxies to serve static web content, but there’s a gotcha that’s worth calling out here. Since we’re proxying web requests to static content and serving up functions from the same function app, we need to make sure that the proxies don’t intercept any calls to /api, which is where the functions themselves live.

So here’s how I do it. I set up three proxies.

The first has a route template of /, and points at my index.html in blob storage. e.g. https://myapp.blob.core.windows.net/web/index.html

The second has a route template of /css/{*restOfPath} and points at https://myapp.blob.core.windows.net/web/css/{restOfPath}

And the third has a route template of /scripts/{*restOfPath} and points at https://myapp.blob.core.windows.net/web/scripts/{restOfPath}

This way my site can have static content in css and scripts folders and a single index.html file, while the /api route will still go to any other functions I have.

Step 5 – Configure CORS

Our static website will be calling through to the functions, so let’s make sure that CORS is set up. In the Azure Functions CORS settings, add an entry for https://myfuncapp.azurewebsites.net (obviously use the actual URI of your function app, or any custom domain you have pointing at it).

Step 6 – Enable Easy Auth

We enable Easy Auth by going to our Azure Function app settings screen and selecting Authentication/Authorization, and turning App Service Authentication on.

And we’ll also say that when a request is not authenticated, it should log in with Azure Active Directory.


Next we need to set up the Azure Active Directory authentication provider, for which we need to select “advanced” mode. There are two pieces of information that we need to provide. First is the client ID, which is the application ID of the application we created earlier in AD B2C. The second (issuer URL) is the URL of our sign up or sign in policy from AD B2C. This can be found by looking at the properties of the policy in AD B2C.


Once we’ve set that up, any request, whether to the static web content (through our proxy) or to a function, will require us to be logged in. If we try to access the site without being logged in, we’ll end up getting redirected to the login page at login.microsoftonline.com.

Step 7 – Calling the functions from JavaScript

So how can we call the function from JavaScript? Well, it’s pretty straightforward. I’m using the fetch API, and the only special thing I needed to do was set the credentials property to include, presumably so that the auth cookies set by AD B2C were included in the request.

fetch(functionUrl, {
    method: 'post',
    credentials: 'include',
    body: JSON.stringify(body),
    headers: new Headers({'Content-Type': 'text/json'})
})
.then(function(resp) {
    if (resp.status === 200) {
        // ...

Step 8 – Accessing the JWT in the Azure Function

The next question is how the Azure Function can find out who is calling. Which user is logged in? And can we see what’s inside their JWT?

To answer these questions I put the following simple code in my C# function to examine the Headers of the incoming HttpRequestMessage binding.

foreach(var h in req.Headers)
{
    log.Info($"{h.Key}:{String.Join(",", h.Value)}");
}

This reveals that some special headers are added to the request by Easy Auth. Most notably, X-MS-CLIENT-PRINCIPAL-ID contains the GUID of the logged-in user, and helpfully their user name is also provided. But more importantly, X-MS-TOKEN-AAD-ID-TOKEN contains the (base 64 encoded) JWT itself. To decode this manually you can visit the excellent jwt.io, and you’ll see that the information in it includes all the attributes that you asked for in your sign up or sign in policy, which might include things like the user’s email address or a custom claim.

X-MS-CLIENT-PRINCIPAL-NAME:Test User 1
X-MS-CLIENT-PRINCIPAL-ID:7e9be1af-6943-21d6-9ae1-5c78c11ff756
X-MS-CLIENT-PRINCIPAL-IDP:aad
X-MS-TOKEN-AAD-ID-TOKEN:eyJ0eXAiOiJKV1QiLCJhbGciOiJSUz...

Unfortunately, Azure Functions won’t do anything to decode the JWT for you, but I’m sure there are some NuGet packages that can do this, so no need to write that yourself.
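
For example, here’s a rough sketch using the System.IdentityModel.Tokens.Jwt NuGet package (assuming a recent version that has ReadJwtToken). The helper name is mine, and the claim types depend on what your sign up or sign in policy includes (“name” and “emails” are typical for AD B2C):

using System.Collections.Generic;
using System.IdentityModel.Tokens.Jwt;
using System.Linq;
using System.Net.Http;

public static class EasyAuthClaims
{
    public static string GetDisplayName(HttpRequestMessage req)
    {
        // Easy Auth has already validated the token, so we only need to read it
        IEnumerable<string> values;
        var idToken = req.Headers.TryGetValues("X-MS-TOKEN-AAD-ID-TOKEN", out values)
            ? values.FirstOrDefault()
            : null;
        if (idToken == null) return null;

        // Decode (not validate) the JWT and pull out the claim we're interested in
        var jwt = new JwtSecurityTokenHandler().ReadJwtToken(idToken);
        return jwt.Claims.FirstOrDefault(c => c.Type == "name")?.Value;
    }
}

If all you need is a stable user id, the X-MS-CLIENT-PRINCIPAL-ID header shown above is enough on its own.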

Should I Use This?

I must admit it didn’t feel too “easy” setting all this up. I took a lot of wrong turns before finally getting something working. And I’m not sure how great the end product is. It is a fairly coarse-grained security approach (all web pages and functions require authentication), and having the functions return redirect responses rather than unauthorized status codes feels hacky.

What would be really great is if Azure Functions offered bearer token validation as a first-class authentication option at the function level. I’d like to be able to say that my function is protected by bearer tokens and give it the well-known configuration of my authorization server. Hopefully something along these lines will be added to Azure Functions in the near future.

Of course another option would just be to set your functions to anonymous and examine the Authorization header yourself. You would need to be very careful to validate the token properly though, making sure the signature is valid and the token hasn’t expired. That’s probably reason enough to avoid this option: if you try to implement security yourself, you’re very likely to end up with a vulnerable system.

Maybe there’s a better way to integrate Azure Functions and AD B2C. Let me know in the comments if there is. Chris Gillum, who’s the expert in this area, has a great two-part article on integrating a SPA with AD B2C (part 1, part 2), although it isn’t explicitly using Azure Functions, so I’m not sure whether all the configuration options shown can be used with Function Apps (that’s an experiment for another day).

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight course Azure Functions Fundamentals.