Overwhelmed by Choice with Azure Functions?
I am a huge fan of Azure Functions but one of the challenges people who are new to it face is the sheer number of choices it gives you. For almost every task, there are multiple ways of accomplishing it. This means that people are either confused about which option to take, or unaware that there are alternatives to the first method they discovered.
In this post I'll go through some of the choices at your disposal, and give some suggestions to help you decide which is the most appropriate for your function apps.
Three environments to write functions in
Most people's first introduction to Functions is a demo showing how you can write code directly in the Azure portal! This is great for quick demos and experiments. But it's not what you should be using in production.
You can also create functions in Visual Studio. Visual Studio has its own special Azure Functions tooling, making it easy to create new Azure Function App projects, and providing templates for new functions as well as local testing and debugging.
But you don't need to use Visual Studio. You can also develop functions with any text editor and the Azure Functions command line tools. The Azure Functions command line tooling also lets you easily create new function apps, functions and do local testing.
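As a rough sketch of the command line workflow (assuming the Core Tools are installed, and using a placeholder project name), it looks something like this:

```shell
# Install the Azure Functions Core Tools globally (one-time setup):
# npm install -g azure-functions-core-tools

# Scaffold a new function app in a new folder ("MyFunctionApp" is a placeholder)
func init MyFunctionApp
cd MyFunctionApp

# Create a new function from a template (prompts for language, trigger type and name)
func new

# Run the Functions host locally for testing
func host start
```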
Which should I use? My recommendation: use the portal only for demos or very quick experimental stuff. For real apps, pick Visual Studio if that is your regular development tool, and otherwise use a good text editor (e.g. VS Code) in conjunction with the Azure Functions command line tooling.
Multiple languages to write functions with
Azure Functions lets you choose from several languages. At the time of writing, C#, JavaScript and F# are fully supported, with experimental support for several others (including Python, PowerShell, PHP, Batch and Bash). Which should I use? Obviously use your favourite language, although my recommendation is to stick to the fully supported languages if at all possible. They have the best documentation, stability and binding support.
You are of course completely free to mix and match languages within a single function app. I show how to do this in my Functions Todo Backend sample.
Two ways to write C# functions
Not only can you write functions in multiple languages, but if you choose C# there are two separate ways of doing it. First, you can create .csx files (C# script). This is what you use if you code in the portal, and it was the original C# programming model of Azure Functions. It's still supported, but now there is also the ability to create "precompiled functions", where you write regular C# code which is compiled into a DLL. This has benefits for performance and testability.
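As an illustrative sketch of the precompiled model (names here are my own, not from the post; this uses the version 1 runtime APIs), a minimal HTTP-triggered function looks something like this:

```csharp
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class HelloFunction
{
    // The [FunctionName] and [HttpTrigger] attributes replace the
    // function.json metadata that C# script functions rely on
    [FunctionName("Hello")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req,
        TraceWriter log)
    {
        log.Info("Hello function invoked");
        return req.CreateResponse(HttpStatusCode.OK, "Hello from a precompiled function");
    }
}
```

At build time the tooling generates the binding metadata from these attributes, which is why precompiled projects don't contain hand-written binding files.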
Which should I use? Unless you're doing a quick experiment in the portal, the precompiled approach is the one I recommend. This is what the Visual Studio tooling will create for you, but it seems that the current version of the Azure Functions command line tooling still uses the .csx approach (I haven't checked whether the new .NET Core version of the tooling does this too, which brings us to the next choice...)
Two versions of the Azure Functions runtime
Even more confusingly, there are two versions of the Azure Functions runtime. There's "Version 1" which is still the current GA version of Azure Functions (at the time of writing), and runs on the full .NET framework, making it Windows only. That's absolutely fine as Azure Functions is serverless so you don't really need to care what the underlying OS is.
However, a new version 2 of the Azure Functions runtime is in development and is built on .NET Core. This makes it cross-platform, which opens up local testing on non-Windows platforms as well as the ability to host the runtime in Docker containers and more. It's the future of Azure Functions and has binding and language extensibility built in which makes it a much better platform for future innovation.
Which should I use? For the moment, stick to version 1 unless you absolutely need cross platform. But once V2 goes GA, I'd recommend all new development uses that.
Two ways to get billed
Most people know that Azure Functions offers a serverless pricing model. You just pay for the time that your functions are running. If they don't run at all, you pay nothing. This is a hugely attractive pricing model and can result in dramatic cost savings. This model is called the "consumption" plan - you're only paying for what you consume.
However, there is an alternative pricing model in which you pay for an app service plan. This is the same pricing model as Azure App Service. You pay a fixed monthly price which will reserve you your own VM (or multiple VMs) dedicated to your function app.
Which should I use? In most scenarios the consumption plan makes the most sense, but the App Service plan offers a few advantages in certain situations. First, it doesn't restrict how long your functions can run for, which is important if you have functions that last for more than 5 minutes (although that may be a sign your function is doing too much). It also provides a predictable pricing model - there's no risk of accidentally running up a huge bill because of heavy incoming load.
Multiple ways to trigger your functions
Some people seem to be under the impression that all Azure Functions are triggered by HTTP requests. This is not the case. Azure Functions supports multiple "triggers". An HTTP request is just one kind of trigger. Others include messages arriving on queues, timers firing, or blobs appearing in a storage container.
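For scripted languages (and C# script), the trigger is declared in a function.json file. A sketch of what a queue trigger binding looks like (the queue name here is a placeholder; "AzureWebJobsStorage" is the standard connection setting name):

```json
{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "orders",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```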
Event Grid events can also trigger functions. This is a new addition to Azure, and not all services fully support it yet, but there is a lot of momentum behind it and soon it will mean that pretty much anything that happens in any of your Azure resources can trigger a function execution.
Which should I use? Use whichever trigger fits your needs. HTTP requests are good for building APIs or handling webhooks, but in many cases you don't need HTTP request triggers at all.
Numerous ways to deploy
OK, now we're getting into crazy territory. There are so many ways to deploy your Azure Functions that it can make your head spin. And still some people aren't happy with the options we've got now!
A big part of the reason there are so many options is that Azure Functions is based on App Service and that already had a whole bunch of ways to deploy. Let's go through the options...
- Publish manually from Visual Studio. Visual Studio has built-in tooling that lets you easily deploy your Function App either to an existing Function App or to a brand new one, all within the IDE. Under the hood it uses webdeploy (I think).
- You can publish with webdeploy, just like you can a regular web app. There's an MSBuild target you can use to create the deployment package, and then you can invoke the webdeploy tool yourself, but this option is a bit cumbersome and there are easier alternatives.
- You can simply FTP your function code onto the app service plan's disk.
- You can use the Kudu REST API to upload the code or binaries for your function. This API lets you add and remove individual files on the host.
- You can use the Kudu "zipdeploy" API to upload a zipped package of your code
- You can set up continuous deployment watching a Git repository so that every time you push to that repo, Kudu will clone it and build it. (except when it doesn't get push notifications, in which case you have to trigger it manually)
- There is a variation on the Git option where it is a "local" repository - that is, rather than pushing to a repo hosted externally on GitHub or VSTS, you push to a repo stored on your Function App's own storage.
- As well as using Git you can sync from other sources including Dropbox, OneDrive, or Mercurial repositories on Bitbucket.
- There's a brand new "run from zip" option where you provide the Functions runtime with a URL to a zip containing your code (e.g. in a blob storage account, secured with a SAS token), and it runs your functions directly from that package. This offers some performance benefits over the previous techniques, and so could end up becoming the recommended approach in the future. It's still in an experimental state at the moment though.
- There's a variation on the new "run from zip" technique where you put the zip into a special SitePackages folder and update a text file to point to it. This avoids the need for long-lived SAS tokens, also provides rollback capabilities, and may be a preferred option for some people.
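To illustrate the "run from zip" option: it is driven by an app setting on the Function App. A hedged sketch (the storage URL and SAS token are placeholders, and the setting was still experimental at the time of writing):

```json
{
  "WEBSITE_RUN_FROM_ZIP": "https://mystorage.blob.core.windows.net/packages/functionapp.zip?<sas-token>"
}
```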
I told you there were a lot of options! And that doesn't even count logging into the Kudu console and dragging and dropping files in, or coding directly in the Azure portal!
Which one should I use? It's not an easy decision, but here are my recommendations. Only use right-click publish from Visual Studio if you're just testing and experimenting. Use Git integration for the sort of small projects where having your own full-blown CI build pipeline would be overkill - you just want to commit changes, push to GitHub/VSTS and let it go live in due course (I'm actually using the Git deployment technique for this blog). Use the Kudu zip deploy API for a more enterprise-grade CI/CD pipeline, where your build creates the assets and performs some tests before pushing them live. And keep an eye on the "run from zip" technique - I suspect that will become the recommended approach once some of the issues have been ironed out.
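As a sketch of what the Kudu zip deploy step in a CI/CD pipeline looks like (the app name is a placeholder, and the actual upload requires your deployment credentials, so it's shown commented out):

```shell
# The Kudu zipdeploy endpoint lives on the app's .scm. host
APP_NAME="my-function-app"
ZIPDEPLOY_URL="https://${APP_NAME}.scm.azurewebsites.net/api/zipdeploy"

# POST the zipped build output with your deployment credentials, e.g.:
# curl -X POST -u '<deployment-user>' --data-binary @functionapp.zip "$ZIPDEPLOY_URL"
echo "$ZIPDEPLOY_URL"
```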
Two ways to test and debug
The great thing about the Azure Functions tooling is it lets you test locally. This provides a rapid development loop, saving you from pushing to the cloud or the complexities of remote debugging. Most of the time you'll use the Azure Storage Emulator in conjunction with local testing if you're working with blobs, queues or tables, but you can of course point at a real Azure Storage account while testing locally.
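For reference, pointing local testing at the storage emulator is just a matter of the connection strings in local.settings.json. A typical fragment (swap in a real connection string to target a live storage account instead):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true"
  }
}
```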
And it's possible to turn on remote debugging and connect your Visual Studio directly to the cloud. The last time I tried this it was quite an unreliable process, but hopefully things have improved a bit in the meantime.
Which one should I use? Hopefully, local testing and debugging is good enough 99% of the time, but it's nice to know you can attach the debugger to Azure for those cases where an issue only shows up in production. And if you have a good CI/CD process set up, you can easily deploy a test instance of your function app to run integration or performance tests against.
Three ways to host the runtime
We've already mentioned the two obvious places you'll host the runtime - either in the cloud on App Service, or locally for testing and debugging. But you can also host the Azure Functions runtime in a container, by using the microsoft/azure-functions-runtime Docker image. This uses the .NET Core version 2 of the runtime, so it can run in Linux containers.
Which one should I use? Normally, you'd simply let App Service host the runtime for you, but the containerization option opens up lots of possibilities for using Azure Functions in on-premises environments, or in other clouds.
Two ways to monitor your functions
Out of the box, Azure Functions is associated with an Azure Storage account, and it uses table storage in that account to record details of the invocation of each function, including its logs and timing. Some of that information is surfaced to you in the Azure portal, so for each function you can see the recent invocations and dive into individual instances to see the log output.
It is, however, quite a rudimentary experience, and that's where App Insights integration comes in. It is very straightforward to configure your Azure Functions to send data to App Insights, and if you do so, you'll benefit from much richer querying and analytics.
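Wiring up App Insights amounts to a single app setting on the Function App containing your instrumentation key (the value here is a placeholder):

```json
{
  "APPINSIGHTS_INSTRUMENTATIONKEY": "<your-instrumentation-key>"
}
```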
Which one should I use? This one is easy, enable App Insights for all your Function Apps.
Three ways to secure your HTTP functions
Let's end with one which I don't think has quite enough choice yet. For the most part, Azure Functions are secured by virtue of the fact that only an authorized caller can trigger them. For example, if your function is triggered by a queue message, then only someone who can post to that queue can trigger your function.
But HTTP triggered functions have public endpoints accessible from the internet, so you need to think carefully about security. By default, Azure Functions provides a system of authorization keys. You have to pass the key as a query string parameter or header to be able to call the function. In server-to-server scenarios this may be adequate, but it is not feasible as a security mechanism for client-side apps, as you wouldn't want to expose the secret key.
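To illustrate the keys mechanism (the URL and key here are placeholders), a caller supplies the key either in the code query string parameter or in the x-functions-key header:

```shell
FUNC_URL="https://myapp.azurewebsites.net/api/MyFunction"   # placeholder URL
FUNC_KEY="abc123"                                           # placeholder key

# Either pass the key as a query string parameter:
# curl "${FUNC_URL}?code=${FUNC_KEY}"
# ...or as a request header:
# curl -H "x-functions-key: ${FUNC_KEY}" "$FUNC_URL"
echo "${FUNC_URL}?code=${FUNC_KEY}"
```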
Currently your options for APIs intended to be called from the browser are limited. First of all, you have the browser's CORS protection, and Azure Functions lets you configure CORS settings for your Function App. And there is an App Service feature called "App Service Authentication" which turns on authentication globally for every single web request coming into that app. It supports a bunch of common identity providers like Facebook and Google, as well as Active Directory integration.
But I'd like to see it much easier for us to configure bearer token authentication on a per-function basis, allowing a mix of anonymous and authenticated functions. If Azure Functions could do this automatically, as well as provide easy integration with Managed Service Identities, that would be awesome.
What should I use? Your choice of security mechanism depends on whether you are willing to trust your callers with function keys. You could of course implement your own code to examine and validate a bearer token, but that's something I'd rather see the platform take care of, as it's all too easy to make mistakes rolling your own security implementation.
As you can see, the Azure Functions platform is very rich and offers a wide variety of choices. These can seem overwhelming at first, but hopefully this guide gives you a starting point to decide which features of the Azure Functions platform are right for your application.
Comments
Great write-up, Mark! The official docs need this sort of guide for new devs, because you're right, there are so many options that it's overwhelming. I think many new devs might end up making sub-optimal choices and leave with a bad first impression. - Matt Honeycutt
I'm on a Mac trying to develop a pre-compiled C# .NET Core Azure function (runtime v2). It has one Event Hub trigger binding and one Cosmos DB output binding. It works great locally. The Azure function is set up to deploy from VSTS whenever code is pushed to the git branch. The function shows up in Azure but does not work at all. I suspect issues with bindings, since the function.json (the only visible file in the portal) only contains one of the two bindings... No idea how to make this work. - Andreas S
When you build, the tooling autogenerates the binding files (function.json). There should be one for each function, and the bindings are based on the attributes of your functions. If the git deploy reported success but the function.json files don't match the code, I'd report that as an issue on the Azure Functions GitHub repository. - Mark Heath
Thanks Mark, really useful. Keep it going please ;) - Daniel Ferreira
I have a function which gets triggered whenever an image is uploaded to a blob container (a BlobTrigger). The image uploading is done by a web app which is Azure AD enabled. Can you please help me understand the best security to implement in this case? Only authenticated users should be able to access the Azure Function App. Any help on this is much appreciated. - santosh kumar patro
What a ripper of an article Mark. Not too deep, not too much of a fluff piece, chockers with goodness. - dantheother
Mark, this was incredibly useful. I've shared it with quite a few folks. Any plans on updating this for the latest Azure changes? VSTS -> Azure DevOps, Azure Functions V2 to GA? Thanks. - Troy Witthoeft
Yes, it would be good to update this (even though it's less than a year old!). Things are moving fast in the serverless world. - Mark Heath
Once again Mark, you're a fantastic human being. I watched some of your Pluralsight courses years ago and they were great. What a great article! So helpful. Thanks! - alicate
Thanks for the kind words, alicate. I've just finished creating a complete update to my Pluralsight Azure Functions Fundamentals course that covers version 2 of Azure Functions. Hoping it will go live in the next few weeks. - Mark Heath
Just to reiterate what people are saying, this is a very useful article. Thank you. I am glad it is not just me that felt a little overwhelmed. We are just getting into Azure integration, and personally I have no background in Microsoft development, so the learning curve is steep, but this helps a lot. - Andy Bruckshaw