"Serverless" architecture is one of the most exciting innovations to emerge in the cloud computing space in recent years. It offers several significant benefits including rapid development, automatic scaling and a cost-effective pricing model. Regular readers of my blog will know that I have been (and still am) an enthusiastic proponent of Azure Functions.
But "serverless" does entail trade-offs. Each of its benefits comes with corresponding limitations, which may be enough to put some people off adopting it altogether. It can also seem at odds with a "containerized" approach to architecture, with Kubernetes having firmly established itself as the premier way to host cloud-native applications.
I think the next stage of maturity for "serverless" is for the up-front decision of whether or not to use a "serverless" architecture to go away, replaced by a kind of "sliding scale": the choice of how "serverless" to run becomes a deploy-time decision rather than one baked in up front.
To explain what I mean, let's look at five key benefits of serverless, and how in some circumstances, they introduce limitations that we want to get around. And we'll see that we're already close to a situation where a "sliding scale" allows us to make our application more or less serverless depending on our needs.
Servers abstracted away
The first major selling point of "serverless" is that servers are abstracted away. I don't need to manage them, patch them, or even think about them. I just provide my application code and let the cloud provider worry about where it runs. This is great, until for some reason I actually do care about the hardware my application is running on. Maybe I need to specify the amount of RAM, or require a GPU or an SSD. Maybe for security reasons I want to be certain that my code is not running on compute shared with other tenants.
Azure Functions is already a great example of the flexibility we can have in this area. Its multiple "hosting plans" let you slide from a truly serverless "Consumption" plan, where you have minimal control over the hardware your functions run on, all the way up to a "Premium" plan with dedicated servers, or even containerizing your Function App and running it on hardware of your choice.
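To make this concrete, here's a rough sketch of how the same Function App code could target two different hosting plans purely at deployment time, using the Azure CLI. All resource names are placeholders, and it assumes an existing resource group and storage account.

```shell
# Fully serverless: Consumption plan, no control over the underlying hardware
az functionapp create \
  --resource-group my-rg \
  --name my-func-consumption \
  --storage-account mystorageacct \
  --consumption-plan-location westeurope \
  --runtime dotnet

# More control: an Elastic Premium (EP1) plan with dedicated, pre-warmed instances
az functionapp plan create \
  --resource-group my-rg \
  --name my-premium-plan \
  --sku EP1
az functionapp create \
  --resource-group my-rg \
  --name my-func-premium \
  --storage-account mystorageacct \
  --plan my-premium-plan \
  --runtime dotnet
```

Notice that nothing in the application code changes between the two deployments; the "how serverless?" question is answered entirely by which plan you point the app at.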
Automatic scale in and scale out
A second major attraction of serverless is that I don't need to worry about scaling in and scaling out. The platform itself detects heavy load and automatically provisions additional compute resource. This is great until I need to eliminate "cold starts" caused by scaling to zero, or need to have more fine-grained control over the maximum number of instances I want to scale out to, or want to throttle the speed of scaling in and out.
Again, we're seeing with serverless platforms an increased level of flexibility over scaling. With Azure Functions, the Premium plan allows you to keep a minimum number of instances on standby, and you can even take complete control over scaling yourself by hosting your Functions on Kubernetes and using KEDA to manage scaling.
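As a sketch of the kind of fine-grained control KEDA gives you, here's roughly what a ScaledObject might look like for a containerized Function App processing a Service Bus queue. The deployment and queue names are placeholders, and the Service Bus connection would come from a separate TriggerAuthentication resource, omitted here.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-processor-scaler
spec:
  scaleTargetRef:
    name: orders-processor   # the Deployment running the containerized Function App
  minReplicaCount: 1         # keep one instance warm: no scale-to-zero cold starts
  maxReplicaCount: 10        # hard cap on scale-out
  pollingInterval: 30        # how often KEDA checks the queue (seconds)
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders
        messageCount: "64"   # target queue length per replica
```

Every one of the limitations mentioned above (cold starts, maximum instances, scaling cadence) maps onto a setting you control here, rather than being a property of the platform.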
Consumption based billing
A third key benefit of serverless is only paying for what you use. This can be particularly attractive to startups, or when you have dev/test/demo deployments of your application that sit idle for much of the time. However, the consumption-based pricing model isn't necessarily the best fit for all scenarios. Some companies prefer a predictable monthly spend, and also want to ensure costs are capped (avoiding "denial of wallet" attacks). Also, many cloud providers, such as Azure, offer significantly reduced "reserved instance" pricing, which can make a lot of sense for a major application with very high compute requirements.
Once again, Azure Functions sets a good example for how we can have a sliding scale. The "consumption" hosting plan is a fully serverless pricing model, whilst you can also host on a regular ("dedicated") App Service plan to get fixed and predictable monthly costs, with the "premium" plan offering a "best of both worlds" compromise between the two. And of course the fact that you can host on Kubernetes gives you even more options for controlling costs, and benefitting from reserved instance pricing.
Binding-based programming model
Another advantage associated with serverless programming models is the way that they offer very simple integrations to a variety of external systems. In Azure Functions, "bindings and triggers" greatly reduce the boilerplate code required to interact with messaging systems like Azure Service Bus, or reading and writing to Blob Storage or Cosmos DB.
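To illustrate just how little code the bindings model leaves you to write, here's a hypothetical function.json for a function triggered by a Service Bus queue that writes its result to Blob Storage (in compiled C# these would be attributes rather than JSON). The queue, path, and connection setting names are all placeholders.

```json
{
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "direction": "in",
      "name": "message",
      "queueName": "orders",
      "connection": "ServiceBusConnection"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "outputBlob",
      "path": "processed/{rand-guid}.json",
      "connection": "StorageConnection"
    }
  ]
}
```

All the connection management, message retrieval, and blob writing is handled by the Functions runtime; your function body just receives the message and returns the output.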
But this raises some questions. Can I benefit from this programming model even if I don't want to use a serverless hosting model? And can I benefit from serverless hosting without needing to adopt a specific programming model like Azure Functions?
The answer to both questions is yes. I can run Azure Functions in a container, allowing me to benefit from its bindings without needing to host it on a serverless platform. And we are increasingly seeing "serverless" ways to host containerized workloads (for example Azure Container Instances or using Virtual Nodes on an AKS cluster). This means that if I prefer to use ASP.NET Core which isn't inherently a serverless coding model, or even if I have a legacy application that I can containerize, I can still host it on a serverless platform.
As a side note, one of the benefits of the relatively new "Dapr" distributed application runtime is the way that it makes Azure Functions-like bindings easily accessible to applications written in any language. This allows you to start buying into some "serverless" benefits from an existing application written in any framework.
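For a flavour of how Dapr achieves this, here's a sketch of a Dapr component declaring a Service Bus queue output binding. The names and secret reference are placeholders; once the component is deployed, any application, in any language, can write to the queue with a plain HTTP call to its Dapr sidecar rather than a language-specific SDK.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: orders-queue
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
    - name: queueName
      value: orders
    - name: connectionString
      secretKeyRef:            # pulled from a secret store, not checked in
        name: servicebus-secret
        key: connectionString
```

The application then invokes the binding by POSTing to the sidecar (e.g. `http://localhost:3500/v1.0/bindings/orders-queue`), which is what makes this accessible to existing apps in any framework.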
Serverless databases
In serverless architectures, you typically prefer a PaaS database rather than hosting one yourself. Azure offers a rich choice of hosted databases, including Azure SQL Database and Azure Cosmos DB. What we've also seen in recent years is a "serverless" pricing model coming to these databases: rather than the traditional model of paying a fixed amount for a pre-provisioned amount of database compute, you pay for the compute you actually use, with database capacity automatically scaling up and down as needed.
Of course, this comes with many of the same trade-offs we discussed for scaling our compute resources. If your database scales to zero, you have a potential cold start problem. And costs could be wildly unpredictable, especially if a bug in your software resulted in a huge query load. Again, the nice thing is that you don't have to choose up front. You could deploy dev/test instances of your application with serverless databases to minimise costs, given that they may sit idle much of the time, whilst for your production deployment you pre-provision sufficient capacity for expected loads, perhaps allowing some scaling but within carefully constrained minimum and maximum levels.
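Here's a rough sketch of that deploy-time choice for Azure SQL Database using the Azure CLI (resource names are placeholders, and it assumes an existing logical server): the dev/test instance uses the serverless compute model and auto-pauses when idle, whilst the production instance is pre-provisioned at a fixed size.

```shell
# Dev/test: serverless compute, auto-pauses after an hour of inactivity
az sql db create \
  --resource-group my-rg \
  --server my-sql-server \
  --name myapp-dev \
  --edition GeneralPurpose \
  --family Gen5 \
  --compute-model Serverless \
  --min-capacity 0.5 \
  --capacity 2 \
  --auto-pause-delay 60

# Production: the same database, pre-provisioned with fixed, predictable capacity
az sql db create \
  --resource-group my-rg \
  --server my-sql-server \
  --name myapp-prod \
  --edition GeneralPurpose \
  --family Gen5 \
  --compute-model Provisioned \
  --capacity 8
```

The application connects to both in exactly the same way; only the pricing and scaling behaviour differs.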
"Serverless" does not have to be an "all-in" decision. It doesn't even need to be an "up front" decision anymore. Increasingly you can simply write code using the programming models of your choice, and decide at deployment time to what extent you want to take advantage of serverless pricing and scaling capabilities.