Dapr 1.10 - More steps in the right direction
Dapr 1.10 was announced this last week, and I've been having a play with its capabilities. You can read the announcement here. I also updated my Dapr GloboTicket sample application to use Dapr 1.10, and if you are a Pluralsight subscriber you can watch my Dapr Fundamentals course where I show how I built this sample app using a variety of Dapr building blocks.
Upgrading to Dapr 1.10
There are two parts to updating to a new version of Dapr. First is updating your Dapr CLI install, and the second is updating the Dapr runtime (which might be a "self-hosted" local installation, or might be on Kubernetes).
You can always check which version of both the CLI and the runtime you have installed with the dapr --version command:

PS C:\Users\markh\code> dapr --version
CLI version: 1.10.0
Runtime version: 1.10.0
To update the CLI, since I'm on Windows, I simply run the PowerShell install command from the docs, which will overwrite dapr.exe (which normally resides in C:\dapr) with the latest version.
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
To update your local runtime, simply uninstall and then run dapr init. I'm showing here how you can specify the runtime version you want; this defaults to the latest, but being explicit is useful if you want to try out a specific release, such as a release candidate:

dapr uninstall
dapr init --runtime-version 1.10.0
To upgrade a Kubernetes cluster, make sure your kubectl context is pointing to the cluster you want to upgrade and then use the dapr upgrade command (again you can specify a runtime version):
dapr upgrade -k --runtime-version 1.10.0
Note that after doing an upgrade, I usually restart all my services to ensure they've picked up the latest sidecar. I do this with the kubectl rollout restart deployment command:
kubectl rollout restart deployment frontend
Is Dapr "production ready"?
Often when I give talks on Dapr, people will ask me if it's "production ready" yet. My answer is that there are definitely people successfully using it in production already, and the choice of whether it's ready for your use case depends on whether the current feature set that Dapr offers meets your needs.
I still have a short list of features that I am waiting for Dapr to implement before it's ready for some of the projects I'm working on, but I think that if I were starting a brand new project I would very strongly consider using it.
The good news is that Dapr has slowly been working through many of the items on my list of "weaknesses" or "missing features", and Dapr 1.10 addresses several of them, so let me quickly mention some highlights.
One of my criticisms of Dapr was that the local dev experience is not particularly easy to set up. For each microservice you need a dapr run command to launch it. You can of course do this with a simple script (in my GloboTicket demo app I've made a PowerShell script for each microservice) which you can run in separate terminal windows. But if you are working in Visual Studio or VS Code, you need to know how to tell it to start the Dapr sidecar and attach the debugger to the correct process, which is not necessarily something that all developers know how to do.
Dapr's new "multi-app run" feature doesn't fully address this, but it is certainly a nice step in the right direction, as it introduces a workflow similar to docker compose (or Project Tye). Basically, with one command you can start up all your microservices and their Dapr sidecars, and see the aggregated logs.
Unfortunately it's not supported on Windows yet, but it's great to see that the local development experience is on the radar of the Dapr team.
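To give a feel for the workflow, here's a sketch of what a multi-app run template might look like. The app IDs, paths, and ports are purely illustrative, and since the feature is in preview the exact dapr.yaml schema may well evolve:

```yaml
# dapr.yaml - hypothetical multi-app run template (names and ports are examples)
version: 1
common:
  resourcesPath: ./components    # component definitions shared by all apps
apps:
  - appID: frontend
    appDirPath: ./frontend/
    appPort: 5000
    command: ["dotnet", "run"]
  - appID: catalog
    appDirPath: ./catalog/
    appPort: 5001
    command: ["dotnet", "run"]
```

You then start everything with a single dapr run -f . from the folder containing the template, and the logs from all the apps are aggregated in one place.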
Bulk publish and subscribe
Another criticism of Dapr is that the building blocks expose a "lowest common denominator" set of functionality, meaning that some of the powerful capabilities of the underlying component that implements a building block are not available. This might not be a big deal with some of the building block types, but messaging is a good example of one where you really feel the limitations.
For example, I use Azure Service Bus a lot, and take advantage of features such as sending scheduled messages, dead lettering, more efficient publishing by batching messages, and attaching metadata to messages. Initially most of those were not offered by Dapr, but it's good to see them slowly arrive.
Dead-lettering support was added in Dapr 1.8, and now Dapr 1.10 offers improved performance by supporting batching of messages for both sending and receiving.
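To give a feel for the publishing side, the bulk publish API (an alpha HTTP endpoint at the time of writing, POST /v1.0-alpha1/publish/bulk/{pubsub}/{topic}) takes an array of entries in a single request. The pubsub name, topic, and payloads below are just illustrative, and this sketch only builds and inspects the request body, since actually sending it requires a running sidecar:

```shell
# Build a bulk publish request body: an array of entries, each with a unique
# entryId (used to report per-message success or failure back to the caller).
cat > /tmp/bulk-orders.json <<'EOF'
[
  { "entryId": "1", "event": { "orderId": 100 }, "contentType": "application/json" },
  { "entryId": "2", "event": { "orderId": 101 }, "contentType": "application/json" }
]
EOF

# Count how many messages this single round trip would publish.
grep -c '"entryId"' /tmp/bulk-orders.json
```

With a sidecar listening on its default HTTP port, you would then send this with something like curl -X POST http://localhost:3500/v1.0-alpha1/publish/bulk/pubsub/orders -H "Content-Type: application/json" -d @/tmp/bulk-orders.json (again, "pubsub" and "orders" are placeholder names).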
I'm currently tracking a few Dapr issues for message metadata (#5179), and sending scheduled messages (#2675), which are the last two capabilities that are missing for my needs.
Of course, the disadvantage of Dapr offering additional capabilities such as these is that it increases the possibility that some of the underlying backing stores may not support all of those features. It's the inevitable "leaky abstraction" problem, and it's something that developers will have to take into account: switching to a different backing component may lose (or degrade) some capabilities that you were taking advantage of.
Pluggable component SDKs
Another criticism of Dapr is its extensibility model. The Dapr sidecar process itself contains the implementations of all components, meaning that if you wanted to create your own custom component, you'd be forced to fork the Dapr sidecar and implement it in Go. This would probably be too much of a barrier to entry for most people, so it's nice that the new "pluggable component" model allows for creating your own custom components in any language.
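My understanding is that a pluggable component runs as its own process, speaking gRPC to the sidecar over a Unix domain socket, and is then referenced from an ordinary component manifest whose type matches the socket name your component registers. The names below are hypothetical:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-state-store
spec:
  # "my-pluggable-store" would match the socket your custom component listens on
  type: state.my-pluggable-store
  version: v1
```

From the application's point of view nothing changes - it still calls the standard state store building block API.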
Although creating a custom component isn't something I'd expect to need to do very often, the fact that it's now much easier to implement one is good news for those who are concerned that Dapr might not be flexible enough for their needs.
More "stable" components and features
Another potential reason that some people might consider Dapr not to be "production ready" is the large number of features and components that were marked as "preview" or "alpha". No one wants to commit to building something with a technology that could potentially break in unexpected ways in the future.
So it's nice to see that in Dapr 1.10, several of the components (including the cron binding) have been marked as "stable", along with the resiliency policies feature.
Finally, it's great to see Dapr being expanded with additional building blocks that cover common requirements in distributed applications. And I think workflows certainly count as that - every enterprise application I've worked on has needed them in some form or other. So it's great to see that Dapr 1.10 introduces a brand new building block for workflows.
Dapr workflows are very much in preview at the moment, but they've been built in a very similar way to Azure Durable Functions. I recommend checking out the sample app to understand how they work. I expect I'll be blogging more about this feature in the future, but it's still in its early stages, so breaking changes are to be expected.
One advantage of using Dapr workflows over another workflow engine is that they can take advantage of other Dapr capabilities, such as swappable state stores for holding workflow state, and Dapr's observability and tracing.
With the addition of workflows, Dapr is getting very close to being a one-stop shop for all of your most commonly needed distributed application patterns. There's still a couple more that I'd like to see (e.g. would be very interested to see if Dapr could provide something that helped with implementing an event sourcing pattern), but the overall coverage is already looking pretty good.
It's great to see Dapr gradually addressing many of the concerns I had with it, and filling in some of the missing feature gaps. With each new version it becomes a more attractive foundation to build your distributed applications on.