Dapr provides a set of "building blocks" that greatly simplify microservice development. I've already made the case for why you should use Dapr for distributed applications, and in this post, I want to explore the options for running locally (particularly with .NET developers in mind).

There's actually quite a lot of choice, so this post is simply my current understanding of the options available and why you might pick each one. At the time of writing, Dapr 1.5 has just been released, and I'm sure that over time there will be further improvements to simplify things.

Ideally, if I'm working on a microservices application, I want it to be really easy to run the entire application locally, as well as to test and debug the microservice I'm working on in the context of the whole application.

Choice 1 - One repo or many?

One of the first choices you run into with microservices (regardless of whether you're using Dapr) is whether to put all of your microservices into a single source code repository or have one per microservice.

The advantage of keeping all microservices in one Git repo is that you've just got one thing to clone and all your code is conveniently located in one place, making it easier to find what you're looking for. The disadvantage is that as the number of microservices grows, this repo can become unwieldy. You can also find that developers inadvertently create inappropriate tight coupling between microservices, such as adding a direct project reference to another microservice's codebase in Visual Studio.

Another tricky challenge is that many CI/CD tools assume a single Git repo means a single asset to build and deploy. But with microservices you want to deploy and release each microservice independently. You may also want to tag and branch them independently in Git, which can get confusing. For that reason, a lot of teams working on microservices gravitate towards separate repos per microservice, especially as the project grows much larger.

To be honest, I can't say I know what the best approach is here. It seems that the "monorepo" is making a comeback in terms of popularity, and with a few improvements in CI/CD tooling, maybe the inherent difficulties with that approach can be overcome.

Fortunately Dapr will work with either approach, but the choice you make does have some implications for how you will start everything up for local development.

Choice 2 - Self-hosted or containers?

One of the key choices for running Dapr locally is whether you'd prefer your code to be containerized or not. Dapr supports running in "self-hosted" mode, where you simply run your microservice and the Dapr "sidecar" natively on your development machine. Any auxiliary services that implement the building blocks (such as Redis for state stores and pub/sub) can also run locally on your machine, and you might independently decide to containerize them.
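
To give an idea of what a building block definition looks like in self-hosted mode, here's a typical Dapr component file that points the state store at a Redis instance running locally. This is essentially what dapr init scaffolds into the default components folder for you; the "statestore" name is just the conventional default.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  # points the state store building block at a locally running Redis instance
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""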

But you can go all-in with containers, and have your own code running in containers. Whether you choose this approach will depend on factors like how comfortable your development team are with using tools like Docker Compose or Kubernetes. They'll need to know how to debug code running inside containers. Now that Docker Desktop has become a commercial product, you may also not be able to use it without purchasing licenses for your team.

Containers choice: Docker Compose or Kubernetes?

If you do decide to go with running your microservices locally as containers, there are two approaches I've seen with Dapr. One is to construct a Docker Compose file that has a container for each microservice, plus a Dapr sidecar for each microservice, and any additional services such as Redis and Zipkin. The nice thing about this is that the Docker Compose file can either point at the source code for each microservice, or can reference pre-built images in a Docker registry, meaning that if you only care about working on a single microservice, you don't need to build the code for all the others.

The disadvantage of the Docker Compose method at the moment is that it requires a bit of expertise with the Docker Compose syntax to set it up properly. You need to ensure you correctly set up ports and networking so that each service can reach the others on the expected host names ("localhost" gets particularly confusing), and you also need to map your Dapr component definitions into the right place. Of course, once you've got it working for the first time, things become easier. But I did find myself taking a lot longer than I hoped to get this running when I first tried it (due mostly to silly mistakes).

Here's a snippet of a Docker Compose file I set up for a demo application I have been using to explore Dapr. It shows one microservice called "frontend" along with the definition I'm using for the Dapr sidecar.

  frontend:
    image: ${DOCKER_REGISTRY-}frontend
    build:
      context: .
      dockerfile: frontend/Dockerfile
    environment:
      # tell the app which port its Dapr sidecar is listening on
      - DAPR_HTTP_PORT=3500
    networks:
      - globoticket-dapr

  frontend-dapr:
    # the Dapr sidecar for the "frontend" service
    image: "daprio/daprd:1.5.0"
    command: [
      "./daprd",
      "-app-id", "frontend",
      "-app-port", "80",
      "-components-path", "/components",
      "-config", "/config/config.yaml"
      ]
    volumes:
      # mount the Dapr component and configuration definitions into the sidecar
      - "./dapr/dc-components/:/components"
      - "./dapr/dc-config/:/config"
    depends_on:
      - frontend
    # share the frontend container's network namespace so the app and its
    # sidecar can reach each other on localhost
    network_mode: "service:frontend"

If you'd like to see a full example of a Docker Compose file that can be used for Dapr, then this one, which is part of the eShopOnDapr sample application, would be a good choice.

The alternative is to run your Dapr containers on Kubernetes. This has a lot of advantages. First, if you're also using Kubernetes in production, then you've minimised the difference between development and production environments, which is always a good thing. Second, the Dapr CLI contains a number of helpful tools for installing Dapr onto a Kubernetes cluster and provides a dashboard. Third, if you run on Kubernetes, you can choose to use the single-node Kubernetes cluster managed by Docker Desktop, or point at a cloud-hosted or shared cluster.
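
To give a rough idea of what this looks like, sidecar injection on Kubernetes is driven by annotations on your deployment's pod template. A minimal deployment for the "frontend" service from my demo app would be something like the sketch below (the image name and port are just placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        # these annotations tell Dapr to inject a daprd sidecar container
        # into the pod, with the given app id and app port
        dapr.io/enabled: "true"
        dapr.io/app-id: "frontend"
        dapr.io/app-port: "80"
    spec:
      containers:
      - name: frontend
        image: frontend:latest
        ports:
        - containerPort: 80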

The main disadvantage of the Kubernetes approach again is the level of knowledge required by developers. Kubernetes is extremely powerful but can be perplexing, and it takes some time to become familiar with the format of the YAML files needed to define your deployments. Developers would need to understand how to debug code running in a Kubernetes cluster.

I'm hopeful that the Dapr tooling will improve in the future to the point that it can intelligently scaffold a Docker Compose file for you. It's possible that there is something already available that I don't know about, so let me know in the comments.

Self-hosted choice: startup scripts or sidekick?

If you choose the self-hosted route, then for every microservice you start locally, you also need to start a Dapr sidecar process. The easy way to do this is to write a script that calls dapr run, passing in the various port numbers and the locations of the Dapr component definitions and configuration, followed by whatever command starts up your microservice (in my case dotnet run). Then you just run this script for every microservice in your application, and attach your debugger to the process of the app you're working on.

Here's an example of a PowerShell script I have to start one of the microservices in my demo application:

dapr run `
    --app-id frontend `
    --app-port 5266 `
    --dapr-http-port 3500 `
    --components-path ../dapr/components `
    dotnet run

There is however another nice option I discovered when watching the recent (and excellent) DaprCon conference. The "Dapr Sidekick" project is a community-created utility that allows your application to automatically launch the Dapr sidecar process on startup (plus some additional nice features such as restarting the sidecar if it goes down). This would be a particularly great option if you're using Visual Studio for development, as it would simplify the task of starting up the microservices and automatically attaching the debugger. It would also make a lot of sense if you were running "self-hosted" Dapr in production (which I think was one of the key motivations for creating Dapr Sidekick).

Choice 3 - Visual Studio Code or Visual Studio?

If, like me, you're a .NET developer, then the two main development environments you're likely to be choosing between are Visual Studio 2022 and VS Code.

Visual Studio Code has the advantage of being cross-platform, so would make sense if some or all of your team aren't using Windows. And there is a VS Code Dapr extension that comes with a bunch of helpful convenience features like scaffolding Dapr debugging tasks and components, and interacting with some of the building blocks. This makes VS Code an excellent choice for working on Dapr projects.

However, your dev team may be more familiar with Visual Studio, so I also tried developing with Dapr in Visual Studio 2022. The main challenge I found for running in self-hosted mode was that VS2022 doesn't seem to offer an easy way to use dapr run instead of dotnet run to start up services. As mentioned above, Dapr Sidekick is a potentially good solution to this. I also tried the Docker Compose approach in VS2022. Visual Studio can automatically scaffold Dockerfiles and Docker Compose orchestration files for you, which gives you a great start and simplifies your work considerably. You do unfortunately have to add in all the sidecars yourself, and make sure you get the networking right. After several failed attempts I finally got it working, so it is possible, and the advantage of this approach is that you can now just put breakpoints in any of your microservices and you'll hit them automatically.

Choice 4 - Entirely local or shared cloud resource?

The final choice I want to discuss in this post is whether you want to run all your microservices (and all the Dapr component services) locally on your development machine. There are advantages to doing so - you don't incur any cloud costs, and you have your own sandboxed environment. But as a microservices application grows larger, you may find that running the entire thing on a single developer machine simply uses too much RAM.

One way of reducing the resources needed to run locally is for all your dependent services such as databases and service buses to be hosted elsewhere. If you are accessing these via Dapr building blocks, then it's a trivial configuration change to point them at cloud resources.
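
For example, to swap the pub/sub building block from a local Redis container to Azure Service Bus, you just replace the component definition with one along these lines (the connection string is a placeholder, and in practice you'd want it to come from a secret store rather than being pasted into the file):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  # use Azure Service Bus topics instead of a local Redis container
  type: pubsub.azure.servicebus
  version: v1
  metadata:
  - name: connectionString
    # placeholder - prefer a secret store reference in real projects
    value: "<your Service Bus connection string>"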

But you might want to go one step further and start cloud-hosting some of the microservices themselves. However, I'm not sure that the Dapr service invocation components have particularly strong support for a hybrid mode yet (where some microservices run locally and others elsewhere), so it might make more sense to use a cloud-hosted Kubernetes cluster to run the whole thing, and then debug into that. One interesting option is to make use of "Bridge to Kubernetes", which allows you to run your microservice locally but all the other microservices in Kubernetes, and automatically handles the correct routing of traffic between them. Check out this demo from Jessica Deen to see this in action with Dapr and Visual Studio Code.

Other options

There are a few other possible options worth exploring. One is Project Tye, which is a very promising proof-of-concept project that is particularly good at simplifying the startup of many microservices. I think it could work well with Dapr (and there is a sample showing Tye integrated with Dapr), but Tye is still considered "experimental" at the moment. Hopefully it will continue to develop, or the good ideas from Tye can be incorporated into other tools.
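
From what I remember of that sample, Tye is driven by a tye.yaml file, and the Dapr integration is switched on with an extension entry, roughly like this (the application name, service names and project paths here are just illustrative):

name: globoticket
extensions:
# enables the Tye Dapr extension, which launches a sidecar per service
- name: dapr
services:
- name: frontend
  project: frontend/frontend.csproj
- name: catalog
  project: catalog/catalog.csproj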

The second is a new Azure service, Azure Container Apps, which is currently in preview. It is a very interesting service that simplifies hosting containerized microservices and offers a serverless billing model. Under the hood it uses Kubernetes, but the complexity is abstracted away from you. And it comes with built-in support for Dapr - you just specify that you want to enable Dapr and the sidecars will automatically be injected. I'm quite excited by this service, and assuming it's not too hard to debug into, it could be a great option for development as well as production.

Gotchas

One gotcha I ran into with running Dapr locally is that the built-in mDNS service discovery mechanism can conflict with VPNs and other security software in corporate environments. There's an open issue on the Dapr GitHub project to offer a simple way of working round this problem (currently you need to use Consul instead).
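
For reference, the Consul workaround involves running a Consul agent locally and then switching the name resolution component in your Dapr configuration file, along these lines (based on the Dapr name resolution docs):

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  nameResolution:
    # use Consul instead of the default mDNS for service discovery
    component: "consul"
    configuration:
      selfRegister: true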

Summary

I really like what Dapr has to offer in terms of simplifying microservice development, but if you want to use it you will need to take some time to decide which mode of local development works best for you and your team. Are you using Dapr for a microservices project? I'd be really interested to hear what choices you've made for running locally.

Want to learn more about how Dapr can greatly simplify the task of building microservices? Be sure to check out my Pluralsight course, Dapr 1 Fundamentals.

Comments

Comment by Kyle

Giving this a try today using a docker-compose setup. I noticed that I had to restart the Dapr sidecar for it to pick up my code changes (some new pubsub topics). Did you find that issue, too?

Kyle