
I spent last week on holiday in France, with minimal internet connection, so it gave me a chance to do some reading, and I chose Building Microservices by Sam Newman to keep me entertained. I was hoping it would answer some of my questions about how to practically implement a microservice architecture. The book was very helpful, but didn't answer all my questions. So here are some 'microthoughts' of my own on microservices.

Why use microservices? For me, the two big promises are better maintainability (versus creating a gigantic 'monolith'), and better scalability (can scale up services individually).

But aren't microservices just as complex as a monolith? In a way, yes. Deployment is a whole lot harder, and there are challenges with making it easy for developers to debug and trace things through the system. But the idea is that these added complexities are more than outweighed by the ease with which individual services can be worked on in isolation. Monoliths are probably easier to work with up to a certain size, and then the balance will start to tip in favour of microservices being easier to work with as the system continues to grow and evolve over time.

How should the microservices communicate with each other? Surprisingly, there seems to be no consensus on this. Should microservices talk to each other via RESTful APIs? Or maybe they should put messages in queues or post to an event bus? Should they know who their collaborators are, or should they simply report "what happened" and trust the appropriate service(s) to respond appropriately? Sam Newman calls this orchestration vs choreography. So simply deciding to use "microservices" still leaves you with some big architectural decisions to make up front.
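To make the choreography option a bit more concrete, here's a tiny Python sketch (the service behaviours and the "order_placed" event are made-up examples, not from any particular framework). The publishing service just announces what happened; it has no idea who reacts:

```python
# A minimal in-memory sketch of choreography: services subscribe to
# events on a bus, and the publisher never calls collaborators directly.

class EventBus:
    """Toy event bus: handlers subscribe to event names and react."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_name, handler):
        self.subscribers.setdefault(event_name, []).append(handler)

    def publish(self, event_name, payload):
        for handler in self.subscribers.get(event_name, []):
            handler(payload)

bus = EventBus()
log = []

# Two hypothetical downstream services react to the same event:
bus.subscribe("order_placed", lambda e: log.append(f"email sent for {e['id']}"))
bus.subscribe("order_placed", lambda e: log.append(f"points awarded for {e['id']}"))

def place_order(order_id):
    # ...persist the order, then simply report "what happened"
    bus.publish("order_placed", {"id": order_id})

place_order("A42")
print(log)  # both services reacted without the publisher knowing about them
```

In the orchestration style, by contrast, `place_order` would call the email and loyalty services explicitly, which is easier to follow but couples the publisher to its collaborators.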

Should each microservice be owned by a team? The chapter on Conway's law was one of the most interesting parts of Newman's book. The way you structure your teams will inevitably affect your architecture. Tightly coupled monoliths tend to be developed by large teams with no strict separation of responsibilities. So if you want properly independent microservices, then you need to give ownership of each one to a single team / custodian. Otherwise your theoretically replaceable services will turn out to be tightly coupled to other services. But in many companies, developers are expected to be free to make changes anywhere in the system, so this transition may be hard to make.

How do we secure our microservices? In some ways, there is nothing different about security for microservices versus a monolith. We still need to use many of the same techniques of authentication, authorization and cryptography. But microservices do bring additional challenges, such as whether we can trust "internal" network traffic between services, or whether we need to constantly keep authenticating and authorizing for each inter-service call. It would be nice to see a set of "best practices" or guidelines emerge for this.
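As one illustration of the "authenticate every inter-service call" option, here's a hedged Python sketch where each call carries a signed token that the receiving service verifies before doing any work. The shared-secret HMAC scheme and the claim strings are purely illustrative; in practice you might use JWTs with asymmetric keys, or mutual TLS, instead:

```python
# Illustrative only: each inter-service request carries claims plus a
# signature, and the receiver verifies before trusting the caller.
import hmac
import hashlib

SHARED_SECRET = b"demo-secret"  # in reality, injected from secure configuration

def sign(claims: str) -> str:
    """Produce an HMAC-SHA256 signature over the claims string."""
    return hmac.new(SHARED_SECRET, claims.encode(), hashlib.sha256).hexdigest()

def verify(claims: str, signature: str) -> bool:
    """Check the signature; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(claims), signature)

token = sign("caller=billing;scope=read-orders")
print(verify("caller=billing;scope=read-orders", token))  # True: accepted
print(verify("caller=evil;scope=admin", token))           # False: rejected
```

The trade-off the paragraph above hints at is exactly this: verifying on every hop costs latency and complexity, while trusting the internal network avoids that cost but widens the blast radius if the network is breached.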

Does it cost more to host microservices? One issue that was not really addressed in Newman's book was the cost implications of microservices. In monolith world, we pay for one big server, and one big database. In microservices world, we might have many services and many databases, resulting in having to rent multiple servers in the cloud. Obviously the theory is that the improved scalability of a microservices architecture will eventually allow for big cost savings, but again, I'm not sure whether that benefit is realised immediately - your system probably needs to grow to a certain size for it to start to become worthwhile.

How do we deploy microservices? Deployment is one of the biggest challenges of building a microservices system. The key is of course to automate everything, including deployment to any internal staging systems you may have. The other big paradigm shift for those used to the monolith world, is that you must be able to deploy services individually, rather than replacing everything in one hit. This of course means that there must be version tolerance baked into the way your services communicate with one another. If you always deploy all the services in one go, then I'd say you're still actually building a monolith.
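One common way to bake in that version tolerance is the "tolerant reader" pattern: a consumer reads only the fields it needs, supplies defaults for optional ones, and silently ignores anything it doesn't recognise. Here's a small Python sketch (the message fields are hypothetical):

```python
# Tolerant reader sketch: a newer producer can add fields without
# breaking older consumers, so services can be deployed individually.
import json

def handle_order_message(raw: str) -> str:
    msg = json.loads(raw)
    order_id = msg["orderId"]                  # the one field we require
    currency = msg.get("currency", "GBP")      # optional, added in a later version
    # any other fields (e.g. "giftWrap") are simply ignored
    return f"processing {order_id} in {currency}"

# An old v1 message and a newer v2 message are both handled:
v1 = '{"orderId": "A42"}'
v2 = '{"orderId": "A43", "currency": "EUR", "giftWrap": true}'
print(handle_order_message(v1))  # processing A42 in GBP
print(handle_order_message(v2))  # processing A43 in EUR
```

The same idea applies whether the messages travel over REST, queues or an event bus: consumers that demand an exact schema force lock-step deployments, which is the monolith in disguise mentioned above.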

Just one production system? Almost everyone talking about microservices is doing it in an environment where there is just one live production deployment of the software. But what about systems where every customer needs their own completely independent instance of the microservices solution? In this case, you may have hundreds of live deployments to maintain, each one potentially with a different collection of versions of the microservices. Unless you force all your customers to keep up to date with the latest versions of all services, you could end up with an even bigger versioning headache than you have when you deploy a monolithic solution to multiple customers.

How do we make it easy for developers to debug? This was a question on which I was hoping for more information in Newman's book. In monolith world, developers are used to being able to run the whole thing on their laptop - client, server and database. But that becomes infeasible in a world of microservices. So how can they do it? Must they remote debug against a staging system? Or must they mock out all their collaborators? I'd love to read more about how teams building microservices are coping with this.
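The "mock out all their collaborators" option usually means depending on an interface rather than a concrete client, and swapping in an in-memory fake on the laptop. A Python sketch of the idea (the inventory service and its methods are invented for illustration):

```python
# Depend on an abstraction; use a fake collaborator for local debugging.
from abc import ABC, abstractmethod

class InventoryClient(ABC):
    @abstractmethod
    def reserve(self, sku: str, qty: int) -> bool: ...

class HttpInventoryClient(InventoryClient):
    """The real thing: would call the inventory microservice over HTTP."""
    def reserve(self, sku: str, qty: int) -> bool:
        raise NotImplementedError("real network call, not usable on a laptop")

class FakeInventoryClient(InventoryClient):
    """In-memory stand-in so the service under test runs with no network."""
    def __init__(self, stock):
        self.stock = stock
    def reserve(self, sku: str, qty: int) -> bool:
        if self.stock.get(sku, 0) >= qty:
            self.stock[sku] -= qty
            return True
        return False

def place_order(inventory: InventoryClient, sku: str, qty: int) -> str:
    return "accepted" if inventory.reserve(sku, qty) else "out of stock"

fake = FakeInventoryClient({"widget": 5})
print(place_order(fake, "widget", 2))  # accepted
print(place_order(fake, "widget", 9))  # out of stock
```

This gets one service debuggable in isolation, but it doesn't answer the harder question of tracing a request across several real services, which is where the staging-system option comes in.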

Where is configuration stored? With multiple services, each one will need to pick up configuration settings, such as connection strings to data stores, network addresses of collaborating services, cryptography keys and so on. There is a challenge to setting up the automated deployments to configure the services with the right information for the environment we are running in, and to keep all secrets out of source code. Again, this is an area in which I hope some "best practices" will emerge, rather than everyone having to invent their own solution to this problem.
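One convention that has since become popular (the "twelve-factor app" approach) is to inject all per-environment settings and secrets as environment variables at deploy time, so nothing sensitive lives in source control. A hedged Python sketch, with invented setting names:

```python
# Sketch: read settings from the environment, failing loudly at startup
# if a required secret is missing rather than at first use.
import os

REQUIRED = ("ORDERS_DB_CONN", "PAYMENTS_URL")  # hypothetical setting names

def load_config(env=os.environ):
    missing = [k for k in REQUIRED if k not in env]
    if missing:
        raise RuntimeError(f"missing required settings: {missing}")
    return {
        "db_conn": env["ORDERS_DB_CONN"],        # secret: injected at deploy time
        "payments_url": env["PAYMENTS_URL"],     # collaborator address per environment
        "log_level": env.get("LOG_LEVEL", "INFO"),  # optional, with a default
    }

cfg = load_config({"ORDERS_DB_CONN": "Server=db;Database=orders",
                   "PAYMENTS_URL": "https://payments.internal"})
print(cfg["log_level"])  # INFO
```

The automated deployment then becomes responsible for setting those variables differently per environment, which keeps the service binaries identical from staging through to production.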

Where are the microservices exemplars? One of the ideas I really liked in Newman's book was the concept of "exemplars". Basically you identify one or more existing services as "exemplars" that other new services should copy. It can show how things like configuration, communication, monitoring, security, deployment, and logging should be done to avoid the creators of every new service having to reinvent it. It allows people creating a new service to do so very quickly, and keep their focus on the specifics of what that service needs to do, rather than getting bogged down writing a lot of framework related boilerplate code.

But what I'd like to see is some open source "exemplar" microservices, that could bootstrap the process of creating your own microservices solution. They could demonstrate some best/recommended practices for security, logging, monitoring, configuration management etc. Maybe these already exist and I've not yet stumbled across them. Perhaps it's because I'm in .NET world, and a lot of people building microservices seem to be in Linux land. I'd love to examine the code of an open source .NET microservices exemplar project that deploys to Azure. Let me know in the comments if such a thing exists.

Want to learn more about how to architect and build microservices applications? Be sure to check out my Pluralsight course Microservices Fundamentals.

Comments

Comment by Mark Heath

thanks Kevin, I'll check those out

Comment by Sendhil Kumar R

Very thoughtful post. Nicely articulated.
Thanks for sharing.
Regards,
Sendhil

Comment by Bill Strong

So MicroServices are just a name for something that has been around for ages. Essentially, it is structuring your application in reusable components, and clearly defining the structure to allow it to be deployed in a more versatile manner.
It is the same thought process you go through when you decide which functions should go into this DLL, or library, you are just placing the processing portion on a server. As long as you define the interface between the objects strictly, and decouple code that doesn't need to be in the same place, you can take the same design from a program on one computer with one executable to a program that spans thousands of servers.
The technique is what matters, the clean design that allows for this break up of functionality, not the implementation details. Those should be chosen on a per project basis, depending on the need.
Does your application have a JSON file it already needs to read? Instead of introducing a new XML format, stick with a JSON based message passing format. You should stop thinking of these things as separate techniques, and use the knowledge gained from the history of computing these types of applications to give you algorithms that work; the implementation details are what you get paid for deciding.
