
I'm really pleased to announce that my latest Pluralsight course "Microservices Fundamentals" has been released. This course takes you on a journey through some of the key decisions you'll need to consider when adopting a microservices approach, starting with architecting microservices, then moving on to building, securing and delivering microservices, as well as looking at options for communication between microservices.

Of course, that's a lot of ground to cover in just over two hours, so I can't go into great depth on any one topic in particular, but I hope it will prove helpful for teams considering microservices to make decisions on which principles and practices are a good fit in their context.

To be honest, it is quite a daunting task to produce a training course on a topic as broad-ranging as microservices. There isn't one tech stack or even one set of architectural patterns that microservices require you to adopt. I'm also well aware that there are many tools and techniques for building microservices that I've never used, so my focus in this course is sharing some of my experience (both good and bad) of attempting to adopt microservices. I've learned a lot over the last few years, but there's a lot more to learn, and the whole area of microservices is experiencing rapid change, with lots of innovation like the recently announced dapr project.

I wanted to illustrate what I was teaching in the course by referring to a sample microservices application, and I settled on the idea of using the eShopOnContainers reference microservices application from Microsoft. This is an open source project that illustrates a wide variety of the techniques and approaches that I wanted to discuss in the course. The sample application uses ASP.NET Core and Docker, but of course neither is a requirement for microservices, so my focus is less on the specifics of the code and more on the architectural choices and patterns. It does, however, serve as a great illustration of how containerizing your microservices greatly simplifies the task of getting things running locally.

Anyway, I hope you find the Microservices Fundamentals course helpful if you're considering adopting microservices. I'd love to hear your stories of the challenges and successes you're having with microservices, as well as your feedback about the course. I'm also hoping to contribute a follow-up course in the Pluralsight microservices learning path, so watch this space for further updates.



One of the great things about Git is how easy it makes merging. Two developers can work on the same file and in most cases, the merge algorithm will silently and successfully combine their changes without any manual intervention required.

But merging is not magic, and it's not bulletproof. It's possible for changes to conflict (e.g. two developers edit the same line), and it's also possible for changes that don't strictly "conflict" to nevertheless cause a regression.

Regressions due to merges can be very frustrating, so here are five tips to avoid them.

1. Little and often

Merges are more likely to be successful if they are performed regularly. Wherever possible avoid long-lived branches and instead integrate into the master branch frequently. This is the philosophy behind "continuous integration": frequent merges allow us to rapidly detect and resolve problems. This might require you to adopt techniques such as "feature flags", which allow in-progress work to be present but inactive in the production branch of the code. If you absolutely cannot avoid using a long-lived feature branch, then at least merge the latest changes from master into your feature branch on a regular basis, to avoid a "big bang" merge of several months of work once the feature is complete.
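As a very rough sketch of what a feature flag can look like (all the names here are hypothetical), the in-progress code path ships to master but stays inactive until the flag is switched on:

using System;

public static class FeatureFlags
{
    // in a real system this would come from configuration or a feature flag service
    public static bool NewCheckoutEnabled =>
        Environment.GetEnvironmentVariable("FEATURE_NEW_CHECKOUT") == "true";
}

public class CheckoutService
{
    public void Checkout(Order order)
    {
        if (FeatureFlags.NewCheckoutEnabled)
        {
            // in-progress work, merged frequently but dormant in production
            NewCheckout(order);
        }
        else
        {
            LegacyCheckout(order);
        }
    }

    private void NewCheckout(Order order) { /* ... */ }
    private void LegacyCheckout(Order order) { /* ... */ }
}

public class Order { }

The benefit is that both code paths live on master, so everyone merges against the latest code, and the new path can be switched on per environment when it's ready.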

2. Pay special attention to merge conflicts

Git identifies changes that cannot be automatically merged as "conflicts". These require you to choose whether to accept the changes from the source or target branch, or whether to rewrite the code in such a way that incorporates the modifications from both sides (which is usually the right choice).
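For example, if one branch changed a line to subtract a discount and another branch changed the same line to apply tax, the right resolution is usually a new line that does both (a purely hypothetical illustration):

// one branch changed this line to subtract a discount, the other to apply tax;
// the resolution combines both intentions rather than simply picking a side
var total = (price * quantity - discount) * (1 + taxRate);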

Sometimes, due to unfortunate characteristics of the development tools in use, you can find that certain high-churn code files are constantly producing conflicts. In older .NET Framework projects, for example, csproj and packages.config files would constantly require manual merges. The volume of these trivial conflicts can cause developers to get lazy, and start resolving them too rapidly without due care and attention.

Whenever your code conflicts with someone else's changes, find out who made the conflicting changes. They should be involved in the merge process. I recommend "pair merging" where possible, where you agree together on the resolution of the conflict before completing the merge. But if that's not possible, at least make contact with the author of the conflicting change, and ask them to specifically review the changes to conflicting files.

3. Code reviews

Code reviews are also an important part of avoiding regressions. I recommend using a "pull request" process, where no code gets into the master branch without going through a code review. If any merge conflicts are involved, then all authors whose code conflicted with your changes should be invited to the code review, in addition to whoever is usually invited.

I also recommend that in the pull request description, you should explicitly highlight areas of special concern. This is especially important if a code review contains many files, as it's possible for reviewers to get code review fatigue after looking through the first few hundred changes, and start missing important things. Make sure reviewers are aware of the high-risk areas of change, which includes any merge conflicts.

4. Unit tests

Unit tests have several benefits, but one particularly valuable one is that they protect us against regressions. If your feature gets broken by someone else's merge, it's very easy to point the finger of blame at them, but you should first ask yourself the question "why didn't I write a unit test that could have detected this?" Undetected regressions indicate gaps in automated test coverage.
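For example, a small xUnit-style test like this (all names hypothetical) pins down the behaviour your feature depends on, so a bad merge breaks the build rather than production:

using Xunit;

public class DiscountCalculatorTests
{
    [Fact]
    public void LoyalCustomersGetTenPercentDiscount()
    {
        var calculator = new DiscountCalculator();

        var discounted = calculator.Apply(price: 100m, isLoyalCustomer: true);

        // if a merge accidentally changes the discount rule, this fails immediately
        Assert.Equal(90m, discounted);
    }
}

public class DiscountCalculator
{
    public decimal Apply(decimal price, bool isLoyalCustomer) =>
        isLoyalCustomer ? price * 0.9m : price;
}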

5. M&M's (Microservices & Modularization)

If you're performing many merges, it's because many developers are working on the same codebase. Which probably means that you have a large "monolith". One of the benefits of adopting a microservices architecture, and extracting components out into their own modules (e.g. NuGet packages in .NET), is that it makes it much easier for different teams working on different features to do so without stepping on each other's toes. So lots of merge conflicts may indicate that your service boundaries are in the wrong place, or you have code with too many responsibilities.

Collective responsibility

It's important to recognize that these five suggestions are not all aimed at the individual who performs the merge. In fact, only #2 is directly aimed at the merger. The others are the shared responsibility of the rest of the development team. And these five suggestions aren't the only ways we can reduce the likelihood of merge conflicts. If you're interested in more on this topic, I've also written about how applying principles like the Open-Closed Principle and the Single Responsibility Principle results in code that is easier to merge.



Last week I was tasked with tracking down a perplexing problem with an API - every call was returning a 500 error, but there was nothing in the logs that gave us any clue why. The problem was not reproducible locally, and with firewall restrictions getting in the way of remote debugging, it took me a while to find the root cause.

Once I had worked out what the problem was, it was an easy fix. But the real issue was that we had some code paths where exceptions could go unlogged. After adding exception logging to some places that had been missed, it became immediately obvious what was going wrong. Had this issue manifested itself on a production system, we could have been looking at prolonged periods of downtime, simply because we weren't logging these exceptions.

So this post is a quick reminder to check all your services - are there any places where exceptions can be thrown that don't end up in your logs? Here are three places to check.

Application startup

When an application starts up, one of the first things you should do is create a logger, and log a "Starting up" message at informational level. This is invaluable as a quick sanity check that your application code did in fact start running and that it is correctly configured for logging.

I also like to log an additional message once all the startup code has completed. This alerts you to any problems with your service only managing to get half-way through initialization, or if there is a long-running operation hidden in the start-up code (which is usually a design flaw).

Of course, you should also wrap the whole startup code in an exception handler, so that any failures to start the service are easy to diagnose. Something like this is a good approach:

public static void Main()
{
    // create the logger first, so that even a failure during startup can be logged
    var logger = CreateLogger();
    try
    {
        logger.Information("Starting up");
        Startup();
        logger.Information("Started up");
    }
    catch (Exception ex)
    {
        logger.Error(ex, "Startup error");
    }
}

Middleware

In our particular case, the issue was in the middleware of our web API. This meant the exception wasn't technically "unhandled" - a lower level of the middleware was already catching the exception and turning it into a 500 response. It just wasn't getting logged.

Pretty much all web API frameworks provide ways for you to hook into unhandled exceptions, and perform your own custom logic. ASP.NET Core has exception middleware that you can customize, and the previous ASP.NET Web API allows you to implement a custom IExceptionHandler or IExceptionLogger. Make sure you know how to do this for the web framework you're using.
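As a rough sketch of what this can look like in ASP.NET Core (the class name and log message here are my own invention, but the pattern is a standard one), a small piece of middleware early in the pipeline can log and re-throw, so nothing further down can swallow the exception unseen:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class ExceptionLoggingMiddleware
{
    private readonly RequestDelegate next;
    private readonly ILogger<ExceptionLoggingMiddleware> logger;

    public ExceptionLoggingMiddleware(RequestDelegate next,
        ILogger<ExceptionLoggingMiddleware> logger)
    {
        this.next = next;
        this.logger = logger;
    }

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await next(context);
        }
        catch (Exception ex)
        {
            // make sure the exception reaches the logs before anything else
            // turns it into a 500 response
            logger.LogError(ex, "Unhandled exception for {Path}", context.Request.Path);
            throw;
        }
    }
}

// registered near the start of the pipeline, e.g. in Startup.Configure:
// app.UseMiddleware<ExceptionLoggingMiddleware>();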

Long-running threads

Another place where logging can be forgotten is in a long-running thread such as a message pump that's reading messages from a queue and processing them. In this scenario, you probably have an exception handler around the handling of each message, but you also need to log any exceptions at the pump level - e.g. if it loses its connection to the message broker, you don't want it to die silently and end up no longer processing messages.

In this next sample, we've remembered to log exceptions handling a message, but not exceptions fetching the next message.

while(true)
{
    // don't forget to handle exception that happen here too!
    var message = FetchNextMessage(); 
    try
    {
        Handle(message);
    }
    catch(Exception ex)
    {
        logger.Error(ex, "Failed to handle message");
        // don't throw, we want to keep processing messages
    }
}
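One way to close that gap (a sketch only, reusing the hypothetical FetchNextMessage, Handle and logger from the sample above) is to give the fetch its own try/catch, so that broker failures are logged and retried rather than silently killing the pump:

while (true)
{
    Message message;
    try
    {
        message = FetchNextMessage();
    }
    catch (Exception ex)
    {
        // e.g. lost connection to the broker - log it, back off briefly, and retry
        logger.Error(ex, "Failed to fetch next message");
        Thread.Sleep(TimeSpan.FromSeconds(5));
        continue;
    }

    try
    {
        Handle(message);
    }
    catch (Exception ex)
    {
        logger.Error(ex, "Failed to handle message");
        // don't throw, we want to keep processing messages
    }
}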

You might already have this

Of course, some programming frameworks and hosting platforms have good out-of-the-box logging baked in, which saves you the effort of writing this yourself. But it is worth double-checking that you have sufficient logging of all exceptions at whatever point they are thrown. An easy way to do this is to throw a few deliberate exceptions in various places in your code (e.g. MVC controller constructor, middleware, application startup, etc.), and double-check that they find their way into the logs. You'll be glad you did so when something weird happens in production.
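For instance, a throwaway controller like this (entirely hypothetical, purely for verifying the logging pipeline, and deleted once you've checked) makes it easy to confirm that constructor exceptions actually reach your logs:

using System;
using Microsoft.AspNetCore.Mvc;

public class LoggingSmokeTestController : ControllerBase
{
    public LoggingSmokeTestController()
    {
        // temporary: hit any action on this controller and confirm this shows up in the logs
        throw new InvalidOperationException("Deliberate test exception from controller constructor");
    }

    [HttpGet("/logging-smoke-test")]
    public IActionResult Get() => Ok();
}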

In a world of microservices, observability is more critical than ever, and ensuring that all exceptions are adequately logged is a small time investment that can pay big dividends.