Monoliths & Microservices


An opinionated overview

Ever since diving into the software development world, I have been troubled by a duality:

On the one hand, I have built and operated many services described as monolithic with relative ease; on the other hand, I am constantly told that I, and others, should build microservices instead because they are better in a variety of ways.

With this post I’m going to compare both software architectures by looking at the key benefits often associated with microservices, as well as some additional considerations I think are important.

TL;DR

Neither style wins outright: the benefits commonly claimed for microservices come with real costs, and depending on your team, budget and requirements a monolith, or a hybrid of both, may serve you best.

Benefits of microservices?

Throughout the internet you see many assertions about why microservices are better. I have collected the most common ones for us to analyze:

  • No technology lock-in
  • Faster deployments
  • Scalability
  • Smaller problem domains
  • Isolated faults

These are not false promises, but they are not automatic advantages when using microservices either. Let’s break them down one by one, looking at the claims, any possible benefits and the related costs and considerations.

No technology lock-in

In a monolithic application you are usually bound to the language and frameworks chosen at the start of the project; changing them in favour of a newer, better or more secure language ranges from hard to practically impossible.

With microservices, on the other hand, you can absolutely pick Python for service A, Java for service B, and maybe even rewrite service C in a completely new language if that turns out to be better suited for its function.

In practice, however, the benefits of a mixed and highly variable tech stack can easily be offset by the additional costs of supporting a multitude of language-specific tools, the infrastructure required to run completely different services, and the lost flexibility in allocating developers between services.

Faster deployments

In a monolithic application you either deploy all of it or you don’t deploy anything; you have one package, and that’s it.

With microservices you may be able to deploy each service independently, matching its rate of change or pushing out critical bug fixes without affecting the others.

In practice this claimed benefit depends on some architectural prerequisites, like strictly decoupling services and gracefully handling breaking changes in APIs. It can certainly be achieved, but doing so incurs additional costs; otherwise you will need to coordinate releases between services, giving you a monolith in all but name.
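To make the second prerequisite concrete, here is a minimal sketch of service B, assuming a small Flask app (Flask, the route names and the data are my own illustration, not from the original text): a breaking response change is published under /v2 while /v1 stays stable, so consumers can upgrade and deploy on their own schedule.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory data; a real service would query its database.
ORDERS = [{"id": 1, "total_cents": 1299}, {"id": 2, "total_cents": 450}]

@app.get("/v1/orders")
def orders_v1():
    # The original contract: totals as floats in the main currency unit.
    return jsonify([{"id": o["id"], "total": o["total_cents"] / 100} for o in ORDERS])

@app.get("/v2/orders")
def orders_v2():
    # The breaking change (integer cents) lives behind a new version instead
    # of silently altering /v1 and forcing a coordinated release.
    return jsonify(ORDERS)

if __name__ == "__main__":
    app.run(port=5001)
```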

Scalability

This one is a favourite of mine. Simply put: if you require more capacity for service A, you scale up service A. In a monolithic application you scale up the whole thing or you don’t scale up anything at all.

Personally I think scalability has very little to do with this architectural choice for most applications not serving Google-levels of requests. Load-balancing and scaling a monolithic application requires comparable tooling to load-balancing and scaling a set of microservices.

In addition there are ways to use and scale multiple deployments of your monolith for different request paths, achieving load-splitting in a way similar to microservices without incurring the associated costs.
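As a sketch of what such request-path load-splitting could look like, assuming a Flask monolith and a hypothetical APP_ROLE environment variable (both are illustrative assumptions on my part): the same code base is deployed several times, each deployment only registers the blueprints, and therefore the request paths, it is meant to serve, and the load balancer routes /orders/* to one deployment and everything else to another.

```python
import os
from flask import Blueprint, Flask, jsonify

web = Blueprint("web", __name__)
orders = Blueprint("orders", __name__)

@web.get("/profile/<int:user_id>")
def profile(user_id):
    return jsonify({"user_id": user_id, "name": "example"})

@orders.get("/orders/<int:user_id>")
def list_orders(user_id):
    return jsonify({"user_id": user_id, "orders": []})

def create_app(role: str) -> Flask:
    # Every deployment ships the full monolith; the role only decides which
    # request paths this particular deployment answers.
    app = Flask(__name__)
    if role in ("all", "web"):
        app.register_blueprint(web)
    if role in ("all", "orders"):
        app.register_blueprint(orders)
    return app

# e.g. APP_ROLE=orders for the deployment that only handles /orders/*.
app = create_app(os.environ.get("APP_ROLE", "all"))
```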

Smaller problem domains

Given specialized services, each individual code base is inherently smaller, and you don’t need to keep as many side effects and as much domain knowledge in your head.

I don’t think this should be part of the architecture debate at all. Tightly coupled modules mixing problem domains in a monolith are not dissimilar from tightly coupled APIs between the same modules running in different services.

Along the same line of thought, it is possible to decouple modules inside a monolith behind well-defined public interfaces with stable APIs, backwards compatibility and small problem domains.
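As an illustration of what such in-process decoupling can look like (the module and type names below are mine, not from any particular code base), the profile code depends only on a small, stable interface to the orders module rather than on its internals:

```python
from dataclasses import dataclass
from typing import List, Protocol

@dataclass(frozen=True)
class OrderSummary:
    order_id: int
    total_cents: int

class OrderService(Protocol):
    # The stable, public surface of the orders module; everything else in
    # that module is treated as private.
    def latest_orders(self, user_id: int, limit: int) -> List[OrderSummary]:
        ...

def build_profile(user_id: int, orders: OrderService) -> dict:
    # The profile code only knows the interface, so the orders module can be
    # refactored, or even split into its own service later, without changes here.
    recent = orders.latest_orders(user_id, limit=5)
    return {"user_id": user_id, "recent_orders": [o.order_id for o in recent]}
```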

Neither architectural style has the advantage here and neither works well with deeply interdependent code, no matter how it’s architected.

Isolated faults

Simply put, if service A has a problem, service B is not affected by it. In a monolithic application, on the other hand, a fault in one subsystem can affect the whole application.

In practice this is often a dangerously faulty assumption. Let me demonstrate with a simple, commonly seen, example:

  • Service A provides an API to load a user’s profile
  • Service B provides APIs around a user’s orders

Now, to load a user’s profile, service A makes a request to service B to get the latest X orders to display on the profile. In this simple example it is obvious that there is no fault isolation between those two services; in bigger, more complex systems it may be hard to even draw the dependency graph, let alone identify required and optional calls and how each service handles failing dependent calls.
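To show one simple mitigation under clearly stated assumptions (the requests library, a made-up internal URL for service B, and an arbitrary timeout), service A below treats the orders call as optional, so a slow or failing service B degrades the profile instead of breaking it:

```python
import requests

# Illustrative internal endpoint for service B; not a real URL.
ORDERS_URL = "http://service-b.internal/v1/orders"

def load_profile(user_id: int) -> dict:
    profile = {"user_id": user_id, "recent_orders": []}
    try:
        resp = requests.get(
            ORDERS_URL,
            params={"user_id": user_id, "limit": 5},
            timeout=0.5,  # fail fast instead of hanging on a slow service B
        )
        resp.raise_for_status()
        profile["recent_orders"] = resp.json()
    except requests.RequestException:
        # Service B being down or slow leaves the profile without recent
        # orders instead of turning into an error in service A.
        pass
    return profile
```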

It is possible to design around this, with timeouts and fallbacks like the sketch above being a common starting point, but because the right solution varies greatly with application requirements I won’t attempt general guidance here.

In a monolith, on the other hand, you know that any part of the system could take down the whole system at any time. To guard against that you can scale out horizontally, running multiple instances of your monolith.

Additionally, as mentioned in Scalability, it is possible to isolate parts of the system using multiple deployments and request path based load-splitting.

Additional considerations

Infrastructure

Have you ever deployed a monolithic application? It’s usually straightforward; let’s take a simple web store as an example:

  • A load balancer in front, possibly NGINX or Traefik if you feel fancy
  • One or more virtual servers running the app
  • One or more virtual servers running the (relational) database
  • One or more virtual servers running background tasks like sending emails and preparing reports
  • Some sort of CI/CD tooling to test and deploy new versions
  • A tool to aggregate logs, possibly as simple as a syslog server

Let’s now build the same simple web store with microservices. There we need:

  • A Kubernetes cluster
  • A load balancer
  • Multiple virtual servers to run the Kubernetes control plane
  • Multiple virtual servers to run workloads inside Kubernetes
  • One or more virtual servers running the (relational) database and possibly multiple different database versions or even systems
  • A message bus to facilitate inter-service communication
  • Some sort of CI/CD tooling to test and deploy new versions for every service
  • A tool to aggregate logs, possibly as simple as a syslog server
  • A tool to do distributed tracing

This, however, raises certain questions:

  • What if the Kubernetes cluster doesn’t work properly?
  • What if the message bus fails to deliver messages?
  • What if tracing fails and we can’t see errors?

From this high-level overview it should become clear that microservices require a higher initial investment in infrastructure. The operating costs of both applications likely scale in a similar fashion, but the microservices variant starts from a higher baseline as well.

Some or all of those costs may be offset by using pre-existing platforms in your organization; some may be acceptable once you decide that you are building such a platform yourself.

Business requirements

No software system is built in a vacuum; your organization will have certain requirements that should influence your architecture decisions, things like:

  • How much operating budget is there for the finished system?
  • How many people are assigned to build and maintain it?
  • How long does the organization expect the system to be used?
  • What rate of change needs to be achieved?

From personal experience small teams and operating budgets on the lower end lend themselves well to monolithic applications, while high rates of change may be easier to achieve with microservices, given higher staffing and funding.

Conclusion

Both architectural styles have real benefits and drawbacks, so it is impossible to give a blanket recommendation fitting every use case. Additionally, monolithic and microservice architectures are flexible concepts on a spectrum, and in practice a hybrid application may serve you best.

Want to have a chat with me about your current infrastructure and software architecture challenges? Book a free call
