The scope of this article is not to decide which architecture style is better, but only to synthesise the pros and cons of each of them.
I have worked on a microservices-oriented architecture in the past, and now we need to decide which style to use on a new project. I was not thrilled about how complex things can get in the microservices realm, so some effort was well spent on deciding whether it is worth and necessary to go this way.
The Microservices architectural style is an approach to developing an application as a suite of small (micro), fully decoupled services. Each service runs in its own process, and communication happens over external channels (not in-memory) – often HTTP.
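To make the "own process, external channels" idea concrete, here is a minimal sketch of one service calling another over HTTP. The service name, endpoint and payload are hypothetical, and for illustration both sides run as threads in a single process; in a real deployment each service would be its own process on its own host.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class UserServiceHandler(BaseHTTPRequestHandler):
    """A hypothetical 'user' microservice exposing one endpoint."""
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "Alice"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), UserServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service reaches it over the network, not through memory:
url = "http://127.0.0.1:%d/users/1" % server.server_port
with urlopen(url) as resp:
    user = json.load(resp)
server.shutdown()
print(user["name"])  # → Alice
```

Note that even this trivial call involves serialisation, a socket round trip and a failure mode (the remote side may be down) – the extra costs discussed in the cons below.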
In the past few years many big projects have migrated to this style (Amazon, Netflix, eBay) and so far the results are positive; but we cannot yet conclude that the microservices architecture is the way to go until we see how it matures.
- Each service can be deployed independently of the other services. This allows Y-axis scaling (scaling through functional decomposition).
- Decentralised data management. Because each service is independent and runs in its own environment, it also uses its own data store. This way, the application's data-store load is split across each service's data store, so each one can be scaled independently instead of duplicating the entire data store, as would happen in a monolith.
- Easier to scale development. It makes it easier to organise the development around multiple teams, so that you have one team entirely responsible for a microservice.
- Improved fault isolation. If there is an issue – hardware or software (e.g. a memory leak) – in one service, only that service is affected. In a monolithic architecture, such errors can bring the entire application down.
- Easier and faster to develop and understand. New developers get up to speed faster, the IDE is more responsive, the project boots faster, etc.
- Freedom in technology choices. Each microservice can use its own stack (programming language, data store etc.)
Moving to a distributed system takes the project to a new level of complexity: network latency, fault tolerance, message serialisation, unreliable networks, asynchronicity (for performance), versioning, etc.
Major operations overhead. 20 microservices can mean 40–60 processes for operations to handle (deployments, monitoring, intervention, etc.). DevOps is also a must, because a microservice is very tightly integrated into its environment.
Microservices need to be closely monitored, so that failures are detected quickly and, if possible, the service is restored automatically.
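A minimal sketch of that monitor-and-restore loop is shown below. The service class, health check and restart mechanism are hypothetical; real systems typically delegate this job to an orchestrator (e.g. Kubernetes liveness probes) rather than hand-rolling it.

```python
class Service:
    """Hypothetical stand-in for a deployed microservice instance."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.restarts = 0

    def health_check(self):
        return self.healthy

    def restart(self):
        self.restarts += 1
        self.healthy = True

def monitor(services):
    """One monitoring pass: restart any service failing its check."""
    for svc in services:
        if not svc.health_check():
            svc.restart()

payments = Service("payments")
payments.healthy = False                  # simulate a crash
monitor([payments])
print(payments.healthy, payments.restarts)  # → True 1
```

In production this loop runs continuously and per instance, which is part of the operations overhead mentioned above.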
Data consistency is more difficult to maintain (it cannot be guaranteed). Application-level events or database replication may sometimes be needed for optimisations or to protect key points from failures in other services. Schema and protocol (interface) changes require coordination between teams, and cache invalidation becomes harder.
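The "application-level events" approach mentioned above can be sketched with a tiny in-process event bus: one service owns its store and publishes an event after each write, and another service keeps its own derived store eventually consistent by subscribing. The service names and event names are invented for illustration; a real bus would deliver asynchronously over the network.

```python
from collections import defaultdict

subscribers = defaultdict(list)          # event name -> list of handlers

def publish(event, payload):
    for handler in subscribers[event]:
        handler(payload)                 # synchronous here; async in reality

# Hypothetical "orders" service: owns its store, emits an event on write.
orders_db = {}
def create_order(order_id, total):
    orders_db[order_id] = {"total": total}
    publish("order_created", {"id": order_id, "total": total})

# Hypothetical "analytics" service: its own store, updated via events.
revenue = {"total": 0}
def on_order_created(evt):
    revenue["total"] += evt["total"]
subscribers["order_created"].append(on_order_created)

create_order("o-1", 40)
create_order("o-2", 60)
print(revenue["total"])  # → 100
```

The consistency is only eventual: if the event is lost or the handler fails, the two stores disagree until some reconciliation runs – exactly the difficulty the paragraph above describes.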
- Testing is a lot more difficult. Unit tests do not guarantee that a service behaves correctly, because of its remote dependencies. Integration tests are needed to check the service's interactions with the other microservices, the data stores and the caches, but it is still hard to verify that the service works correctly with the other services. There are solutions like fully integrated end-to-end tests, but they cover many moving parts and are difficult to write and maintain (flakiness, excessive runtime, costs).
- Debugging a set of services is difficult. A very good logging and monitoring system is needed.
- IDEs are still oriented towards building monolithic systems; they do not provide much support for distributed applications.
- The inter-service communication mechanism must work really smoothly (even inside a LAN).
- Applications need to be designed so that they can tolerate the failure of services. Any service call can fail for a long list of reasons. This is a big disadvantage compared to monolithic systems, because of the complexity needed to handle it (code duplication, data duplication, message queues, etc.).
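The simplest form of tolerating a failing service call is bounded retries with a fallback value, so the caller degrades gracefully instead of crashing. The sketch below is illustrative; the `flaky_recommendations` function and its behaviour are invented to simulate a remote dependency that fails twice and then recovers.

```python
import time

def call_with_retry(fn, retries=3, fallback=None, delay=0.0):
    """Try a remote call a few times; return a fallback instead of crashing."""
    for _attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            time.sleep(delay)            # back off before retrying
    return fallback                      # degrade gracefully

calls = {"n": 0}
def flaky_recommendations():
    calls["n"] += 1
    if calls["n"] < 3:                   # fails twice, then succeeds
        raise ConnectionError("service unavailable")
    return ["book", "laptop"]

result = call_with_retry(flaky_recommendations, retries=3, fallback=[])
print(result)  # → ['book', 'laptop']
```

Production systems layer more on top of this (timeouts, circuit breakers, bulkheads), and every call site needs some variant of it – which is where the duplication mentioned above comes from.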
Microservices are certainly a nice looking architecture with lots of benefits and no critical drawbacks encountered yet.
Individual services are easier to develop, to understand, to scale and to deploy independently. But you will need:
- A high level of automation – you will need a good and sizeable DevOps team for monitoring and intervention.
- To deal with complex distributed data management.
- If the components do not compose cleanly, really strange issues can appear in the long term. It can be difficult to decide where a thing fits best; it can seem fine when you are looking only at your component with its missing connection, but when it gets put inside the whole system it can get really messy.
It certainly makes sense and looks promising for a large, complex application that is evolving rapidly; but it can be a burden for not-big-enough applications.
"Once more than 100 developers work on a project, a threshold is reached where building a product gets harder and harder using a monolithic approach. At 10 developers it's OK." – Todd Hoff @ http://highscalability.com
The Monolith style combines all the components of an application in a single program (a single WAR file, or a single directory hierarchy for PHP, Rails, etc.). In software engineering the term was first used to describe mainframe applications that had no modularity and became unmaintainable over time.
Nowadays we have a well-established understanding of decoupling and the separation of concerns, and we have started writing modular applications with "enough" decoupled modules. If you find your complex application hard to maintain, first look at your language choice, your feature system, your programmers, your coding standards and bug-fixing practices, your build system and your release & deployment procedure.
- Simple to develop. Everything is inside one code base and the development tools are currently built with this style in mind.
- Simple to deploy. You have only one application to deploy.
- Simple to scale. You just duplicate the application on multiple systems and put it behind a load balancer.
- Data consistency. It is guaranteed through the data store's internal or application-implemented mechanisms (constraints, events, etc.).
- Only one application to monitor and only one technology stack for operations to handle.
- The modules communicate through memory. This is very fast, and no asynchronicity is needed to boost performance.
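The "duplicate it behind a load balancer" scaling model from the list above can be sketched as a round-robin dispatcher over identical copies of the application. The instance names and request strings are placeholders; real balancers (nginx, HAProxy, cloud LBs) add health checks and stickiness on top of this core idea.

```python
import itertools

class RoundRobinBalancer:
    """Distribute requests across identical copies of the monolith."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def handle(self, request):
        instance = next(self._cycle)     # pick the next copy in turn
        return instance(request)

def make_instance(name):
    """Stand-in for one deployed copy of the whole application."""
    return lambda req: "%s handled %s" % (name, req)

lb = RoundRobinBalancer([make_instance("app-1"),
                         make_instance("app-2"),
                         make_instance("app-3")])
results = [lb.handle("req-%d" % i) for i in range(4)]
print(results)
```

Note the copies are whole applications – this is X-axis scaling (cloning); you cannot give only the busiest module more instances, which is the Y-axis limitation listed in the cons below.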
- The large codebase intimidates new developers, and it overloads the IDE.
- Difficult development scaling. When the application reaches a certain size and a certain number of developers work on it, the work needs to be split into functional areas and assigned to dedicated teams.
- Modularity breaks down over time. The code can be well thought out and well written, but total decoupling will never be achieved, and the modules will become more and more interconnected.
- Continuous deployment is difficult. If you need to update one component, you have to re-deploy the entire application.
- Long-term commitment to a technology stack. It is close to impossible to switch to newer technologies.
- No Y-axis scaling is possible. Scaling requires duplicating the entire application instead of only parts of it.
- Difficult feature rollbacks. If a previously deployed feature starts causing problems, you have to roll back all the features deployed after it – by reverting to a previous version of the entire application.
It is very easy to kick off a project using a monolithic architecture. There is a lot less to worry about compared to the microservices approach. It is by no means a broken architecture; it does have its limits, just as the microservices one will reach its own – the question is whether the project will reach those limits soon enough to justify the extra effort of starting it as a microservices architecture.
Developers are used to writing and maintaining monolithic applications, and good developers also know how to engineer their features with separation of concerns and other coding best practices in mind.
Monolithic applications are way simpler, faster and cheaper to build. If you are not sure which solution to go for, it might be a better fit to start with a monolithic application. You can always migrate to microservices later, when the project gets bigger and you need to scale development and performance.
There are trade-offs for any architecture. If the monolithic application is well written, it can be quite easy to migrate to microservices. There are various strategies for incrementally evolving an existing monolithic application towards a microservice architecture. You can start by writing new features as microservices and creating the glue that fits them into the monolith. Then you can decompose the monolith into microservices as well.
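The "glue" in that incremental strategy is essentially a routing layer: paths owned by new microservices are forwarded to them, and everything else still goes to the legacy monolith (often called the strangler pattern). The sketch below uses invented service and route names; in practice the routing sits in an API gateway or reverse proxy.

```python
def monolith(path):
    """Stand-in for the legacy monolithic application."""
    return "monolith: " + path

def invoicing_service(path):
    """A new feature built as a standalone microservice."""
    return "invoicing-service: " + path

# Routes carved out of the monolith so far; grows as migration proceeds.
MICROSERVICE_ROUTES = {"/invoices": invoicing_service}

def route(path):
    for prefix, service in MICROSERVICE_ROUTES.items():
        if path.startswith(prefix):
            return service(path)
    return monolith(path)                # everything else stays put

print(route("/invoices/42"))  # → invoicing-service: /invoices/42
print(route("/users/7"))      # → monolith: /users/7
```

Each newly extracted feature only needs a new entry in the routing table, so the monolith shrinks gradually instead of requiring a big-bang rewrite.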
Both approaches have their pros and cons – the separation of concerns is a good thing but it can be achieved with both architecture styles. In order to decide which one fits the best for a project, you need to analyse the context. Do you really need microservices? Are you sure that the application will need to scale a lot (especially the development and the data store) and fast? Do you have the time needed to kick off such an architecture? Do you have the needed resources for extra-operations (DevOps, automation), distributed data management expertise, testing expertise?
Each drawback has a solution, both for the monolithic and for the microservices architecture. The solutions for the cons of microservices are a lot more costly in the short to medium term – you need to solve those issues before you deploy the first version of the project.
The cons of the monolithic approach will generate costs in the long term (depending on how fast the application grows). You will either pay for more developers (needed because scaling development becomes harder) or, eventually, for the transition to a microservices-oriented architecture.