"monolith" has become a pejorative. it implies something old, something that needs to be decomposed, something you graduate from. this framing is wrong in a specific way: it treats monolithic architecture as a failure mode rather than a deliberate choice with real advantages.
a monolith is the correct starting architecture for most new systems. understanding why, and understanding the actual conditions under which decomposition becomes worth the cost, makes you a better architect than memorizing microservices patterns.
a monolith is a system deployed as a single unit. all the code runs in one process (or a small number of identical processes behind a load balancer). database calls happen in-process. function calls between modules are regular function calls, not network requests.
this does not mean unstructured. a well-organized monolith has clear module boundaries, encapsulated data access, and separation of concerns. the difference from a distributed system is deployment and communication topology, not internal structure.
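a minimal sketch of what those boundaries can look like inside one process. the names (InventoryService, OrderService, reserve) are hypothetical, chosen for illustration:

```python
# two modules in one process: clear boundaries, encapsulated data access,
# and the call between them is a plain function call.
from dataclasses import dataclass, field


@dataclass
class InventoryService:
    # data access is encapsulated: callers never touch _stock directly
    _stock: dict = field(default_factory=dict)

    def add(self, sku: str, qty: int) -> None:
        self._stock[sku] = self._stock.get(sku, 0) + qty

    def reserve(self, sku: str, qty: int) -> bool:
        if self._stock.get(sku, 0) < qty:
            return False
        self._stock[sku] -= qty
        return True


@dataclass
class OrderService:
    inventory: InventoryService  # depends on the module's interface, not its data

    def place_order(self, sku: str, qty: int) -> str:
        # an in-process call across a module boundary, not a network request
        if not self.inventory.reserve(sku, qty):
            return "rejected"
        return "accepted"
```

the point is that OrderService never reaches into inventory's data; it goes through the narrow interface. that discipline, not deployment topology, is what "well-organized" means here.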
no network latency between components. a call from the order module to the inventory module is a function call taking microseconds. the same interaction as a service-to-service call takes milliseconds minimum, plus retry logic, circuit breakers, and distributed tracing to debug.
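a sketch of some of that extra machinery. this retry wrapper is illustrative, not any particular library, and real service clients add circuit breakers, timeouts, and tracing on top:

```python
# what a cross-service call drags in that a function call does not:
# transient failures exist, so the caller needs retries with backoff.
import time


def call_with_retries(fn, attempts=3, backoff=0.05):
    """call fn, retrying on connection failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(backoff * 2 ** attempt)
```

inside a monolith, none of this exists: the order module calls the inventory module and either gets an answer or gets an exception with a full stack trace.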
transactions are easy. a database transaction that touches multiple tables works naturally in a monolith: one database, one connection, one transaction. in a distributed system, coordinating a transaction across service boundaries requires distributed transaction protocols or careful design to avoid needing them at all.
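a sketch of that with sqlite3 from the standard library; the orders/stock schema is hypothetical:

```python
# one connection, one transaction, two tables: either both writes
# commit or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER);
    CREATE TABLE stock  (sku TEXT PRIMARY KEY, qty INTEGER);
    INSERT INTO stock VALUES ('widget', 5);
""")

with conn:  # commits on success, rolls back both statements on error
    conn.execute("INSERT INTO orders (sku, qty) VALUES ('widget', 2)")
    conn.execute("UPDATE stock SET qty = qty - 2 WHERE sku = 'widget'")
```

split orders and stock into separate services with separate databases and this guarantee disappears; you get sagas, compensating actions, or two-phase commit instead.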
debugging and tracing are simpler. a production incident in a monolith has one place to look, one set of logs, one call stack. a production incident in a distributed system involves correlating logs across services, tracing request flows through multiple systems, and debugging failure modes that do not exist locally.
deployments are straightforward. deploy one artifact. rollback one artifact. the deployment is atomic, either the new version is running or the old version is running.
operational overhead is low. one service to monitor, one set of infrastructure to manage, one build pipeline to maintain. in a distributed system, each additional service adds its own share of all three; that overhead is never zero.
the monolith does have failure modes, but they are specific, not inevitable:
scaling is coarse-grained. if one component needs more resources, you scale everything. you cannot scale the image processing component independently from the API layer.
deployment coupling. changing any part requires deploying the whole thing. a bug in a rarely-changed module can block deployment of frequently-changed modules.
technology lock-in. the whole system uses the same language, runtime, and framework version. adopting a better tool for a specific problem requires either wrapping it in a service or waiting for a whole-system migration.
team scaling limits. at some point, many engineers working on the same codebase becomes a coordination problem. feature branches conflict. shared abstractions calcify. the build gets slow.
note that most of these problems are about team size and deployment frequency, not technical capability. a monolith with three engineers and monthly deployments does not have these problems. a monolith with 200 engineers and multiple daily deployments might.
decomposition is worth considering when:
- scaling requirements diverge: one component genuinely needs a different resource profile than others and coarse scaling is expensive.
- deployment cadences diverge: teams need to deploy independently, without coordinating with or being blocked by each other.
- technology requirements diverge: a specific component is genuinely better served by a different language, runtime, or database.
- team ownership is clear: you have distinct teams with clear product ownership. a service boundary that does not correspond to a team boundary is an organizational problem waiting to happen.
do not decompose because:
- the codebase feels messy (refactor the monolith first. the mess will follow you into services)
- you read an article about microservices
- you want to use a new technology
- you think it will make things faster (it often makes them slower)
a well-structured monolith is easier to split into services later than a poorly-structured one. the internal module boundaries you build now become the service boundaries later if you need them. build the modules; delay the network boundary until the cost of keeping it a monolith is higher than the cost of the distributed system.
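one way to make that module boundary concrete now, so it can become a network boundary later. a sketch using typing.Protocol; the names (Pricing, LocalPricing, quote) are hypothetical:

```python
# callers depend on an interface, not an implementation. today the
# implementation is in-process; later, a remote client that satisfies
# the same protocol can drop in without touching callers.
from typing import Protocol


class Pricing(Protocol):
    def price_of(self, sku: str) -> int: ...


class LocalPricing:
    # today: an in-process module backed by local data
    PRICES = {"widget": 499}

    def price_of(self, sku: str) -> int:
        return self.PRICES[sku]


def quote(pricing: Pricing, sku: str, qty: int) -> int:
    # this caller never learns whether pricing is local or remote
    return pricing.price_of(sku) * qty
```

a hypothetical HttpPricing client with the same price_of signature would satisfy the protocol, and the day you need the service boundary, only the wiring changes, not the callers.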