keeping it simple: why postgres and a framework are enough
on why most developers and companies overcomplicate their tech stacks when a fullstack framework and postgres would do just fine
i've been thinking about how we build software lately. not the code itself, but the decisions we make before writing a single line. the choices about databases, caching layers, message queues, service meshes. the architecture.
there's a pattern i keep seeing: developers and companies reaching for complexity by default. a new project starts and immediately the discussion turns to microservices, kafka, redis, mongodb, elasticsearch, kubernetes. tools that solve real problems at scale, but problems most projects will never have.
the thing is, for most applications, postgres and a solid fullstack framework would be enough. not just enough to start with, but often enough long-term.
why we miss the obvious
none of this is news: we all know about resume-driven development. we all know about cargo-culting faang. but knowing about these biases doesn't stop them from shaping our decisions.
the real problem is that complexity has become a signal for seriousness. using postgres for everything feels like admitting you're not operating at "real" scale. choosing a monolith feels like you're not a "real" engineering organization. the tools you use become a statement about your ambitions, not your needs.
this creates a blind spot. we spend so much time looking for the next piece of infrastructure to add that we never deeply learn what we already have. postgres has been quietly adding features for twenty years while we've been installing new databases. frameworks have solved entire categories of problems while we've been building microservices that recreate those same problems in distributed form.
the postgres you don't know about
most developers treat postgres like a glorified excel sheet. tables, rows, maybe some foreign keys if they're feeling fancy. but modern postgres has quietly evolved into something much more interesting.
take full-text search. we default to elasticsearch because that's what you do for search, right? but postgres has had solid full-text search for years. not just basic LIKE queries. real search with stemming, ranking, phrase matching, even highlighting excerpts. i've built product search, documentation search, even log search with just postgres. the performance holds up into millions of documents.
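a minimal sketch of what that looks like (table and column names are made up for illustration; generated tsvector columns need postgres 12+):

```sql
-- a hypothetical articles table with a stored, auto-maintained search column
create table articles (
    id     bigint generated always as identity primary key,
    title  text not null,
    body   text not null,
    search tsvector generated always as
        (to_tsvector('english', title || ' ' || body)) stored
);

create index articles_search_idx on articles using gin (search);

-- ranked search with highlighted excerpts, no elasticsearch in sight
select id,
       title,
       ts_rank(search, websearch_to_tsquery('english', 'connection pooling')) as rank,
       ts_headline('english', body,
                   websearch_to_tsquery('english', 'connection pooling')) as excerpt
from articles
where search @@ websearch_to_tsquery('english', 'connection pooling')
order by rank desc
limit 10;
```

`websearch_to_tsquery` even accepts google-style input (quoted phrases, `-` for exclusion), so you can pass user queries through mostly as-is.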
or consider jsonb columns. everyone thinks you need mongodb for flexible schemas, but postgres has been doing it better since 2014. better because you get real transactions, you can join your json data with regular tables, and you can index deep into json structures. i've seen teams maintain separate mongodb clusters just for user preferences and event data when postgres would have been simpler and more reliable.
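here's roughly what that looks like in practice (schema is hypothetical, including the `orders` table in the join):

```sql
-- flexible user preferences as jsonb, sitting next to relational data
create table users (
    id    bigint generated always as identity primary key,
    email text unique not null,
    prefs jsonb not null default '{}'
);

-- a gin index that covers containment queries deep into the json
create index users_prefs_idx on users using gin (prefs jsonb_path_ops);

-- the @> containment query uses the index; the join is just a join,
-- inside a normal transaction -- the things mongodb makes you give up
select u.email, o.total
from users u
join orders o on o.user_id = u.id
where u.prefs @> '{"notifications": {"email": true}}';
```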
the real killer feature nobody talks about: postgres can replace most of your infrastructure. need pub/sub? LISTEN/NOTIFY has been there forever. need a job queue? a simple SELECT ... FOR UPDATE SKIP LOCKED gives you atomic job claiming with automatic cleanup if workers die. teams run millions of jobs per day on this pattern. need caching? materialized views that refresh concurrently. need time-series data? just add the timescaledb extension.
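the job queue pattern in particular fits in a few lines. a sketch with a hypothetical `jobs` table (`$claimed_id` stands in for the id your worker just fetched):

```sql
create table jobs (
    id         bigint generated always as identity primary key,
    payload    jsonb not null,
    status     text not null default 'pending',
    created_at timestamptz not null default now()
);

-- each worker runs this in its own transaction. skip locked means two
-- workers never claim the same row; if a worker's connection dies,
-- postgres releases the row lock and another worker picks the job up.
begin;

select id, payload
from jobs
where status = 'pending'
order by created_at
for update skip locked
limit 1;

-- ... do the work for the claimed row ...

update jobs set status = 'done' where id = $claimed_id;

commit;
```

no broker, no serialization format, no separate service to monitor: the queue lives in the same database as the data the jobs operate on.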
each of these features is battle-tested, documented, and sitting right there in the database you're already using. but we install new services instead.
what actually breaks first
let me be specific about scale. i've watched a lot of systems grow, and what breaks first is almost never what people expect.
a single postgres instance on decent modern hardware can comfortably serve tens of thousands of simple queries per second, far more than most applications will ever need. with proper indexing, connection pooling, and partitioning, you're looking at the kind of scale most startups dream about.
what actually breaks first? usually:
- missing indexes on foreign keys (causes table scans on deletes)
- not using connection pooling
- long transactions blocking vacuum (causes table bloat)
- using count(*) on big tables without thinking (it has to visit every row, or at best every index entry)
these are solvable problems. add indexes, add pgbouncer, fix your transaction boundaries. you don't need a new database.
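two of those fixes are a single statement each (table names are hypothetical):

```sql
-- deleting a parent row makes postgres check the child table for
-- references; without this index, that check is a sequential scan
-- on every delete
create index order_items_order_id_idx on order_items (order_id);

-- the planner's row estimate instead of a full count(*) pass --
-- approximate, but usually fine for dashboards and pagination hints
select reltuples::bigint as approx_rows
from pg_class
where relname = 'order_items';
```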
when do you legitimately need more? when you have:
- true write scaling needs that a single instance can't handle
- global distribution requirements (data sovereignty, latency)
- specialized workloads (graph traversals, time-series at extreme scale)
but these are specific, measurable problems. not vague fears about the future.
the framework advantage
frameworks get dismissed as "not serious" but they solve entire categories of problems that microservices make worse.
auth is the obvious one. your framework's auth system has dealt with timing attacks, session fixation, csrf, secure password resets. it gets security updates. try coordinating that across five services with jwt tokens.
but it goes deeper. take background jobs. in rails/django/laravel you queue a job and it runs. with microservices? now you need a message broker, dead letter queues, serialization formats, service discovery. you've turned a function call into a distributed systems problem.
or database migrations. frameworks have solved this. they track what's been run, handle rollbacks, coordinate changes. in microservices? hope you enjoy manually coordinating schema changes across services.
every boring framework feature becomes its own distributed systems problem when you split things up: asset handling, form validation, health checks, each one turns into a separate project.
why they work together
frameworks and postgres were designed for each other. frameworks assume a real relational database. one with transactions, foreign keys, constraints. they're built to leverage those features.
when you use postgres with django or rails, migrations know about your foreign keys. the orm knows about your constraints. transaction middleware wraps your requests. the admin interface understands your relationships. it's a coherent system.
break this apart with microservices and specialized databases, and the framework can't help you anymore. django can't maintain referential integrity across service boundaries. rails can't wrap a distributed transaction. your data is scattered across systems that don't talk to each other. your framework is reduced to being a glorified http router.
this is why the "modern" stack feels so painful. you're working against the grain of your tools. you've taken an integrated system and blown it apart, then spent enormous effort trying to glue it back together with service meshes and distributed tracing.
when to actually distribute
i'm not saying never use microservices or specialized databases. but wait for actual pressure:
- when you have teams stepping on each other's deployments
- when different parts of your system have genuinely different scaling needs (measured, not hypothetical)
- when you need different runtime characteristics (python ml service + go api)
- when regulatory requirements force separation
even then, start small. extract one service that solves a specific problem. see how it goes. feel the operational pain. then decide if you want more of that.
what this means
the next time someone proposes kubernetes for your startup, or microservices for your small team, or kafka for your email queue, ask a simple question: what specific problem does this solve that we have right now?
if the answer involves the future tense, you have your answer.
postgres and a framework aren't a compromise or a starting point. they're a complete system designed to work together. use them until you have a real reason not to.
simple isn't a virtue in itself. but simple lets you focus on what matters: building features your users actually want.