Software Architecture Basics

LESSON 01


Monolith vs. Microservices

The architecture you choose in month one determines what kind of problems you'll have in year three.

11 min read

A monolith is a single codebase where all features live in one deployable unit. When you push code, the entire application redeploys. A microservices architecture splits functionality into independent services that communicate over a network. Your authentication service, payment service, and notification service are separate codebases that deploy independently. The choice between these is not about technology sophistication — it is about what kind of complexity you are willing to accept and when.

Monoliths are faster to build and reason about when your team is small and your product is still finding product-market fit. One codebase means one place to look for bugs, one deployment pipeline, and no coordination overhead between teams. The downside appears later: as the codebase grows, deploys become riskier, testing takes longer, and new engineers take weeks to understand how everything connects.

Microservices solve the monolith's scaling problems by creating boundaries. Each service can be owned by a different team, deployed independently, and written in different languages if needed. The cost is operational complexity. You now have multiple databases to keep in sync, network calls that can fail, and distributed debugging that requires tracing requests across services. If your team has never run a distributed system in production, microservices will slow you down.
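
The "network calls that can fail" point is worth making concrete. A sketch of the kind of defensive wrapper every inter-service call ends up needing (the helper name, delays, and flaky stub are illustrative, not from any particular library):

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on ConnectionError with exponential backoff.

    In a monolith this failure mode does not exist: an in-process call
    either runs or raises. Over a network, transient failures are routine
    and every caller has to decide how to handle them.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# A flaky stand-in for a remote service: fails twice, then succeeds.
calls = {"n": 0}
def flaky_payment_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("payment service unreachable")
    return "payment accepted"

print(call_with_retries(flaky_payment_service))  # retries twice, then succeeds
```

Multiply this by every service-to-service call in the system and the operational cost described above becomes tangible.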

The migration path from monolith to microservices is well-trodden but not trivial. You extract one bounded domain at a time — usually starting with something low-risk like notifications or file processing. Each extraction requires defining a clean API boundary, migrating data, and handling the transition period where both systems run in parallel. Companies that do this successfully spend 6–18 months on the migration. Companies that fail try to split everything at once.
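
The "clean API boundary" step can be sketched in code. One common shape, shown here with hypothetical names, is to route all callers through a single function and use a flag to switch between the old in-process path and the new service during the parallel-run period:

```python
# Hypothetical sketch: extracting notifications behind one boundary.
# All callers go through send_notification(); a flag decides whether the
# old monolith code or the new service handles it during the transition.

USE_NOTIFICATION_SERVICE = False  # flipped per environment during migration

def _send_in_process(user_id, message):
    # Old code path: a direct function call inside the monolith.
    return f"in-process: notified {user_id}"

def _send_via_service(user_id, message):
    # New code path: an HTTP call to the extracted service would go here.
    # (Stubbed in this sketch; a real call needs timeouts and retries.)
    return f"service: notified {user_id}"

def send_notification(user_id, message):
    """The API boundary. Callers never know which implementation ran."""
    if USE_NOTIFICATION_SERVICE:
        return _send_via_service(user_id, message)
    return _send_in_process(user_id, message)

print(send_notification(42, "welcome"))  # in-process path while the flag is off
```

Once every caller goes through the boundary, the extraction becomes a flag flip rather than a big-bang rewrite, which is exactly why the one-domain-at-a-time approach works.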

Database architecture follows application architecture. In a monolith, you typically have one database that all code accesses. In microservices, each service owns its database and no other service can touch it directly. This prevents tight coupling but creates new problems: how do you join data across services? How do you maintain consistency when a transaction spans multiple databases? These are solvable problems, but they require engineering discipline your early team may not have.
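
The cross-service join question has a standard answer: fetch from each service's API and stitch the results together in application code. A minimal sketch with stubbed fetchers (the service calls and data are illustrative):

```python
# Hypothetical sketch of an application-level join across two services.
# In a monolith this would be one SQL JOIN; with per-service databases,
# the orders service and users service are queried separately and the
# results are combined in code. Both fetches are stubbed here.

def fetch_orders(user_ids):
    # Stand-in for a call to the orders service's API.
    return [{"user_id": 1, "total": 30}, {"user_id": 2, "total": 55}]

def fetch_users(user_ids):
    # Stand-in for a call to the users service's API.
    return {1: {"name": "Ada"}, 2: {"name": "Grace"}}

def orders_with_names(user_ids):
    orders = fetch_orders(user_ids)
    users = fetch_users(user_ids)
    # The "join" happens in application code, not in the database.
    return [
        {"name": users[o["user_id"]]["name"], "total": o["total"]}
        for o in orders
    ]

print(orders_with_names([1, 2]))
# [{'name': 'Ada', 'total': 30}, {'name': 'Grace', 'total': 55}]
```

Note what this costs: two network round trips, two failure modes, and no database-enforced consistency between the two result sets. That is the engineering discipline the paragraph above refers to.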

The wrong question is "which architecture is better?" The right question is "what does my team know how to operate?" A well-built monolith will outperform a poorly implemented microservices architecture every time. If your engineers have never set up service discovery, circuit breakers, or distributed tracing, forcing microservices will create months of productivity loss while they learn on your production system.
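
For readers who have not met the term, a circuit breaker is small in concept. A deliberately minimal sketch (real implementations in resilience libraries also add a cooldown period before probing the service again):

```python
class CircuitBreaker:
    """Minimal circuit breaker sketch: after max_failures consecutive
    failures the circuit "opens" and calls fail fast, instead of making
    every request wait on a service that is already down."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except ConnectionError:
            self.failures += 1
            raise
        self.failures = 0  # a success resets the count
        return result

breaker = CircuitBreaker(max_failures=2)

def down_service():
    raise ConnectionError("service unavailable")

for _ in range(3):
    try:
        breaker.call(down_service)
    except ConnectionError:
        print("slow failure: waited on the network")
    except RuntimeError:
        print("fast failure: circuit is open")
```

The pattern itself is simple; the hard part is operating dozens of these across services, tuning thresholds, and knowing what the dashboards should look like when one trips. That is the experience gap the paragraph above is pointing at.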

Premature optimization toward microservices is one of the most common technical mistakes early-stage companies make. The benefits are real but irrelevant until you have enough scale that a monolith becomes a genuine constraint. If you are pre-product-market fit and your CTO is proposing microservices, the honest question to ask is: are we solving a problem we have or a problem we might have?

Start with a monolith. Split into services when the pain of coordination inside one codebase exceeds the pain of coordination across multiple services.


TERMS

Monolith: A single codebase containing all application logic, deployed as one unit. When you ship a bug fix to the login flow, you redeploy the entire application including the payment processing code. This tight coupling makes early development fast but creates coordination bottlenecks as the team and codebase grow.

Microservices: An architectural pattern where the application is composed of small, independently deployable services that communicate over a network. Each service owns a specific domain (e.g., user accounts, billing, notifications) and can be developed, deployed, and scaled separately. The trade-off is operational complexity — you are now managing distributed system failures.

Service boundary: The defined edge of what a service is responsible for, including its data, logic, and API contract. A clean boundary means other services interact only through documented APIs and never directly access the service's database. Poorly defined boundaries lead to tight coupling between services, which defeats the purpose of splitting them in the first place.

Distributed system: Any architecture where components run on separate machines and communicate over a network. Microservices are distributed systems. Distributed systems introduce failure modes that do not exist in monoliths: network partitions, eventual consistency, partial failures. If your team has never debugged a distributed system, expect a steep learning curve.

API contract: The formal specification of how a service can be called — what inputs it accepts, what outputs it returns, and what errors it might throw. When services depend on each other, breaking the contract (changing response formats, removing fields) breaks dependent systems. Versioning and backward compatibility become critical.
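
One common defensive habit on the consuming side is tolerant parsing: ignore fields you do not recognize and supply defaults for optional ones, so that a server adding a field does not break you. A sketch with hypothetical field names (note that removing or renaming a required field is still a breaking change, which tolerant parsing cannot save you from):

```python
# Hypothetical sketch: parsing a service response defensively so that
# newly added fields are ignored and missing optional fields get defaults.

from dataclasses import dataclass

@dataclass
class UserResponse:
    user_id: int
    name: str
    plan: str = "free"  # optional field with a default: safe for servers to omit

def parse_user(payload: dict) -> UserResponse:
    return UserResponse(
        user_id=payload["user_id"],        # required: fails loudly if absent
        name=payload["name"],              # required
        plan=payload.get("plan", "free"),  # optional: tolerate older servers
    )

# A newer server adds an unknown "avatar_url" field; this client ignores it.
print(parse_user({"user_id": 7, "name": "Ada", "avatar_url": "..."}))
```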

Deployment unit: The smallest piece of code that can be deployed independently. In a monolith, the deployment unit is the entire application. In microservices, each service is a deployment unit. Smaller deployment units mean faster iteration but require more sophisticated deployment tooling.

Bounded context: A domain-driven design concept defining the scope within which a particular model applies. In practice, it is the logical boundary around a business capability — "everything related to user authentication" or "everything related to billing." Good microservice boundaries align with bounded contexts.

BEFORE YOUR NEXT MEETING

If we split this into microservices today, which service would we extract first, and do we have the monitoring in place to detect when it fails?

What is our deploy frequency right now, and is our monolith actually preventing us from shipping faster or is something else the bottleneck?

Can you walk me through what happens when a network call between two services times out — how does the system recover?

How many engineers would we need to hire before managing this monolith becomes genuinely harder than managing multiple services?

What does our rollback process look like if a microservice deploy breaks production — can we roll back one service without affecting others?



LESSON 01 OF 04