System Design · Lesson 7 of 26

Software Architecture: Monolith → Microservices

Why Architecture Matters

Architecture is the set of decisions that are hard to reverse. Choosing to put everything in one codebase vs splitting it into services affects how you deploy, test, scale, hire, and debug — for years.

The goal isn't to pick the most modern architecture. The goal is to pick the simplest architecture that solves your problem today while leaving room to evolve.

This guide walks through every major style, from the ground up.


1. The Monolith

A monolith is a single deployable unit. All features — users, orders, inventory, notifications — live in one codebase and are deployed together.

┌─────────────────────────────────────────┐
│             Monolith Process            │
│                                         │
│  ┌──────────┐  ┌──────────┐  ┌───────┐ │
│  │  Orders  │  │  Users   │  │ Email │ │
│  └──────────┘  └──────────┘  └───────┘ │
│       ↕              ↕            ↕     │
│  ┌─────────────────────────────────┐   │
│  │         Single Database         │   │
│  └─────────────────────────────────┘   │
└─────────────────────────────────────────┘
            One deploy → one process

What "Monolith" Means in .NET

C#
// A typical ASP.NET Core monolith
// One project, one process, one database

// OrdersController.cs
[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly AppDbContext _db;
    private readonly EmailService _email;     // direct class reference
    private readonly InventoryService _inv;  // direct class reference

    [HttpPost]
    public async Task<IActionResult> Create(CreateOrderRequest req)
    {
        // All in one process: direct method calls, one database transaction
        var order = new Order { ... };
        _db.Orders.Add(order);

        await _inv.ReserveStockAsync(req.Items);
        await _db.SaveChangesAsync();

        // Side effects only after the transaction commits
        await _email.SendConfirmationAsync(order);

        return Ok(order.Id);
    }
}

Advantages of the Monolith

Simple to develop:

  • One F5 starts everything
  • No network calls between features — direct method calls
  • One transaction spans all operations (ACID guaranteed)
  • Easy to refactor across module boundaries

Simple to deploy:

  • One artifact to build, one container to run
  • No service discovery, no load balancer per service
  • One deployment pipeline

Simple to debug:

  • One log stream
  • One stack trace for any error
  • Direct step-through debugging across features

The Problems That Eventually Appear

Scaling is all-or-nothing:

Traffic spike on Orders → scale the whole app
Even though Users and Email don't need scaling

The deployment bottleneck:

Team A changes Orders
Team B changes Users
Team C changes Email

→ All three changes must be deployed together
→ Team C's bug blocks Team A's feature
→ Deploy frequency drops to once a week

The "big ball of mud":

Without discipline, modules start depending on each other in every direction:

Orders → Users (OK)
Users  → Orders (circular!)
Email  → Orders → Users → Email (dependency cycle)
Inventory → Email for low-stock alerts (unexpected coupling)

When to Use a Monolith

Use a monolith when:

  • You're building an MVP or early-stage product
  • Team is under 8–10 people
  • You don't know the domain well enough to define service boundaries
  • You need one transaction across all operations (e.g., financial ledger)

Most successful companies started as monoliths. Amazon, Shopify, Stack Overflow, Basecamp — all ran on monoliths for years.


2. The Modular Monolith

The modular monolith is the monolith done right. It's still a single deployable unit, but internally divided into strongly bounded modules with explicit contracts between them.

┌─────────────────────────────────────────────────────┐
│                  Single Process                     │
│                                                     │
│  ┌──────────────┐    ┌──────────────┐              │
│  │  Orders      │    │  Users       │              │
│  │  Module      │    │  Module      │              │
│  │  ─────────   │    │  ─────────   │              │
│  │  IOrderSvc   │    │  IUserSvc    │              │
│  │  OrderRepo   │    │  UserRepo    │              │
│  └──────┬───────┘    └──────┬───────┘              │
│         │ (interface only)  │                       │
│  ┌──────▼────────────────────────────────────────┐  │
│  │           Module Communication Bus            │  │
│  │     (in-process events or direct interface)   │  │
│  └───────────────────────────────────────────────┘  │
│                                                     │
│  ┌──────────────────────────────────────────────┐  │
│  │  Database — separate schemas per module      │  │
│  │  orders.*    users.*    email.*              │  │
│  └──────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────┘

Rules of a Modular Monolith

  1. No direct class references across modules — only interface calls or events
  2. Separate database schemas per module — orders.Orders, users.Users; never JOIN across schemas
  3. Each module owns its migrations — schema changes ship with the module's code, not a shared migrations project
  4. Public API is explicit — every module exposes a defined contract
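
Rule 1 can be enforced automatically in CI rather than by code review alone. A minimal sketch using the NetArchTest.Rules package; the namespace names are assumptions based on the folder structure this guide uses:

C#
// Architecture test: Orders may depend on Users only via its Contracts namespace
using NetArchTest.Rules;
using Xunit;

public class ModuleBoundaryTests
{
    [Fact]
    public void Orders_Should_Not_Reference_Users_Internals()
    {
        var result = Types.InAssembly(typeof(OrdersModule).Assembly)
            .That().ResideInNamespace("Modules.Orders")
            .ShouldNot().HaveDependencyOnAny(
                "Modules.Users.Domain",
                "Modules.Users.Infrastructure")   // Contracts is deliberately allowed
            .GetResult();

        Assert.True(result.IsSuccessful);
    }
}

A failing test names the offending types, so boundary violations surface in the pull request instead of six months later.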

Implementation in .NET

Folder structure:

src/
├── Modules/
│   ├── Orders/
│   │   ├── OrdersModule.cs          ← registers DI
│   │   ├── Api/
│   │   │   └── OrdersController.cs
│   │   ├── Application/
│   │   │   ├── Commands/
│   │   │   └── Queries/
│   │   ├── Domain/
│   │   │   └── Order.cs
│   │   ├── Infrastructure/
│   │   │   ├── OrdersDbContext.cs   ← schema: "orders"
│   │   │   └── OrderRepository.cs
│   │   └── Contracts/
│   │       └── IOrderService.cs     ← public API
│   │
│   └── Users/
│       ├── UsersModule.cs
│       ├── Contracts/
│       │   └── IUserService.cs      ← public API
│       └── ...
│
└── Host/
    └── Program.cs                   ← wires modules together

Module registration:

C#
// Modules/Orders/OrdersModule.cs
public static class OrdersModule
{
    public static IServiceCollection AddOrdersModule(
        this IServiceCollection services,
        IConfiguration config)
    {
        services.AddDbContext<OrdersDbContext>(options =>
            options.UseSqlServer(
                config.GetConnectionString("Default"),
                sql => sql.MigrationsHistoryTable("__EFMigrationsHistory", "orders")
            )
        );

        services.AddScoped<IOrderService, OrderService>();
        services.AddMediatR(cfg =>
            cfg.RegisterServicesFromAssembly(typeof(OrdersModule).Assembly));

        return services;
    }
}
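
The Host then composes the modules. A sketch of Program.cs, assuming each module exposes an Add...Module extension analogous to AddOrdersModule above:

C#
// Host/Program.cs — the composition root; it knows the modules, they don't know it
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddOrdersModule(builder.Configuration)
    .AddUsersModule(builder.Configuration);   // assumed to mirror AddOrdersModule

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();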

Cross-module communication via interface (in-process):

C#
// Orders module needs to look up a user — calls IUserService, not UserRepository directly
public class CreateOrderCommandHandler : IRequestHandler<CreateOrderCommand, int>
{
    private readonly OrdersDbContext _db;
    private readonly IUserService _userService;  // ← from Users module's Contracts

    public async Task<int> Handle(CreateOrderCommand req, CancellationToken ct)
    {
        // Interface call — no direct access to Users DB
        var user = await _userService.GetUserAsync(req.UserId, ct)
            ?? throw new NotFoundException("User", req.UserId);

        var order = new Order { UserId = user.Id, ... };
        _db.Orders.Add(order);
        await _db.SaveChangesAsync(ct);
        return order.Id;
    }
}
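
The contract the handler depends on lives in the Users module's Contracts folder. A plausible shape (member names are illustrative):

C#
// Modules/Users/Contracts/IUserService.cs — the only surface other modules may call
public interface IUserService
{
    Task<UserDto?> GetUserAsync(int userId, CancellationToken ct);
}

// A DTO, not the Users domain entity — internals stay hidden behind the contract
public record UserDto(int Id, string Email, string Name);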

Cross-module communication via events (in-process):

C#
// After order is created, publish an event — Email module subscribes
public record OrderCreatedEvent(int OrderId, string UserEmail, decimal Total)
    : INotification;

// Email module handler
public class SendOrderConfirmationHandler : INotificationHandler<OrderCreatedEvent>
{
    private readonly IEmailService _emailService;

    public SendOrderConfirmationHandler(IEmailService emailService)
        => _emailService = emailService;

    public async Task Handle(OrderCreatedEvent e, CancellationToken ct)
    {
        await _emailService.SendAsync(e.UserEmail, "Order Confirmed", ...);
    }
}
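
On the publishing side, the Orders module raises the event through MediatR's IPublisher once the order is saved. A sketch — this assumes the command handler shown earlier takes an IPublisher as an extra dependency:

C#
// Inside the Orders module, after SaveChangesAsync succeeds
await _publisher.Publish(
    new OrderCreatedEvent(order.Id, user.Email, order.Total), ct);

Because it's an in-process notification, adding a new subscriber (say, an Analytics module) requires no change to Orders.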

Why This Pattern Is Underrated

The modular monolith gives you:

  • Module isolation — a change inside Email can't silently reach into Orders' internals
  • Independent schema evolution — each module migrates its own tables
  • Team ownership — Team A owns Orders/, Team B owns Users/
  • Simple deployment — still one artifact, one pipeline
  • Easy migration path — when a module needs to scale, extract it to a service with minimal refactoring

3. Service-Oriented Architecture (SOA)

SOA was the enterprise predecessor to microservices. Services are larger, coarser-grained, and communicate through a central Enterprise Service Bus (ESB).

┌──────────┐    ┌──────────┐    ┌──────────┐
│  Orders  │    │  Users   │    │  Billing │
│ Service  │    │ Service  │    │ Service  │
└─────┬────┘    └─────┬────┘    └─────┬────┘
      │               │               │
      └───────────────┼───────────────┘
                      ↓
          ┌──────────────────────┐
          │  Enterprise Service  │
          │       Bus (ESB)      │
          └──────────────────────┘

Problems with SOA:

  • ESB becomes a monolithic bottleneck — all traffic routes through it
  • Heavy XML/SOAP protocols (slow, verbose)
  • ESB contains business logic — the "smart pipe" anti-pattern
  • Tightly coupled to ESB vendor

SOA evolved into microservices by removing the ESB and making services communicate directly.


4. Microservices

Microservices split the application into many small, independently deployable services, each owning its data and communicating over a network.

                    ┌─────────────┐
    Client ───────▶ │  API Gateway│
                    └──────┬──────┘
                           │
          ┌────────────────┼────────────────┐
          ▼                ▼                ▼
   ┌─────────────┐  ┌─────────────┐  ┌─────────────┐
   │   Orders    │  │    Users    │  │    Email    │
   │   Service   │  │   Service   │  │   Service   │
   │  ─────────  │  │  ─────────  │  │  ─────────  │
   │  Orders DB  │  │   Users DB  │  │   Email DB  │
   └──────┬──────┘  └─────────────┘  └─────────────┘
          │
          ▼ (async event)
   ┌─────────────┐
   │  Inventory  │
   │   Service   │
   └─────────────┘

Core Principles

Database per service:

✅ Orders service → orders_db (SQL Server)
✅ Users service  → users_db  (PostgreSQL)
✅ Email service  → email_db  (MongoDB)
✅ Search service → search index (Elasticsearch)

❌ Never: two services sharing the same database tables

Services communicate over the network:

C#
// Orders service calling Users service via HTTP
public class UserServiceClient
{
    private readonly HttpClient _http;

    public async Task<UserDto?> GetUserAsync(int userId, CancellationToken ct)
    {
        var response = await _http.GetAsync($"/users/{userId}", ct);
        if (response.StatusCode == HttpStatusCode.NotFound) return null;
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<UserDto>(ct);
    }
}
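
The typed client is registered once at startup. A sketch; the base address assumes the Kubernetes DNS name used in the service-discovery example later in this lesson:

C#
// Program.cs of the Orders service
builder.Services.AddHttpClient<UserServiceClient>(client =>
{
    client.BaseAddress = new Uri("http://users-service/");
    client.Timeout = TimeSpan.FromSeconds(5); // fail fast; don't hang a user request
});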

Or via async messaging (preferred for decoupling):

C#
// Orders service publishes an event — doesn't know who listens
await _bus.PublishAsync(new OrderCreatedMessage
{
    OrderId = order.Id,
    UserId  = order.UserId,
    Items   = order.Items,
    Total   = order.Total,
}, ct);

// Inventory service subscribes independently
public class ReserveStockConsumer : IConsumer<OrderCreatedMessage>
{
    public async Task Consume(ConsumeContext<OrderCreatedMessage> ctx)
    {
        foreach (var item in ctx.Message.Items)
            await _inventory.ReserveAsync(item.ProductId, item.Quantity);
    }
}

The Problems You Must Solve

1. Distributed transactions — the hardest problem:

In a monolith, one DB transaction covers everything. In microservices, there is no distributed ACID transaction. Use the Saga pattern:

Order Saga (Choreography):
  1. Orders Service:    OrderCreated event published
  2. Inventory Service: StockReserved event published
                        OR StockFailed → triggers compensation
  3. Payment Service:   PaymentCharged event published
                        OR PaymentFailed → triggers compensation
  4. Email Service:     ConfirmationSent

Compensation (on failure):
  PaymentFailed → publish CancelOrder → Inventory releases stock
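
A compensation step is just another consumer. A sketch of the Inventory side (the message and repository types are illustrative, not a specific library's API):

C#
// Inventory service — compensating action when payment fails downstream
public class CancelOrderConsumer : IConsumer<CancelOrderMessage>
{
    private readonly IInventoryRepository _inventory;

    public CancelOrderConsumer(IInventoryRepository inventory)
        => _inventory = inventory;

    public async Task Consume(ConsumeContext<CancelOrderMessage> ctx)
    {
        // Release exactly the stock reserved when OrderCreated was handled
        foreach (var item in ctx.Message.Items)
            await _inventory.ReleaseAsync(item.ProductId, item.Quantity);
    }
}

Note that compensation is a business-level undo, not a rollback: the reservation happened, and a second, opposite action reverses it.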

2. Service discovery:

Services need to find each other. In Kubernetes, DNS handles this automatically:

YAML
# Kubernetes Service, discoverable in-cluster at http://users-service/
apiVersion: v1
kind: Service
metadata:
  name: users-service
spec:
  selector:
    app: users
  ports:
    - port: 80
      targetPort: 8080

3. Observability — distributed tracing:

One user request spans 6 services. Without tracing, debugging is impossible:

C#
// OpenTelemetry — propagates a TraceId across all services
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddEntityFrameworkCoreInstrumentation()
        .AddOtlpExporter(o => o.Endpoint = new Uri("http://jaeger:4317"))
    );

4. Resilience — handling partial failures:

C#
// Polly — retry + circuit breaker on HTTP calls
builder.Services
    .AddHttpClient<UserServiceClient>()
    .AddResilienceHandler("users", pipeline =>
    {
        pipeline.AddRetry(new HttpRetryStrategyOptions
        {
            MaxRetryAttempts = 3,
            Delay = TimeSpan.FromMilliseconds(300),
            BackoffType = DelayBackoffType.Exponential,
        });

        pipeline.AddCircuitBreaker(new HttpCircuitBreakerStrategyOptions
        {
            SamplingDuration = TimeSpan.FromSeconds(30),
            FailureRatio = 0.5,
            MinimumThroughput = 10,
            BreakDuration = TimeSpan.FromSeconds(15),
        });
    });

When Microservices Are Worth It

| Signal | Implication |
|--------|-------------|
| Teams of 50+ people | Need independent deploys to avoid coordination overhead |
| Parts need different scaling | E.g. video transcoding needs 100x more compute than auth |
| Parts use different tech stacks | ML inference in Python, API in .NET, frontend in Node |
| Different SLAs per domain | Payments need 99.99%, blog needs 99.9% |
| Regulatory isolation required | PCI-DSS for payments, HIPAA for health data |

When Microservices Are Too Much

  • Team under 20 people
  • Domain not well understood — you'll draw the wrong boundaries
  • Low traffic — network overhead outweighs scaling benefits
  • No DevOps maturity — you need Kubernetes, service mesh, distributed tracing, alert pipelines just to run the system

5. Event-Driven Architecture (EDA)

In EDA, services don't call each other — they publish events to a broker. Other services subscribe independently.

Orders  ──publish──▶ ┌─────────────┐ ──deliver──▶ Inventory
Service              │   Kafka /   │              Service
                     │ Service Bus │
Email   ──publish──▶ └─────────────┘ ──deliver──▶ Notification
Service                                           Service

Event types:

Event Notification:    "OrderCreated"           — something happened
Event-Carried State:   "OrderCreated { items }" — carries data so consumer doesn't query back
Domain Event:          "PaymentDeclined { reason, orderId, userId }"
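
The three shapes differ mainly in how much data they carry. As C# records (illustrative):

C#
// Event notification — minimal; consumers call back if they need details
public record OrderCreated(int OrderId);

// Event-carried state transfer — consumers need no call-back to the publisher
public record OrderCreatedWithState(int OrderId, List<OrderItemDto> Items, decimal Total);

// Domain event — captures a business fact together with its context
public record PaymentDeclined(int OrderId, int UserId, string Reason);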

Implementation with MassTransit + Azure Service Bus:

C#
// Publisher
public class OrderService
{
    private readonly IBus _bus;

    public async Task<int> CreateOrderAsync(CreateOrderRequest req, CancellationToken ct)
    {
        var order = await _repo.CreateAsync(req, ct);

        await _bus.Publish(new OrderCreatedEvent
        {
            OrderId    = order.Id,
            CustomerId = order.CustomerId,
            Items      = order.Items.Select(i => new OrderItemDto(i.ProductId, i.Quantity)).ToList(),
            Total      = order.Total,
            CreatedAt  = DateTime.UtcNow,
        }, ct);

        return order.Id;
    }
}
C#
// Consumer — in a completely separate service
public class InventoryConsumer : IConsumer<OrderCreatedEvent>
{
    private readonly IInventoryRepository _repo;

    public async Task Consume(ConsumeContext<OrderCreatedEvent> ctx)
    {
        foreach (var item in ctx.Message.Items)
        {
            await _repo.ReserveAsync(item.ProductId, item.Quantity, ctx.CancellationToken);
        }
    }
}
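
Wiring the consumer to Azure Service Bus happens at startup. A minimal sketch using MassTransit's standard registration API; the connection string name is an assumption:

C#
// Inventory service — Program.cs
builder.Services.AddMassTransit(x =>
{
    x.AddConsumer<InventoryConsumer>();

    x.UsingAzureServiceBus((context, cfg) =>
    {
        cfg.Host(builder.Configuration.GetConnectionString("ServiceBus"));
        cfg.ConfigureEndpoints(context); // one queue per consumer, by convention
    });
});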

EDA trade-offs:

| Pro | Con |
|-----|-----|
| Services are completely decoupled | Hard to trace request flow across events |
| Publisher doesn't wait for consumers | Eventual consistency: state is temporarily out of sync |
| Add consumers without touching the publisher | Message ordering is hard (use partitioning keys) |
| Replay events from the log | Debugging requires distributed tracing |


6. Serverless

Serverless removes infrastructure management. You deploy functions; the platform runs them on demand and scales to zero.

C#
// Azure Functions — HTTP trigger
public class ProcessOrderFunction
{
    private readonly IOrderService _orderService;

    public ProcessOrderFunction(IOrderService orderService)
        => _orderService = orderService;

    [Function("ProcessOrder")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        FunctionContext ctx)
    {
        var order = await req.ReadFromJsonAsync<CreateOrderRequest>();
        var orderId = await _orderService.CreateAsync(order!);
        return new OkObjectResult(new { orderId });
    }
}
C#
// Service Bus trigger — reacts to events
[Function("SendOrderEmail")]
public async Task Run(
    [ServiceBusTrigger("order-created", Connection = "ServiceBus")] OrderCreatedEvent e,
    FunctionContext ctx)
{
    await _emailService.SendConfirmationAsync(e.CustomerEmail, e.OrderId);
}

When serverless fits:

  • Sporadic, unpredictable traffic (scales to zero)
  • Event-driven processing (file uploads, queue messages, webhooks)
  • Background jobs with simple inputs/outputs

When it doesn't:

  • Long-running processes (>10 minutes)
  • Latency-sensitive paths (cold starts add 200–2000ms)
  • Stateful workloads

7. The Migration Path: Monolith → Microservices

Don't rewrite. Extract. The Strangler Fig Pattern:

Phase 1: Monolith handles everything
  Client → Monolith → DB

Phase 2: Extract one service, route via API Gateway
  Client → API Gateway → New Orders Service → Orders DB
                       → Monolith (everything else)

Phase 3: Extract more services over time
  Client → API Gateway → Orders Service
                       → Users Service
                       → Monolith (remaining legacy)

Phase 4: Monolith is gone or just a thin legacy shell
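
The gateway routing in Phase 2 can be expressed declaratively. A sketch using YARP's appsettings.json shape — route, cluster, and host names here are illustrative:

JSON
{
  "ReverseProxy": {
    "Routes": {
      "orders-route": {
        "ClusterId": "orders-cluster",
        "Match": { "Path": "/api/orders/{**catch-all}" }
      },
      "fallback-route": {
        "ClusterId": "monolith-cluster",
        "Match": { "Path": "{**catch-all}" }
      }
    },
    "Clusters": {
      "orders-cluster": {
        "Destinations": { "d1": { "Address": "http://orders-service/" } }
      },
      "monolith-cluster": {
        "Destinations": { "d1": { "Address": "http://monolith/" } }
      }
    }
  }
}

The catch-all fallback route is what makes the pattern safe: anything not yet extracted keeps flowing to the monolith untouched.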

Step-by-step extraction:

1. Identify the module boundary
   "Orders" is already mostly isolated — low coupling to other modules

2. Create the new service with its own DB
   orders-service/ → orders_db

3. Sync data: dual-write during migration
   Write to both monolith DB and new service DB
   Read from new service, fall back to monolith if missing

4. Cut over reads
   API Gateway routes /api/orders/* to new service

5. Stop writing to the old location
   Monolith no longer handles orders

6. Clean up legacy tables
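
Step 3's dual-write can be kept small and explicit. A sketch of the transitional code; the client and repository names are illustrative:

C#
// Inside the monolith, only for the duration of the migration
public async Task CreateOrderAsync(CreateOrderRequest req, CancellationToken ct)
{
    // 1. Write to the new service first — it is becoming the source of truth
    var orderId = await _ordersServiceClient.CreateAsync(req, ct);

    // 2. Mirror into the legacy tables so old read paths keep working
    await _legacyOrdersRepo.InsertAsync(orderId, req, ct);
}

public async Task<OrderDto?> GetOrderAsync(int id, CancellationToken ct)
{
    // Read from the new service, fall back to the monolith for unmigrated rows
    return await _ordersServiceClient.GetAsync(id, ct)
        ?? await _legacyOrdersRepo.GetAsync(id, ct);
}

Once step 5 stops the legacy writes and a backfill confirms parity, this shim is deleted along with the old tables.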

Choosing the Right Architecture

Are you building an MVP?
└─ Yes → Monolith. Get to market. Refactor later.
└─ No  → How big is your team?
         └─ Under 15 → Modular Monolith.
         └─ 15–50    → Modular Monolith with a plan to extract services.
         └─ 50+      → Microservices (with platform team and DevOps maturity).

Do different parts need radically different scaling?
└─ Yes → Extract those parts as services, keep the rest as a monolith.

Do you have distributed tracing, CI/CD per service, on-call rotation?
└─ No → Don't do microservices yet. Invest in platform first.

Comparison Table

| | Monolith | Modular Monolith | Microservices | Serverless |
|---|---|---|---|---|
| Deploy unit | 1 process | 1 process | N services | Functions |
| Data | Shared DB | Separate schemas | Separate DBs | External store |
| Communication | In-process | In-process | Network (HTTP/queue) | Events / HTTP |
| Transaction | ACID | ACID | Saga (eventual) | Saga / none |
| Scaling | All-or-nothing | All-or-nothing | Per service | Per function |
| Operational complexity | Low | Low | Very high | Medium |
| Team size | 1–10 | 5–30 | 30+ | Any |
| Debugging | Easy | Easy | Hard (tracing needed) | Hard (cold starts) |
| Best for | MVPs, startups | Growing teams | Large orgs | Sporadic workloads |


Key Takeaways

  • Start with a monolith — you don't know your domain boundaries until you've built the product
  • Discipline beats architecture — a well-structured monolith beats a badly-bounded microservices system every time
  • The modular monolith is the unsung hero — most teams should be here, not in microservices
  • Microservices solve organisational problems (team independence, independent deploy) — not just technical ones
  • Never split by technical layer (e.g., frontend-service, database-service) — split by business domain (Orders, Users, Inventory)
  • The Strangler Fig is the safe migration path — never a full rewrite
  • Observability is not optional in distributed systems — distributed tracing, structured logging, and health checks are prerequisites, not nice-to-haves