Docker Compose: Run Your Entire Stack Locally
Learn Docker Compose to run a .NET API, PostgreSQL, and Redis locally with one command. Covers compose.yml, volumes, networks, health checks, and multi-stage Dockerfiles.
Running a modern application locally used to mean installing every dependency by hand (PostgreSQL, Redis, message queues, the works) and hoping your colleagues had matching versions. Docker Compose solves this: define your entire stack in a single YAML file and start everything with docker compose up.
This guide covers Docker Compose from the ground up using a realistic example: a .NET 8 API backed by PostgreSQL and Redis, with proper health checks, named volumes, and a multi-stage Dockerfile.
What Docker Compose Is (and Isn't)
Docker Compose is an orchestration tool for local development and simple multi-container deployments. You define services, networks, and volumes in a compose.yml file, and Compose manages their lifecycle.
What it is:
- A way to define multi-container applications as code
- Perfect for local development
- Fine for small production deployments (single machine)
What it isn't:
- A replacement for Kubernetes in production
- Built for multi-machine clusters
- A secret manager (don't put production secrets in compose.yml)
The compose.yml File Structure
# compose.yml (the modern name; docker-compose.yml also works)
name: myapp

services:
  api:              # Service name (becomes the DNS hostname on the network)
    build: .        # Build from local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      postgres:
        condition: service_healthy  # Wait until postgres is healthy
      redis:
        condition: service_healthy
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    env_file:
      - .env.local  # Load additional env vars from file

  postgres:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: devpassword
    volumes:
      - postgres_data:/var/lib/postgresql/data                   # Named volume
      - ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql  # Init script
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:  # Named volume: persists across container restarts
  redis_data:

networks:
  default:
    name: myapp_network

The .NET API: Complete Example
Let's build a .NET 8 API that actually uses Postgres and Redis.
// Program.cs
using Microsoft.EntityFrameworkCore;
using StackExchange.Redis;

var builder = WebApplication.CreateBuilder(args);

// PostgreSQL via Entity Framework Core
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("Postgres")));

// Redis connection
builder.Services.AddSingleton<IConnectionMultiplexer>(provider =>
{
    var connectionString = builder.Configuration.GetConnectionString("Redis")
        ?? "localhost:6379";
    return ConnectionMultiplexer.Connect(connectionString);
});

builder.Services.AddScoped<ICacheService, RedisCacheService>();
builder.Services.AddScoped<IProductRepository, ProductRepository>();
builder.Services.AddControllers();

builder.Services.AddHealthChecks()
    .AddNpgSql(builder.Configuration.GetConnectionString("Postgres")!)
    .AddRedis(builder.Configuration.GetConnectionString("Redis")!);

var app = builder.Build();

// Auto-migrate on startup (fine for dev, use proper migrations in prod)
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    db.Database.Migrate();
}

app.MapControllers();
app.MapHealthChecks("/health");
app.Run();

// AppDbContext.cs
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<Product> Products => Set<Product>();
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
}

// RedisCacheService.cs
using System.Text.Json;
public interface ICacheService
{
    Task<T?> GetAsync<T>(string key);
    Task SetAsync<T>(string key, T value, TimeSpan? expiry = null);
    Task RemoveAsync(string key);
}

public class RedisCacheService : ICacheService
{
    private readonly IDatabase _db;

    public RedisCacheService(IConnectionMultiplexer redis)
    {
        _db = redis.GetDatabase();
    }

    public async Task<T?> GetAsync<T>(string key)
    {
        var value = await _db.StringGetAsync(key);
        if (value.IsNullOrEmpty) return default;
        return JsonSerializer.Deserialize<T>(value!);
    }

    public async Task SetAsync<T>(string key, T value, TimeSpan? expiry = null)
    {
        var json = JsonSerializer.Serialize(value);
        await _db.StringSetAsync(key, json, expiry ?? TimeSpan.FromMinutes(5));
    }

    public async Task RemoveAsync(string key)
    {
        await _db.KeyDeleteAsync(key);
    }
}

// ProductsController.cs
[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly IProductRepository _repo;
    private readonly ICacheService _cache;
    private readonly ILogger<ProductsController> _logger;

    public ProductsController(
        IProductRepository repo,
        ICacheService cache,
        ILogger<ProductsController> logger)
    {
        _repo = repo;
        _cache = cache;
        _logger = logger;
    }

    [HttpGet]
    public async Task<IActionResult> GetAll()
    {
        const string cacheKey = "products:all";

        // Try cache first
        var cached = await _cache.GetAsync<List<Product>>(cacheKey);
        if (cached is not null)
        {
            _logger.LogInformation("Cache hit for {CacheKey}", cacheKey);
            return Ok(cached);
        }

        // Fall through to database
        var products = await _repo.GetAllAsync();
        await _cache.SetAsync(cacheKey, products, TimeSpan.FromMinutes(2));
        return Ok(products);
    }

    [HttpGet("{id:int}")]
    public async Task<IActionResult> GetById(int id)
    {
        var cacheKey = $"products:{id}";
        var cached = await _cache.GetAsync<Product>(cacheKey);
        if (cached is not null) return Ok(cached);

        var product = await _repo.GetByIdAsync(id);
        if (product is null) return NotFound();

        await _cache.SetAsync(cacheKey, product, TimeSpan.FromMinutes(10));
        return Ok(product);
    }

    [HttpPost]
    public async Task<IActionResult> Create(CreateProductRequest request)
    {
        var product = new Product
        {
            Name = request.Name,
            Price = request.Price
        };
        await _repo.AddAsync(product);

        // Invalidate list cache
        await _cache.RemoveAsync("products:all");
        return CreatedAtAction(nameof(GetById), new { id = product.Id }, product);
    }
}

public record CreateProductRequest(string Name, decimal Price);

// appsettings.Development.json
{
  "ConnectionStrings": {
    "Postgres": "Host=postgres;Port=5432;Database=myapp;Username=myapp;Password=devpassword",
    "Redis": "redis:6379"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  }
}

Note the hostnames postgres and redis: Docker Compose creates a DNS name for each service on the shared network.
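The Npgsql-style connection string above is just a semicolon-separated list of Key=Value pairs. As a quick illustration, here is a minimal Python sketch of parsing one (sketch only; the real driver handles quoting, casing, and many more keys):

```python
# Illustrative only: parse an Npgsql-style connection string into a dict.
# Assumes every part is a simple Key=Value pair with no quoting.
def parse_connection_string(cs: str) -> dict:
    pairs = (part.split("=", 1) for part in cs.split(";") if part.strip())
    return {key.strip(): value.strip() for key, value in pairs}

conn = parse_connection_string(
    "Host=postgres;Port=5432;Database=myapp;Username=myapp;Password=devpassword"
)
print(conn["Host"])  # -> postgres
```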
Multi-stage Dockerfile for .NET
# Dockerfile
# ─── Stage 1: Restore ──────────────────────────────────────────────────────
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS restore
WORKDIR /src
# Copy only project files first (better layer caching)
COPY ["src/MyApp.Api/MyApp.Api.csproj", "src/MyApp.Api/"]
COPY ["src/MyApp.Core/MyApp.Core.csproj", "src/MyApp.Core/"]
COPY ["src/MyApp.Infrastructure/MyApp.Infrastructure.csproj", "src/MyApp.Infrastructure/"]
# Restore is cached unless .csproj files change
RUN dotnet restore "src/MyApp.Api/MyApp.Api.csproj"
# ─── Stage 2: Build ────────────────────────────────────────────────────────
FROM restore AS build
WORKDIR /src
COPY . .
RUN dotnet build "src/MyApp.Api/MyApp.Api.csproj" \
    -c Release \
    --no-restore \
    -o /app/build
# ─── Stage 3: Publish ──────────────────────────────────────────────────────
FROM build AS publish
RUN dotnet publish "src/MyApp.Api/MyApp.Api.csproj" \
    -c Release \
    --no-restore \
    --no-build \
    -o /app/publish \
    /p:UseAppHost=false
# ─── Stage 4: Runtime (final image) ────────────────────────────────────────
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
# Security: don't run as root
RUN addgroup --system --gid 1001 appgroup && \
adduser --system --uid 1001 --ingroup appgroup appuser
WORKDIR /app
# Copy only the published output ā SDK is NOT in this image
COPY --from=publish --chown=appuser:appgroup /app/publish .
USER appuser
# Use port 8080 (non-privileged)
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.Api.dll"]

Why multi-stage?
- The SDK image is ~800MB; the ASP.NET runtime image is ~220MB
- The final image has no build tools, so a smaller attack surface
- Layer caching: dotnet restore is only re-run when project files change
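Layer caching also depends on keeping the build context small, so COPY . . doesn't drag in build output or VCS history. A minimal .dockerignore sketch (entries are illustrative; adjust to your repo layout):

```
# .dockerignore
**/bin/
**/obj/
.git/
*.md
compose*.yml
```

Without it, local bin/ and obj/ directories invalidate the COPY layer on every build.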
Override Files for Dev vs Prod
compose.yml is the base. compose.override.yml is automatically merged when you run docker compose up; it's where your dev overrides live.
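Conceptually, Compose combines the files by deep-merging mappings, with values from the later file winning. A simplified Python sketch of that rule (not the exact Compose algorithm; lists such as ports and volumes have their own merge behavior):

```python
# Simplified sketch of how Compose merges a base file with an override:
# nested mappings merge recursively, scalar values from the override win.
def merge(base: dict, override: dict) -> dict:
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value
    return result

base = {"services": {"api": {"image": "myapp", "ports": ["8080:8080"]}}}
override = {"services": {"api": {"environment": {"DEBUG": "1"}}}}
merged = merge(base, override)
# merged["services"]["api"] keeps image and ports, and gains environment
```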
# compose.override.yml (auto-applied in dev, never commit to prod)
services:
  api:
    build:
      target: build  # Use the build stage (includes SDK for hot reload)
    volumes:
      # Mount source code for hot reload
      - ./src:/src
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - DOTNET_WATCH_RESTART_ON_RUDE_EDIT=true
    command: dotnet watch run --project src/MyApp.Api/MyApp.Api.csproj

  postgres:
    ports:
      - "5432:5432"  # Expose postgres port to host in dev (for pgAdmin)

  redis:
    ports:
      - "6379:6379"  # Expose redis port to host in dev (for RedisInsight)

For CI or production, use a different file:
# compose.prod.yml
services:
  api:
    image: ghcr.io/myorg/myapp:latest
    restart: always
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
    # No source code mounts, no exposed debug ports

# Use a specific override file
docker compose -f compose.yml -f compose.prod.yml up -d

Volumes: Named vs Bind Mounts
services:
  postgres:
    volumes:
      # Named volume: Docker manages storage location
      # Data persists across container stop/start
      - postgres_data:/var/lib/postgresql/data
      # Bind mount: maps host directory to container path
      # Use for development (source code, config files)
      - ./scripts:/docker-entrypoint-initdb.d:ro  # :ro = read-only

  api:
    volumes:
      # Bind mount for hot reload in development
      - ./src:/src

volumes:
  postgres_data:  # Declare named volume
    # Optional: use a specific driver
    # driver: local

When to use each

| Type | Use for | Persists? |
|------|---------|-----------|
| Named volume | Database data, persistent state | Yes |
| Bind mount | Source code (dev), config files | Yes (host files) |
| tmpfs | Temp data, secrets in memory | No |
Managing volumes
# List all volumes
docker volume ls
# Inspect a volume (see where data lives)
docker volume inspect myapp_postgres_data
# Remove a volume (deletes all data!)
docker volume rm myapp_postgres_data
# Remove all unused volumes
docker volume prune

Health Checks
Health checks let Compose know when a service is truly ready, not just running.
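A useful sanity check on the numbers: after the start_period grace window, a container is marked unhealthy once retries consecutive checks fail, so the worst case is roughly start_period + retries * (interval + timeout). A back-of-envelope sketch in Python (Docker's exact accounting differs slightly across versions):

```python
# Rough worst case (in seconds) before a container is flagged unhealthy,
# assuming every check fails and each failing check takes the full timeout.
def worst_case_unhealthy(interval, timeout, retries, start_period):
    return start_period + retries * (interval + timeout)

# The postgres settings in this section: 30s grace + 5 * (10s + 5s)
print(worst_case_unhealthy(interval=10, timeout=5, retries=5, start_period=30))  # -> 105
```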
services:
  postgres:
    image: postgres:16-alpine
    healthcheck:
      # Command to run inside the container
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s      # Check every 10 seconds
      timeout: 5s        # Fail if a check takes > 5 seconds
      retries: 5         # Mark unhealthy after 5 consecutive failures
      start_period: 30s  # Don't count failures for the first 30s (startup time)

  api:
    depends_on:
      postgres:
        condition: service_healthy  # Wait for postgres to be healthy
      redis:
        condition: service_healthy

For your .NET API itself:
api:
  healthcheck:
    # Note: curl must be present in the image for this to work
    test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 40s  # Give .NET time to start up

Environment Variables and .env Files
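Compose interpolates ${VAR} and ${VAR:-default} in the file, using values from the shell and the .env file. A simplified Python sketch of the default-value syntax (real Compose also supports ${VAR-default}, ${VAR:?error}, and escaping, none of which are modeled here):

```python
import re

# Simplified sketch of Compose-style ${VAR} / ${VAR:-default} interpolation.
# ":-" means: use the default when the variable is unset OR empty.
def interpolate(text: str, env: dict) -> str:
    pattern = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

    def replace(match):
        name, default = match.group(1), match.group(2)
        value = env.get(name)
        return value if value else (default or "")

    return pattern.sub(replace, text)

print(interpolate("Password=${POSTGRES_PASSWORD:-devpassword}", {"POSTGRES_PASSWORD": ""}))  # -> Password=devpassword
print(interpolate("Db=${POSTGRES_DB:-myapp}", {"POSTGRES_DB": "prod"}))  # -> Db=prod
```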
# compose.yml
services:
  api:
    environment:
      # Hard-coded (fine for non-secrets)
      - ASPNETCORE_ENVIRONMENT=Development
      # Reference from shell environment
      - MY_SETTING=${MY_SETTING}
      # From .env file (default)
      - DATABASE_URL
    env_file:
      - .env.local  # Loaded in addition to environment:

# .env (auto-loaded by Compose, but DON'T commit secrets here)
POSTGRES_DB=myapp
POSTGRES_USER=myapp
POSTGRES_PASSWORD=devpassword

# .env.local (gitignored, your personal overrides)
POSTGRES_PASSWORD=mysecretlocalpassword
OPENAI_API_KEY=sk-...

# .gitignore
.env.local
*.env.local

Essential Compose Commands
# Start all services in foreground (see logs)
docker compose up
# Start in detached/background mode
docker compose up -d
# Start specific service only
docker compose up api
# Stop and remove containers (volumes kept)
docker compose down
# Stop and remove containers AND volumes (deletes data!)
docker compose down -v
# View logs for all services
docker compose logs
# Follow logs for a specific service
docker compose logs -f api
# List running containers
docker compose ps
# Execute a command inside a running container
docker compose exec postgres psql -U myapp -d myapp
# Execute a command inside the api container
docker compose exec api /bin/bash
# Rebuild images (after Dockerfile changes)
docker compose build
# Rebuild and restart
docker compose up --build
# Pull latest images
docker compose pull
# Show resource usage
docker compose stats
# Scale a service (run multiple instances)
# Scale a service (run multiple instances; requires no fixed
# container_name and no fixed host port on that service)
docker compose up --scale api=3

Full Stack: .NET API + PostgreSQL + Redis
Here is the complete compose.yml for our example app:
# compose.yml
name: myapp

services:
  # ─── .NET API ────────────────────────────────────────────────────────────
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: final
    container_name: myapp-api
    restart: unless-stopped
    ports:
      - "8080:8080"
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ConnectionStrings__Postgres=Host=postgres;Port=5432;Database=myapp;Username=myapp;Password=${POSTGRES_PASSWORD}
      - ConnectionStrings__Redis=redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    networks:
      - backend

  # ─── PostgreSQL ──────────────────────────────────────────────────────────
  postgres:
    image: postgres:16-alpine
    container_name: myapp-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-devpassword}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/db:/docker-entrypoint-initdb.d:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      - backend

  # ─── Redis ───────────────────────────────────────────────────────────────
  redis:
    image: redis:7-alpine
    container_name: myapp-redis
    restart: unless-stopped
    command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

# ─── Volumes ─────────────────────────────────────────────────────────────────
volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local

# ─── Networks ────────────────────────────────────────────────────────────────
networks:
  backend:
    driver: bridge

# compose.override.yml (dev extras, auto-merged)
services:
  api:
    build:
      target: build
    volumes:
      - ./src:/src
    command: dotnet watch run --project src/MyApp.Api/MyApp.Api.csproj --urls http://+:8080

  postgres:
    ports:
      - "5432:5432"  # pgAdmin access

  redis:
    ports:
      - "6379:6379"  # RedisInsight access

  # Dev-only: pgAdmin for database browsing
  pgadmin:
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@local.dev
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    depends_on:
      - postgres
    networks:
      - backend

  # Dev-only: RedisInsight for cache browsing
  redis-insight:
    image: redis/redisinsight:latest
    ports:
      - "5540:5540"
    depends_on:
      - redis
    networks:
      - backend

Troubleshooting Common Issues
"Connection refused" to postgres
The API started before postgres was ready. Fix: use depends_on with condition: service_healthy and add a healthcheck to postgres.
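Application code can also defend itself by retrying the connection on startup. In .NET you would typically use a retry loop or a resilience library; the generic pattern, sketched in Python for illustration:

```python
import socket
import time

# Generic "wait until a TCP port accepts connections" helper.
# Returns True once a connection succeeds, False after the deadline.
def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

# e.g. wait_for_port("postgres", 5432) before running migrations
```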
Port already in use
# Find what's using port 5432
lsof -i :5432 # macOS/Linux
netstat -ano | findstr :5432 # Windows
# Or just change the host port mapping
ports:
  - "5433:5432"  # Map container 5432 to host 5433

Containers can't reach each other
Make sure they're on the same network. Services in the same compose file share a default network automatically, but if you use custom networks, all services must be listed under the same network.
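A common case: you add a new service but forget to attach it to the custom network. A sketch (the worker service here is hypothetical):

```yaml
services:
  api:
    networks:
      - backend
  worker:          # hypothetical new service
    networks:
      - backend    # must be listed here too, or api can't reach it
networks:
  backend:
```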
Volume data not persisting
Check that you're using a named volume (postgres_data:/var/lib/...) rather than a host path, which Docker treats as a bind mount.
Changes to Dockerfile not taking effect
Run docker compose build --no-cache api to force a full rebuild.
Summary
Docker Compose makes local development painless:
- compose.yml: defines services, volumes, networks in one file
- Multi-stage Dockerfile: small runtime image, fast builds with layer caching
- Health checks + depends_on: services start in the right order
- Named volumes: data persists across restarts
- compose.override.yml: dev extras (hot reload, port exposure) that don't touch the base config
- env files: non-secret config in .env, secrets in .env.local (gitignored)
One command, docker compose up, and your entire stack is running.