Seq & ELK — Centralised Logging You Can Actually Search
Move beyond file logs with Seq for local dev and ELK for production. Structured Serilog sinks, index templates, and correlation IDs for distributed tracing.
Why File Logs Don't Scale
Flat log files work fine for a single service on one machine. The moment you have:
- Multiple service instances behind a load balancer
- More than one microservice involved in a request
- Logs rolling over daily across dozens of pods
...you're grep-ing across SSH sessions and losing your mind. Centralised structured logging solves this.
Seq — Dev-Friendly Log Server
Seq is free for a single user, has a slick search UI, and understands Serilog's structured events natively. Run it with Docker:
docker run -d --name seq -e ACCEPT_EULA=Y -p 5341:5341 -p 8080:80 datalust/seq

Packages
dotnet add package Serilog.AspNetCore
dotnet add package Serilog.Sinks.Seq
dotnet add package Serilog.Enrichers.Environment
dotnet add package Serilog.Enrichers.Thread
// Program.cs
using Serilog;

var builder = WebApplication.CreateBuilder(args);

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(builder.Configuration)
    .Enrich.FromLogContext()
    .Enrich.WithMachineName()
    .Enrich.WithThreadId()
    .WriteTo.Console()
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();

builder.Host.UseSerilog();

// appsettings.json
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft.AspNetCore": "Warning",
        "System": "Warning"
      }
    }
  }
}

Structured Log Events
The key difference: log objects, not interpolated strings.
// Bad — you can't filter on OrderId in Seq
_logger.LogInformation($"Order {orderId} created for customer {customerId}");
// Good — OrderId and CustomerId become searchable properties
_logger.LogInformation("Order {OrderId} created for {CustomerId}", orderId, customerId);

In Seq UI you can now query: OrderId = 'abc-123' and get every log event touching that order across all services.
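Serilog can also capture whole objects with the @ destructuring operator, so nested properties become queryable too. A short sketch — the Order record here is hypothetical, purely for illustration:

```csharp
// Hypothetical type for illustration only
public record Order(string OrderId, string CustomerId, decimal Total);

var order = new Order("abc-123", "cust-42", 99.95m);

// The @ prefix tells Serilog to destructure the object rather than
// call ToString() — Order.Total and Order.CustomerId become structured
// properties you can filter on in Seq
_logger.LogInformation("Processing {@Order}", order);
```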
Correlation IDs
Add a correlation ID middleware so every log event for a single HTTP request shares the same ID:
using Serilog.Context;

public class CorrelationIdMiddleware(RequestDelegate next)
{
    private const string Header = "X-Correlation-ID";

    public async Task InvokeAsync(HttpContext context)
    {
        var correlationId = context.Request.Headers[Header].FirstOrDefault()
            ?? Guid.NewGuid().ToString("N");
        context.Response.Headers[Header] = correlationId;

        using (LogContext.PushProperty("CorrelationId", correlationId))
        {
            await next(context);
        }
    }
}
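For the ID to follow a request across services, outgoing HTTP calls need to carry the same header. One way (a sketch, assuming IHttpContextAccessor is registered via AddHttpContextAccessor) is a delegating handler:

```csharp
// Sketch: copies the incoming request's correlation ID onto outgoing calls.
// Assumes builder.Services.AddHttpContextAccessor() has been called.
public class CorrelationIdHandler(IHttpContextAccessor accessor) : DelegatingHandler
{
    private const string Header = "X-Correlation-ID";

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var correlationId = accessor.HttpContext?.Request.Headers[Header].FirstOrDefault();
        if (correlationId is not null)
            request.Headers.TryAddWithoutValidation(Header, correlationId);

        return base.SendAsync(request, cancellationToken);
    }
}

// Registration ("inventory-api" is a hypothetical client name):
// builder.Services.AddTransient<CorrelationIdHandler>();
// builder.Services.AddHttpClient("inventory-api")
//     .AddHttpMessageHandler<CorrelationIdHandler>();
```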
// Program.cs
app.UseMiddleware<CorrelationIdMiddleware>();
app.UseSerilogRequestLogging(); // logs method, path, status, elapsed

Now every log line has CorrelationId — search CorrelationId = 'abc' in Seq to see the full request trace.
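UseSerilogRequestLogging also accepts options. One optional tweak is enriching the single "request completed" event with extra request data — a sketch:

```csharp
app.UseSerilogRequestLogging(options =>
{
    // Adds extra properties to the request-completion log event
    options.EnrichDiagnosticContext = (diagnosticContext, httpContext) =>
    {
        diagnosticContext.Set("ClientIP",
            httpContext.Connection.RemoteIpAddress?.ToString());
        diagnosticContext.Set("UserAgent",
            httpContext.Request.Headers.UserAgent.ToString());
    };
});
```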
ELK Stack — Production-Scale Logging
ELK = Elasticsearch (store + search) + Logstash (ingest pipeline) + Kibana (UI). For .NET, you often skip Logstash and push directly to Elasticsearch.
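If you want the stack running locally to experiment against, a minimal single-node compose file might look like this — versions and memory settings are illustrative, not production values:

```yaml
# docker-compose.yml — single-node ES + Kibana for local testing only
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```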
dotnet add package Serilog.Sinks.Elasticsearch

.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(
    new Uri(builder.Configuration["Elasticsearch:Uri"]!))
{
    AutoRegisterTemplate = true,
    AutoRegisterTemplateVersion = AutoRegisterTemplateVersion.ESv7,
    // Format placeholder uses the event timestamp, so the index
    // rolls over monthly even for long-running processes
    IndexFormat = "dotnet-logs-{0:yyyy-MM}",
    ModifyConnectionSettings = c =>
        c.BasicAuthentication(
            builder.Configuration["Elasticsearch:User"],
            builder.Configuration["Elasticsearch:Password"])
})

Index Template
Register this in Elasticsearch once to ensure correct field mappings:
PUT _index_template/dotnet-logs
{
  "index_patterns": ["dotnet-logs-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "level": { "type": "keyword" },
        "message": { "type": "text" },
        "CorrelationId": { "type": "keyword" },
        "RequestPath": { "type": "keyword" },
        "StatusCode": { "type": "integer" },
        "Elapsed": { "type": "float" },
        "MachineName": { "type": "keyword" },
        "service": { "type": "keyword" }
      }
    }
  }
}

keyword fields are exact-match filterable. text fields are full-text searched. Getting this right means fast queries at scale.
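The distinction shows up at query time: a term filter on a keyword field is an exact, cache-friendly match, while match on a text field runs through the analyser. A sketch of both in one query:

```json
GET dotnet-logs-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "CorrelationId": "abc123" } }
      ],
      "must": [
        { "match": { "message": "timeout" } }
      ]
    }
  }
}
```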
Kibana Search Across Services
In Kibana Discover, with the index pattern dotnet-logs-*:
CorrelationId : "abc123" AND level : "Error"
service : "order-api" AND StatusCode >= 500
RequestPath : "/api/orders*" AND Elapsed > 1000

Choosing Seq vs ELK
| | Seq | ELK |
|---|---|---|
| Setup | 1 Docker command | docker-compose with 3 services |
| Cost | Free (1 user) | Free (self-hosted) |
| Best for | Local dev, small teams | Multi-service production |
| Query language | Seq SQL-like | Lucene / KQL |
Run Seq locally, push to ELK in production. Use the same Serilog configuration with environment-based sink selection:
var logConfig = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .Enrich.FromLogContext();

if (environment.IsDevelopment())
    logConfig.WriteTo.Seq("http://localhost:5341");
else
    logConfig.WriteTo.Elasticsearch(esOptions);

Log.Logger = logConfig.CreateLogger();

Summary
- Structured logging ({Property} tokens) is the foundation — without it, centralised logging is just fancy grep
- Seq is the fastest path to searchable logs in dev; zero config beyond a Docker container
- ELK scales to terabytes across hundreds of services; index templates ensure field types are correct
- Correlation IDs thread a single request across every service it touches — essential for microservices debugging