Cloud Integration on Azure: Service Bus, Event Grid, Functions & Hybrid Architecture
Architect-level guide to cloud-native integration on Azure: Service Bus internals, Event Grid routing, serverless Functions patterns, and hybrid on-premises/cloud architectures, with production .NET examples throughout.
Modern systems are cloud-first by default, but that does not mean they live entirely in the cloud. They span managed PaaS services, on-premises systems, third-party SaaS, and edge compute. Cloud integration is the discipline of connecting these layers reliably, at scale, and without tight coupling.
Azure provides three primary integration primitives: Service Bus (reliable enterprise messaging), Event Grid (reactive event routing), and Functions (serverless compute as the glue). Understanding which to use, how to compose them, and how to bridge to on-premises systems is the difference between a working demo and a production architecture.
The Azure Integration Decision Map
Before diving into each service, orient to the fundamental trade-offs:
```
          ┌─────────────────────────────┐
          │    What are you moving?     │
          └─────────────────────────────┘
             │                     │
       Commands /            Notifications /
       Work Items            State Changes
             │                     │
   ┌─────────────────┐   ┌──────────────────┐
   │   Service Bus   │   │    Event Grid    │
   │                 │   │                  │
   │   At-least-     │   │  At-least-once,  │
   │   once, FIFO,   │   │  push-based,     │
   │   sessions,     │   │  fan-out,        │
   │   dead-letter   │   │  24h retry       │
   └─────────────────┘   └──────────────────┘
             │                     │
             └──────────┬──────────┘
                        │
               ┌─────────────────┐
               │    Functions    │
               │   (triggered    │
               │   by either)    │
               └─────────────────┘
```

| Service | Model | Ordering | Retry | Max message size | Best for |
|---------|-------|----------|-------|------------------|----------|
| Service Bus Queue | Pull, competing consumers | FIFO + sessions | DLQ after N failures | 256KB (100MB Premium) | Work distribution, commands |
| Service Bus Topic | Pull, subscription filter | Per subscription | DLQ after N failures | 256KB (100MB Premium) | Fan-out with filtering |
| Event Grid | Push (webhook/SDK) | Best-effort | 24h exponential retry | 1MB | Reactive routing, CloudEvents |
| Event Hubs | Pull, log (Kafka-compatible) | Per partition | Retention period | 1MB (1TB w/ capture) | High-throughput streaming |
| Functions | Triggered compute | N/A | Retry policies | N/A | Glue logic, transformations |
Azure Service Bus: Enterprise Messaging
Service Bus is Azure's enterprise message broker, the managed counterpart of IBM MQ or on-premises ActiveMQ. It handles the hard problems: ordering, dead-lettering, duplicate detection, sessions, and transactions.
Namespace, Queues, and Topics
```
Service Bus Namespace: mycompany-servicebus.servicebus.windows.net
│
├── Queue: orders.processing      (point-to-point)
├── Queue: payments.commands      (point-to-point)
│
├── Topic: order.events           (pub/sub)
│   ├── Subscription: inventory   (filter: Label = 'OrderPlaced')
│   ├── Subscription: billing     (filter: Label = 'OrderPlaced' OR 'OrderCancelled')
│   └── Subscription: analytics   (no filter - receives all)
│
└── Topic: notifications          (pub/sub)
    ├── Subscription: email       (filter: UserProperty 'channel' = 'email')
    └── Subscription: push        (filter: UserProperty 'channel' = 'push')
```

Queues: each message is delivered to one consumer. Competing consumers scale horizontally.
Topics: each message is delivered to every subscription independently. Subscriptions can filter.
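The two delivery semantics can be illustrated without a broker. The sketch below is a minimal in-memory model (not the Service Bus SDK; the `Broker` type is hypothetical): a queue hands each message to exactly one of its competing consumers, while a topic delivers a copy to every subscription whose filter matches.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// In-memory illustration of queue vs topic delivery semantics (not the SDK).
class Broker
{
    // Queue: round-robin across competing consumers; each message goes to ONE consumer.
    public static Dictionary<string, List<string>> Queue(
        IEnumerable<string> messages, string[] consumers)
    {
        var received = consumers.ToDictionary(c => c, _ => new List<string>());
        int i = 0;
        foreach (var msg in messages)
            received[consumers[i++ % consumers.Length]].Add(msg);
        return received;
    }

    // Topic: every subscription independently receives every message passing its filter.
    public static Dictionary<string, List<string>> Topic(
        IEnumerable<(string Label, string Body)> messages,
        Dictionary<string, Func<string, bool>> subscriptions)
    {
        var received = subscriptions.Keys.ToDictionary(s => s, _ => new List<string>());
        foreach (var (label, body) in messages)
            foreach (var (sub, filter) in subscriptions)
                if (filter(label)) received[sub].Add(body);
        return received;
    }
}
```

Three messages on a queue with two consumers get split between them; a topic with an unfiltered analytics subscription delivers all three copies there.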
Dead Letter Queue
Every queue and topic subscription has a built-in DLQ at <entity>/$DeadLetterQueue. Messages land there when:
- Max delivery count exceeded (default: 10)
- Message TTL expires before delivery
- Consumer explicitly dead-letters it
```csharp
// Explicitly dead-letter with reason
await receiver.DeadLetterMessageAsync(message,
    deadLetterReason: "ValidationFailed",
    deadLetterErrorDescription: $"OrderId field missing. Raw: {message.Body}");
```

Rule: monitor DLQ depth as a first-class metric. A DLQ that silently fills is an undetected incident.
Sessions: Ordered Processing per Entity
Service Bus sessions are the mechanism for guaranteed ordered processing of messages belonging to the same entity: all messages with SessionId = "ORD-1234" are always delivered to the same consumer, in order.
```csharp
// Send with session ID
var message = new ServiceBusMessage(JsonSerializer.SerializeToUtf8Bytes(orderEvent))
{
    SessionId = order.Id.ToString(), // all messages for this order go to the same consumer
    ContentType = "application/json"
};
await sender.SendMessageAsync(message);
```

```csharp
// Receive with session - accepts one session at a time, with an exclusive lock
await using var sessionReceiver = await client.AcceptNextSessionAsync("orders.processing");
await foreach (var msg in sessionReceiver.ReceiveMessagesAsync())
{
    await ProcessInOrderAsync(msg);
    await sessionReceiver.CompleteMessageAsync(msg);
}
```

Sessions are the correct answer whenever business correctness depends on processing an entity's events in sequence: order state machines, patient record updates, financial ledger entries.
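The session guarantee can be pictured with a broker-free sketch (an illustration of the semantics, not the SDK; `SessionDispatcher` is hypothetical): messages are partitioned by SessionId, each session sticks to one consumer, and per-session order is preserved even when sessions interleave.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustration of session semantics: one consumer per session, in-order within a session.
class SessionDispatcher
{
    // Assigns each session to a consumer (sticky), preserving per-session send order.
    public static Dictionary<string, List<string>> Dispatch(
        IEnumerable<(string SessionId, string Body)> messages, int consumerCount)
    {
        var sessionToConsumer = new Dictionary<string, int>();
        var delivered = Enumerable.Range(0, consumerCount)
            .ToDictionary(i => $"consumer-{i}", _ => new List<string>());
        foreach (var (sessionId, body) in messages)
        {
            if (!sessionToConsumer.TryGetValue(sessionId, out var c))
                sessionToConsumer[sessionId] = c = sessionToConsumer.Count % consumerCount;
            delivered[$"consumer-{c}"].Add($"{sessionId}:{body}");
        }
        return delivered;
    }
}
```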
Duplicate Detection
Service Bus can deduplicate messages with the same MessageId within a configurable window (up to 7 days). Enable it at queue/topic creation:
```csharp
await adminClient.CreateQueueAsync(new CreateQueueOptions("orders.processing")
{
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(30)
});
```

On send, always set a deterministic MessageId:

```csharp
var message = new ServiceBusMessage(payload)
{
    MessageId = $"order-placed-{order.Id}-{order.Version}"
};
```

If the network drops after send but before acknowledgement, the producer retries. The duplicate arrives at the broker and is silently discarded.
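The broker-side behaviour can be modelled as a time-windowed MessageId cache. This is a local illustration of the semantics only (Service Bus does this internally; the `DedupWindow` type is hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Models duplicate detection: accept a message only if its MessageId has not
// been seen within the detection window.
class DedupWindow
{
    private readonly Dictionary<string, DateTimeOffset> _seen = new();
    private readonly TimeSpan _window;
    public DedupWindow(TimeSpan window) => _window = window;

    public bool TryAccept(string messageId, DateTimeOffset now)
    {
        if (_seen.TryGetValue(messageId, out var last) && now - last < _window)
            return false; // duplicate within the window: silently dropped
        _seen[messageId] = now;
        return true;
    }
}
```

This is why the MessageId must be deterministic: a random GUID per send attempt would defeat the window entirely.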
Service Bus in .NET: Complete Pattern
```csharp
// Program.cs - register with DI
builder.Services.AddAzureClients(clients =>
{
    clients.AddServiceBusClient(builder.Configuration["ServiceBus:ConnectionString"]);
});
```

```csharp
// Producer service
public class OrderEventPublisher(ServiceBusClient client)
{
    private ServiceBusSender? _sender;

    public async Task PublishAsync(OrderPlacedEvent evt, CancellationToken ct)
    {
        _sender ??= client.CreateSender("order.events");
        var message = new ServiceBusMessage(BinaryData.FromObjectAsJson(evt))
        {
            MessageId = evt.EventId,
            SessionId = evt.OrderId,
            Subject = nameof(OrderPlacedEvent),
            ContentType = "application/json",
            CorrelationId = evt.CorrelationId,
            ApplicationProperties = { ["eventType"] = nameof(OrderPlacedEvent) }
        };
        await _sender.SendMessageAsync(message, ct);
    }
}
```

```csharp
// Consumer - hosted service
public class InventoryWorker(
    ServiceBusClient client,
    IInventoryService inventoryService,
    ILogger<InventoryWorker> logger) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var options = new ServiceBusProcessorOptions
        {
            MaxConcurrentCalls = 4,
            AutoCompleteMessages = false,
            MaxAutoLockRenewalDuration = TimeSpan.FromMinutes(5)
        };
        await using var processor = client.CreateProcessor(
            "order.events", "inventory", options);

        processor.ProcessMessageAsync += HandleMessageAsync;
        processor.ProcessErrorAsync += HandleErrorAsync;

        await processor.StartProcessingAsync(stoppingToken);
        await Task.Delay(Timeout.Infinite, stoppingToken);
    }

    private async Task HandleMessageAsync(ProcessMessageEventArgs args)
    {
        var evt = args.Message.Body.ToObjectFromJson<OrderPlacedEvent>();
        try
        {
            await inventoryService.ReserveAsync(evt.OrderId, evt.Items);
            await args.CompleteMessageAsync(args.Message);
        }
        catch (TransientException)
        {
            // abandon - message returns to the queue for retry
            await args.AbandonMessageAsync(args.Message);
        }
        catch (ValidationException ex)
        {
            // permanent failure - skip retries, send to the DLQ
            await args.DeadLetterMessageAsync(args.Message,
                "ValidationFailed", ex.Message);
        }
    }

    private Task HandleErrorAsync(ProcessErrorEventArgs args)
    {
        logger.LogError(args.Exception,
            "Service Bus error on {EntityPath}", args.EntityPath);
        return Task.CompletedTask;
    }
}
```

Pricing and Tier Selection
| Tier | Max message size | Sessions | Geo-DR | Transactions | Use |
|------|------------------|----------|--------|--------------|-----|
| Basic | 256KB | No | No | No | Dev/test queues only |
| Standard | 256KB | Yes | No | No | Most workloads |
| Premium | 100MB | Yes | Yes | Yes | Prod, compliance, large payloads |
For production: Standard minimum, Premium for compliance-sensitive workloads (healthcare, finance) or message sizes over 256KB.
Azure Event Grid: Reactive Event Routing
Event Grid is a push-based, fully managed event routing service. Where Service Bus is about reliable work delivery, Event Grid is about broadcasting facts to interested subscribers, with zero infrastructure to manage.
The Event Grid Model
```
Event Source                  Event Grid                 Event Handler
────────────                  ──────────                 ─────────────
Azure Blob Storage       ──►  Topic / Domain  ──push──►  Azure Function
Azure Resource Manager        (filter, route) ──push──►  Logic App
Custom application                            ──push──►  Service Bus Queue
Any CloudEvents source                        ──push──►  Webhook endpoint
                                              ──push──►  Event Hubs
```

Event Grid pulls from sources and pushes to handlers. You do not poll: you register a subscription and events arrive at your endpoint.
Event Grid vs Service Bus: The Key Distinction
```
Service Bus Queue: "Process this work item. Retry until acknowledged."
Event Grid:        "This thing happened. Notify whoever cares."
```

Event Grid retries for up to 24 hours with exponential backoff, but it guarantees neither ordered delivery nor exactly-once processing. It is designed for fan-out notification: multiple handlers react to the same event independently.
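The shape of a capped retry schedule can be sketched in a few lines. This is an illustration of the give-up-after-24-hours idea, not Event Grid's actual service-defined schedule (the `Backoff` helper is hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Illustrative capped exponential backoff: doubling delays, capped per-attempt,
// stopping once the cumulative elapsed time would exceed the give-up point.
static class Backoff
{
    public static List<TimeSpan> Schedule(TimeSpan initial, TimeSpan cap, TimeSpan giveUpAfter)
    {
        var delays = new List<TimeSpan>();
        var delay = initial;
        var elapsed = TimeSpan.Zero;
        while (elapsed + delay <= giveUpAfter)
        {
            delays.Add(delay);
            elapsed += delay;
            delay = delay + delay > cap ? cap : delay + delay; // double, capped
        }
        return delays;
    }
}
```

For example, `Backoff.Schedule(TimeSpan.FromSeconds(10), TimeSpan.FromHours(1), TimeSpan.FromHours(24))` yields a schedule that doubles from 10 seconds, plateaus at one hour, and never exceeds 24 hours in total.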
Use Event Grid when:
- Reacting to Azure resource lifecycle (blob created, VM started, container pushed)
- Notifying multiple downstream services of a state change
- Building reactive pipelines without managing subscriptions manually
Use Service Bus when:
- Work must be processed exactly once (or reliably at-least-once with dedup)
- Ordering matters
- Processing time may be long (lock renewal, sessions)
- You need DLQ with inspection
Custom Topic: Publishing from Your Application
```csharp
// Publish to a custom Event Grid topic
var endpoint = new Uri("https://my-topic.eastus-1.eventgrid.azure.net/api/events");
var credential = new AzureKeyCredential(configuration["EventGrid:Key"]);
var client = new EventGridPublisherClient(endpoint, credential);

var cloudEvent = new CloudEvent(
    source: "/orders/order-service",
    type: "com.mycompany.order.placed",
    data: new OrderPlacedData(order.Id, order.CustomerId, order.Total))
{
    Id = Guid.NewGuid().ToString(),
    Time = DateTimeOffset.UtcNow,
    Subject = $"orders/{order.Id}"
};
await client.SendEventAsync(cloudEvent);
```

Always use the CloudEvents format (the CNCF standard) rather than the legacy Event Grid schema: it is vendor-neutral, supported natively by Event Grid, and portable to any CloudEvents-compatible broker.
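What actually goes over the wire is plain JSON. A hand-rolled sketch of the CloudEvents 1.0 envelope (using System.Text.Json rather than the SDK's `CloudEvent` type, which produces the same shape) makes the format concrete; the data record here is illustrative:

```csharp
using System;
using System.Text.Json;

// Illustrative payload type
record OrderPlacedData(string OrderId, string CustomerId, decimal Total);

class CloudEventExample
{
    // Hand-built CloudEvents 1.0 envelope. Required attributes:
    // specversion, id, source, type. The rest are optional.
    public static string Serialize()
    {
        var envelope = new
        {
            specversion = "1.0",
            id = Guid.NewGuid().ToString(),
            source = "/orders/order-service",
            type = "com.mycompany.order.placed",
            time = DateTimeOffset.UtcNow,
            subject = "orders/ORD-1234",
            datacontenttype = "application/json",
            data = new OrderPlacedData("ORD-1234", "CUST-9", 99.50m)
        };
        return JsonSerializer.Serialize(envelope);
    }
}
```

Because the envelope is just JSON with lowercase attribute names, any CloudEvents-aware consumer (Event Grid, Kafka with a CloudEvents binding, a plain webhook) can parse it without Azure-specific code.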
Event Grid Domain: Multi-Tenant Fan-Out
For systems serving multiple tenants or teams, an Event Grid Domain provides a single endpoint with per-tenant topics:
```
Event Grid Domain: events.myplatform.com
    Topic: tenant/acme-corp  →  subscriptions for ACME's handlers
    Topic: tenant/globex     →  subscriptions for Globex's handlers
    Topic: tenant/initech    →  subscriptions for Initech's handlers
```

Publishers post to events.myplatform.com/api/events with a topic field, and Event Grid routes to the right tenant topic automatically. This replaces building your own multi-tenant routing layer.
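The routing a Domain performs can be sketched as a simple topic-keyed dispatch. This is an illustration of the behaviour only, not Event Grid code (the `DomainRouter` type is hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Illustration of Event Grid Domain routing: one ingress endpoint,
// events dispatched to per-tenant topics by the event's topic field.
class DomainRouter
{
    private readonly Dictionary<string, List<string>> _topics = new();

    public void Publish(string topic, string eventBody)
    {
        if (!_topics.TryGetValue(topic, out var events))
            _topics[topic] = events = new List<string>();
        events.Add(eventBody); // subscriptions on this topic would now be pushed to
    }

    public IReadOnlyList<string> EventsFor(string topic) =>
        _topics.TryGetValue(topic, out var e) ? (IReadOnlyList<string>)e : Array.Empty<string>();
}
```

Tenants never see each other's topics; isolation falls out of the topic key rather than any routing code you write.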
Event Grid System Topics (Azure Resource Events)
Event Grid has built-in integration with Azure services. No custom topic is needed; just subscribe:
```csharp
// React to blob storage events via the Azure SDK.
// Subscription configured in portal or Bicep:
//   Source: Storage Account
//   Event type: Microsoft.Storage.BlobCreated
//   Handler: Azure Function endpoint

// In your Azure Function:
[Function("ProcessUploadedFile")]
public async Task Run(
    [EventGridTrigger] CloudEvent cloudEvent)
{
    var blobEvent = cloudEvent.Data!.ToObjectFromJson<StorageBlobCreatedEventData>();
    logger.LogInformation("Blob created: {BlobUrl}", blobEvent.Url);
    await _processor.ProcessAsync(blobEvent.Url);
}
```

This pattern powers file processing pipelines, document ingestion, image resizing, and compliance scanning without any custom plumbing.
Azure Functions: Serverless Compute as Integration Glue
Azure Functions is the compute layer that binds messaging, storage, HTTP, and timers together. In integration architectures, Functions serve as event handlers, transformers, routers, and orchestrators, without managing servers.
Trigger + Binding Model
Functions express their integration intent declaratively:
```csharp
// Input: Service Bus message
// Output: Cosmos DB document + another Service Bus message
[Function("EnrichAndRoute")]
public async Task<MultiOutput> Run(
    [ServiceBusTrigger("order.events", "enrichment", Connection = "ServiceBusConnection")]
    ServiceBusReceivedMessage message,
    FunctionContext context)
{
    var order = message.Body.ToObjectFromJson<OrderPlacedEvent>();
    var customer = await _customerService.GetAsync(order.CustomerId);
    var enriched = new EnrichedOrder(order, customer.Tier, customer.Region);

    return new MultiOutput
    {
        CosmosDocument = enriched,
        DownstreamMessage = new ServiceBusMessage(BinaryData.FromObjectAsJson(enriched))
        {
            SessionId = enriched.OrderId,
            Subject = "EnrichedOrderReady"
        }
    };
}

// With multiple outputs, each binding attribute goes on a property of the return type
public class MultiOutput
{
    [CosmosDBOutput("orders", "enriched", Connection = "CosmosConnection")]
    public EnrichedOrder? CosmosDocument { get; set; }

    [ServiceBusOutput("enriched.orders", Connection = "ServiceBusConnection")]
    public ServiceBusMessage? DownstreamMessage { get; set; }
}
```

The binding model means no SDK boilerplate for reading from queues or writing to storage: you declare the integration intent and Azure wires it up.
Durable Functions: Stateful Orchestration
Standard Functions are stateless: each invocation is independent. Durable Functions (built on the Durable Task Framework) add stateful orchestration: long-running workflows, fan-out/fan-in, timers, and human approval steps.
```csharp
// Orchestrator - coordinates the workflow
[Function("OrderFulfillmentOrchestrator")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] TaskOrchestrationContext context)
{
    var order = context.GetInput<OrderPlacedEvent>()!;

    // Fan-out: run inventory check and fraud check in parallel
    var inventoryTask = context.CallActivityAsync<bool>("CheckInventory", order);
    var fraudTask = context.CallActivityAsync<FraudResult>("CheckFraud", order);
    await Task.WhenAll(inventoryTask, fraudTask);

    if (!inventoryTask.Result || fraudTask.Result.IsHighRisk)
    {
        await context.CallActivityAsync("CancelOrder", order.OrderId);
        return;
    }

    await context.CallActivityAsync("ReserveInventory", order);
    await context.CallActivityAsync("ChargePayment", order);

    // Wait for external event: warehouse picks the order (timeout after 48h)
    string pickedEvent;
    try
    {
        pickedEvent = await context.WaitForExternalEvent<string>(
            "OrderPicked",
            timeout: TimeSpan.FromHours(48));
    }
    catch (TaskCanceledException)
    {
        // the 48h durable timer fired before the event arrived
        await context.CallActivityAsync("EscalateToWarehouse", order.OrderId);
        return;
    }

    await context.CallActivityAsync("ScheduleShipment", new { order.OrderId, pickedEvent });
}
```

Durable Functions use Azure Storage (or Netherite/MSSQL) to checkpoint state between steps. If the host restarts mid-workflow, it replays from the last checkpoint without re-executing completed steps.
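The checkpoint-and-replay mechanic can be illustrated with a toy history table. This is a sketch of the Durable Task idea, not the framework itself (the `ReplayContext` type is hypothetical): on replay, an activity whose result is already in the history returns the recorded result instead of re-executing.

```csharp
using System;
using System.Collections.Generic;

// Toy model of Durable Task replay: completed activity results are read
// from history rather than re-executing the activity (and its side effects).
class ReplayContext
{
    private readonly Dictionary<string, string> _history; // activity name -> recorded result
    public int Executions { get; private set; }           // real executions this run

    public ReplayContext(Dictionary<string, string> history) => _history = history;

    public string CallActivity(string name, Func<string> activity)
    {
        if (_history.TryGetValue(name, out var recorded))
            return recorded;            // replayed from checkpoint, no side effects
        Executions++;
        var result = activity();
        _history[name] = result;        // checkpoint for future replays
        return result;
    }
}
```

This is also why orchestrator code must be deterministic: on replay it must take the same path through the history it took the first time.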
When to use Durable Functions vs Service Bus Saga:
| | Durable Functions | Service Bus + Saga Orchestrator |
|--|------------------|--------------------------------|
| State storage | Azure Storage (auto) | Custom DB (you manage) |
| Visibility | Durable Task Monitor | Your own tooling |
| Replay on failure | Built-in | You implement |
| Long-running timers | Native (CreateTimer) | Scheduled messages |
| Language support | .NET, JS, Python, Java | Any (via broker) |
| Scale | Consumption plan limits | Independent services |
Use Durable Functions for workflows within one bounded context or microservice. Use Service Bus sagas when the workflow spans multiple independently deployed services.
Function Scaling and Concurrency
On the Consumption plan, the Functions host scales instances based on queue/topic depth (for Service Bus triggers) or event rate (Event Grid triggers). Each instance handles multiple invocations concurrently.
For Service Bus triggers, control parallelism explicitly:
```jsonc
// host.json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxConcurrentCalls": 8,
      "maxConcurrentSessions": 4,
      "maxAutoLockRenewalDuration": "00:05:00"
    }
  },
  "functionTimeout": "00:10:00"
}
```

Consumption plan limitations to know:
- Timeout: 5 minutes by default, 10 minutes max (extendable to 60 min and beyond on Premium/Dedicated)
- Cold start latency: 1-3 seconds for .NET isolated (use Premium plan for latency-sensitive workloads)
- No VNet integration on Consumption (Premium plan required for private endpoints)
For always-on integration workers with predictable load: Premium plan (pre-warmed instances, VNet, 60-min timeout). For bursty event processing: Consumption plan (pay-per-execution, elastic scale-out).
Hybrid Architectures: Bridging Cloud and On-Premises
Most enterprise systems are not pure cloud: they have on-premises ERP, legacy databases, mainframe systems, or regulated workloads that cannot move. Hybrid integration connects these worlds.
The Hybrid Integration Stack on Azure
```
On-Premises                          Azure
───────────                          ─────────────────────────────────
ERP (SAP, Dynamics)   ──────────►    API Management (facade)
Legacy DB                            Service Bus (queue buffer)
File shares           ──────────►    Blob Storage (via Azure File Sync)
Internal APIs         ──────────►    API Management + VPN/ExpressRoute
SFTP servers          ──────────►    Logic Apps (SFTP connector)
Message brokers       ──────────►    Service Bus (bridge via on-prem agent)
```

On-Premises Data Gateway
Azure's On-Premises Data Gateway is a relay agent that runs on-premises and bridges cloud services (Logic Apps, Power BI, API Management) to on-premises resources through an outbound-only HTTPS connection, with no inbound firewall rules required.
```
On-Premises Network
┌─────────────────────────────────────────────┐
│ On-Premises Data Gateway (Windows service)  │
│   ├─ polls Azure Service Bus relay          │
│   └─ connects to on-prem SQL/Oracle/SAP     │
└─────────────────────────────────────────────┘
               │ outbound HTTPS only (port 443)
               ▼
      Azure Service Bus Relay
               │
               ▼
Logic Apps / Power Automate / Azure Analysis Services
```

No VPN is required for read/query patterns. For write patterns or high volume, use ExpressRoute or VPN Gateway.
Hybrid Network Connectivity Options
| Option | Bandwidth | Latency | Cost | Use |
|--------|-----------|---------|------|-----|
| Site-to-Site VPN | Up to 10 Gbps | 10-50ms | Low | Branch offices, dev/test |
| ExpressRoute | 50 Mbps-100 Gbps | <10ms | High | Production, regulated data |
| Azure Arc | N/A | N/A | Per-resource | Manage on-prem as Azure resources |
| Service Bus Relay | Messaging only | 20-50ms | Per message | Legacy app integration |
| Private Endpoints | Full bandwidth | VNet latency | Per hour | PaaS services on private IPs |
ExpressRoute is the production standard for hybrid architectures handling sensitive data (healthcare, financial): it does not traverse the public internet. VPN is acceptable for dev/test or low-sensitivity data.
Azure Service Bus Hybrid Bridge Pattern
When an on-premises system cannot connect directly to Azure, use Service Bus as a durable bridge:
```
On-Premises                                Azure
───────────                                ─────────────────
Legacy Order System                        Service Bus Queue
        │                                  "orders.inbound"
        ▼                                         ▲
.NET Agent (runs on-prem)                         │
  ├─ reads from on-prem Oracle table              │
  ├─ transforms to canonical format               │
  └─ publishes to Service Bus ────────────────────┘
                                                  │
                                    Azure Function (triggered)
                                                  │
                                      ┌───────────┴──────────┐
                                      │  Order Processing    │
                                      │  Cosmos DB           │
                                      │  Event Grid publish  │
                                      └──────────────────────┘
```

The on-premises agent writes to Service Bus over HTTPS (port 443). If connectivity drops, messages queue locally and flush when connectivity resumes: at-least-once delivery with zero message loss.
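That buffer-and-flush behaviour can be sketched as a tiny local outbox. This is an illustration only (the `LocalOutbox` type is hypothetical); a production agent would persist the buffer to disk or a local staging table rather than hold it in memory:

```csharp
using System;
using System.Collections.Generic;

// Local outbox sketch: enqueue always succeeds; Flush drains to the cloud
// sender only while sends succeed, preserving order and losing nothing.
class LocalOutbox
{
    private readonly Queue<string> _pending = new();

    public void Enqueue(string message) => _pending.Enqueue(message);

    // Returns the number of messages flushed; stops at the first failed send
    // so the remaining messages stay buffered in order.
    public int Flush(Func<string, bool> trySend)
    {
        int sent = 0;
        while (_pending.Count > 0 && trySend(_pending.Peek()))
        {
            _pending.Dequeue();
            sent++;
        }
        return sent;
    }

    public int PendingCount => _pending.Count;
}
```

A real bridge would pass the Service Bus send as `trySend` and only mark the source Oracle rows as published after a successful flush.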
```csharp
// On-premises agent: poll Oracle, publish to Service Bus
public class OnPremOrderBridgeWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var newOrders = await _oracleRepo.GetUnpublishedOrdersAsync();
            foreach (var order in newOrders)
            {
                var canonical = _mapper.ToCanonicalOrder(order);
                var message = new ServiceBusMessage(BinaryData.FromObjectAsJson(canonical))
                {
                    MessageId = $"oracle-order-{order.OrderId}", // dedup key
                    Subject = "OrderFromLegacy"
                };
                await _sender.SendMessageAsync(message, stoppingToken);
                await _oracleRepo.MarkPublishedAsync(order.OrderId);
                _logger.LogInformation("Bridged order {Id} to Service Bus", order.OrderId);
            }
            await Task.Delay(TimeSpan.FromSeconds(10), stoppingToken);
        }
    }
}
```

Azure Arc: Managing On-Premises as Azure Resources
Azure Arc projects on-premises Kubernetes clusters, servers, and databases into the Azure control plane. You get Azure Policy, RBAC, Defender for Cloud, and Azure Monitor, applied to on-premises resources.
For integration, Arc-enabled Kubernetes means you can run Azure Functions (via Azure Functions on Arc) or container workloads on-premises while managing them through Azure. The workload runs locally (near the on-prem data), while operational control is centralised in Azure.
```
On-Premises Data Centre                    Azure
───────────────────────                    ─────────────────────────────
K8s cluster (Arc-enabled)                  Azure Portal / ARM
  ├── Azure Functions (Arc)     ◄─────     deployment, config, policy
  ├── Service Bus consumer                 Azure Monitor (metrics)
  └── Direct DB access                     Defender for Cloud (security)
      (no data leaves premises)
```

This pattern is essential for data sovereignty requirements: data stays on-premises, but operational tooling is cloud-managed.
Composing the Three Services: End-to-End Pattern
Real architectures compose all three. A complete order processing integration:
```
[Customer API]
      │ HTTP POST /orders
      ▼
[API Management]  ── throttle, auth, schema validation
      │
      ▼
[Azure Function: OrderReceiver]
      │ publishes to
      ▼
[Service Bus Topic: order.events]
      │
      ├── Subscription: fulfillment ──► [Function: FulfillmentOrchestrator (Durable)]
      │                                        │
      │                                checks inventory (on-prem via VPN)
      │                                charges payment (Stripe webhook)
      │                                schedules shipment
      │                                        │
      │                                        ▼
      │                                publishes to Event Grid
      │                                        │
      │                                        ├──► [Function: SendConfirmationEmail]
      │                                        ├──► [Function: UpdateCRM]
      │                                        └──► [Function: TriggerWarehouseSystem]
      │
      └── Subscription: analytics ──► [Event Hubs → Stream Analytics → Power BI]
```

Service Bus handles the reliable work queue: fulfillment must happen exactly once, with ordering.
Durable Functions orchestrate the multi-step fulfillment workflow with compensation on failure.
Event Grid fans out the completion event to multiple independent downstream handlers.
The three services serve different roles and should not be interchanged.
Production Checklist
Service Bus
- [ ] Connection strings in Key Vault (never in appsettings)
- [ ] Managed Identity for auth (not connection strings in production)
- [ ] DLQ depth alert configured (threshold: > 0 for critical queues)
- [ ] `MessageId` set to a deterministic value for dedup
- [ ] `MaxDeliveryCount` tuned per queue (the default of 10 is rarely right)
- [ ] Premium tier for payloads > 256KB or a Geo-DR requirement

Event Grid
- [ ] CloudEvents schema (not the legacy Event Grid schema)
- [ ] Webhook endpoint validates the `aeg-event-type: SubscriptionValidation` handshake
- [ ] Dead-letter destination configured for undeliverable events
- [ ] Subject filter set to avoid over-triggering

Functions
- [ ] Managed Identity for all downstream service connections
- [ ] `host.json` concurrency tuned (`maxConcurrentCalls`)
- [ ] Timeout configured for long-running operations
- [ ] Durable Function storage account on Premium Storage (not shared)
- [ ] Application Insights wired up (distributed tracing, dependency tracking)
Hybrid
- [ ] ExpressRoute or VPN for production (not public internet)
- [ ] On-premises agent has local retry + durable outbox (never drop messages)
- [ ] Data classification reviewed β PII/PHI does not leave regulated boundary
- [ ] Azure Arc policy applied to on-prem resources for unified compliance posture
Key Principles
Service Bus is for work. Event Grid is for facts. Putting work items on Event Grid (no DLQ, 24h retry limit, no sessions) is a production incident waiting to happen.
Managed Identity everywhere. Connection strings rotate, leak, and expire. Managed Identity has none of those problems and is free.
Hybrid does not mean half-migrated. A proper hybrid architecture intentionally keeps some workloads on-premises (latency, data sovereignty, cost) while leveraging cloud for elasticity and managed services. It is a permanent topology, not a transition state.
Durable Functions for orchestration within a service. Service Bus sagas for orchestration across services. The boundary is the deployment unit.
Model your events with CloudEvents. Proprietary event schemas lock you to a single broker. CloudEvents is vendor-neutral: your events work on Event Grid today, Kafka tomorrow.
Related: Event-Driven Architecture Deep Dive β Kafka internals, EOS, Event Sourcing
Related: Distributed Systems Patterns β Saga, Outbox, Circuit Breaker
Related: Azure Functions: First Steps β HTTP triggers, local dev, deployment