Integration Engineering · Intermediate

Deep Dive: Messaging Systems

Understand how messaging systems work: queues, topics, brokers, delivery guarantees, JMS, publish-subscribe with Node.js and JMS, Kafka architecture, and operational considerations for production messaging.

SystemForge · April 18, 2026 · 9 min read
Messaging Systems · JMS · Kafka · Node.js · Pub/Sub · Message Brokers · FITech
Share:š•

Messaging systems are the infrastructure that enables asynchronous, reliable communication between applications. This deep-dive covers how they work from the ground up — the data structures, the delivery semantics, the JMS standard, and practical implementation of publish-subscribe using Node.js and JMS (the programming exercise for this course).


The Messaging Model

A messaging system has three participants:

Producer ──► [Broker / Message Server] ──► Consumer
            (stores and delivers messages)

Producer: the application that sends messages. It does not need to know who the consumer is or whether the consumer is running.

Broker: the server that receives messages, stores them until they are consumed, and delivers them to consumers.

Consumer: the application that receives and processes messages. It connects to the broker and receives messages either by polling (pull) or by being notified (push).

The broker's key function is temporal decoupling: the producer writes a message and moves on immediately. The consumer reads the message when it is ready — which might be milliseconds or hours later.
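Temporal decoupling can be sketched with a trivial in-memory buffer standing in for the broker (the class and field names here are illustrative, not a real broker API):

```javascript
// Minimal sketch: the "broker" is just a buffer between producer and consumer.
class TinyBroker {
  constructor() { this.queue = []; }
  send(msg)    { this.queue.push(msg); }   // producer returns immediately
  receive()    { return this.queue.shift(); } // consumer pulls when it is ready
}

const broker = new TinyBroker();
broker.send({ orderId: 'ORD-1' });  // producer writes and moves on
// ...milliseconds or hours later...
const msg = broker.receive();       // consumer processes when ready
console.log(msg.orderId);           // → ORD-1
```

A real broker adds persistence, acknowledgements, and delivery guarantees on top of this basic buffering idea.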


Queues and Topics

Queue (Point-to-Point)

A queue stores messages and delivers each message to exactly one consumer. If multiple consumers are connected, they share the messages (competing consumers).

Producer → [Queue: process-orders] → Consumer A (gets message 1)
                                   → Consumer B (gets message 2)
                                   → Consumer A (gets message 3)

Properties:

  • FIFO ordering (generally)
  • Each message consumed once
  • Messages persist until consumed or TTL expires

Use case: work queues, task distribution, job processing.

Topic (Publish-Subscribe)

A topic delivers each message to all active subscribers (fan-out).

Producer → [Topic: order-events] → Subscriber A (inventory) — gets a copy
                                 → Subscriber B (billing)   — gets a copy
                                 → Subscriber C (analytics) — gets a copy

Properties:

  • Each subscriber maintains its own cursor (position) in the topic
  • A message is delivered to every subscriber independently
  • Durable subscribers receive messages even if they were offline

Use case: event broadcasting, notification systems, data pipelines.
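The delivery difference between the two models can be made concrete with a small simulation (no broker involved; the round-robin queue assignment is a simplification of real competing-consumer dispatch):

```javascript
// Queue: each message goes to exactly ONE consumer (round-robin here).
function deliverToQueue(messages, consumers) {
  return messages.map((msg, i) => ({ msg, consumer: consumers[i % consumers.length] }));
}

// Topic: every subscriber receives a copy of every message (fan-out).
function deliverToTopic(messages, subscribers) {
  return messages.flatMap(msg => subscribers.map(sub => ({ msg, subscriber: sub })));
}

const msgs = ['m1', 'm2', 'm3'];
console.log(deliverToQueue(msgs, ['A', 'B']).length);  // 3 deliveries (one per message)
console.log(deliverToTopic(msgs, ['A', 'B']).length);  // 6 deliveries (3 messages × 2 subscribers)
```

Three messages produce three deliveries on a queue but six on a topic with two subscribers, which is exactly the contrast the exercise later demonstrates with ActiveMQ.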


JMS (Java Message Service)

JMS is a Java API standard for messaging. It defines how Java applications interact with message brokers — the operations, the message types, and the semantics. JMS is implemented by all major Java message brokers (ActiveMQ, IBM MQ, Oracle AQ, Amazon MQ).

JMS Core Concepts

ConnectionFactory: factory object for creating connections to the broker.
Connection: an open link to the broker — typically one per application, shared by multiple sessions.
Session: a unit of work (creates producers and consumers, manages transactions).
Destination: the target of a message — a Queue or a Topic.
MessageProducer: sends messages to a destination.
MessageConsumer: receives messages from a destination.

JMS Message Types

| Type | When to use |
|------|-------------|
| TextMessage | Text content (JSON, XML, plain text) |
| BytesMessage | Binary data |
| MapMessage | Key-value pairs |
| ObjectMessage | Serialised Java objects |
| StreamMessage | Primitive values in sequence |

JMS Message Structure

Every JMS message has three parts:

  • Header fields: routing metadata (destination, message ID, timestamp, delivery mode, priority, expiration, correlation ID)
  • Properties: application-defined metadata for filtering and routing
  • Body: the actual payload

JMS Example: Sending and Receiving

Sending a message to a queue (Java):

JAVA
// Requires javax.jms.* and org.apache.activemq.ActiveMQConnectionFactory on the classpath
ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
try (Connection connection = factory.createConnection()) {
    connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    Queue queue = session.createQueue("order.processing");
    MessageProducer producer = session.createProducer(queue);

    TextMessage message = session.createTextMessage();
    message.setText("{\"orderId\": \"ORD-1234\", \"customerId\": \"CUST-99\"}");
    message.setStringProperty("orderType", "express");  // property for filtering

    producer.send(message);
    System.out.println("Sent: " + message.getJMSMessageID());
}

Receiving from a queue (synchronous poll):

JAVA
MessageConsumer consumer = session.createConsumer(queue);
Message received = consumer.receive(5000);  // wait up to 5 seconds
if (received instanceof TextMessage) {
    System.out.println("Received: " + ((TextMessage) received).getText());
}

Receiving asynchronously (MessageListener):

JAVA
consumer.setMessageListener(message -> {
    try {
        if (message instanceof TextMessage) {
            String body = ((TextMessage) message).getText();
            processOrder(body);
            // AUTO_ACKNOWLEDGE: message auto-acked after listener returns
        }
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
});

JMS Acknowledge Modes

| Mode | Behaviour |
|------|-----------|
| AUTO_ACKNOWLEDGE | Session automatically acks after listener returns successfully |
| CLIENT_ACKNOWLEDGE | Application calls message.acknowledge() explicitly |
| DUPS_OK_ACKNOWLEDGE | Lazy acknowledgement — may cause duplicates |
| Transacted session | Messages acked as part of a session transaction |


Pub/Sub with Node.js and JMS

The programming exercise for this course involves implementing a simple publish-subscribe integration using Node.js and JMS. This section provides the conceptual framework and code structure.

Node.js and JMS

JMS is a Java API, so Node.js cannot speak JMS directly. It can, however, talk to the same brokers over the open wire protocols they expose: AMQP (for example via the rhea library) or STOMP.

For this exercise, we use STOMP against ActiveMQ with the @stomp/stompjs client (the most accessible setup for Node.js):

Bash
npm install @stomp/stompjs

Publisher (Node.js, STOMP)

JAVASCRIPT
const { Client } = require('@stomp/stompjs');
// stompjs expects a WebSocket implementation on the global object in Node.js
Object.assign(global, { WebSocket: require('ws') });

const client = new Client({
  brokerURL: 'ws://localhost:61614/ws',  // ActiveMQ WebSocket connector
  onConnect: () => {
    // Publish an order event to a topic
    client.publish({
      destination: '/topic/order.events',
      body: JSON.stringify({
        type: 'OrderPlaced',
        orderId: 'ORD-1234',
        customerId: 'CUST-99',
        amount: 299.99,
        timestamp: new Date().toISOString()
      }),
      headers: {
        'content-type': 'application/json',
        'correlation-id': 'req-abc-123'
      }
    });
    console.log('Published order event');
    client.deactivate();
  }
});

client.activate();

Subscriber (Node.js, STOMP)

JAVASCRIPT
const { Client } = require('@stomp/stompjs');
Object.assign(global, { WebSocket: require('ws') });  // WebSocket shim for Node.js

const client = new Client({
  brokerURL: 'ws://localhost:61614/ws',
  onConnect: () => {
    // Subscribe to order events topic
    client.subscribe('/topic/order.events', (message) => {
      const event = JSON.parse(message.body);
      console.log(`[Inventory Service] Received: ${event.type} for order ${event.orderId}`);
      processInventoryReservation(event);
    });
    console.log('Inventory subscriber active, waiting for events...');
  }
});

client.activate();

function processInventoryReservation(event) {
  console.log(`Reserving inventory for order ${event.orderId}`);
  // ... business logic
}

Running the Exercise

  1. Install Apache ActiveMQ locally (download from activemq.apache.org)
  2. Start ActiveMQ: ./bin/activemq start
  3. Open admin console: http://localhost:8161 (admin/admin)
  4. Run multiple subscriber scripts in separate terminals
  5. Run the publisher script — observe all subscribers receive the event
  6. Try disconnecting one subscriber, publish messages, reconnect — observe durable vs non-durable behaviour

Exercise Extension: Competing Consumers on a Queue

Change the destination from a topic to a queue:

JAVASCRIPT
// Publisher
destination: '/queue/order.processing'

// Run two subscriber instances
// Observe: each message goes to only ONE subscriber (load balanced)

This demonstrates the difference between pub/sub (topic — all subscribers get a copy) and point-to-point (queue — one subscriber gets each message).


Delivery Guarantees in Depth

How At-Least-Once Delivery Works

1. Consumer receives message (broker marks it as "in-flight")
2. Consumer begins processing
3. Case A: processing succeeds → consumer sends ACK → broker deletes message
4. Case B: consumer crashes → broker does not receive ACK
   → after visibility timeout: broker marks message as available again
   → another consumer receives the message

This is why duplicates happen: the message was processed by the first consumer, but the broker did not know — so it redelivered.

Design rule: assume every consumer will receive every message at least twice. Design accordingly.
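One common way to honour that rule is an idempotent consumer that deduplicates on message ID. A minimal sketch (the IDs, return values, and in-memory set are illustrative; production systems persist the seen-ID store):

```javascript
// Idempotent consumer: a redelivered message is detected by ID and skipped,
// so at-least-once delivery still yields effectively-once processing.
const processed = new Set();

function handle(message) {
  if (processed.has(message.id)) return 'duplicate-skipped';
  processed.add(message.id);
  // ...business logic would run here exactly once per ID...
  return 'processed';
}

console.log(handle({ id: 'msg-1' }));  // processed
console.log(handle({ id: 'msg-1' }));  // duplicate-skipped (redelivery)
```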

Persistent vs. Non-Persistent Messages

Persistent (JMS DeliveryMode.PERSISTENT):
The broker writes the message to disk before acknowledging the producer. If the broker restarts, the message survives. Slower write throughput, high reliability.

Non-persistent (DeliveryMode.NON_PERSISTENT):
The broker keeps the message in memory only. Faster, but message is lost if the broker restarts. Use only for non-critical, high-volume data.

For business integrations: always use persistent delivery.


Message Selectors (JMS)

JMS consumers can filter messages using a message selector — a SQL-92-like expression evaluated against message properties:

JAVA
// Only receive express orders
MessageConsumer consumer = session.createConsumer(
    queue,
    "orderType = 'express' AND amount > 100"
);

Message selectors are evaluated by the broker before delivery. This is more efficient than receiving all messages and filtering in the consumer.
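Conceptually, the broker is evaluating a predicate against each message's properties before delivery. A sketch of that evaluation, with the example expression hard-coded rather than parsed (real brokers parse the full SQL-92-like grammar):

```javascript
// What the broker effectively does with the selector
// "orderType = 'express' AND amount > 100":
function matchesSelector(props) {
  return props.orderType === 'express' && props.amount > 100;
}

console.log(matchesSelector({ orderType: 'express',  amount: 250 }));  // true  — delivered
console.log(matchesSelector({ orderType: 'standard', amount: 250 }));  // false — skipped
```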


Apache Kafka Architecture Overview

Kafka takes a fundamentally different approach to messaging: instead of a queue (where messages are deleted after delivery), Kafka uses an append-only log.

Core Kafka Concepts

Topic: a named, durable, append-only log. Messages are never deleted after delivery — they are retained for a configurable period (or indefinitely).

Partition: topics are divided into partitions. Each partition is an independent ordered log. Partitions enable parallel processing — each partition can be consumed by a separate consumer.

Offset: the position of a message in a partition. Consumers track their offset and advance it as they process messages. Consumers can reset their offset to replay messages.

Consumer group: a set of consumers sharing the work of consuming a topic. Each partition is assigned to one consumer in the group.

Broker: a Kafka server node. Kafka runs as a cluster of brokers.

Replication: each partition is replicated across multiple brokers. If one broker fails, another takes over without message loss.
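Partition assignment is worth seeing concretely: Kafka's default partitioner hashes the message key, so all messages with the same key land in the same partition and stay ordered relative to each other. A sketch with a simple stand-in hash (real Kafka uses murmur2):

```javascript
// Key-based partitioning sketch: same key → same partition → per-key ordering.
function partitionFor(key, numPartitions) {
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % numPartitions;
}

const p1 = partitionFor('CUST-99', 6);
const p2 = partitionFor('CUST-99', 6);
console.log(p1 === p2);  // true — every event for CUST-99 is ordered
```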

Kafka vs. JMS/AMQP

| Aspect | JMS (ActiveMQ, IBM MQ) | Apache Kafka |
|--------|------------------------|--------------|
| Model | Queue / Topic | Partitioned log |
| Message deletion | After acknowledgement | After retention period |
| Replay | Not possible | Yes — reset consumer offset |
| Throughput | High (100k-1M/s) | Very high (millions/s) |
| Consumer model | Push (broker delivers) | Pull (consumer fetches) |
| Ordering | Per queue (FIFO) | Per partition (by key) |
| Best for | Enterprise workflows, complex routing | Streaming, data pipelines, audit |
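The replay row deserves emphasis, because it follows directly from the log model: the broker never deletes messages on acknowledgement, so a consumer is just a cursor that can be moved backwards. A sketch (the log contents and variable names are illustrative):

```javascript
// Offset-based consumption on an append-only log: replay = reset the cursor.
const log = ['e1', 'e2', 'e3', 'e4'];  // one partition; nothing is deleted
let offset = 0;

function poll() { return offset < log.length ? log[offset++] : null; }

poll(); poll(); poll();        // consume e1..e3 normally
offset = 1;                    // replay: rewind the consumer's offset
const replayed = poll();
console.log(replayed);         // → e2 (the message was still there)
```

In JMS this is impossible: once a message is acknowledged, the broker deletes it and there is nothing left to rewind to.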


Error Handling in Messaging

Retry Logic

When message processing fails, the system must decide: retry or give up?

message arrives
  → attempt processing
  → failure
    → transient failure? (network, temporary resource unavailability)
        → retry with exponential backoff: 1s, 2s, 4s, 8s...
        → max retries reached → dead letter queue
    → permanent failure? (invalid data, business rule violation)
        → dead letter queue immediately (retrying will not fix it)
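The decision flow above can be sketched as one function; the error's `permanent` flag, the retry cap, and the in-memory DLQ array are all illustrative simplifications (real systems sleep between attempts and publish to an actual DLQ destination):

```javascript
// Retry transient failures with exponential backoff; route permanent
// failures and exhausted retries to a dead letter queue.
function handleWithRetry(process, msg, maxRetries = 3) {
  const dlq = [];
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      process(msg);
      return { status: 'ok', attempts: attempt + 1, dlq };
    } catch (err) {
      if (err.permanent) { dlq.push(msg); return { status: 'dlq', dlq }; }
      const delayMs = 1000 * 2 ** attempt;  // 1s, 2s, 4s... (a real consumer sleeps here)
    }
  }
  dlq.push(msg);  // transient failures exhausted all retries
  return { status: 'dlq', dlq };
}

let calls = 0;
const flaky = () => { if (++calls < 3) throw Object.assign(new Error('net'), { permanent: false }); };
const result = handleWithRetry(flaky, { id: 1 });
console.log(result.status, result.attempts);  // ok 3 — succeeded after two retries
```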

Dead Letter Queue (DLQ) in JMS

In JMS (ActiveMQ), configure the DLQ in the broker:

XML
<!-- activemq.xml — per-destination dead letter strategy, nested inside destinationPolicy -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">">
        <deadLetterStrategy>
          <individualDeadLetterStrategy
              queuePrefix="DLQ."
              useQueueForQueueMessages="true"/>
        </deadLetterStrategy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>

Failed messages from order.processing queue appear in DLQ.order.processing.


Lecture Summary

  • Messaging systems decouple producer and consumer: the producer writes and moves on; the consumer processes when ready.
  • Queues deliver each message to exactly one consumer (work distribution); topics deliver to all subscribers (fan-out).
  • JMS is the Java standard for messaging: ConnectionFactory → Connection → Session → Queue/Topic → MessageProducer/Consumer.
  • The programming exercise implements pub/sub using Node.js + STOMP on ActiveMQ: multiple subscribers receive the same event; competing consumers share a queue.
  • Delivery guarantees: persistent + manual acknowledge = at-least-once. Every consumer must be idempotent.
  • Kafka stores messages in an append-only partitioned log — consumers can replay, and multiple independent consumer groups consume independently.
  • Design error handling explicitly: transient failures retry with backoff; permanent failures route immediately to DLQ; alert on any DLQ message.

Next: Deep Dive — Securing Integrations
