Security & Compliance · Advanced

Threat Modeling — Think Like an Attacker Before They Do

Learn how to threat-model a REST API using STRIDE, build data flow diagrams with trust boundaries, score threats with DREAD, and translate findings into security requirements and test cases.

Learnixo · April 15, 2026 · 8 min read
Security · Threat Modeling · STRIDE · DREAD · PASTA · .NET · API Security

Why Threat Model? And When?

Security issues found in production cost 30x more to fix than issues found at design time. Threat modeling identifies threats, attacks, vulnerabilities, and countermeasures at the point when changes are cheapest: the design phase.

When to threat model:

  • New feature with a new trust boundary — a new API endpoint, external integration, or user role
  • Architecture change — moving to microservices, adding a message bus, adopting a new auth flow
  • Compliance requirement — SOC 2, HIPAA, and PCI-DSS all require it explicitly or implicitly

When not to threat model: retrospectively as a post-launch checkbox. At that point you are doing a security review, not a threat model.

STRIDE — Six Threat Categories

STRIDE is a Microsoft-developed mnemonic for six categories of threats. For each, you ask: does my system have a component vulnerable to this?

S — Spoofing

An attacker pretends to be something they are not: a user, a service, or a system.

REST API example: a caller claims to be the admin service by sending a forged JWT with "sub": "admin-service". If your API does not validate the token signature and issuer, it accepts the forged claim.

Mitigation: validate JWT signature (RS256 or ES256 — not HS256 with a shared secret), validate iss and aud claims, use Managed Identities for service-to-service auth so there are no shared secrets to steal.
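In ASP.NET Core, most of these checks can be expressed directly as JWT bearer options. A minimal sketch, assuming Azure AD as the issuer; the `{tenant-id}` placeholder and the `api://order-service` audience are illustrative:

```csharp
// Program.cs (sketch). Requires Microsoft.AspNetCore.Authentication.JwtBearer.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuerSigningKey = true,      // signature must verify against issuer keys
            ValidateIssuer = true,
            ValidIssuer = "https://login.microsoftonline.com/{tenant-id}/v2.0",
            ValidateAudience = true,
            ValidAudience = "api://order-service",
            ValidAlgorithms = new[] { "RS256" },  // refuses HS256 and alg:none outright
            ValidateLifetime = true,
            ClockSkew = TimeSpan.FromSeconds(30), // tighter than the 5-minute default
        };
    });
```

With the authority set, the signing keys are fetched from the issuer's metadata endpoint, so there is no shared secret to steal.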

T — Tampering

An attacker modifies data in transit or at rest.

REST API example: a MITM attack modifies the HTTP response between your API and a mobile client — changing a bank transfer amount from 100 to 10,000. Or an attacker modifies a database record directly if the DB user has overly broad write access.

Mitigation: HTTPS everywhere with HSTS, validate request bodies with FluentValidation, use database row-level permissions, sign outbound data where integrity is critical.
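The request-model validation can be made concrete with FluentValidation. A sketch; the `CreateOrderRequest` shape and the bounds are illustrative:

```csharp
using FluentValidation;

// Illustrative request model; field names and limits are assumptions.
public record CreateOrderRequest(decimal Amount, string Currency);

public class CreateOrderRequestValidator : AbstractValidator<CreateOrderRequest>
{
    public CreateOrderRequestValidator()
    {
        // Reject tampered or nonsensical values before any business logic runs.
        RuleFor(x => x.Amount).GreaterThan(0).LessThanOrEqualTo(100_000);
        RuleFor(x => x.Currency).NotEmpty().Length(3); // ISO 4217 code, e.g. "EUR"
    }
}
```

Invalid requests should be rejected with a 400 before they reach the handler, so a tampered body never touches persistence.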

R — Repudiation

A user or service denies having performed an action, and you cannot prove otherwise.

REST API example: a user deletes a record and later claims they never did. You have no audit trail capturing who did it, when, and from which IP and correlation context.

Mitigation: immutable audit logs (append-only, sent to a log store the API service cannot delete from), structured logging with user identity and correlation ID on every mutation.
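One way to make repudiation arguments winnable is to give every mutation a uniform audit shape. A sketch; the field set is illustrative:

```csharp
// Everything needed to answer "who did what, when, from where" for one mutation.
// Entries should go to an append-only sink the API's own credentials cannot delete from.
public sealed record AuditEntry(
    string Action,         // e.g. "OrderDeleted"
    string UserId,         // authenticated identity, taken from the validated token
    string ResourceId,     // the record that was mutated
    DateTimeOffset AtUtc,  // server-side timestamp, never client-supplied
    string CorrelationId,  // ties the entry to the request trace
    string SourceIp);
```

Writing one of these on every mutation, keyed by correlation ID, lets you reconstruct exactly which request performed the delete.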

I — Information Disclosure

Sensitive data is exposed to someone not authorised to see it.

REST API example: a 500 response includes a full stack trace revealing your database schema and ORM version. Or a GET /users endpoint returns all users when the caller should only see their own record.

Mitigation: ProblemDetails format in production (no stack traces), explicit per-resource authorization checks, suppress version headers, encrypt PII at rest, mask sensitive fields in logs.
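In ASP.NET Core this is mostly wiring. A sketch of the production posture:

```csharp
// Program.cs (sketch): RFC 7807 error bodies, no stack traces outside Development.
builder.WebHost.ConfigureKestrel(o => o.AddServerHeader = false); // suppress "Server: Kestrel"
builder.Services.AddProblemDetails();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage(); // stack traces are fine locally
}
else
{
    app.UseExceptionHandler();       // unhandled exceptions become ProblemDetails 500s
}
```

The per-resource authorization checks and PII masking still have to be done by hand; middleware only covers the error-shape half of the mitigation.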

D — Denial of Service

An attacker exhausts resources and makes your system unavailable.

REST API example: an unauthenticated endpoint accepts large request bodies. An attacker sends 10,000 requests per second with 10 MB payloads — your API crashes under memory pressure.

Mitigation: rate limiting (per-IP and per-user), request body size limits, circuit breakers between services, Azure DDoS Protection Standard.
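The built-in rate limiter in .NET 7+ covers the per-caller part; the limits here are illustrative:

```csharp
// Program.cs (sketch). Partition by authenticated user, falling back to client IP.
builder.WebHost.ConfigureKestrel(o => o.Limits.MaxRequestBodySize = 1_000_000); // ~1 MB cap
builder.Services.AddRateLimiter(options =>
{
    options.RejectionStatusCode = StatusCodes.Status429TooManyRequests;
    options.AddPolicy("per-caller", httpContext =>
        RateLimitPartition.GetFixedWindowLimiter(
            partitionKey: httpContext.User.Identity?.Name
                          ?? httpContext.Connection.RemoteIpAddress?.ToString()
                          ?? "anonymous",
            factory: _ => new FixedWindowRateLimiterOptions
            {
                PermitLimit = 100,                // 100 requests...
                Window = TimeSpan.FromMinutes(1), // ...per minute per caller
            }));
});
```

Enable it with `app.UseRateLimiter()` and attach the policy to endpoints with `[EnableRateLimiting("per-caller")]`; DDoS protection at the network edge still belongs in front of the app.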

E — Elevation of Privilege

An attacker gains capabilities they should not have.

REST API example: a regular user discovers that POST /admin/users does not check for the admin role — it only checks for any authenticated user. They escalate themselves to admin.

Mitigation: explicit role checks on every privileged endpoint, policy-based authorization in ASP.NET Core, never rely on obscurity.
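A named policy makes the role check explicit and greppable. A sketch; `CreateUserRequest` is an illustrative type:

```csharp
// Program.cs (sketch): register the policy once...
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("AdminOnly", policy => policy.RequireRole("admin"));
});

// ...then require it on every privileged endpoint. [Authorize] alone only proves
// authentication; the policy is what enforces the admin role.
[Authorize(Policy = "AdminOnly")]
[HttpPost("admin/users")]
public IActionResult CreateUser(CreateUserRequest request) => /* ... */ Ok();
```

This is exactly the check the worked example's vulnerable `POST /admin/users` was missing.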

Data Flow Diagrams and Trust Boundaries

A DFD (Data Flow Diagram) for a threat model has five elements:

  1. External entities — things outside your system (browsers, mobile apps, third-party APIs)
  2. Processes — your services and functions
  3. Data stores — databases, caches, queues
  4. Data flows — arrows showing data movement between elements
  5. Trust boundaries — dashed lines separating zones of different trust

A minimal DFD for a JWT-authenticated REST API with PostgreSQL:

[Browser] --HTTPS--> [API Gateway] --trust boundary-- [OrdersAPI] ---- [PostgreSQL]
                                                           |
                                                      [Redis Cache]
                                                           |
                                         --trust boundary---------
                                                      [PaymentService]

Every arrow that crosses a trust boundary is a potential threat. That is where you apply STRIDE systematically.

Attack Trees

An attack tree decomposes an attacker goal into sub-goals. The root is the goal; leaf nodes are concrete attacks.

Goal: Access another user's orders (IDOR)

  • Enumerate user IDs (sequential IDs, timing attack on responses)
    • Brute-force IDs in GET /orders/
    • Observe IDs in order confirmation emails
  • Bypass authorization check
    • Find endpoint that lacks the Authorize attribute
    • Find endpoint that checks authentication but not resource ownership
  • Steal another user's session token
    • XSS on order confirmation page
    • Intercept token over HTTP endpoint

Each leaf node maps to a concrete test case. An attack tree ensures test coverage matches attacker capability.
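The "checks authentication but not resource ownership" leaf deserves its own unit-testable check. A minimal sketch with illustrative types:

```csharp
public sealed record Order(string Id, string OwnerId);

public static class OrderAccess
{
    // Authentication proves who the caller is; this proves they own the resource.
    // Callers that fail the check should get 404, not 403, so the probe does not
    // even confirm that the order ID exists.
    public static bool CanRead(string callerUserId, Order order)
        => order.OwnerId == callerUserId;
}
```

Calling this (or an equivalent resource-based authorization handler) in every `GET /orders/{id}` path closes the IDOR branch of the tree.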

Worked Example: Threat-Model a REST API Step by Step

System: A REST API with JWT auth (Azure AD), PostgreSQL database, Redis cache.

Step 1 — Define Scope and Assets

Assets to protect: customer PII, payment card data, order history, admin operations.

Step 2 — Draw DFD and Mark Trust Boundaries

Trust boundaries: Internet to API Gateway (TLS, rate limiting, WAF), Gateway to Service (JWT validation, internal network), Service to Database (Key Vault connection string, least-privilege DB user).

Step 3 — Apply STRIDE per Data Flow

For the flow Browser to POST /orders:

| Threat | STRIDE | Mitigation |
|--------|--------|------------|
| Forged JWT | Spoofing | Validate sig, iss, aud, exp |
| Modified order body | Tampering | FluentValidation on request model |
| No record of order creation | Repudiation | Audit log with userId, orderId, timestamp |
| Stack trace in 500 response | Info Disclosure | ProblemDetails, suppress in production |
| 10,000 requests per second | Denial of Service | Rate limiter 100 req/min per user |
| User sets isAdmin:true in body | Elevation of Privilege | Never bind privilege fields from request body |

Step 4 — DREAD Scoring

DREAD scores threats 1-3 on five axes: Damage, Reproducibility, Exploitability, Affected users, Discoverability. Sum = priority.
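The arithmetic is simple enough to keep next to the threat register in code. A sketch; the priority bands are a convention chosen here, not part of DREAD itself:

```csharp
public sealed record DreadScore(
    int Damage, int Reproducibility, int Exploitability,
    int AffectedUsers, int Discoverability)
{
    public int Total =>
        Damage + Reproducibility + Exploitability + AffectedUsers + Discoverability;

    // Illustrative bands for 1-3 scoring (maximum total is 15).
    public string Priority => Total switch
    {
        >= 14 => "Critical",
        >= 12 => "High",
        >= 9  => "Medium",
        _     => "Low",
    };
}
```

Scoring is still a judgment call per axis; the code only keeps the totals and bands consistent across the register.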

| Threat | D | R | E | A | D | Total | Priority |
|--------|---|---|---|---|---|-------|----------|
| Forged JWT — missing validation | 3 | 3 | 2 | 3 | 3 | 14 | Critical |
| Missing rate limiting | 2 | 3 | 3 | 3 | 2 | 13 | High |
| Stack trace in 500 response | 2 | 3 | 1 | 3 | 2 | 11 | Medium |

Step 5 — Translate to Requirements and Test Cases

From the "Forged JWT" threat:

  • Security requirement: All JWT tokens must be validated for signature (RS256), expiry, issuer (Azure AD tenant), and audience (api://order-service).
  • Test case 1: Send a request with an expired token. Expect: 401 Unauthorized.
  • Test case 2: Send a request with a valid signature but wrong audience. Expect: 401 Unauthorized.
  • Test case 3: Send a request with alg:none. Expect: 401 Unauthorized.
C#
[Fact]
public async Task ExpiredToken_Returns401()
{
    // BuildJwt is a test helper that mints tokens with the given claims.
    var expiredToken = BuildJwt(expiry: DateTimeOffset.UtcNow.AddMinutes(-1));
    using var request = new HttpRequestMessage(HttpMethod.Get, "/api/orders");
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", expiredToken);
    var response = await _client.SendAsync(request);
    Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
}

[Fact]
public async Task WrongAudience_Returns401()
{
    var wrongAudToken = BuildJwt(audience: "api://wrong-service");
    using var request = new HttpRequestMessage(HttpMethod.Get, "/api/orders");
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", wrongAudToken);
    var response = await _client.SendAsync(request);
    Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
}

PASTA Methodology Overview

PASTA (Process for Attack Simulation and Threat Analysis) is a seven-stage risk-centric methodology for enterprise-scale threat models:

  1. Define business objectives and security requirements
  2. Define the technical scope
  3. Application decomposition (DFDs, trust boundaries, data classification)
  4. Threat analysis (threat actors, TTPs from MITRE ATT&CK)
  5. Vulnerability and weakness analysis (CVEs, code review, SAST)
  6. Attack modelling (attack trees, simulation)
  7. Risk and impact analysis (business impact, not just technical severity)

PASTA is heavier than STRIDE but produces business-aligned risk output. Use it for compliance-driven threat models (PCI-DSS, HIPAA) where you need board-level risk reporting.

Microsoft Threat Modeling Tool

The Microsoft Threat Modeling Tool (free download) automates DFD creation and STRIDE analysis:

  1. Draw your system using SDL stencil shapes (external entities, processes, data stores, data flows, trust boundaries)
  2. The tool auto-generates a threat list based on the element types and data flows you drew
  3. Review each generated threat — mark it as Mitigated, Not Applicable, or Needs Investigation
  4. Export the threat model as a PDF for design review documentation

The generated threats are a starting point, not a complete list. Always supplement with manual review and domain knowledge.

Key Takeaways

  • Threat model at design time — 30x cheaper than fixing after launch
  • STRIDE gives you a structured checklist so you do not miss categories
  • Every trust boundary crossing is a potential threat — put it on your DFD
  • DREAD scoring lets you prioritise what to fix first
  • Each threat must produce a concrete security requirement and a test case that verifies the mitigation works
  • Threat models are living documents — update them whenever the architecture changes

Enjoyed this article?

Explore the Security & Compliance learning path for more.
