Agentic Workflows · Lesson 2 of 3

Claude Code: Advanced Agent Patterns

Anthropic Claude

Advanced Claude Code — Production Patterns

This guide assumes you've used Claude Code and want to go deeper: the full configuration system, isolated agents, MCP servers, hook automation, context optimization at scale, and the patterns that separate engineers who get 2× productivity from those who get 10×.

How Claude Code Actually Works

Before optimising Claude Code, understand its execution model.

When you send a message, Claude doesn't immediately write code. It runs a tool-use loop:

Your prompt
    ↓
Claude decides which tools to call
    ├── Read(file)          → reads file contents
    ├── Glob(pattern)       → finds files by pattern
    ├── Grep(pattern)       → searches file contents
    ├── Bash(command)       → executes shell command
    ├── Write(file, content)→ creates/overwrites a file
    ├── Edit(file, old, new)→ surgical find-and-replace
    └── Agent(type, prompt) → spawns a subagent
    ↓
Claude synthesises results, decides next tool call
    ↓ (repeats until done)
Claude reports completion

This loop is why Claude Code can execute a request like "add rate limiting to all public endpoints" — it reads the project, finds every controller, understands the middleware pipeline, writes the changes, runs the build, fixes any errors, then reports done.

Implications for advanced use:

  • Claude makes many small tool calls — each one costs tokens
  • Long sessions accumulate intermediate results in context
  • The loop can be short-circuited by good CLAUDE.md (less exploration needed)
  • Permissions block specific tool calls — the deny list is the safety net

Advanced CLAUDE.md Architecture

A beginner writes one flat CLAUDE.md. An advanced engineer uses the full loading stack.

The loading hierarchy

~/.claude/CLAUDE.md              ← global personal rules (every project)
     ↓ loaded first
your-project/CLAUDE.md           ← project root (always loaded)
     ↓ then
your-project/src/api/CLAUDE.md   ← directory-scoped (loaded when in that dir)
     ↓ then
your-project/src/domain/CLAUDE.md

Claude walks up the directory tree from the file being edited and loads every CLAUDE.md it finds. Use this to scope instructions to sub-packages in a monorepo:

apps/
├── api/
│   └── CLAUDE.md   ← "This is a .NET 9 minimal API. Use ProblemDetails for errors."
├── web/
│   └── CLAUDE.md   ← "This is a React 19 app. Use TanStack Query. No Redux."
└── worker/
    └── CLAUDE.md   ← "This is a background worker. No HTTP code here."
CLAUDE.md            ← shared team rules

Personal vs team override

MARKDOWN
# CLAUDE.local.md  (gitignored  your personal file)

## My personal preferences
- I prefer explicit `return` types in TypeScript even when inferable
- Use British English in all doc comments
- When I say "clean up", touch formatting only  don't refactor logic
- Always show me a diff summary before executing multi-file changes

CLAUDE.local.md takes precedence over CLAUDE.md for personal overrides. Add it to .gitignore — it stays local.

The anti-pattern list (most impactful CLAUDE.md change)

Most engineers write what TO do. The most impactful section is what NOT to do:

MARKDOWN
## Forbidden patterns  never use these

### Code
- Never use `dynamic` or `object`  use proper generics
- Never catch `Exception`  catch specific exception types
- Never use `Thread.Sleep`  use `Task.Delay` + CancellationToken
- Never add `Console.WriteLine`  use `ILogger<T>`
- Never use string interpolation in SQL  always parameterised queries
- Never use `DateTime.Now`  use `TimeProvider` (injectable, testable)

### Architecture
- Never put business logic in controllers
- Never access the DB from domain models
- Never return domain entities from API endpoints  always DTOs
- Never use static state or singleton scope for request-scoped services

### Git
- Never commit directly to main
- Never commit with a message like "fix" or "wip"
- Never `git push --force` without asking first

The Skills System — Deep Dive

Skills replaced the older commands/ system. The difference matters:

| | Commands (old) | Skills (new) | |---|---|---| | Invocation | /project:name | /name | | Auto-invoke | ❌ | ✅ Based on description | | Tool restriction | ❌ | ✅ allowed-tools | | Model selection | ❌ | ✅ Per-skill model | | Shell injection | ✅ | ✅ More powerful | | Isolated context | ❌ | ✅ Via agents |

Full SKILL.md frontmatter reference

MARKDOWN
---
name: code-review
description: >
  Perform a structured code review of changed files.
  Auto-invoke when the user asks to review a PR, diff, or branch.
allowed-tools: Read, Glob, Grep, Bash(git diff *), Bash(git log *)
model: sonnet          # use cheaper model for read-only tasks
maxTurns: 30           # prevent runaway execution
argument-hint: "[branch or path]"
isolation: false       # run in main context (not isolated)
---

Shell injection — the powerful part

The ! backtick syntax runs a shell command before Claude sees the prompt. The output is substituted inline:

MARKDOWN
---
name: pr-description
allowed-tools: Read, Bash(git *)
model: sonnet
---

Write a pull request description for the current branch.

## Branch context
Current branch: !`git branch --show-current`
Target branch: !`git merge-base HEAD main | xargs git log --oneline -1`

## Changes
Files changed: !`git diff main...HEAD --name-status`

## Commits
!`git log main...HEAD --pretty=format:"- %s (%h)" | head -20`

## Diff summary
!`git diff main...HEAD --stat`

Write:
1. Title (≤70 chars, imperative mood)
2. Summary: what changed and why (3-5 bullets)
3. Test plan: checkboxes for what to verify
4. Breaking changes (if any)
5. Migration steps (if any)

Multi-shell-injection skill (generate migration)

MARKDOWN
---
name: add-migration
argument-hint: "MigrationName"
allowed-tools: Bash(dotnet ef *), Read, Glob
model: haiku            # simple task  use cheapest model
---

Add a new EF Core migration named "$ARGUMENTS".

## Current migration state
Latest migration: !`ls src/Infrastructure/Migrations/*.cs | sort | tail -1 | xargs basename`
Migration count: !`ls src/Infrastructure/Migrations/*.cs | wc -l`

## Recent schema changes
!`git diff HEAD -- "src/**/*Entity*.cs" "src/**/*Config*.cs" | head -100`

Steps:
1. Review the schema changes above
2. Run: `dotnet ef migrations add $ARGUMENTS --project src/Infrastructure --startup-project src/Api`
3. Review the generated migration for correctness
4. Report what was added/changed and whether the migration looks safe to apply

Skill that chains to an agent

MARKDOWN
---
name: security-audit
description: >
  Run a security audit of the codebase. Auto-invoke when reviewing
  auth, payments, or HIPAA-sensitive code.
allowed-tools: Agent
---

Spawn the security-auditor agent to review the current changes.

Focus on:
!`git diff main...HEAD --name-only`

Agents — Isolated Context Execution

Agents are the most powerful and least understood feature. When Claude spawns an agent, it creates a completely separate Claude instance with its own context window. The agent does its work, compresses findings, and returns a summary to your main session.

Why this matters:

  • The agent's tool calls and file reads don't pollute your main context
  • You can run agents in parallel for independent tasks
  • Agents can be specialised with restricted toolsets and different system prompts
  • Long exploratory searches don't eat your main session's token budget

Full agent definition

MARKDOWN
---
name: security-auditor
description: >
  Senior security engineer. Auto-invoke when reviewing auth flows,
  payment processing, session management, or HIPAA data handling.
  Also invoke on explicit "security review" or "audit" requests.
tools: Read, Glob, Grep
model: sonnet
maxTurns: 50
isolation: true
---

You are a senior application security engineer with 10 years of
experience in enterprise security, OWASP top 10, and HIPAA compliance.

## Your audit checklist

### Injection
- SQL injection in raw queries, string interpolation, dynamic LINQ
- Command injection in Bash/shell calls
- LDAP injection, XPath injection in legacy integrations

### Authentication & Session
- JWT: algorithm confusion (accept only RS256/ES256), expiry validation
- Missing token validation on internal service calls
- Refresh token rotation  are old tokens invalidated?
- Session fixation after privilege escalation

### Authorization
- IDOR: direct object references not validated against caller's identity
- Missing `[Authorize]` attributes on sensitive endpoints
- Horizontal privilege escalation (user A accessing user B's data)
- Admin functionality accessible to non-admin roles

### HIPAA (if applicable)
- PHI in log statements (names, DOBs, diagnoses, MRNs)
- PHI in URL parameters (should be in body/headers)
- Unencrypted PHI at rest in DynamoDB/PostgreSQL
- Missing audit trail for PHI access

### Secrets
- Hardcoded credentials, API keys, connection strings
- Secrets in environment variable names logged at startup
- Private keys in config files or comments

## Output format
Prioritise findings: CRITICAL  HIGH  MEDIUM  LOW  INFO
For each finding: Location | Issue | Impact | Recommended fix
End with: overall risk rating and top 3 immediate actions.

Running agents in parallel

When you ask Claude to run independent tasks, it can spawn multiple agents concurrently:

"Review the security of the auth module AND the payment module simultaneously"

Claude spawns:
├── Agent 1: security-auditor → reviews src/Auth/
└── Agent 2: security-auditor → reviews src/Payments/

Both complete → Claude merges findings → reports to you

Worktree isolation

For tasks that might make destructive changes, use worktree isolation. Each agent gets a temporary copy of the repo on a new branch:

JSON
// .claude/settings.json
{
  "agents": {
    "defaultIsolation": "worktree"
  }
}

With worktree isolation:

  • Agent creates a temporary git worktree (isolated directory)
  • Makes all changes there
  • If the task succeeds: changes are ready to merge
  • If you reject: worktree is cleaned up automatically
  • Your main working directory is never touched

MCP — Model Context Protocol

MCP lets Claude Code connect to external data sources and tools beyond the filesystem. It's what turns Claude Code from a code editor into a system that can query your database, call your APIs, check your monitoring, and read your ticketing system.

What MCP gives you

Without MCP:                    With MCP:
Claude reads files only    →    Claude queries your live DB
Claude can't call APIs     →    Claude checks Grafana metrics
Claude doesn't know JIRA   →    Claude reads the ticket context
Claude can't search Slack  →    Claude finds the relevant discussion

Configuring an MCP server

Add MCP servers to ~/.claude/settings.json (global) or .claude/settings.json (project):

JSON
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "postgresql://user:pass@localhost/clinic_db"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}

Useful MCP servers

| Server | What it provides | |---|---| | @modelcontextprotocol/server-postgres | Query PostgreSQL — Claude reads live schema and data | | @modelcontextprotocol/server-github | PR details, issues, comments, CI status | | @modelcontextprotocol/server-slack | Search messages, read threads | | @modelcontextprotocol/server-filesystem | Extended file access beyond current dir | | @modelcontextprotocol/server-brave-search | Web search for documentation | | @modelcontextprotocol/server-aws-kb-retrieval | Query AWS knowledge bases |

Building a custom MCP server (Node.js)

When no off-the-shelf server exists, build one:

TYPESCRIPT
// mcp-clinic-analytics/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "clinic-analytics", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_appointment_metrics",
      description: "Get real-time appointment funnel metrics for a clinic",
      inputSchema: {
        type: "object",
        properties: {
          clinic_id: { type: "string", description: "The clinic ID" },
          date_range: { type: "string", enum: ["today", "week", "month"] },
        },
        required: ["clinic_id"],
      },
    },
    {
      name: "get_slow_queries",
      description: "Get the top 10 slowest DB queries from the last hour",
      inputSchema: { type: "object", properties: {} },
    },
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_appointment_metrics") {
    const { clinic_id, date_range = "today" } = request.params.arguments as any;
    const metrics = await fetchMetricsFromDB(clinic_id, date_range);
    return {
      content: [{ type: "text", text: JSON.stringify(metrics, null, 2) }],
    };
  }

  if (request.params.name === "get_slow_queries") {
    const queries = await fetchSlowQueriesFromPGStatStatements();
    return {
      content: [{ type: "text", text: JSON.stringify(queries, null, 2) }],
    };
  }

  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);

Register it:

JSON
{
  "mcpServers": {
    "clinic-analytics": {
      "command": "node",
      "args": ["./mcp-clinic-analytics/dist/index.js"]
    }
  }
}

Now Claude can answer: "Why are appointment confirmations slower this week?" by querying your live database, reading slow query logs, and correlating with recent code changes — all in one session.


Advanced Hook Patterns

Hooks are shell commands that fire at lifecycle events. The advanced patterns go far beyond simple linting.

Hook event reference

JSON
{
  "hooks": {
    "PreToolUse":  [...],   // before Claude calls a tool
    "PostToolUse": [...],   // after Claude calls a tool
    "Notification":[...],   // when Claude sends a notification
    "Stop":        [...]    // when Claude finishes a task
  }
}

Each hook entry has a matcher (tool name pattern) and a hooks array.

Auto-format and build on every write

JSON
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "cd \"$(git rev-parse --show-toplevel)\" && dotnet format --include $(git diff --name-only HEAD | grep '\\.cs$' | tr '\\n' ' ') 2>/dev/null; exit 0"
          },
          {
            "type": "command",
            "command": "cd \"$(git rev-parse --show-toplevel)\" && dotnet build --no-restore -v q 2>&1 | tail -3"
          }
        ]
      }
    ]
  }
}

Auto-run tests on test file changes

JSON
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "FILE=$(echo '$CLAUDE_TOOL_INPUT' | jq -r '.file_path // empty'); if echo \"$FILE\" | grep -q 'Tests\\|Test\\.cs'; then dotnet test --filter $(basename $FILE .cs) --no-build 2>&1 | tail -10; fi"
          }
        ]
      }
    ]
  }
}

Desktop notification on task completion (macOS)

JSON
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"Claude finished the task\" with title \"Claude Code\" sound name \"Glass\"'"
          }
        ]
      }
    ]
  }
}

Windows notification

JSON
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "powershell -Command \"Add-Type -AssemblyName System.Windows.Forms; [System.Windows.Forms.MessageBox]::Show('Claude finished', 'Claude Code')\""
          }
        ]
      }
    ]
  }
}

Git auto-commit after every successful task

JSON
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "cd \"$(git rev-parse --show-toplevel)\" && if [ -n \"$(git status --porcelain)\" ]; then git add -A && git commit -m \"chore: claude code checkpoint $(date '+%H:%M:%S')\"; fi"
          }
        ]
      }
    ]
  }
}

Prevent Claude from touching migration files

JSON
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "FILE=$(echo '$CLAUDE_TOOL_INPUT' | jq -r '.file_path // empty'); if echo \"$FILE\" | grep -q 'Migrations/'; then echo 'BLOCKED: Do not modify migration files directly. Create a new migration instead.' && exit 1; fi"
          }
        ]
      }
    ]
  }
}

A hook that exits with code 1 blocks the tool call and shows the message to Claude, which then adjusts its approach.


Permission System — Production Hardening

The permissions model is a three-tier firewall. Understanding the evaluation order is critical:

1. Deny rules (always evaluated first — always win)
         ↓
2. Allow rules (explicit permissions)
         ↓
3. Ask (anything not matched above — Claude prompts you)

Production-hardened settings.json

JSON
{
  "permissions": {
    "allow": [
      "Read",
      "Write",
      "Edit",
      "Glob",
      "Grep",
      "Bash(git status)",
      "Bash(git diff *)",
      "Bash(git log *)",
      "Bash(git add *)",
      "Bash(git commit *)",
      "Bash(git branch *)",
      "Bash(git checkout *)",
      "Bash(git stash *)",
      "Bash(dotnet build *)",
      "Bash(dotnet test *)",
      "Bash(dotnet ef migrations *)",
      "Bash(dotnet run *)",
      "Bash(npm run *)",
      "Bash(npm install)",
      "Bash(ls *)",
      "Bash(cat *)",
      "Bash(grep *)",
      "Bash(find *)",
      "Bash(echo *)",
      "WebFetch(domain:docs.microsoft.com)",
      "WebFetch(domain:learn.microsoft.com)",
      "WebFetch(domain:www.nuget.org)",
      "WebFetch(domain:npmjs.com)",
      "WebFetch(domain:developer.mozilla.org)",
      "Agent(Explore)",
      "Agent(claude-code-guide)"
    ],
    "deny": [
      "Bash(rm *)",
      "Bash(rmdir *)",
      "Bash(git push *)",
      "Bash(git reset --hard *)",
      "Bash(git rebase *)",
      "Bash(git force *)",
      "Bash(DROP *)",
      "Bash(DELETE FROM *)",
      "Bash(TRUNCATE *)",
      "Bash(kubectl delete *)",
      "Bash(terraform destroy *)",
      "Bash(aws s3 rm *)",
      "Bash(sudo *)",
      "Bash(chmod 777 *)",
      "Read(./.env*)",
      "Read(**/secrets/**)",
      "Read(**/.ssh/**)",
      "Read(**/credentials*)"
    ]
  },
  "env": {
    "ASPNETCORE_ENVIRONMENT": "Development",
    "NODE_ENV": "development"
  }
}

Personal override (settings.local.json)

JSON
{
  "permissions": {
    "allow": [
      "Bash(git push origin feature/*)",
      "Bash(rm *.tmp)",
      "Bash(dotnet ef database update)"
    ]
  }
}

settings.local.json is automatically gitignored. Use it for permissions you trust yourself with but don't want to grant to every team member.


Context Optimization at Scale

The 200k token window sounds enormous — until you're in a .NET monorepo with 300 files and Claude starts loading everything.

What costs tokens

| Content | Approx tokens | |---|---| | One 200-line C# file | ~1,000 tokens | | EF Core migration file | ~500 tokens | | Full project directory tree | ~2,000 tokens | | Complete test class | ~1,500 tokens | | Your CLAUDE.md (200 lines) | ~1,000 tokens | | This article | ~6,000 tokens |

A 50-file project loaded entirely = ~50,000 tokens. That's 25% of your window before you've typed a word.

Strategies to preserve context

1. On-demand loading with @ syntax

Instead of "read the whole project", reference files explicitly:

"In @src/Application/Appointments/Commands/ScheduleAppointment.cs,
add validation that the appointment DateTime is not in the past."

Claude reads only that file, not the whole codebase.

2. Scoped exploration

"Read only src/Application/Auth/ and tell me how JWT auth is currently implemented."

Constraining the search scope keeps context clean.

3. Session segmentation

Split work into focused sessions:

  • Session 1: Add the domain model changes
  • /clear → Session 2: Add the application layer handlers
  • /clear → Session 3: Add the API endpoints
  • /clear → Session 4: Write the tests

Each session starts fresh with only CLAUDE.md loaded. Fewer stale results, sharper focus.

4. Subagent delegation

"Use the Explore agent to find every place we validate appointment 
date ranges in the codebase. Report back file paths and line numbers only."

The Explore agent does the file-reading work in an isolated context. It returns a compact summary — not 40 full files — into your main session.

5. Summary-first pattern

Before starting a complex task:

"Before we start: read the project structure and give me a 10-line 
summary of the key files I'll need for adding a new feature that 
lets patients reschedule appointments."

This focuses Claude's attention for the rest of the session without loading everything.

Monorepo CLAUDE.md pattern

MARKDOWN
# Monorepo: Clinic Platform

## Repo structure
apps/
├── api/         .NET 9 Web API (see apps/api/CLAUDE.md)
├── web/         React 19 frontend (see apps/web/CLAUDE.md)
├── worker/      Kafka consumer workers (see apps/worker/CLAUDE.md)
└── mobile/      React Native (see apps/mobile/CLAUDE.md)

libs/
├── domain/      Shared domain models (never modified directly)
└── contracts/   Shared event schemas (Avro)

infra/           Terraform (see infra/CLAUDE.md)

## Cross-cutting rules
- Never import from apps/  apps/ (apps are independent)
- Never modify libs/domain without a migration plan
- All cross-service communication via Kafka events  no direct HTTP between services

## Which CLAUDE.md to read
If you're working in apps/api/ → read apps/api/CLAUDE.md
If you're working in infra/  read infra/CLAUDE.md

Rate Limit Strategy

How the window works

Claude Code Pro gives ~45 messages per 5-hour rolling window. The window isn't fixed (e.g. 9am–2pm) — it's rolling. Your 45th message of the last 5 hours is what matters, not the clock.

Practical rate calculation:

| Plan | Messages/window | Minutes between msgs | Real daily capacity | |---|---|---|---| | Pro ($20) | 45 | ~6.7 min average | ~216 if used all day | | Max 5x ($100) | 225 | ~1.3 min average | ~1,080 | | Max 20x ($200) | 900 | ~20 sec average | ~4,320 |

Batching for Pro users

On Pro, every message matters. Batch related requests:

❌ Inefficient (3 messages, 3 context loads):
1. "Add the appointment entity"
2. "Add the repository interface"
3. "Add the EF Core configuration"

✅ Efficient (1 message, 1 context load):
"Add the Appointment entity, IAppointmentRepository interface,
and EF Core configuration (AppointmentConfiguration.cs) using
Fluent API. Follow the Patient entity pattern in Domain/Patients/."

Use Plan Mode to reduce execution cycles

Plan Mode is free in the sense that misaligned implementations waste messages. One Plan Mode exploration that corrects your approach before execution saves 3–5 corrective messages.

Pro tip: Plan Mode → agree on approach → exit Plan Mode → one clean execution
vs.
Executing immediately → 2–3 rounds of "no, not like that, more like this"

Use cheaper models for read-only skills

In your SKILL.md, set model: haiku for skills that only read and summarise:

MARKDOWN
---
name: summarise-changes
model: haiku          # cheapest  reads and summarises only
allowed-tools: Bash(git *), Read
---

Summarise the changes since main in 5 bullet points.
!`git diff main...HEAD --stat`

Haiku is ~20× cheaper than Sonnet per token, and for read/summarise tasks the quality difference is negligible.


Team Workflow Patterns

Shared skills library

Commit your skills to the repo. Every engineer gets the same automation:

.claude/
├── settings.json        ← shared permissions
├── skills/
│   ├── code-review/     ← every engineer runs the same review
│   ├── pr-description/  ← standardised PR format
│   ├── add-migration/   ← prevents migration mistakes
│   ├── check-security/  ← security audit on any change
│   └── explain-change/  ← generates commit messages
├── agents/
│   ├── security-auditor.md
│   ├── test-writer.md
│   └── db-analyst.md
└── docs/
    ├── architecture.md
    ├── api-conventions.md
    └── deployment.md

The test-writer agent

MARKDOWN
---
name: test-writer
description: >
  Writes comprehensive unit and integration tests. Auto-invoke when 
  asked to "add tests", "write tests", or "cover this with tests".
tools: Read, Glob, Grep, Write, Bash(dotnet test *)
model: sonnet
maxTurns: 40
---

You are a senior engineer specialising in test design for .NET systems.

## Testing philosophy
- Unit tests: test behaviour, not implementation
- Integration tests: use Testcontainers for real DB (never mock EF Core)
- Test names: `MethodName_StateUnderTest_ExpectedBehaviour`
- Arrange/Act/Assert with blank lines between sections
- Use TestFixtures factory pattern for test data  never hardcode IDs

## Testcontainers pattern (follow exactly)
```csharp
[Collection("Integration")]
public class AppointmentRepositoryTests : IAsyncLifetime
{
    private readonly PostgreSqlContainer _postgres = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .Build();

    public async Task InitializeAsync() => await _postgres.StartAsync();
    public async Task DisposeAsync() => await _postgres.DisposeAsync();

    [Fact]
    public async Task GetByClinicId_WhenAppointmentsExist_ReturnsPagedResults()
    {
        // Arrange
        await using var context = CreateContext();
        var repo = new AppointmentRepository(context);
        // ...

        // Act
        var result = await repo.GetByClinicIdAsync("CLN-1", page: 1, size: 10);

        // Assert
        result.Items.Should().HaveCount(3);
        result.TotalCount.Should().Be(3);
    }
}

Always run dotnet test after writing tests and fix any failures before reporting done.


### The explain-change skill (commit messages)

```markdown
---
name: commit-msg
model: haiku
allowed-tools: Bash(git diff *), Bash(git status)
---

Generate a git commit message for the current staged changes.

Staged diff:
!`git diff --cached`

Rules:
- Imperative mood: "Add", "Fix", "Remove", not "Added", "Fixed"
- First line ≤ 72 chars
- Blank line after subject
- Body explains WHY, not what (the diff shows what)
- Reference issue number if branch name contains one:
  !`git branch --show-current | grep -oP 'CLINIC-\d+'`

Output just the commit message, nothing else.

CI/CD Integration

Claude Code isn't just for local development. You can use it in CI pipelines via the non-interactive API mode.

GitHub Actions: automated code review on every PR

YAML
# .github/workflows/claude-review.yml
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code

      - name: Run code review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          claude --print "
          Review this pull request for: bugs, security issues, 
          missing tests, and violations of our conventions in CLAUDE.md.
          
          Changed files:
          $(git diff origin/main...HEAD --name-only)
          
          Full diff:
          $(git diff origin/main...HEAD | head -500)
          
          Output findings as GitHub markdown with severity labels.
          " > review.md
          
      - name: Post review comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const review = fs.readFileSync('review.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## 🤖 Claude Code Review\n\n${review}`
            });

Using --print for non-interactive mode

Bash
# One-shot execution  returns result and exits
claude --print "What does the ScheduleAppointment command handler do?"

# Pipe output to a file
claude --print "Generate a CHANGELOG entry for the changes since v1.2.0: $(git log v1.2.0...HEAD --oneline)" > CHANGELOG_ENTRY.md

# Use in scripts
REVIEW=$(claude --print "Review $(cat src/Api/Controllers/AppointmentController.cs) for security issues")
echo "$REVIEW"

Advanced Debugging Patterns

Reproduce a production bug from logs

"Here is a production error from our logs. Analyse it, find the root cause,
and fix it.

Stack trace:
System.InvalidOperationException: Sequence contains no elements
   at Microsoft.EntityFrameworkCore.RelationalQueryableExtensions.SingleAsync[T]
   at ClinicApi.Application.Appointments.Queries.GetAppointmentHandler.Handle
   at line 47

The query at line 47 uses SingleAsync. Find it, understand why it could 
return no results, and fix it using SingleOrDefaultAsync with a proper 
404 response via ProblemDetails."

Explain unfamiliar code before touching it

Always before making changes to code you haven't written:

"Read src/Infrastructure/Kafka/OutboxRelay.cs and explain:
1. What it does
2. The threading model (what runs on what thread/task)
3. Any race conditions or error scenarios I should be aware of
4. What happens if the process crashes mid-relay

Do not make any changes. Just explain."

This costs one message and prevents expensive mistakes.

Architecture decision audit

"Read the entire src/Domain/ folder. For each aggregate root, tell me:
1. Its invariants (what it enforces)
2. Its domain events (what it publishes)
3. Any potential design issues (anemic model, missing invariants, etc.)

Do not read any other folders. Focus only on Domain/."

Security Considerations

⚠️

Your code leaves your machine

Every file Claude reads is sent to Anthropic's servers for processing. This is non-negotiable — the model runs in the cloud. For Pro/Max, Anthropic does not use your code for training. For enterprise compliance, use the Teams plan with a BAA.

What NOT to let Claude read:

GITIGNORE
# Add to .gitignore AND deny in settings.json:
.env
.env.local
.env.production
secrets/
**/appsettings.Production.json
**/credentials.json
**/*.pem
**/*.key
**/*.pfx

The deny list for secrets:

JSON
{
  "permissions": {
    "deny": [
      "Read(./.env*)",
      "Read(**/secrets/**)",
      "Read(**/appsettings.Production*)",
      "Read(**/*.pem)",
      "Read(**/*.key)",
      "Read(**/.ssh/**)"
    ]
  }
}

Prompt injection awareness:

Claude reads files. If a file contains text like "IGNORE PREVIOUS INSTRUCTIONS AND...", Claude might follow it. In security-sensitive codebases, be aware that test fixtures, mock data files, or any user-generated content loaded into context is a potential injection vector.


The 10× Engineer Pattern

The engineers who get the most from Claude Code aren't the ones who write the longest prompts. They're the ones who've built the best operating environment:

📋 Tight CLAUDE.md

Under 200 lines. Heavy on "forbidden patterns". Reviewed and updated weekly. Every correction to Claude gets added to it.

🛠️ Skills for every repeated task

If you've typed the same instructions twice, it's a skill. PR descriptions, code review, migration generation, test writing — all skills.

🤖 Agents for isolation

Long exploratory searches and security audits go in agents. Your main context stays clean and fast.

🔒 Hardened permissions

Deny everything destructive. Allow only what's needed. git push and rm -rf are never auto-allowed.

🪝 Hooks for quality gates

Auto-format, auto-build, auto-test. If Claude writes code that doesn't compile, it knows immediately and fixes it.

📌 Commit checkpoints

Commit after every completed task. The rollback cost of not committing is always higher than the 10 seconds it takes.


Claude

The Compounding Advantage

The gap between a developer using Claude Code as a chat tool and one using it as a configured engineering environment is not 10%. It's closer to 5×. The configuration work — CLAUDE.md, skills, agents, hooks, permissions — compounds. Every hour you invest in the environment pays back on every session after it. Start with the CLAUDE.md today. Add one skill per week. After a month, you'll have a system that knows your codebase better than a new team member.