Security Interview Prep — Senior Level (50 Questions)
50 in-depth security interview questions for senior and lead developers. Covers JWT algorithm confusion, OAuth 2.0 PKCE, mTLS, SSRF, timing attacks, bcrypt vs Argon2, multi-tenant API design, GDPR technical requirements, CI/CD security, and incident response.
How to Use This Guide
Senior security interviews go beyond "what is XSS." You're expected to explain the mechanics of attacks, articulate trade-offs between mitigations, design secure systems from scratch, and demonstrate that you've thought about security as an architectural concern. These answers reflect the depth expected at senior/lead level.
Q1: Explain JWT algorithm confusion attacks and how to prevent them.
A: JWT algorithm confusion attacks exploit libraries that trust the alg header in the JWT itself. If a server uses RS256 (RSA — asymmetric), the public key is publicly known. An attacker can create a JWT with alg: HS256 (HMAC — symmetric) and sign it using the RS256 public key as the HMAC secret. A vulnerable library sees a valid HMAC signature and accepts the token. The root cause: the application didn't specify which algorithm is expected, so it honors whatever the token claims. Prevention: explicitly configure the allowed algorithm(s) on the server — never derive them from the token. In .NET: ValidAlgorithms = new[] { "RS256" }. Also ensure unsigned tokens (alg: none) are rejected; in .NET, set RequireSignedTokens = true on TokenValidationParameters.
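A minimal sketch of the defensive pattern using only the Python standard library (this is not a full JWT implementation; the helper names are illustrative). The key point: the server pins the expected algorithm and rejects the token before the attacker-controlled alg header can influence how verification happens.

```python
import base64, hashlib, hmac, json

def b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url without padding; restore padding first.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def verify_hs256_pinned(token: str, secret: bytes) -> dict:
    """Verify a JWT, pinning the algorithm to HS256 regardless of the header."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Reject any token whose header claims a different algorithm --
    # never dispatch on the attacker-controlled alg field.
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm: " + str(header.get("alg")))
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    return json.loads(b64url_decode(payload_b64))
```

A real deployment would use a maintained library with the same posture: a hardcoded list of accepted algorithms, never the token's own claim.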
Q2: Why is OAuth 2.0 implicit flow deprecated? What replaced it?
A: The implicit flow returned the access token directly in the URL fragment (#access_token=...) after authorization. This has several problems: the token persists in browser history; if JavaScript is compromised (XSS), tokens can be exfiltrated; and the fragment is accessible to any JavaScript on the page, including third-party scripts. The replacement is Authorization Code with PKCE (Proof Key for Code Exchange). Instead of returning a token directly, the auth server returns a short-lived authorization code. The client exchanges this code for tokens in a back-channel request. PKCE adds a code_verifier / code_challenge pair that prevents authorization code interception. PKCE is now recommended for all OAuth 2.0 flows, including public clients (SPAs, mobile apps) that cannot keep a client secret.
Q3: Explain mTLS. When would you use it?
A: Standard TLS authenticates only the server (the server presents a certificate; the client verifies it). Mutual TLS (mTLS) additionally authenticates the client — both parties present and verify certificates. The handshake: (1) server presents cert, (2) client verifies it, (3) client presents cert, (4) server verifies it. After mTLS establishment, both parties know they're talking to the expected counterpart. Use it for: service-to-service communication in a microservices architecture (each service has a cert proving its identity), zero-trust east-west traffic control, client authentication for high-value APIs where API keys are insufficient. Service meshes (Istio, Linkerd) implement mTLS transparently between services without application code changes. The operational cost: certificate issuance, rotation, and revocation management — workload certificates should be short-lived (e.g. 24 hours) with automatic rotation.
Q4: How does Zero Trust differ from perimeter security?
A: Perimeter security assumes everything inside the network boundary is trusted — firewall on the edge, once inside you can reach internal services. Zero Trust assumes no implicit trust from network location — every access request is authenticated, authorized, and continuously verified regardless of origin (inside or outside). Zero Trust is driven by the failure of perimeter security in modern environments: remote work, cloud infrastructure, and lateral movement after perimeter breach. The key pillars: verify explicitly (identity, device, risk signals on every request), use least privilege (JIT access, minimal permissions), assume breach (encrypt east-west traffic, monitor internal movement). Identity becomes the primary security perimeter — a user with valid credentials on an unmanaged device from an unusual location can be denied even on the "internal" network.
Q5: Walk me through the STRIDE threat model.
A: STRIDE is a per-threat-category framework applied to each component and data flow in a system: Spoofing (impersonating an identity — mitigated by authentication, mTLS); Tampering (modifying data in transit or at rest — mitigated by TLS, HMAC, input validation); Repudiation (denying an action occurred — mitigated by non-repudiable audit logs, signed events); Information Disclosure (exposing data to unauthorized parties — mitigated by authorization, encryption, least-privilege data access); Denial of Service (making the system unavailable — mitigated by rate limiting, auto-scaling, circuit breakers); Elevation of Privilege (gaining more permissions than authorized — mitigated by least privilege, server-side authorization, no trust of client-supplied roles). You apply STRIDE to every element of your data flow diagram and every trust boundary crossing to generate a comprehensive threat list.
Q6: What are the OWASP Top 10 2021 with mitigation strategies?
A: (1) Broken Access Control — implement deny-by-default, resource-level authorization, IDOR checks; (2) Cryptographic Failures — HTTPS everywhere, strong hashing for passwords (Argon2/bcrypt), encrypt sensitive data at rest; (3) Injection — parameterized queries, ORM, input validation; (4) Insecure Design — threat modeling, security requirements, secure design patterns; (5) Security Misconfiguration — secure defaults, IaC for auditable config, automated scanning; (6) Vulnerable and Outdated Components — dependency scanning, Dependabot, SBOM; (7) Identification and Authentication Failures — MFA, short-lived tokens, secure session management, breached password checks; (8) Software and Data Integrity Failures — code signing, SLSA provenance, verify dependencies; (9) Security Logging and Monitoring Failures — comprehensive audit logs, alerting, SIEM integration; (10) SSRF — validate/allowlist URLs, block internal IP ranges, use egress filtering.
Q7: What is a timing attack? How do you defend against it?
A: A timing attack is a side-channel attack where an attacker infers information from how long an operation takes. Classic example: comparing a provided HMAC signature against the expected signature with a naive string comparison. Most string comparisons short-circuit — they return false at the first mismatched byte. This means an attacker can determine how many bytes of the signature matched based on response time (longer time = more bytes matched). By iteratively adjusting bytes, the attacker can brute-force the correct signature without knowing the key. Prevention: constant-time comparison — compare all bytes regardless of whether a mismatch is found early. In .NET: CryptographicOperations.FixedTimeEquals(a, b). In Python: hmac.compare_digest(a, b). Never use == or string.Equals() for comparing secrets, tokens, or cryptographic values.
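In practice you rarely write the constant-time loop yourself; a short Python sketch of the safe pattern using the stdlib (the signing key and messages are placeholders):

```python
import hashlib, hmac

SECRET = b"server-side-signing-key"  # hypothetical key, for illustration only

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    # compare_digest runs in time independent of where the first mismatch
    # occurs, so response time leaks nothing about the expected bytes.
    return hmac.compare_digest(sign(message), signature)
```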
Q8: Compare bcrypt and Argon2 for password hashing. When would you choose each?
A: Both are purpose-built password hashing functions that are slow by design to resist brute-force attacks. bcrypt: work factor is configurable (cost factor 10–14 typical), 72-byte password limit (longer passwords are silently truncated — a gotcha), designed in 1999, widely supported, well-tested. Argon2: won the Password Hashing Competition (2015), configurable memory cost (making GPU/ASIC attacks more expensive), time cost, and parallelism. Argon2id (hybrid of Argon2i and Argon2d) is the recommended variant. Better resistance to GPU-accelerated cracking because of memory hardness. Choose: Argon2id for new systems — it's the current best practice. bcrypt for systems with legacy constraints or when library support for Argon2 is limited. Key point: bcrypt's 72-byte limit means pre-hashing with SHA-256 before bcrypt is sometimes used for long passwords — but this is complex and has its own pitfalls; prefer Argon2id, which has no such limit.
Q9: Explain SSRF. Give a real example with the AWS metadata service.
A: SSRF (Server-Side Request Forgery) is when an attacker causes the server to make HTTP requests to an attacker-controlled URL. The dangerous case: the server can reach internal services the attacker cannot directly access. AWS EC2 instances have a metadata service at http://169.254.169.254/ that returns instance metadata and, critically, temporary IAM credentials. Attack scenario: a web app takes a URL parameter and fetches the content (/api/preview?url=http://example.com). An attacker sends ?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role. The server makes the request, retrieves the AWS IAM credentials for the instance role, and returns them in the response. The attacker now has AWS credentials with whatever permissions the EC2 instance role has. Mitigations: validate/allowlist permitted URLs (only allow specific external domains); block the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) and the link-local range 169.254.0.0/16; resolve DNS and verify the resolved IP is not private; use IMDSv2 on AWS (token-required metadata access).
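A sketch of the resolve-then-check mitigation in Python (stdlib only). Note that a production implementation must also connect to the validated IP rather than re-resolving the hostname, otherwise a DNS rebinding attack can swap the address between check and use:

```python
import ipaddress, socket
from urllib.parse import urlparse

def assert_url_safe(url: str) -> None:
    """Raise ValueError unless every resolved address for the URL is public."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("unsupported scheme")
    host = parsed.hostname
    if host is None:
        raise ValueError("missing host")
    # Resolve first, then check the resolved addresses -- string checks on
    # the hostname alone are bypassable (decimal IP encodings, DNS tricks).
    for info in socket.getaddrinfo(host, None):
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            raise ValueError(f"blocked address: {ip}")
```

An allowlist of permitted destination domains remains the stronger control; the block check is defense in depth.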
Q10: What is supply chain security? What is an SBOM?
A: Supply chain security addresses threats to software that occur before the software reaches you — malicious or vulnerable code in your dependencies (direct or transitive), compromised build pipelines, tampered artifacts. Key practices: dependency scanning (dotnet list package --vulnerable, npm audit), lock files for reproducible installs, signed commits, SLSA build provenance. An SBOM (Software Bill of Materials) is a machine-readable inventory of every component in your software — all dependencies with versions, licenses, and vulnerability status. It enables rapid response to vulnerability disclosures (Log4Shell scenario: with an SBOM you answer "do we use this?" in minutes). CycloneDX and SPDX are the two main SBOM standards. SBOM generation is becoming a regulatory requirement in some industries (US Executive Order 14028 requires SBOMs for software sold to the US government).
Q11: Describe a secrets rotation strategy for a production system.
A: Rotation means replacing an existing secret (password, API key, certificate) with a new one without service interruption. Strategy: (1) Dual-write period — new secret is added alongside old; application accepts both; (2) Deploy — application updated to use new secret; (3) Drain — wait for all in-flight requests using old secret to complete; (4) Revoke — old secret is invalidated. For database passwords with Azure Key Vault: configure a rotation policy (Key Vault can rotate automatically); subscribe to Key Vault's near-expiry event (via Event Grid) to kick off rotation. The application uses managed identity to fetch the current secret from Key Vault — on rotation, the app retrieves the new value at the next fetch (or on a configurable refresh interval). Zero downtime, no manual intervention. For TLS certificates: automate with Let's Encrypt or Azure Key Vault managed certificates with auto-renewal.
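The dual-write window can be sketched as a verifier that accepts either key while always signing with the newest; the key names and values here are illustrative:

```python
import hashlib, hmac

# During the dual-write window the verifier accepts signatures made with
# either the new or the old key. Key material is a placeholder.
ACTIVE_KEYS = [b"new-key-v2", b"old-key-v1"]  # newest first

def sign(message: bytes) -> bytes:
    # New signatures always use the newest key.
    return hmac.new(ACTIVE_KEYS[0], message, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    # Accept any active key; after the drain step, drop the old key
    # from ACTIVE_KEYS and replayed old signatures stop verifying.
    return any(
        hmac.compare_digest(hmac.new(k, message, hashlib.sha256).digest(), signature)
        for k in ACTIVE_KEYS
    )
```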
Q12: Design a secure multi-tenant REST API. What are the key concerns?
A: Key concerns and solutions: (1) Data isolation — every database query must filter by TenantId. Implement this at the data access layer, not the controller. In EF Core, a tenant-aware DbContext that automatically applies a global query filter: modelBuilder.Entity<Order>().HasQueryFilter(o => o.TenantId == _currentTenantId). (2) Tenant context extraction — determine the tenant from the JWT claim (tenant ID in the token), subdomain (tenant1.api.com), or header. Validate the claim server-side — never trust client-provided tenant ID without verifying it matches the token. (3) Prevent cross-tenant requests — validate the requested resource's TenantId matches the authenticated tenant. Return 404 (not 403) for cross-tenant access to prevent tenant enumeration. (4) Row-level security at the database — add database-level RLS as a defense-in-depth measure even if the application layer already filters. (5) Tenant isolation in logging — every log entry must include TenantId. (6) Blast radius — a compromised tenant should not be able to affect other tenants (rate limits per tenant, resource quotas).
Q13: What are the technical requirements of GDPR?
A: Key technical obligations: Encryption — personal data encrypted at rest and in transit (not mandated specifically, but required as "appropriate technical measures"). Access controls — only authorized personnel can access personal data; audit logs of who accessed what. Data minimization — only collect and process what's necessary. Right to erasure — ability to delete a user's personal data across all systems (hard deletes or anonymization, not just soft deletes). Right to access — ability to export all personal data for a user in a portable format (JSON/CSV). Data breach notification — systems must detect breaches and notify the supervisory authority within 72 hours. Pseudonymization — where possible, separate identifying information from data records. Data retention — data deleted after its purpose expires; retention periods must be defined and enforced. Consent management — log consent (what, when, IP); withdrawal must be as easy as giving consent.
Q14: Explain CSP nonces vs. hashes. When would you use each?
A: When you need to allow an inline script (not external), CSP unsafe-inline permits all inline scripts — too permissive. Nonces and hashes allow specific inline scripts without enabling all inline scripts. Nonce: server generates a random value per request, sets it in the CSP header (script-src 'nonce-abc123'), and adds nonce="abc123" to the script tag. The browser only executes inline scripts with the matching nonce. Requires server-side rendering — the nonce must be unique per page load. Hash: compute a SHA-256 hash of the exact script content ('sha256-base64hash') and add it to the CSP header. The browser executes only scripts whose hash matches. Better for static inline scripts that don't change per request (works with static sites/CDNs). Doesn't require a server-side nonce generator. Use nonce for dynamically rendered pages; use hash for static scripts. For new systems, avoid inline scripts entirely by externalizing scripts to .js files.
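A sketch of per-request nonce generation in Python; the directive set and the initApp() call are illustrative:

```python
import secrets

def csp_header_for_request() -> tuple[str, str]:
    # A fresh nonce per request; reusing a nonce across requests
    # defeats the purpose, since an attacker could learn it.
    nonce = secrets.token_urlsafe(16)
    header = f"script-src 'nonce-{nonce}'; object-src 'none'; base-uri 'none'"
    return nonce, header

# The server renders the matching attribute into the page:
nonce, header = csp_header_for_request()
script_tag = f'<script nonce="{nonce}">initApp();</script>'  # initApp is a placeholder
```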
Q15: What is content sniffing? What does X-Content-Type-Options: nosniff prevent?
A: Content sniffing (MIME sniffing) is a browser behavior where it infers the content type from the content itself rather than the Content-Type header. A server returns a file with Content-Type: text/plain containing JavaScript — some browsers would execute it anyway because the content looks like JavaScript. Attack scenario: a file upload endpoint allows text files. An attacker uploads a file containing JavaScript with a .txt extension. Another user views the file — without nosniff, the browser sniffs it as JavaScript and executes it. X-Content-Type-Options: nosniff tells the browser to strictly honor the server-provided MIME type and never sniff. With nosniff, text/plain is always rendered as text, never executed as JavaScript or CSS.
Q16: What is HSTS preloading?
A: HSTS (HTTP Strict Transport Security) tells browsers to always use HTTPS for a domain, refusing to connect via HTTP. But the first visit to a site could still be over HTTP — the browser doesn't know about HSTS until it receives the header over HTTPS. HSTS preloading solves this: a hardcoded list of domains (maintained by browsers) is embedded in the browser binary. These domains are always accessed via HTTPS from the very first request, eliminating the first-visit HTTP window. To preload: serve Strict-Transport-Security: max-age=31536000; includeSubDomains; preload and submit to hstspreload.org. Requirements: 1-year max-age, includeSubDomains, preload directive. Caveat: once preloaded, removing a domain takes months — all subdomains must support HTTPS before you submit.
Q17: Explain the requirements for SameSite=None cookies.
A: SameSite=None is required when a cookie must be sent on cross-origin requests — common in third-party embed scenarios (payment iframes, analytics, OAuth flows in iframes). Requirements: (1) The Secure attribute must be set alongside SameSite=None — browsers reject SameSite=None without Secure. (2) The connection must be HTTPS. Chrome introduced this change in 2020 — cookies without a SameSite attribute default to Lax, and SameSite=None without Secure is rejected outright. Implication: any third-party cookie use requires HTTPS. If you're building a feature that embeds in third-party sites (widget, payment flow), plan for SameSite=None; Secure and ensure your domain is HTTPS-only.
Q18: How do you secure a CI/CD pipeline?
A: CI/CD pipelines are high-value targets — compromising the pipeline allows injecting malicious code into every artifact it produces. Key controls: (1) Least-privilege service accounts — pipeline service accounts have only the permissions needed (deploy to staging, push to specific registry). (2) Secrets in vault, not pipeline config — use GitHub Actions secrets, Azure Key Vault, or HashiCorp Vault — never hardcode credentials. (3) Signed commits — require GPG-signed commits on protected branches; the pipeline only builds signed code. (4) Dependency pinning — pin all action versions to commit SHA (uses: actions/checkout@a81bbbf...), not floating tags (maintainer compromise can change what @v3 points to). (5) SAST in pipeline — run static analysis (SonarQube, Semgrep) and fail on new high-severity findings. (6) Dependency scanning — fail builds on known CVEs in dependencies. (7) Container image scanning — scan Docker images with Trivy or Snyk before push. (8) Immutable artifacts — don't allow pipeline stages to modify artifacts after they're built and signed. (9) Audit pipeline changes — treat pipeline configuration as production code, with PR review required.
Q19: How should you respond to a security incident?
A: Incident response follows a structured playbook: (1) Identify — confirm the incident is real (vs. false positive), establish initial scope. Who is affected? What data? Is it ongoing? (2) Contain — stop the bleeding. Revoke compromised credentials, block attacker IP, disable the affected feature, isolate compromised systems. Prioritize containment over investigation — stop ongoing damage first. (3) Eradicate — remove the attacker (malware, backdoors, persistence mechanisms). Patch the vulnerability. (4) Recover — restore services from known-good backups, with verification. Gradually restore access with monitoring. (5) Post-incident review — root cause analysis (5 Whys). What failed? What worked? Update runbooks, monitoring, and controls. Communicate to stakeholders and (if required) regulators and affected users. Key developer actions: have audit logs that cover auth events, data access, and admin actions — they are essential during investigation. Without logs, you cannot determine what the attacker accessed. Practice the playbook — tabletop exercises reveal gaps before a real incident causes damage.
Q20: Explain OAuth 2.0 PKCE in detail.
A: PKCE (Proof Key for Code Exchange, pronounced "pixy") prevents authorization code interception attacks in public clients. Flow: (1) Client generates a cryptographically random code_verifier (43–128 chars). (2) Client creates code_challenge = BASE64URL(SHA256(code_verifier)). (3) Client sends the authorization request with code_challenge and code_challenge_method=S256. (4) Auth server stores the challenge. (5) User authenticates, auth server issues an authorization code. (6) Client exchanges the code for tokens, sending the code_verifier. (7) Auth server verifies SHA256(code_verifier) matches the stored code_challenge. If the authorization code was intercepted by a malicious app, it cannot exchange the code because it doesn't have the code_verifier. PKCE is now mandatory for all public clients per RFC 9700 (the OAuth 2.0 Security Best Current Practice, published 2025).
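Steps (1), (2), and (7) are pure client/server crypto and can be shown directly with the Python standard library:

```python
import base64, hashlib, secrets

def make_pkce_pair() -> tuple[str, str]:
    # 32 random bytes -> a 43-char base64url verifier (within the 43-128 range).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return verifier, challenge

def server_check(verifier: str, stored_challenge: str) -> bool:
    # What the auth server does at the token endpoint (step 7).
    derived = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()
    ).rstrip(b"=").decode()
    return secrets.compare_digest(derived, stored_challenge)
```

Only the challenge travels in the front-channel authorization request; the verifier stays with the client until the back-channel token exchange.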
Q21: What is a replay attack? How do you prevent it?
A: A replay attack occurs when an attacker captures a valid request (or token) and re-sends it to perform unauthorized actions. Example: capture a password reset link with a token — if it can be replayed, the attacker can reset the password even after the legitimate user used it. Prevention: nonces (unique random values that can only be used once — server tracks used nonces), short token expiry (reduces the window for replay), jti claim in JWTs (unique token identifier — server can blacklist used jti values for one-time tokens), timestamping with tight validity windows.
Q22: What is PKCE and how does it compare to client_secret?
A: client_secret is a shared secret between the client and auth server — it proves the client is the registered application. Suitable only for confidential clients (server-side apps that can keep secrets). Public clients (SPAs, mobile apps) cannot keep client_secret — it would be visible in the app bundle or browser JavaScript. PKCE solves this for public clients without requiring a shared secret. It proves that the entity exchanging the code is the same entity that started the authorization flow (cryptographic proof via the verifier/challenge pair), even without a pre-shared secret. PKCE and client_secret are complementary — confidential clients should use both for defense in depth.
Q23: How does row-level security work in SQL Server/PostgreSQL?
A: Row-level security (RLS) is a database feature that restricts which rows a query can return based on who is executing it. The database enforces the filter — even if the application ORM forgets to include WHERE TenantId = @tenant, the database automatically applies the tenant filter. SQL Server: CREATE SECURITY POLICY TenantFilter ADD FILTER PREDICATE dbo.fn_tenantAccessPredicate(TenantId) ON dbo.Orders. The predicate function returns 1 (allow) or 0 (deny) based on SESSION_CONTEXT(N'TenantId'). Application sets the context before queries: EXEC sp_set_session_context 'TenantId', @tenantId. PostgreSQL uses ALTER TABLE orders ENABLE ROW LEVEL SECURITY and CREATE POLICY tenant_isolation ON orders USING (tenant_id = current_setting('app.current_tenant_id')::uuid). RLS is defense-in-depth — the application layer should still filter, but RLS catches application layer bugs.
Q24: What is credential stuffing? How do you defend against it?
A: Credential stuffing is using large lists of username/password pairs leaked from other sites to attack your authentication. Since many users reuse passwords, leaked credentials from Site A succeed on Site B. Attackers automate millions of login attempts using botnets to avoid IP rate limiting. Defenses: MFA (stolen password alone is insufficient), rate limiting per username and per IP (distributed attacks make IP-only limiting insufficient — also use device fingerprinting, CAPTCHA for anomalous patterns), check new passwords against known breach databases (Have I Been Pwned Passwords API) and prompt change if found, monitor for high-volume failed logins, use device reputation signals.
Q25: What is HPKP? Why was it deprecated?
A: HTTP Public Key Pinning allowed websites to specify which TLS certificate public keys should be trusted for the domain — providing protection against rogue CAs issuing fraudulent certificates. The header pinned specific certificate public key hashes. It was deprecated because of a catastrophic failure mode: if you pinned a key and then lost the private key, or rotated to a certificate that no longer matched the pin, your site became inaccessible to every user who had cached the pin — potentially for months. Misconfigured pins broke real, legitimate sites. The safer replacement is Certificate Transparency (CT) — all publicly trusted CAs must log issued certificates to public append-only logs. Browser/monitoring tools can detect unauthorized certificates without the site-breaking risk of pinning.
Q26: Explain OAuth 2.0 token introspection.
A: Token introspection (RFC 7662) allows resource servers to validate access tokens without having the signing key. Instead of validating locally (which requires the signing key and decoding the JWT), the resource server calls the auth server's introspection endpoint with the token. The auth server returns whether the token is active and its metadata. Useful for opaque tokens (non-JWT), or when you need real-time validity checking (token revocation reflected immediately — JWTs are hard to revoke before expiry). Trade-off: adds a network call per request (mitigated by caching with a short TTL). Compare: JWT validation is local (fast, no network, but can't check revocation until expiry), introspection is remote (slower, adds latency, but real-time revocation).
Q27: What is the difference between symmetric and asymmetric JWT signing? When would you use RS256 over HS256?
A: HS256 (HMAC-SHA256) uses a shared secret — both the issuer and every resource server that validates the token must know the secret. Simple, fast, but if any consumer is compromised, the secret is exposed and the attacker can forge tokens. RS256 (RSA-SHA256) uses a private key to sign (only the auth server) and a public key to verify (any resource server). Compromise of a resource server exposes only the public key (already public) — not the ability to forge tokens. Use RS256 when: tokens are validated by multiple independent services (microservices), third-party services validate your tokens, or you need to publish a public JWKS endpoint. Use HS256 when: a single trusted service validates tokens (monolith or tightly controlled services), performance is critical (HS256 is faster), or operational simplicity is prioritized.
Q28: How would you implement audit logging for compliance requirements?
A: Audit logs must answer: who did what, to which resource, from where, and when. Requirements for compliance-grade audit logging: (1) Immutability — logs must not be modifiable after creation. Use an append-only log store (Azure Immutable Storage, an S3 bucket with Object Lock in WORM mode). (2) Completeness — log auth events, data access, data modification, admin actions, failed access attempts. (3) Structured format — JSON with consistent fields: userId, tenantId, action, resourceType, resourceId, ipAddress, userAgent, result, timestamp (ISO 8601 UTC). (4) No sensitive data — no passwords, no full PII in log body (use opaque IDs, not names/emails). (5) Tamper evidence — cryptographic chaining (each log entry hashes the previous) or shipping to a separate immutable system immediately. (6) Retention — typically 1 year minimum, 7 years for financial regulatory requirements. (7) Searchable — logs in a queryable store (Log Analytics, Elasticsearch) with appropriate indexes.
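Requirement (5), cryptographic chaining, can be sketched in Python; the field names follow the structured format above, and the scheme is illustrative (real systems often ship entries to an immutable store instead of, or in addition to, chaining):

```python
import hashlib, json

def append_entry(log: list[dict], entry: dict) -> None:
    # Each entry commits to the previous entry's hash; altering or deleting
    # any earlier record invalidates every hash after it.
    prev = log[-1]["hash"] if log else "genesis"
    body = json.dumps(entry, sort_keys=True)
    stamped = dict(entry, prev=prev, hash=hashlib.sha256((prev + body).encode()).hexdigest())
    log.append(stamped)

def verify_chain(log: list[dict]) -> bool:
    prev = "genesis"
    for e in log:
        body = json.dumps(
            {k: v for k, v in e.items() if k not in ("prev", "hash")}, sort_keys=True
        )
        if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True
```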
Q29: What is CSP violation reporting?
A: When a browser blocks a resource due to CSP, it can send a report to a specified endpoint. Content-Security-Policy: default-src 'self'; report-uri /csp-report (legacy) or report-to (modern). The report contains: the page URL, the blocked resource URL, the violated directive, and the original policy. Value: (1) During rollout, use the Content-Security-Policy-Report-Only header — the policy is not enforced (nothing is blocked; violations are only reported). You can observe what would be blocked before enabling enforcement, preventing legitimate resources from being broken. (2) In production, reports alert you to CSP violations which may indicate XSS attempts. High-volume reports against specific directives warrant investigation.
Q30: What is an XXE attack?
A: XML External Entity (XXE) injection exploits XML parsers that process external entity references. An external entity is a reference to an external resource: <!ENTITY xxe SYSTEM "file:///etc/passwd">. If the XML parser resolves external entities and the application includes the parsed content in a response, the attacker reads arbitrary files from the server filesystem. Blind XXE uses out-of-band techniques (DNS lookups, HTTP requests to attacker-controlled server) to exfiltrate data when the response isn't directly visible. SSRF via XXE is also possible: <!ENTITY xxe SYSTEM "http://169.254.169.254/latest/meta-data/">. Prevention: disable external entity processing in XML parsers — this is the only reliable mitigation. In .NET: XmlReaderSettings { DtdProcessing = DtdProcessing.Prohibit }.
Q31: What is CORS misconfiguration? Give a real attack scenario.
A: A CORS misconfiguration allows unintended origins to read cross-origin responses. Attack scenario: an API sets Access-Control-Allow-Origin: [request origin] and Access-Control-Allow-Credentials: true without validating the origin. Any website can make credentialed requests to the API and read responses. Attack: user visits evil.com, which runs JavaScript that calls https://bank.com/api/account with credentials: include. The bank's API returns the account details, which evil.com's script reads and exfiltrates. This works because: (1) the bank's cookies are sent (credentialed), (2) the bank reflects the origin without allowlisting, (3) Allow-Credentials: true. Prevention: maintain an explicit allowlist of permitted origins; never reflect the Origin header directly; AllowCredentials requires an explicit non-wildcard origin.
Q32: Explain server-side template injection (SSTI).
A: SSTI occurs when user input is embedded into a server-side template (Jinja2, Handlebars, Razor, Freemarker) and the template engine evaluates it. Example: a greeting endpoint takes a name parameter: Hello {{name}}. Attacker sends {{7*7}} — if the response contains 49, the template engine evaluated it. Escalation: in Jinja2, {{config.items()}} leaks configuration; {{ ''.__class__.__mro__[1].__subclasses__() }} can lead to RCE. Prevention: never pass user input to template engines for rendering. If you must include user data in templates, use the template engine's context binding (variable substitution, not template evaluation), which treats the value as data, not template code.
Q33: What is OAuth 2.0 scope and how should you design it?
A: Scopes define the specific permissions an OAuth token grants. The client requests scopes during authorization; the user consents; the token is issued with the granted scopes. The resource server validates the token's scopes on every request. Design principles: granular (separate scopes for read vs. write, per resource type), meaningful (names that make sense to users seeing the consent screen — read:orders not api.access), not too granular (don't create a scope per individual endpoint — group logically), default minimal (default scope grants minimum access). Example: orders:read, orders:write, profile:read, admin:tenants. The resource server checks: if (!token.HasScope("orders:write")) return Forbidden(). Never issue tokens with broader scopes than needed — apply least privilege at the scope level.
Q34: What is the security impact of verbose error messages?
A: Verbose error messages in production leak: SQL query structure (revealing database schema, table names, column names — aids SQL injection); stack traces (revealing framework versions, file paths, internal class names — aids targeted exploitation); connection strings (in misconfigured apps — direct database access); validation error details that reveal internal logic. They also ease enumeration: if "Email not found" vs. "Password incorrect" are separate messages, attackers can enumerate valid accounts. Return generic messages in production with a correlation ID (traceId) that maps to the detailed internal log. The internal log holds the full exception; the API response exposes only the correlation ID needed to look it up.
Q35: What is OWASP API Security Top 10?
A: OWASP publishes a separate Top 10 specifically for APIs (2023): (1) Broken Object Level Authorization (IDOR), (2) Broken Authentication, (3) Broken Object Property Level Authorization (returning/accepting too many fields — mass assignment, over-fetching), (4) Unrestricted Resource Consumption (no rate limiting, large payloads), (5) Broken Function Level Authorization (regular users accessing admin endpoints), (6) Unrestricted Access to Sensitive Business Flows (mass account creation, mass purchase), (7) SSRF, (8) Security Misconfiguration, (9) Improper Inventory Management (shadow APIs, outdated API versions still exposed), (10) Unsafe Consumption of APIs (trusting third-party API responses without validation).
Q36: How does certificate pinning work and when should you use it?
A: Certificate pinning configures a client to only accept specific certificates (or certificate public keys) for a connection, rejecting all others even if signed by a trusted CA. Used in mobile apps to prevent MITM attacks using legitimate corporate proxies or rogue CAs. Implementation: pin the leaf certificate public key hash or an intermediate CA. On iOS/Android, store the expected hash and compare during TLS handshake. Challenges: certificate rotation requires app update (pin the intermediate CA, not the leaf, to allow rotation without app updates), difficult to deploy for web (HPKP was deprecated). Today: use primarily in high-security mobile apps for specific critical endpoints. Always include a backup pin and a migration path.
Q37: What is a confused deputy attack?
A: A confused deputy attack exploits a program that has more authority than the attacker but is tricked into misusing it. Classic web example: CSRF — the browser (deputy) has session cookies, so the bank trusts its requests. An attacker tricks the browser into sending requests on their behalf. The browser is the confused deputy — it has legitimate access but is confused about who is directing it. SSRF is another example: the server (deputy) can reach internal services the attacker cannot. The attacker tricks the server into fetching internal resources. Mitigation: the deputy must verify the intent of the requester matches what they're asking for (CSRF token verifies the request originated from the legitimate frontend; URL allowlisting prevents the server from fetching arbitrary internal URLs).
Q38: What is the principle of fail-safe defaults?
A: Fail-safe defaults means systems default to a secure state when they fail, rather than an insecure one. If the authorization system fails to load a user's permissions — deny access (fail closed), don't grant access (fail open). If a security check throws an exception — deny the request, don't bypass the check. If a feature flag for a security control is not found — default to the more secure behavior. Applied to access control: if the authorization query fails to execute, return 403, not 200. If the JWT library throws an unexpected exception, return 401. Never catch a security exception and proceed as if the check passed.
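The fail-closed authorization check reads like this in a minimal sketch (the permission-store function is hypothetical):

```python
def is_authorized(load_permissions, user_id: str, permission: str) -> bool:
    """Fail closed: any failure loading permissions means deny."""
    try:
        perms = load_permissions(user_id)
    except Exception:
        return False  # deny on failure — never fail open
    return permission in perms

def broken_store(user_id):
    # Simulates the permission store being unreachable.
    raise ConnectionError("permission store unreachable")

print(is_authorized(broken_store, "u1", "orders:write"))  # False — denied
```

The anti-pattern to avoid is the mirror image: catching the exception and returning `True` (or skipping the check) so the request "still works" during an outage.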
Q39: What is HSTS and how does it prevent downgrade attacks?
A: A downgrade attack (SSL stripping) intercepts the initial HTTP connection before the redirect to HTTPS, keeping the victim on plain HTTP while the attacker proxies traffic to the real HTTPS server. The victim rarely notices the missing HTTPS, and the attacker reads the plaintext traffic. HSTS prevents this: once a browser has seen Strict-Transport-Security: max-age=31536000, it remembers that this domain must use HTTPS. For the duration of max-age, the browser refuses to connect over HTTP — it upgrades to HTTPS internally without ever making an HTTP request. Combined with HSTS preloading, even the first visit uses HTTPS, eliminating any window for SSL stripping.
Q40: How do you prevent mass assignment vulnerabilities?
A: Mass assignment occurs when an API automatically binds all request body properties to a model, including properties the client shouldn't be able to set (e.g., isAdmin, tenantId, userId). Prevention in .NET: use separate input DTOs that contain only the permitted fields (never bind directly to the domain entity), then explicitly map the DTO to the entity. The [Bind] attribute or [JsonIgnore] on sensitive fields are weaker alternatives. In ASP.NET Core Web API with [ApiController], the request body binds to whatever type the action declares, so the DTO itself must exclude sensitive fields. Never accept userId or role from the request body — set these server-side from the authenticated identity.
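The allowlist-mapping idea is language-agnostic; here is a minimal Python sketch (the entity and field names are illustrative):

```python
from dataclasses import dataclass

# Hypothetical domain entity — note the sensitive fields.
@dataclass
class User:
    user_id: str
    email: str
    display_name: str
    is_admin: bool = False

# The "input DTO": the only fields a client may set.
ALLOWED_FIELDS = {"email", "display_name"}

def apply_update(user: User, body: dict) -> User:
    """Copy only allowlisted fields; silently ignore is_admin, user_id,
    or anything else the client smuggles into the request body."""
    for field in ALLOWED_FIELDS & body.keys():
        setattr(user, field, body[field])
    return user

u = User("u1", "a@example.com", "Alice")
apply_update(u, {"display_name": "Al", "is_admin": True})  # is_admin ignored
```

Binding the raw body straight onto `User` would let the `is_admin: true` payload through; the explicit allowlist makes the attack a no-op.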
Q41: What is the security risk of using eval() or dynamic code execution?
A: eval() (JavaScript) or equivalent (Python's eval()/exec(), .NET's Roslyn compile-and-run) executes a string as code. If user input reaches eval(), the attacker can execute arbitrary code in the context of the application. This is Remote Code Execution (RCE) — the most critical class of vulnerability. Same risk exists in SQL template engines (non-parameterized queries), shell command construction (Process.Start("bash -c " + userInput)), and SSTI. Rule: never pass user input to any code execution primitive. If dynamic execution is required, use a strictly sandboxed environment with no access to sensitive resources.
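When the real requirement is "parse a value the user typed", a literal parser is the safe substitute for eval(). In Python, ast.literal_eval accepts only plain literals and raises ValueError for anything executable:

```python
import ast

def parse_user_value(text: str):
    """Parse a literal (number, string, list, dict) without executing code.
    ast.literal_eval rejects anything that is not a plain literal."""
    return ast.literal_eval(text)

print(parse_user_value("[1, 2, 3]"))  # [1, 2, 3]

try:
    parse_user_value("__import__('os').system('id')")  # code, not a literal
except ValueError:
    print("rejected")
```

The same pattern applies elsewhere: replace the execution primitive with a parser or template engine that has no code path back to the interpreter.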
Q42: How do you defend against insecure deserialization?
A: Insecure deserialization allows attackers to manipulate serialized objects to achieve unexpected behavior — type confusion, property injection, or RCE (in gadget-chain attacks where deserialization instantiates classes that perform dangerous actions in their constructors). In .NET, BinaryFormatter was notoriously dangerous and is now obsolete. Newtonsoft.Json with TypeNameHandling != None is vulnerable to type confusion attacks. Mitigations: use System.Text.Json (type-safe by default), avoid polymorphic deserialization from untrusted input, validate and verify serialized data with a cryptographic signature before deserializing, don't deserialize data from untrusted sources into complex object graphs.
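The "verify a signature before deserializing" mitigation can be sketched with an HMAC over the serialized bytes (the secret handling is illustrative; real keys belong in a vault):

```python
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # illustrative; keep real keys in a vault

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def safe_load(payload: bytes, signature: str) -> dict:
    """Verify the HMAC (constant-time) before touching the deserializer."""
    if not hmac.compare_digest(sign(payload), signature):
        raise ValueError("signature mismatch — refusing to deserialize")
    return json.loads(payload)

data = json.dumps({"orderId": 42}).encode()
print(safe_load(data, sign(data)))  # {'orderId': 42}
```

Tampered bytes fail the comparison before any object is instantiated, which is exactly the window gadget-chain attacks rely on.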
Q43: Explain the concept of security regression testing.
A: Security regression testing means adding automated tests for every security vulnerability found, so the same vulnerability cannot silently reintroduce itself. When a vulnerability is found (in a pentest, bug bounty, or code review), write a test that reproduces it, verify the test fails (proving the vulnerability exists), fix the vulnerability, verify the test passes, and add it to the permanent test suite. Categories: authorization tests (IDOR test: User B cannot access User A's resource), authentication tests (expired token returns 401), input validation tests (SQL injection payloads return 400, not 500). These tests run in CI on every PR — if code changes re-introduce a vulnerability, the build fails.
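An authorization regression test of the kind described (User B cannot access User A's resource) looks like this against a toy in-memory API (all names hypothetical):

```python
# Hypothetical in-memory API used to illustrate the IDOR regression test.
ORDERS = {"o1": {"owner": "userA", "total": 99}}

def get_order(order_id: str, caller: str):
    order = ORDERS.get(order_id)
    if order is None:
        return 404, None
    if order["owner"] != caller:      # the authorization check under test
        return 403, None
    return 200, order

# Regression test: User B must never read User A's order.
def test_idor_user_b_cannot_read_user_a_order():
    status, body = get_order("o1", caller="userB")
    assert status == 403 and body is None

test_idor_user_b_cannot_read_user_a_order()
```

If a later refactor drops the ownership check, this test fails in CI and the PR cannot merge.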
Q44: What is the security concern with JWT sub claim?
A: The sub (subject) claim identifies the entity (typically user ID) the token represents. Security concern: if the sub is predictable or sequential (e.g., sequential database IDs), an attacker who knows their own sub can attempt to forge tokens for other users' sub values (requires breaking the signature — unlikely with a strong key, but the predictability is a design concern). More practical concern: if application code uses sub to perform database lookups without verifying it came from a legitimate token, a developer might use it in a context where the JWT signature isn't validated (e.g., extracting from a cookie without validation). Use opaque, random UUIDs for sub values — not sequential database IDs.
Q45: How do you detect and prevent session fixation?
A: Session fixation attacks set a session ID before authentication, then wait for the victim to authenticate with that fixed session ID — the attacker's already-known session becomes an authenticated session. Prevention: always regenerate the session ID after a successful authentication. In ASP.NET Core with cookie authentication this is handled automatically: signing in via HttpContext.SignInAsync issues a fresh authentication cookie. Verify it yourself: observe the session cookie before login — after successful login, it must have a different value. If the session ID remains the same after login, the application is vulnerable to fixation.
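The regeneration step can be sketched with an in-memory session store (the store and login function are illustrative):

```python
import secrets

SESSIONS: dict[str, dict] = {}

def login(pre_auth_session_id: str, user_id: str) -> str:
    """Regenerate the session ID at authentication: the pre-auth ID
    (possibly attacker-chosen) is destroyed and a fresh one is issued."""
    SESSIONS.pop(pre_auth_session_id, None)   # invalidate the old session
    new_id = secrets.token_urlsafe(32)        # unguessable replacement
    SESSIONS[new_id] = {"user": user_id}
    return new_id

fixed = "attacker-chosen-id"
SESSIONS[fixed] = {}                          # attacker planted this ID
new = login(fixed, "alice")
assert new != fixed and fixed not in SESSIONS
```

Even if the attacker fixed the pre-auth ID, the authenticated session lives under an ID they never see.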
Q46: What is an open redirect and why is it a security issue?
A: An open redirect is when an application uses a user-controlled URL parameter to redirect after an action (?returnUrl=https://evil.com). The application redirects to the attacker-controlled URL. Used in phishing: the attacker crafts a link to a legitimate domain that redirects to their malicious site, lending credibility to the link. Combined with OAuth: ?redirect_uri=https://evil.com in an OAuth flow that doesn't validate the redirect URI can send authorization codes to the attacker. Prevention: allowlist valid redirect destinations (only within your own domain), validate that the redirect URL's host matches the expected domain, don't include the full URL in the parameter — use an enum or code mapped to known safe destinations.
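A host-allowlist validator for a returnUrl parameter can be sketched like this (the allowed domains are placeholders):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "www.example.com"}  # illustrative domains

def safe_redirect_target(return_url: str) -> str:
    """Allow site-relative paths or HTTPS URLs on allowlisted hosts;
    fall back to '/' for anything else (fail safe)."""
    # A relative path like /account is fine; // would be protocol-relative
    # and resolve to another host, so it must be rejected.
    if return_url.startswith("/") and not return_url.startswith("//"):
        return return_url
    parsed = urlparse(return_url)
    if parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS:
        return return_url
    return "/"

print(safe_redirect_target("/account"))                # /account
print(safe_redirect_target("https://evil.com/phish"))  # /
```

Note the protocol-relative `//evil.com` case: a naive "starts with /" check passes it, which is why the sketch rejects `//` explicitly.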
Q47: What are GraphQL-specific security concerns?
A: GraphQL introduces unique security challenges: (1) Introspection — by default, clients can query the entire schema. Disable or restrict in production. (2) Excessive data fetching — clients can request deeply nested queries that cause N+1 queries or massive DB load. Implement query depth limiting and query complexity scoring. (3) Batching attacks — GraphQL allows multiple operations in one request. Can be used to bypass per-request rate limiting. (4) Field-level authorization — unlike REST, authorization must be checked at the resolver level for every field, not just at the endpoint level. A user authorized to read order { id status } might not be authorized to read order { id status customer { creditCard } }. (5) Persisted queries — accept only pre-registered query IDs instead of arbitrary query strings in production to limit the query surface.
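A crude depth limiter can be sketched by tracking brace nesting in the query string; a production implementation would walk the parsed AST instead (the limit value is illustrative):

```python
MAX_DEPTH = 5  # illustrative limit

def query_depth(query: str) -> int:
    """Approximate selection-set depth by tracking brace nesting.
    A real implementation would walk the parsed GraphQL AST."""
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

def reject_if_too_deep(query: str) -> bool:
    return query_depth(query) > MAX_DEPTH

shallow = "{ order { id status } }"
print(query_depth(shallow))  # 2
```

Complexity scoring goes further by assigning per-field costs, so a wide-but-shallow query can also be rejected when its total cost is too high.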
Q48: How should you handle secrets in containerized environments?
A: Containers introduce specific secrets management challenges: environment variables are accessible to all processes in the container and visible in docker inspect; baking secrets into images is a critical anti-pattern (images are often stored in registries). Best practices: (1) Runtime secrets injection — Kubernetes secrets mounted as files, Azure Key Vault CSI driver, AWS Secrets Manager integration. (2) Short-lived credentials — use Workload Identity (GKE), AWS IRSA, or Azure Workload Identity so pods get ephemeral credentials rather than long-lived service account keys. (3) Never bake secrets into images — scan images for secrets (truffleHog, Trivy secret scanning) in CI. (4) Pod-level isolation — limit which pods can access which secrets via Kubernetes RBAC and network policies.
Q49: What is threat intelligence and how would you incorporate it into a development workflow?
A: Threat intelligence is information about current attack patterns, threat actors, and vulnerabilities relevant to your technology stack. Sources: MITRE ATT&CK (attacker tactics and techniques), CISA Known Exploited Vulnerabilities catalog (CVEs actively exploited in the wild — prioritize these over all others), vendor security advisories, security community feeds. Incorporation into development: subscribe to GitHub security advisories for your stack, configure Dependabot for automatic PR on new CVEs, monitor CISA KEV for your dependencies (if a KEV appears in your stack, it's an emergency), use threat intelligence to inform threat models (what are attackers actually targeting in your industry?), track incident reports from peers (if a major breach hits a company using your same stack, review your defenses).
Q50: Walk me through designing a secure password reset flow.
A: A secure password reset flow: (1) Request — user submits email. Response is always identical whether the email exists or not ("if an account exists, you'll receive a reset email") — prevents email enumeration. Log the request with IP. (2) Token generation — generate a cryptographically random token (32+ bytes, RandomNumberGenerator.GetBytes()). Hash the token with SHA-256 before storing (so a database breach doesn't let attackers use stored reset tokens). Store: hashed token, expiry (15 minutes), userId, used = false. (3) Delivery — send the plaintext token in the email link. Email is transmitted over TLS. (4) Consumption — validate token expiry, hash the provided token and compare against stored hash (constant-time comparison), verify used = false. Mark used = true immediately (prevent replay). Invalidate all existing sessions for this user. (5) New password — validate against password policy and breached password list. Hash with Argon2id. Rotate session. Log the password change event. (6) Post-reset — notify the user via email that their password was changed (alert to unauthorized resets).
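Steps 2 and 4 of the flow (token generation, hashed storage, constant-time verification, single use) can be sketched as follows (the record shape is illustrative; a real system persists it in the database):

```python
import hashlib
import hmac
import secrets
from datetime import datetime, timedelta, timezone

def create_reset_token():
    """Step 2: random token, store only its SHA-256 hash with expiry."""
    plaintext = secrets.token_urlsafe(32)            # emailed to the user
    record = {
        "token_hash": hashlib.sha256(plaintext.encode()).hexdigest(),
        "expires": datetime.now(timezone.utc) + timedelta(minutes=15),
        "used": False,
    }
    return plaintext, record

def consume(plaintext: str, record: dict) -> bool:
    """Step 4: constant-time hash comparison, expiry and replay checks."""
    presented = hashlib.sha256(plaintext.encode()).hexdigest()
    if not hmac.compare_digest(presented, record["token_hash"]):
        return False
    if record["used"] or datetime.now(timezone.utc) > record["expires"]:
        return False
    record["used"] = True                            # single use
    return True

token, rec = create_reset_token()
assert consume(token, rec) and not consume(token, rec)  # replay rejected
```

Because only the hash is stored, a database dump gives the attacker nothing usable; because comparison is constant-time, the lookup leaks no timing signal.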