DOM XSS Prevention in JavaScript: Dangerous Sinks, Safe APIs, and CSP

Security-focused frontend interview guide to DOM XSS: attacker-controlled sources, dangerous DOM sinks, safe rendering patterns, URL/protocol validation, and defense in depth with CSP and Trusted Types.
Definition
DOM XSS occurs when untrusted input reaches a DOM sink that interprets it as code or HTML. This can happen entirely on the client side without a server-side template bug. The fix is data-flow control: identify untrusted sources, block dangerous sinks, and enforce safe APIs by default.
Core mental model
Model this as source -> transform -> sink. If source is attacker-controlled and sink interprets executable content, you have risk. Break the chain by sanitizing/validating at boundaries and preferring non-executable sink APIs.
| Untrusted source | Example | Risk |
|---|---|---|
| URL data | location.search, location.hash, query params | Attacker can share crafted links |
| Cross-window messaging | postMessage event data | Unvalidated origin/data injection |
| Storage/state | localStorage/sessionStorage values | Persisted attacker payload reuse |
| Third-party API fields | Profile bio, comments, markdown | Reflected or stored unsafe content |
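Because cross-window messages can come from any page holding a reference to your window, the handler must check `event.origin` before touching `event.data`. A minimal sketch of that check, assuming a hypothetical `ALLOWED_ORIGINS` allowlist for your app:

```javascript
// Hypothetical origin allowlist; replace with your app's real trusted origins.
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://partner.example.com',
]);

// Pure check so the policy is testable apart from the DOM.
function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.has(origin);
}

// In the browser you would wire this into the message handler:
// window.addEventListener('message', (event) => {
//   if (!isAllowedOrigin(event.origin)) return; // drop untrusted senders
//   handleMessage(event.data);                  // still validate the data's shape
// });
```

Note that an origin check alone is not enough: even trusted origins can relay attacker-influenced strings, so the message payload still needs validation before it reaches any sink.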
Runnable example #1: dangerous sink vs safe text rendering
```javascript
const result = document.querySelector('#result'); // output element (assumed id)
const payload = new URL(location.href).searchParams.get('q') || '';

// Dangerous: HTML interpretation of attacker-controlled text
// result.innerHTML = payload;

// Safe default: text-only rendering
result.textContent = payload;
```
Using textContent prevents markup execution and should be the default for user-controlled text.
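When you cannot use textContent because you are assembling an HTML string by hand (e.g. in a template helper), the fallback is context-aware escaping. A minimal sketch of an HTML-text escaper, covering the five characters that matter in HTML text and double/single-quoted attribute contexts (the helper name is ours, not a standard API):

```javascript
// Escape the characters HTML parsers treat specially. Escape '&' first so
// already-escaped output is not double-mangled by the later replacements.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

This neutralizes markup only in HTML text/attribute contexts; it does nothing for URL or script contexts, which is exactly the "sink context matters" point in the follow-ups below.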
Runnable example #2: URL sink validation allowlist
```javascript
function setSafeLink(anchor, raw) {
  // Resolve relative input against the current origin; throws on unparseable URLs.
  const u = new URL(raw, window.location.origin);
  const okProtocols = new Set(['http:', 'https:', 'mailto:']);
  if (!okProtocols.has(u.protocol)) {
    throw new Error('Blocked unsafe protocol');
  }
  anchor.href = u.toString();
}

setSafeLink(document.querySelector('#profile'), userInputUrl);
```
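setSafeLink throws on bad input, which suits hard failures. When you would rather render a fallback (e.g. drop the link), a non-throwing variant of the same allowlist check can return null instead; this sketch uses a hypothetical base origin for resolving relative URLs:

```javascript
// Returns a normalized safe URL string, or null when the input is
// unparseable or uses a blocked protocol (e.g. javascript:, data:).
// The default base is an assumption for illustration; in the browser
// you would pass window.location.origin.
function safeUrlOrNull(raw, base = 'https://example.com') {
  let u;
  try {
    u = new URL(raw, base);
  } catch {
    return null; // not a parseable URL
  }
  const okProtocols = new Set(['http:', 'https:', 'mailto:']);
  return okProtocols.has(u.protocol) ? u.toString() : null;
}
```

Note the check is an allowlist on the parsed protocol, not a regex over the raw string: URL parsing normalizes tricks like mixed case or embedded whitespace that string matching tends to miss.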
| Dangerous sink | Safer alternative | Notes |
|---|---|---|
| innerHTML / insertAdjacentHTML | textContent, or a vetted HTML sanitizer | Only sanitize when rich HTML is required |
| String-based setTimeout/setInterval | Function callback form | Avoid implicit eval behavior |
| Raw href/src assignment | Parsed URL + protocol allowlist | Block javascript: URLs |
| Direct script URL injection | Static script tags + CSP controls | Avoid dynamic script construction from input |
Common pitfalls
- Assuming framework escaping protects every manual DOM operation.
- Sanitizing once but later concatenating the string into another unsafe sink.
- Missing origin checks on postMessage handlers.
- Relying on blacklist regex rules instead of structured parsing/allowlists.
Defense in depth
Use CSP to restrict script execution and reduce exploit impact if a sink slips through. Adopt Trusted Types in large apps to force controlled creation of HTML/script URLs. Pair client controls with server-side output encoding and validation for full coverage.
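As a concrete illustration, a restrictive policy might look like the header below. The directive values and report endpoint are assumptions for this sketch; a real policy must be tuned to the scripts and resources your app actually loads, and `require-trusted-types-for 'script'` is what switches on Trusted Types enforcement in supporting browsers.

```
Content-Security-Policy:
  default-src 'self';
  script-src 'self';
  object-src 'none';
  base-uri 'none';
  require-trusted-types-for 'script';
  report-uri /csp-reports
```

Deploying first as `Content-Security-Policy-Report-Only` lets you collect violation reports and fix breakage before enforcing.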
Interview follow-ups
Q1: Is escaping enough for all sinks? A: No, sink context matters; URL, HTML, and script contexts differ.
Q2: Why is DOM XSS tricky? A: It can be introduced entirely in client code after data leaves the server.
Q3: First practical hardening step? A: Replace dangerous sinks with safe defaults and add CSP policy reporting.
Implementation checklist / takeaway
Map sources and sinks, block dangerous DOM APIs by default, validate structured inputs (especially URLs), and enforce CSP/Trusted Types where possible. Strong interview answers focus on secure data flow, not just one sanitizer call.