# Testing Strategy: Edge Cases and Boundaries
## Goal

A strong test strategy does not try every input. It finds the smallest set of cases that still exposes boundary bugs, invalid inputs, and real-world failure modes. That is how you get high confidence from a small number of tests.
| Core idea | Why it matters | Example |
|---|---|---|
| Boundary values | Most bugs happen at edges | min-1, min, max, max+1 |
| Equivalence classes | Group similar inputs to reduce tests | valid vs. invalid formats |
| Invariants | Properties that should always hold | sorted output stays sorted |
| Failure modes | Graceful handling of bad input | null, undefined, NaN |
| Regression cases | Lock in prior bugs | add a test for every fix |
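For instance, equivalence classes let one representative stand in for many near-duplicate inputs. A sketch using a hypothetical `isValidAge` validator (the function and its range are illustrative, not from this article):

```js
// Hypothetical validator: accepts integer ages in the range [0, 130].
function isValidAge(age) {
  return Number.isInteger(age) && age >= 0 && age <= 130;
}

// One representative input per equivalence class, instead of many near-duplicates.
const representatives = [
  [25, true],    // valid: typical in-range integer
  [-5, false],   // invalid: below minimum
  [200, false],  // invalid: above maximum
  [25.5, false], // invalid: non-integer
  ['25', false]  // invalid: wrong type (string)
];

for (const [input, expected] of representatives) {
  console.assert(isValidAge(input) === expected, `failed for ${String(input)}`);
}
```

Five cases cover five classes; adding more in-range integers would not increase coverage.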
## Practical checklist
1. List inputs and outputs.
2. Identify constraints (min/max, allowed characters, allowed states).
3. Pick boundaries and one representative per equivalence class.
4. Add invalid inputs and unexpected types.
5. Add a realistic case and a worst case.
6. Add regression tests for any previous bugs.
```js
// Example: testing clamp(value, min, max)
const cases = [
  [-1, 0, 10, 0],   // just below min -> clamped to min
  [0, 0, 10, 0],    // at min
  [5, 0, 10, 5],    // in range, unchanged
  [10, 0, 10, 10],  // at max
  [11, 0, 10, 10]   // just above max -> clamped to max
];

for (const [v, min, max, expected] of cases) {
  expect(clamp(v, min, max)).toBe(expected);
}
```

```js
// Invariant: clamp output always lies between min and max
for (const [v, min, max] of cases) {
  const out = clamp(v, min, max);
  expect(out).toBeGreaterThanOrEqual(min);
  expect(out).toBeLessThanOrEqual(max);
}
```
| JavaScript pitfall | Why it breaks tests | What to assert |
|---|---|---|
| Type coercion | '0' vs. 0 behave differently | add explicit type tests |
| NaN | NaN !== NaN | use Number.isNaN |
| Floating point | 0.1 + 0.2 !== 0.3 | use toBeCloseTo |
| Empty/sparse arrays | holes behave like undefined | test length and iteration |
| Dates/timezones | locale-dependent output | assert on UTC or ISO strings |
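The first four rows of the table can be demonstrated with plain assertions, no test framework assumed:

```js
// Type coercion: loose equality hides type differences, strict equality exposes them.
console.assert('0' == 0);
console.assert('0' !== 0);

// NaN is the only value not equal to itself; use Number.isNaN to detect it.
console.assert(NaN !== NaN);
console.assert(Number.isNaN(NaN));

// Floating point: assert with a tolerance instead of strict equality.
const sum = 0.1 + 0.2;
console.assert(sum !== 0.3);
console.assert(Math.abs(sum - 0.3) < 1e-9);

// Sparse arrays: holes read as undefined but are skipped by some array methods.
const sparse = [1, , 3]; // hole at index 1
console.assert(sparse.length === 3);
console.assert(sparse[1] === undefined);
console.assert(sparse.filter(Boolean).length === 2); // the hole is skipped entirely
```

In a Jest suite, the tolerance check would be `expect(sum).toBeCloseTo(0.3)`.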
## Common anti-patterns
- Only testing happy paths.
- Too many tests that assert the same behavior.
- Tests that mirror the implementation instead of the requirements.
- Random tests without a seed (non-deterministic failures).
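On the last point: random inputs are fine when the generator is seeded, because failures replay identically. A sketch using mulberry32, a small public-domain PRNG (the choice of algorithm is illustrative; any seedable generator works):

```js
// mulberry32: a tiny seedable PRNG. Deterministic, so a failing randomized
// test can be re-run with the same seed and the exact same inputs.
function mulberry32(seed) {
  return function () {
    let t = (seed += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same seed -> same sequence: log the seed on failure and the run is reproducible.
const seed = 42;
const a = mulberry32(seed);
const b = mulberry32(seed);
const runA = [a(), a(), a()];
const runB = [b(), b(), b()];
console.assert(JSON.stringify(runA) === JSON.stringify(runB));
console.assert(runA.every((x) => x >= 0 && x < 1)); // outputs stay in [0, 1)
```

Logging the seed alongside any failure turns a flaky random test into a reproducible regression case.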
Design tests around boundaries, invariants, and failure modes. A small, deliberate set of cases makes bugs obvious and keeps the suite fast and reliable.