## If You Remember One Thing
In a system design interview, requirements are where seniority is most visible. If you define scope, constraints, and measurable outcomes before proposing architecture, your frontend answer becomes far easier to defend. Make this the first move in your preparation workflow.
## Why Requirements Decide the Interview
Interviewers are not scoring how fast you draw boxes. They are scoring whether you can choose the right problem before proposing a solution. In the RADIO framework (Requirements, Architecture, Data model, Interface, Optimizations), Requirements is the control point for everything else.
- Clarity: You define what success means before coding details.
- Prioritization: You separate must-haves from nice-to-haves.
- Risk management: You surface unknowns and assumptions early.
- Trade-off quality: Later architecture decisions become explicit and explainable.
## The 90-Second Opening Script
Use this script, close to verbatim, as soon as the interviewer finishes the prompt:
- "I will spend a few minutes clarifying requirements and non-goals, then propose architecture."
- "I want to confirm primary user flow, expected scale, and key constraints first."
- "I will define a Must/Nice/Out scope box so we protect depth under time."
- "I will call out assumptions and measurable success criteria before moving forward."
## Requirements Question Bank (Frontend-Focused)
| Category | Ask this out loud | Why it matters | Artifact to produce |
|---|---|---|---|
| User and goal | Who is the primary user, and what is the top task? | Prevents feature drift. | One-line user goal statement |
| Scope boundaries | What is in scope for v1, and what is explicitly out? | Keeps answer deep, not broad. | Must / Nice / Out table |
| Scale and traffic | Expected DAU, peak concurrent users, and traffic spikes? | Shapes caching, rendering, and failure strategy. | Scale assumptions list |
| Performance and UX | Any latency targets for first paint and interaction? | Turns "fast" into measurable targets. | Perf target list (p95/p99) |
| a11y and global needs | Keyboard, screen reader, i18n, RTL requirements? | Distinguishes senior frontend answers. | a11y/i18n acceptance checklist |
| Reliability and security | Offline expectations, auth model, abuse/rate-limit concerns? | Covers real production constraints. | Failure + security baseline notes |
| Platform constraints | Existing stack, API limits, launch deadline, team size? | Keeps design grounded in delivery reality. | Constraint ledger |
## Scope Box Template (Must / Nice / Out)
Fill this in immediately after your clarifying questions.
| Must-have (v1) | Nice-to-have (if time) | Out-of-scope (explicitly parked) |
|---|---|---|
| Core interaction works end-to-end | Personalization and ranking | Advanced analytics dashboard |
| Loading/empty/error states | Offline sync polish | Multi-region active-active rollout |
| Basic a11y support | Animation refinement | ML-driven recommendations |
## Assumptions and Risk Log
If the interviewer does not provide data, do not guess silently. State assumptions and attach a risk plan.
| Assumption | Risk if wrong | How to validate quickly |
|---|---|---|
| Peak traffic is 5x normal load | Cache strategy underestimates burst behavior | Ask for peak/event traffic pattern |
| SEO matters for entry pages only | Wrong rendering model choice | Confirm crawl/index requirements by route |
| Auth uses short-lived tokens | Session refresh failures in long-lived tabs | Confirm token refresh and expiration policy |
## Frontend Signals You Should Always Cover
- Rendering needs: Which routes need SSR, which can stay CSR, and why.
- State coverage: idle, loading, success, empty, error, stale, partial.
- Performance constraints: p95 interaction latency, bundle limits, and network assumptions.
- Accessibility baseline: keyboard flow, focus management, and screen-reader announcements.
- Observability expectations: what metrics/logs prove correctness in production.
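The state list above can be made enforceable in code. Here is a minimal TypeScript sketch with illustrative type names (nothing in it is a standard API) that models the states as a discriminated union, so the compiler flags any branch the UI forgets to handle:

```typescript
// Illustrative sketch: model every UI state explicitly so none can be skipped.
type AsyncState<T> =
  | { kind: "idle" }                                  // nothing requested yet
  | { kind: "loading" }                               // first request in flight
  | { kind: "success"; data: T }                      // fresh data to render
  | { kind: "empty" }                                 // request succeeded, no results
  | { kind: "error"; message: string }                // request failed
  | { kind: "stale"; data: T }                        // old data shown while refreshing
  | { kind: "partial"; data: T; missing: string[] };  // some sections failed to load

// Exhaustive switch: adding a new kind is a compile error until handled here.
function describe<T>(state: AsyncState<T>): string {
  switch (state.kind) {
    case "idle": return "Waiting for input";
    case "loading": return "Loading";
    case "success": return "Loaded";
    case "empty": return "No results";
    case "error": return `Error: ${state.message}`;
    case "stale": return "Showing cached results";
    case "partial": return `Loaded with ${state.missing.length} missing section(s)`;
  }
}
```

Naming these states out loud during the requirements pass, then pointing at a union like this during architecture, signals that you treat state coverage as a design input rather than an afterthought.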
## Success Metrics You Can Commit To
| Signal | Candidate target | How to measure |
|---|---|---|
| User interaction responsiveness | p95 interaction under 150ms | RUM event timings |
| Initial content speed | LCP under 2.5s on mid-tier device | Web Vitals dashboard |
| Reliability | Error rate under 1% for core flow | Client logs and alert thresholds |
| Task completion | Conversion or completion uplift target | Product analytics funnel |
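A p95 commitment is only useful if you can compute it. A small sketch, assuming latency samples collected client-side via RUM; the nearest-rank method used here is one common choice, not the only one:

```typescript
// Sketch: compute a percentile (e.g. p95) from RUM latency samples using the
// nearest-rank method. In production you would sample and aggregate these
// server-side; this only shows how "p95 under 150ms" becomes checkable.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank index (1-based)
  return sorted[Math.min(rank, sorted.length) - 1];
}
```

With this in place, a target like "p95 interaction under 150ms" turns into an alert condition such as `percentile(interactionLatencies, 95) <= 150`.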
## Edge Cases and Failure States Checklist
- What should the user see on first load while data is pending?
- What if response is successful but empty?
- What if network fails, times out, or partially succeeds?
- What if data is stale while a refresh is in flight?
- What if permissions differ by role or environment?
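Most of these questions collapse into one decision: given the raw request signals, what should the user see right now? A hypothetical helper (the name and input shape are illustrative) that makes that decision explicit:

```typescript
type ViewState = "loading" | "empty" | "error" | "stale" | "ready";

// Illustrative sketch: map raw request signals to the state the UI renders.
function deriveViewState(opts: {
  data: unknown[] | null; // last successful payload, if any
  error: boolean;         // did the latest request fail?
  fetching: boolean;      // is a request currently in flight?
}): ViewState {
  const { data, error, fetching } = opts;
  if (data === null && error) return "error";       // failed with nothing to show
  if (data === null) return "loading";              // first load, nothing yet
  if (fetching) return "stale";                     // old data shown while refreshing
  if (data.length === 0) return "empty";            // successful but empty response
  return "ready";
}
```

Walking the interviewer through a function like this answers the first four checklist questions in one pass; permissions and role differences still need their own branch upstream of rendering.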
## Common Mistakes (and Better Moves)
| Mistake | Better move |
|---|---|
| Jumping to architecture in minute one | Open with scope, constraints, and metrics first |
| Using vague words like "fast" and "scalable" | Attach numbers and observable targets |
| Ignoring non-goals | State what is explicitly out for v1 |
| Skipping failure states | Name empty/error/stale/partial states before architecture |
| No assumptions called out | Create a short assumptions + risk log |
## Worked Example: Typeahead Search (5-Minute Requirements Pass)
**Minute 0:00-1:00**
Confirm user and goal: users find relevant suggestions quickly while typing. The primary flow is query input to click/enter on a result.
**Minute 1:00-2:00**
Lock scope: suggestions, keyboard navigation, and loading/error/empty states. Out of scope: personalization and ranking ML.
**Minute 2:00-3:30**
Confirm constraints: p95 suggestion-response target, peak events, mobile network assumptions, and the accessibility baseline.
**Minute 3:30-5:00**
Declare assumptions and success metrics, then summarize: "I have scope, constraints, and measurable targets; next I will propose architecture."
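One failure state worth naming explicitly in this example is out-of-order responses: a slow response for "ca" must not overwrite fresher results for "cat". A minimal sketch, with illustrative names, of a guard that drops stale responses:

```typescript
// Illustrative sketch: tag each typeahead request with an id and only render
// the response that matches the most recently issued id.
class LatestRequestGuard {
  private latest = 0;

  // Call when a new query is issued; returns that request's id.
  next(): number {
    return ++this.latest;
  }

  // Call when a response arrives; stale responses return false and are dropped.
  isCurrent(id: number): boolean {
    return id === this.latest;
  }
}
```

In practice this pairs with debounced input and cancelling superseded requests (for example via AbortController), but the id check alone is enough to prevent stale renders.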
## Requirements Timebox: 45- vs 60-Minute Interviews
| Interview length | Requirements budget | Expected output |
|---|---|---|
| 45 minutes | 6-8 minutes | Scope box + top constraints + 2-3 metrics + risk log |
| 60 minutes | 8-10 minutes | Everything above plus clearer edge-state and trade-off framing |
## Before You Move to Architecture
- Problem statement is one sentence and user-focused.
- Must / Nice / Out is explicit.
- Critical constraints are confirmed or clearly assumed.
- At least two measurable success metrics are defined.
- Failure states are named and prioritized.
- You have a short assumptions + risk log to reference later.