How should a front-end handle streaming data from an AI model?
This question checks whether you can implement streaming responses (SSE, WebSocket, or fetch + ReadableStream), update the UI incrementally as chunks arrive, and handle cancellation and errors cleanly.
The Core Idea
Streaming means you don't wait for the full response. You open a stream (SSE, WebSocket, or fetch + ReadableStream), append chunks to the UI as they arrive, and stop cleanly when the stream ends or errors.
| Step | What happens | Why it matters |
|---|---|---|
| Open stream | Connect via SSE/WebSocket or `fetch` stream | Start receiving tokens immediately |
| Append chunks | Update UI incrementally as data arrives | Low-latency UX |
| Handle errors | Show retry/state on disconnect | Resilience on flaky networks |
| Cancel/stop | `AbortController` or close socket | User can stop generation |
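If the transport is SSE, each event arrives as text with `data:`-prefixed lines. A minimal sketch of pulling payloads out of a raw SSE chunk (`parseSSEChunk` is an illustrative name, not a library API):

```javascript
// Extract "data:" payloads from a raw SSE text chunk.
// parseSSEChunk is an illustrative helper, not a standard API.
function parseSSEChunk(raw) {
  return raw
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).trim());
}

parseSSEChunk('data: Hello\n\ndata: world\n\n'); // → ['Hello', 'world']
```

In production you would typically reach for `EventSource` (or a dedicated SSE client), which also handles multi-line `data:` fields, event IDs, and automatic reconnection.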
```js
const controller = new AbortController();

try {
  const res = await fetch('/api/stream', { signal: controller.signal });
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
    render(text); // append to UI
  }
} catch (err) {
  if (err.name !== 'AbortError') showRetry(err); // AbortError just means the user cancelled
}
// controller.abort() to cancel
```
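One subtle point in the loop above: `decoder.decode(value, { stream: true })` buffers an incomplete multi-byte UTF-8 sequence instead of emitting a replacement character when a character is split across network chunks. A small demonstration:

```javascript
const decoder = new TextDecoder();

// 'é' is two bytes (0xC3 0xA9); a chunk boundary can fall between them.
const first = decoder.decode(new Uint8Array([0xc3]), { stream: true });
const second = decoder.decode(new Uint8Array([0xa9]), { stream: true });

// first is '' (the lone byte is buffered); second is 'é' (sequence completed)
```

Without `stream: true`, the first call would emit `\uFFFD` and the character would be corrupted in the UI.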
Open a stream, append chunks to the UI, and handle cancel + errors gracefully. The UX should feel responsive even before the full response finishes.
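When the user hits Stop, `controller.abort()` rejects the in-flight `fetch`/`read()` with an exception named `AbortError`; treating that differently from a genuine network failure keeps the UI honest. A hedged sketch (`classifyStreamError` is an illustrative name, not a standard API):

```javascript
// Distinguish deliberate cancellation from real failures.
// classifyStreamError is an illustrative helper, not a standard API.
function classifyStreamError(err) {
  return err && err.name === 'AbortError' ? 'cancelled' : 'failed';
}

const controller = new AbortController();
controller.abort(); // signal.reason becomes an AbortError exception

classifyStreamError(controller.signal.reason); // → 'cancelled'
classifyStreamError(new Error('network down')); // → 'failed'
```

In the UI this maps to two different states: "Stopped" (no error banner) versus "Something went wrong — Retry".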