## streamText

Use `streamText` when you want to display output progressively as the model generates it — great for chat UIs and long responses.
```ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const model = openai("gpt-4o-mini");

const result = streamText({
  model,
  prompt: "Explain quantum entanglement in simple terms.",
});

for await (const textDelta of result.textStream) {
  process.stdout.write(textDelta);
}
```
`textDelta` is a plain string — each iteration gives you the next chunk of tokens.
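Because `result.textStream` is just an async iterable of strings, the consumption pattern is plain TypeScript. A minimal sketch of accumulating the deltas into a full string — `fakeStream` here is a hypothetical generator standing in for `result.textStream`, so the example runs without a model call:

```typescript
// Hypothetical stand-in for result.textStream: any async iterable of strings works.
async function* fakeStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world";
  yield "!";
}

// Accumulate deltas into the full text as they arrive.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const textDelta of stream) {
    text += textDelta; // each delta is just the next slice of the output
  }
  return text;
}

const full = await collect(fakeStream());
console.log(full); // "Hello, world!"
```

The same `collect` helper would work unchanged on a real `result.textStream`.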
```ts
const result = streamText({
  model,
  system: "You are a helpful assistant.",
  messages: [
    { role: "user", content: "Tell me a short story about a robot." },
  ],
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
```ts
const result = streamText({ model, prompt: "Summarise the French Revolution." });

// Stream while displaying
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Access the full text once done
const fullText = await result.text;
console.log("\n\nFull:", fullText);
```
| Property | Description |
|---|---|
| `result.textStream` | Async iterable of text chunks |
| `await result.text` | Full text once the stream ends |
| `await result.usage` | Token usage |
| `await result.finishReason` | Why generation ended |
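The metadata properties are promises that only settle once the stream has been fully consumed. A sketch of that pattern — `mockResult` is a hypothetical stand-in mirroring the shape of a `streamText` result (the token counts and finish reason are made up), so the example runs offline:

```typescript
// Hypothetical mock with the same shape as a streamText result:
// the metadata promises resolve only after the text stream is exhausted.
function mockResult() {
  let done!: () => void;
  const finished = new Promise<void>((resolve) => (done = resolve));

  async function* textStream() {
    yield "All ";
    yield "done.";
    done(); // stream exhausted: metadata becomes available
  }

  return {
    textStream: textStream(),
    usage: finished.then(() => ({ promptTokens: 5, completionTokens: 2, totalTokens: 7 })),
    finishReason: finished.then(() => "stop" as const),
  };
}

const result = mockResult();

// Drain the stream first...
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// ...then the metadata promises resolve.
console.log("\nfinishReason:", await result.finishReason); // "stop"
console.log("usage:", await result.usage);
```

Awaiting `usage` or `finishReason` before iterating the stream simply waits until generation finishes, so the order above is a convention, not a requirement.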