AI SDK 6

streamText

Stream text responses token-by-token using AI SDK 6.

Use streamText when you want to display output progressively as the model generates it — great for chat UIs and long responses.

Basic Usage

import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const model = openai("gpt-4o-mini");

const result = streamText({
  model,
  prompt: "Explain quantum entanglement in simple terms.",
});

for await (const textDelta of result.textStream) {
  process.stdout.write(textDelta);
}

textDelta is a plain string — each iteration gives you the next chunk of tokens.
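Because the stream is a standard async iterable of strings, you can display and accumulate chunks in a single pass. A minimal sketch, using a hypothetical mock generator in place of a live result.textStream (the real stream yields string deltas the same way):

```typescript
// Mock stand-in for result.textStream; the real stream yields string deltas.
async function* mockTextStream(): AsyncGenerator<string> {
  yield "Quantum ";
  yield "entanglement ";
  yield "links particles.";
}

// Display each delta as it arrives while also keeping the full text.
async function streamAndCollect(stream: AsyncIterable<string>): Promise<string> {
  let accumulated = "";
  for await (const textDelta of stream) {
    process.stdout.write(textDelta); // show progressively
    accumulated += textDelta;        // accumulate for later use
  }
  return accumulated;
}
```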

With System Prompt & Messages

const result = streamText({
  model,
  system: "You are a helpful assistant.",
  messages: [
    { role: "user", content: "Tell me a short story about a robot." },
  ],
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

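The messages array can also carry prior turns, so follow-up requests keep conversational context. A sketch (the roles follow the AI SDK's message format; the content strings are placeholders):

```typescript
// Prior turns are sent back on each request so the model sees the context.
const messages = [
  { role: "user" as const, content: "Tell me a short story about a robot." },
  { role: "assistant" as const, content: "Unit 7 woke up in a junkyard..." },
  { role: "user" as const, content: "Continue the story, but make it funny." },
];

// Passed to streamText the same way as a single-turn request:
// const result = streamText({ model, system: "You are a helpful assistant.", messages });
```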
Collecting the Full Text After Streaming

const result = streamText({ model, prompt: "Summarise the French Revolution." });

// Stream while displaying
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Access the full text once done
const fullText = await result.text;
console.log("\n\nFull:", fullText);
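The pattern behind this can be sketched with a small stub: a result object that exposes a live stream alongside a promise that settles with the full text once the stream is drained. This is a simplified illustration, not the SDK's actual implementation:

```typescript
// Simplified sketch of a result exposing both a live stream and a
// full-text promise, mirroring result.textStream / result.text.
function makeStreamingResult(chunks: string[]) {
  let resolveText: (t: string) => void = () => {};
  const text = new Promise<string>((resolve) => (resolveText = resolve));

  async function* generate() {
    let full = "";
    for (const chunk of chunks) {
      full += chunk; // track everything yielded so far
      yield chunk;
    }
    resolveText(full); // settle once the stream is fully drained
  }

  return { textStream: generate(), text };
}
```

Note that this sketch only resolves text after the stream is consumed; the real SDK manages this internally, so awaiting result.text works on its own.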

Useful Result Properties

| Property                    | Description                    |
| --------------------------- | ------------------------------ |
| result.textStream           | Async iterable of text chunks  |
| await result.text           | Full text once the stream ends |
| await result.usage          | Token usage                    |
| await result.finishReason   | Why generation ended           |

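These metadata promises resolve after streaming completes. A sketch of reading them, using a stubbed result object in place of a live streamText call (the field names on usage are assumptions to check against your SDK version):

```typescript
// Stub mirroring the shape of streamText's result metadata promises.
// Real values come from the provider once the stream finishes;
// the usage field names here are illustrative.
const result = {
  usage: Promise.resolve({ inputTokens: 12, outputTokens: 96, totalTokens: 108 }),
  finishReason: Promise.resolve("stop" as const),
};

async function logMetadata() {
  const usage = await result.usage;
  const finishReason = await result.finishReason;
  console.log(`tokens: ${usage.totalTokens}, finished because: ${finishReason}`);
  return { usage, finishReason };
}
```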
When to Use

  • Chat interfaces (stream to the UI in real time)
  • Long-form content generation
  • Any scenario where latency to first token matters
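For chat interfaces, the server typically forwards the stream as an HTTP response (the AI SDK result provides helpers such as result.toTextStreamResponse() for this). The underlying idea can be sketched with the standard Response and ReadableStream APIs, with no SDK dependency:

```typescript
// Sketch of streaming text chunks as an HTTP response body —
// the same idea behind result.toTextStreamResponse().
function textChunksToResponse(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

In a real handler you would enqueue deltas as they arrive from result.textStream rather than from a pre-built array.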