In SDK 6, `generateObject` and `streamObject` are deprecated. The replacement is passing an `output` option to `generateText` / `streamText` using the `Output` helper.
Install Zod if you haven't already:
```bash
pnpm add zod
```
Use `generateText` when you want the complete structured result at once:

```typescript
import { generateText, Output } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const schema = z.object({
  recipe: z.object({
    name: z.string().describe("The name of the recipe"),
    ingredients: z.array(
      z.object({
        name: z.string().describe("Ingredient name"),
        amount: z.string().describe("Amount needed"),
      })
    ),
    steps: z.array(z.string().describe("Step-by-step instructions")),
  }),
});

const { output: recipe } = await generateText({
  model: openai("gpt-4o"),
  output: Output.object({ schema }),
  prompt: "Generate a recipe for a vegan chocolate cake.",
});

console.log(recipe.recipe.name); // → "Vegan Chocolate Cake"
console.log(recipe.recipe.ingredients); // → [{ name: "flour", amount: "200g" }, ...]
```
Use `streamText` to observe the object grow progressively — great for long outputs or when you want to show partial results in a UI. Note that `streamText` returns its result synchronously, so there is no `await` on the call itself:

```typescript
import { streamText, Output } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const schema = z.object({
  recipe: z.object({
    name: z.string().describe("The name of the recipe"),
    ingredients: z.array(
      z.object({
        name: z.string().describe("Ingredient name"),
        amount: z.string().describe("Amount needed"),
      })
    ),
    steps: z.array(z.string().describe("Step instructions")),
  }),
});

const { partialOutputStream } = streamText({
  model: openai("gpt-4o"),
  output: Output.object({ schema }),
  prompt: "Generate a recipe for a vegan chocolate cake.",
});

for await (const partialObject of partialOutputStream) {
  // partialObject grows as more tokens arrive
  console.log(partialObject);
}
```
To generate a list of items, wrap the item schema with `z.array(...)` inside the output object:

```typescript
const ProductSchema = z.object({
  productName: z.string(),
  price: z.number().positive(),
  rating: z.enum(["★", "★★", "★★★", "★★★★", "★★★★★"]),
});

const { output: recommendations } = await generateText({
  model: openai("gpt-4o"),
  output: Output.object({ schema: z.object({ items: z.array(ProductSchema) }) }),
  prompt: "Give me three home office product recommendations.",
});

console.log(recommendations.items);
// → [{ productName: "Ergonomic Chair", price: 199, rating: "★★★★" }, ...]
```
Use `z.enum` to classify input into a fixed set of values:

```typescript
const SentimentSchema = z.object({
  sentiment: z.enum(["positive", "neutral", "negative"]),
  confidence: z.number().min(0).max(1),
});

const { output } = await generateText({
  model: openai("gpt-4o"),
  output: Output.object({ schema: SentimentSchema }),
  prompt: 'Classify the sentiment: "I absolutely love this product!"',
});

console.log(output.sentiment); // → "positive"
console.log(output.confidence); // → 0.97
```
**generateText vs streamText**

| | generateText | streamText |
|---|---|---|
| When | Small–medium outputs, batch jobs | Large objects, real-time UI updates |
| Result | Fully parsed object on completion | `partialOutputStream` of a growing object |
| Latency | Waits until fully generated | First tokens arrive immediately |
If the model's output does not match the schema, a SchemaValidationError is thrown with a detailed path to the offending field. When generation succeeds, `output` is correctly typed at both compile time (via TypeScript inference from the schema) and runtime (via Zod validation).