Releases: VoltAgent/voltagent
@voltagent/[email protected]
Patch Changes
- #492 17d73f2 Thanks @omeraplak! - feat: add addTools method and deprecate addItems for better developer experience - #487

What Changed
- Added new `addTools()` method to the Agent class for dynamically adding tools and toolkits
- Deprecated `addItems()` method in favor of the more intuitive `addTools()` naming
- Fixed the type signature to accept `Tool<any, any>` instead of `Tool<any>` to support tools with output schemas
Before
```ts
// ❌ Method didn't exist - would throw error
agent.addTools([weatherTool]);

// ❌ Type error with tools that have outputSchema
agent.addItems([weatherTool]); // Type error if weatherTool has outputSchema
```
After
```ts
// ✅ Works with new addTools method
agent.addTools([weatherTool]);

// ✅ Also supports toolkits
agent.addTools([myToolkit]);

// ✅ No type errors with outputSchema tools
const weatherTool = createTool({
  name: "getWeather",
  outputSchema: weatherOutputSchema, // Works without type errors
  // ...
});
agent.addTools([weatherTool]);
```
Migration
The `addItems()` method is deprecated but still works. Update your code to use `addTools()`:

```ts
// Old (deprecated)
agent.addItems([tool1, tool2]);

// New (recommended)
agent.addTools([tool1, tool2]);
```
This change improves developer experience by using more intuitive method naming and fixing TypeScript compatibility issues with tools that have output schemas.
@voltagent/[email protected]
Patch Changes
- #489 fc79d81 Thanks @omeraplak! - feat: add separate stream method for workflows with real-time event streaming

What Changed

Workflows now have a dedicated `.stream()` method that returns an AsyncIterable for real-time event streaming, separate from the `.run()` method. This provides better separation of concerns and improved developer experience.

New Stream Method
```ts
// Stream workflow execution with real-time events
const stream = workflow.stream(input);

// Iterate through events as they happen
for await (const event of stream) {
  console.log(`[${event.type}] ${event.from}`, event);

  if (event.type === "workflow-suspended") {
    // Resume continues the same stream
    await stream.resume({ approved: true });
  }
}

// Get final result after stream completes
const result = await stream.result;
```
Key Features
- Separate `.stream()` method: Clean API separation from `.run()`
- AsyncIterable interface: Native async iteration support
- Promise-based fields: Result, status, and usage resolve when execution completes (see the sketch below)
- Continuous streaming: Stream remains open across suspend/resume cycles (programmatic API)
- Type safety: Full TypeScript support with the `WorkflowStreamResult` type
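A minimal sketch of the promise-based fields, assuming they are exposed as `stream.result`, `stream.status`, and `stream.usage` per the list above (exact property names and shapes may differ):

```ts
// Consume events first, then await the promise-based fields.
const stream = workflow.stream(input);

for await (const event of stream) {
  console.log(`[${event.type}] ${event.from}`);
}

// These resolve once execution completes (names assumed from the feature list above).
const [result, status, usage] = await Promise.all([stream.result, stream.status, stream.usage]);
console.log({ status, usage, result });
```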
REST API Streaming
Added a Server-Sent Events (SSE) endpoint for workflow streaming:

```
POST /workflows/{id}/stream

// Returns SSE stream with real-time workflow events
// Note: Due to stateless architecture, stream closes on suspension
// Resume operations return complete results (not streamed)
```
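For reference, a hedged client-side sketch of consuming that endpoint with `fetch` and a plain SSE reader loop. The base URL and request/event payload shapes are assumptions for illustration, not part of this release note:

```ts
// Assumed base URL and body shape; adjust for your VoltAgent server.
const res = await fetch("http://localhost:3141/workflows/my-workflow/stream", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: { topic: "hello" } }),
});

const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
let buffer = "";

while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  buffer += value;

  // SSE frames are separated by blank lines; keep any trailing partial frame.
  const frames = buffer.split("\n\n");
  buffer = frames.pop() ?? "";

  for (const frame of frames) {
    const dataLine = frame.split("\n").find((line) => line.startsWith("data: "));
    if (dataLine) {
      const event = JSON.parse(dataLine.slice("data: ".length));
      console.log(`[${event.type}]`, event);
    }
  }
}
```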
Technical Details
- Stream events flow through a central `WorkflowStreamController`
- No-op stream writer for non-streaming execution
- Suspension events properly emitted to stream
- Documentation updated with streaming examples and architecture notes
- #490 3d278cf Thanks @omeraplak! - fix: InMemoryStorage timestamp field for VoltOps history display

Fixed an issue where VoltOps history wasn't displaying when using InMemoryStorage. The problem was caused by using the `updatedAt` field instead of `timestamp` when setting history entries.

The fix ensures that the `timestamp` field is properly preserved when updating history entries in InMemoryStorage, allowing VoltOps to correctly display workflow execution history.
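A hedged sketch of the kind of change involved, using a simplified history-entry update rather than the actual InMemoryStorage internals (the entry shape and helper name are illustrative):

```ts
// Illustrative shape only; not the real InMemoryStorage entry type.
interface HistoryEntry {
  id: string;
  timestamp: string; // what VoltOps reads to display history
  updatedAt?: string;
  [key: string]: unknown;
}

function updateHistoryEntry(existing: HistoryEntry, patch: Partial<HistoryEntry>): HistoryEntry {
  return {
    ...existing,
    ...patch,
    // Preserve the original timestamp instead of overwriting it via updatedAt,
    // so VoltOps can still locate and display the entry.
    timestamp: existing.timestamp,
    updatedAt: new Date().toISOString(),
  };
}
```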
@voltagent/[email protected]
Patch Changes
- #484 6a638f5 Thanks @omeraplak! - feat: add real-time stream support and usage tracking for workflows

What Changed for You
Workflows now support real-time event streaming and token usage tracking, providing complete visibility into workflow execution and resource consumption. Previously, workflows only returned final results without intermediate visibility or usage metrics.
Before - Limited Visibility
```ts
// ❌ OLD: Only final result, no streaming or usage tracking
const workflow = createWorkflowChain(config)
  .andThen({ execute: async ({ data }) => processData(data) })
  .andAgent(prompt, agent, { schema });

const result = await workflow.run(input);
// Only got final result, no intermediate events or usage info
```
After - Full Stream Support and Usage Tracking
```ts
// ✅ NEW: Real-time streaming and usage tracking
const workflow = createWorkflowChain(config)
  .andThen({
    execute: async ({ data, writer }) => {
      // Emit custom events for monitoring
      writer.write({
        type: "processing-started",
        metadata: { itemCount: data.items.length },
      });

      const processed = await processData(data);

      writer.write({
        type: "processing-complete",
        output: { processedCount: processed.length },
      });

      return processed;
    },
  })
  .andAgent(prompt, agent, { schema });

// Get both result and stream
const execution = await workflow.run(input);

// Monitor events in real-time
for await (const event of execution.stream) {
  console.log(`[${event.type}] ${event.from}:`, event);
  // Events: workflow-start, step-start, custom events, step-complete, workflow-complete
}

// Access token usage from all andAgent steps
console.log("Total tokens used:", execution.usage);
// { promptTokens: 250, completionTokens: 150, totalTokens: 400 }
```
Advanced: Agent Stream Piping
```ts
// ✅ NEW: Pipe agent's streaming output directly to workflow stream
.andThen({
  execute: async ({ data, writer }) => {
    const agent = new Agent({ /* ... */ });

    // Stream agent's response with full visibility
    const response = await agent.streamText(prompt);

    // Pipe all agent events (text-delta, tool-call, etc.) to workflow stream
    if (response.fullStream) {
      await writer.pipeFrom(response.fullStream, {
        prefix: "agent-", // Optional: prefix event types
        filter: (part) => part.type !== "finish", // Optional: filter events
      });
    }

    const result = await response.text;
    return { ...data, agentResponse: result };
  },
})
```
Key Features
1. Stream Events
Every workflow execution now includes a stream of events:
- `workflow-start` / `workflow-complete` - Workflow lifecycle
- `step-start` / `step-complete` - Step execution tracking
- Custom events via `writer.write()` - Application-specific monitoring
- Piped agent events via `writer.pipeFrom()` - Full agent visibility
2. Writer API in All Steps
The `writer` is available in all step types:

```ts
// andThen
.andThen({ execute: async ({ data, writer }) => { /* ... */ } })

// andTap (observe without modifying)
.andTap({
  execute: async ({ data, writer }) => {
    writer.write({ type: "checkpoint", metadata: { data } });
  },
})

// andWhen
.andWhen({
  condition: async ({ data, writer }) => {
    writer.write({ type: "condition-check", input: data });
    return data.shouldProcess;
  },
  execute: async ({ data, writer }) => { /* ... */ },
})
```
3. Usage Tracking
Token usage from all `andAgent` steps is automatically accumulated:

```ts
const execution = await workflow.run(input);

// Total usage across all andAgent steps
const { promptTokens, completionTokens, totalTokens } = execution.usage;

// Usage is always available (defaults to 0 if no agents used)
console.log(`Cost: ${totalTokens * 0.0001}`); // Example cost calculation
```
Why This Matters
- Real-time Monitoring: See what's happening as workflows execute
- Debugging: Track data flow through each step with custom events
- Cost Control: Monitor token usage across complex workflows
- Agent Integration: Full visibility into agent operations within workflows
- Production Ready: Stream events for logging, monitoring, and alerting
Technical Details
- Stream is always available (non-optional) for consistent API
- Events include execution context (executionId, timestamp, status)
- Writer functions are synchronous for `write()`, async for `pipeFrom()`
- Usage tracking only counts `andAgent` steps (not custom agent calls in `andThen`)
- All events flow through a central `WorkflowStreamController` for ordering
@voltagent/[email protected]
Patch Changes
- #479 8b55691 Thanks @zrosenbauer! - feat: Added `logger` to the SupabaseMemory provider and provided improved type safety for the constructor

New Features
logger
You can now pass in a `logger` to the SupabaseMemory provider and it will be used to log messages.

```ts
import { createPinoLogger } from "@voltagent/logger";

const memory = new SupabaseMemory({
  client: supabaseClient,
  logger: createPinoLogger({ name: "memory-supabase" }),
});
```
Improved type safety for the constructor
The constructor now has improved type safety for the `client` and `logger` options.

```ts
const memory = new SupabaseMemory({
  client: supabaseClient,
  supabaseUrl: "https://test.supabase.co", // this will show a TypeScript error
  supabaseKey: "test-key",
});
```
The `client` option also checks that the `client` is an instance of `SupabaseClient`:

```ts
const memory = new SupabaseMemory({
  client: aNonSupabaseClient, // this will show a TypeScript error AND throw an error at runtime
});
```
Internal Changes
- Cleaned up and reorganized the SupabaseMemory class
- Renamed files to be more descriptive and moved them out of the `index.ts` file
- Added improved mocking to the test implementation for the SupabaseClient
- Removed all `console.*` statements and added a `biome` lint rule to prevent them from being added back
@voltagent/[email protected]
Patch Changes
- #481 2fd8bb4 Thanks @omeraplak! - feat: add configurable subagent event forwarding for enhanced stream control

What Changed for You

You can now control which events from subagents are forwarded to the parent stream, providing fine-grained control over stream verbosity and performance. Previously, only `tool-call` and `tool-result` events were forwarded, with no way to customize this behavior.

Before - Fixed Event Forwarding
```ts
// ❌ OLD: Only tool-call and tool-result events were forwarded (hardcoded)
const supervisor = new Agent({
  name: "Supervisor",
  subAgents: [writerAgent, editorAgent],
  // No way to change which events were forwarded
});

const result = await supervisor.streamText("Create content");

// Stream only contained tool-call and tool-result from subagents
for await (const event of result.fullStream) {
  console.log("Event", event);
}
```
After - Full Control Over Event Forwarding
```ts
// ✅ NEW: Configure exactly which events to forward
const supervisor = new Agent({
  name: "Supervisor",
  subAgents: [writerAgent, editorAgent],
  supervisorConfig: {
    fullStreamEventForwarding: {
      // Choose which event types to forward (default: ['tool-call', 'tool-result'])
      types: ["tool-call", "tool-result", "text-delta"],
      // Control tool name prefixing (default: true)
      addSubAgentPrefix: true, // "WriterAgent: search_tool" vs "search_tool"
    },
  },
});

// Stream only contains configured event types from subagents
const result = await supervisor.streamText("Create content");

// Filter subagent events in your application
for await (const event of result.fullStream) {
  if (event.subAgentId && event.subAgentName) {
    console.log(`Event from ${event.subAgentName}: ${event.type}`);
  }
}
```
Configuration Options
```ts
// Minimal - Only tool events (default)
fullStreamEventForwarding: {
  types: ['tool-call', 'tool-result'],
}

// Verbose - See what subagents are saying and doing
fullStreamEventForwarding: {
  types: ['tool-call', 'tool-result', 'text-delta'],
}

// Full visibility - All events for debugging
fullStreamEventForwarding: {
  types: ['tool-call', 'tool-result', 'text-delta', 'reasoning', 'source', 'error', 'finish'],
}

// Clean tool names without agent prefix
fullStreamEventForwarding: {
  types: ['tool-call', 'tool-result'],
  addSubAgentPrefix: false,
}
```
Why This Matters
- Better Performance: Reduce stream overhead by forwarding only necessary events
- Cleaner Streams: Focus on meaningful actions rather than all intermediate steps
- Type Safety: Use `StreamEventType[]` for compile-time validation of event types (see the sketch below)
- Backward Compatible: Existing code continues to work with sensible defaults
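A minimal sketch of using that type for a reusable, compile-checked configuration; the export path of `StreamEventType` from `@voltagent/core` is assumed here:

```ts
import type { StreamEventType } from "@voltagent/core"; // export path assumed

// Typos or unknown event names fail at compile time.
const forwardedTypes: StreamEventType[] = ["tool-call", "tool-result", "text-delta"];

const supervisorConfig = {
  fullStreamEventForwarding: {
    types: forwardedTypes,
    addSubAgentPrefix: false,
  },
};
```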
Technical Details
- Default configuration: `['tool-call', 'tool-result']` with `addSubAgentPrefix: true`
- Events from subagents include `subAgentId` and `subAgentName` properties for filtering
- Configuration available through `supervisorConfig.fullStreamEventForwarding`
- Utilizes the `streamEventForwarder` utility for consistent event filtering
@voltagent/[email protected]
Patch Changes
- #475 9b4ea38 Thanks @zrosenbauer! - fix: Remove other potentially problematic `JSON.stringify` usages
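For context, a hedged illustration of why a bare `JSON.stringify` call can be problematic; the specific call sites removed in this change are not listed here, and the wrapper below is illustrative rather than the library's actual fix:

```ts
// Circular references make JSON.stringify throw:
const node: { name: string; self?: unknown } = { name: "root" };
node.self = node;
// JSON.stringify(node); // ❌ TypeError: Converting circular structure to JSON

// BigInt values throw as well:
// JSON.stringify({ id: 10n }); // ❌ TypeError: Do not know how to serialize a BigInt

// A defensive wrapper (illustrative only):
function safeStringify(value: unknown): string {
  try {
    return JSON.stringify(value) ?? String(value);
  } catch {
    return String(value);
  }
}
```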
@voltagent/[email protected]
Patch Changes
- #475 9b4ea38 Thanks @zrosenbauer! - fix: Remove other potentially problematic `JSON.stringify` usages
@voltagent/[email protected]
Patch Changes
- #466 730232e Thanks @omeraplak! - fix: memory messages now return parsed objects instead of JSON strings

What Changed for You
Memory messages that contain structured content (like tool calls or multi-part messages) now return as parsed objects instead of JSON strings. This is a breaking change if you were manually parsing these messages.
Before - You Had to Parse JSON Manually
```ts
// ❌ OLD BEHAVIOR: Content came as JSON string
const messages = await memory.getMessages({ conversationId: "123" });

// What you got from memory:
console.log(messages[0]);
// {
//   role: "user",
//   content: '[{"type":"text","text":"Hello"},{"type":"image","image":"data:..."}]', // STRING!
//   type: "text"
// }

// You had to manually parse the JSON string:
const content = JSON.parse(messages[0].content); // Parse required!
console.log(content);
// [
//   { type: "text", text: "Hello" },
//   { type: "image", image: "data:..." }
// ]

// Tool calls were also JSON strings:
console.log(messages[1].content);
// '[{"type":"tool-call","toolCallId":"123","toolName":"weather"}]' // STRING!
```
After - You Get Parsed Objects Automatically
```ts
// ✅ NEW BEHAVIOR: Content comes as proper objects
const messages = await memory.getMessages({ conversationId: "123" });

// What you get from memory NOW:
console.log(messages[0]);
// {
//   role: "user",
//   content: [
//     { type: "text", text: "Hello" },       // OBJECT!
//     { type: "image", image: "data:..." }   // OBJECT!
//   ],
//   type: "text"
// }

// Direct access - no JSON.parse needed!
const content = messages[0].content; // Already parsed!
console.log(content[0].text); // "Hello"

// Tool calls are proper objects:
console.log(messages[1].content);
// [
//   { type: "tool-call", toolCallId: "123", toolName: "weather" } // OBJECT!
// ]
```
Breaking Change Warning
⚠️ If your code was doing this:
```ts
// This will now FAIL because content is already parsed
const parsed = JSON.parse(msg.content); // ❌ Error: not a string!
```
Change it to:
```ts
// Just use the content directly
const content = msg.content; // ✅ Already an object/array
```
What Gets Auto-Parsed
- String content → Stays as string ✅
- Structured content (arrays) → Auto-parsed to objects ✅
- Tool calls → Auto-parsed to objects ✅
- Tool results → Auto-parsed to objects ✅
- Metadata fields → Auto-parsed to objects ✅
Why This Matters
- No more JSON.parse errors in your application
- Type-safe access to structured content
- Cleaner code without try/catch blocks
- Consistent behavior with how agents handle messages
Migration Guide
- Remove JSON.parse calls for message content
- Remove try/catch blocks around parsing
- Use content directly as objects/arrays
Your memory messages now "just work" without manual parsing!
@voltagent/[email protected]
Patch Changes
- #466 730232e Thanks @omeraplak! - fix: memory messages now return parsed objects instead of JSON strings

What Changed for You
Memory messages that contain structured content (like tool calls or multi-part messages) now return as parsed objects instead of JSON strings. This is a breaking change if you were manually parsing these messages.
Before - You Had to Parse JSON Manually
```ts
// ❌ OLD BEHAVIOR: Content came as JSON string
const messages = await memory.getMessages({ conversationId: "123" });

// What you got from memory:
console.log(messages[0]);
// {
//   role: "user",
//   content: '[{"type":"text","text":"Hello"},{"type":"image","image":"data:..."}]', // STRING!
//   type: "text"
// }

// You had to manually parse the JSON string:
const content = JSON.parse(messages[0].content); // Parse required!
console.log(content);
// [
//   { type: "text", text: "Hello" },
//   { type: "image", image: "data:..." }
// ]

// Tool calls were also JSON strings:
console.log(messages[1].content);
// '[{"type":"tool-call","toolCallId":"123","toolName":"weather"}]' // STRING!
```
After - You Get Parsed Objects Automatically
```ts
// ✅ NEW BEHAVIOR: Content comes as proper objects
const messages = await memory.getMessages({ conversationId: "123" });

// What you get from memory NOW:
console.log(messages[0]);
// {
//   role: "user",
//   content: [
//     { type: "text", text: "Hello" },       // OBJECT!
//     { type: "image", image: "data:..." }   // OBJECT!
//   ],
//   type: "text"
// }

// Direct access - no JSON.parse needed!
const content = messages[0].content; // Already parsed!
console.log(content[0].text); // "Hello"

// Tool calls are proper objects:
console.log(messages[1].content);
// [
//   { type: "tool-call", toolCallId: "123", toolName: "weather" } // OBJECT!
// ]
```
Breaking Change Warning
⚠️ If your code was doing this:
```ts
// This will now FAIL because content is already parsed
const parsed = JSON.parse(msg.content); // ❌ Error: not a string!
```
Change it to:
```ts
// Just use the content directly
const content = msg.content; // ✅ Already an object/array
```
What Gets Auto-Parsed
- String content → Stays as string ✅
- Structured content (arrays) → Auto-parsed to objects ✅
- Tool calls → Auto-parsed to objects ✅
- Tool results → Auto-parsed to objects ✅
- Metadata fields → Auto-parsed to objects ✅
Why This Matters
- No more JSON.parse errors in your application
- Type-safe access to structured content
- Cleaner code without try/catch blocks
- Consistent behavior with how agents handle messages
Migration Guide
- Remove JSON.parse calls for message content
- Remove try/catch blocks around parsing
- Use content directly as objects/arrays
Your memory messages now "just work" without manual parsing!
@voltagent/[email protected]
Patch Changes
- #466 730232e Thanks @omeraplak! - feat: add message helper utilities to simplify working with complex message content

What Changed for You
Working with message content (which can be either a string or an array of content parts) used to require complex if/else blocks. Now you have simple helper functions that handle all the complexity.
Before - Your Old Code (Complex)
```ts
// Adding timestamps to messages - 30+ lines of code
const enhancedMessages = messages.map((msg) => {
  if (msg.role === "user") {
    const timestamp = new Date().toLocaleTimeString();

    // Handle string content
    if (typeof msg.content === "string") {
      return {
        ...msg,
        content: `[${timestamp}] ${msg.content}`,
      };
    }

    // Handle structured content (array of content parts)
    if (Array.isArray(msg.content)) {
      return {
        ...msg,
        content: msg.content.map((part) => {
          if (part.type === "text") {
            return {
              ...part,
              text: `[${timestamp}] ${part.text}`,
            };
          }
          return part;
        }),
      };
    }
  }
  return msg;
});

// Extracting text from content - another 15+ lines
function getText(content) {
  if (typeof content === "string") {
    return content;
  }
  if (Array.isArray(content)) {
    return content
      .filter((part) => part.type === "text")
      .map((part) => part.text)
      .join("");
  }
  return "";
}
```
After - Your New Code (Simple)
```ts
import { messageHelpers } from "@voltagent/core";

// Adding timestamps - 1 line!
const enhancedMessages = messages.map((msg) =>
  messageHelpers.addTimestampToMessage(msg, timestamp)
);

// Extracting text - 1 line!
const text = messageHelpers.extractText(content);

// Check if has images - 1 line!
if (messageHelpers.hasImagePart(content)) {
  // Handle image content
}

// Build complex content - fluent API
const content = new messageHelpers.MessageContentBuilder()
  .addText("Here's an image:")
  .addImage("screenshot.png")
  .addText("And a file:")
  .addFile("document.pdf")
  .build();
```
Real Use Case in Hooks
```ts
import { Agent, messageHelpers } from "@voltagent/core";

const agent = new Agent({
  name: "Assistant",
  hooks: {
    onPrepareMessages: async ({ messages }) => {
      // Before: 30+ lines of complex if/else
      // After: 2 lines!
      const timestamp = new Date().toLocaleTimeString();
      return {
        messages: messages.map((msg) => messageHelpers.addTimestampToMessage(msg, timestamp)),
      };
    },
  },
});
```
What You Get
- No more if/else blocks for content type checking
- Type-safe operations with TypeScript support
- 30+ lines → 1 line for common operations
- Works everywhere: hooks, tools, custom logic
Available Helpers
```ts
import { messageHelpers } from "@voltagent/core";

// Check content type
messageHelpers.isTextContent(content); // Is it a string?
messageHelpers.hasImagePart(content); // Has images?

// Extract content
messageHelpers.extractText(content); // Get all text
messageHelpers.extractImageParts(content); // Get all images

// Transform content
messageHelpers.transformTextContent(content, (text) => text.toUpperCase());
messageHelpers.addTimestampToMessage(message, "10:30:00");

// Build content
new messageHelpers.MessageContentBuilder().addText("Hello").addImage("world.png").build();
```
Your message handling code just got 90% simpler!
- #466 730232e Thanks @omeraplak! - feat: add onPrepareMessages hook - transform messages before they reach the LLM

What Changed for You
You can now modify, filter, or enhance messages before they're sent to the LLM. Previously impossible without forking the framework.
Before - What You Couldn't Do
```ts
// ❌ No way to:
// - Add timestamps to messages
// - Filter sensitive data (SSN, credit cards)
// - Add user context to messages
// - Remove duplicate messages
// - Inject system prompts dynamically

const agent = new Agent({
  name: "Assistant",
  // Messages went straight to LLM - no control!
});
```
After - What You Can Do Now
```ts
import { Agent, messageHelpers } from "@voltagent/core";

const agent = new Agent({
  name: "Assistant",
  hooks: {
    // ✅ NEW: Intercept and transform messages!
    onPrepareMessages: async ({ messages, context }) => {
      // Add timestamps
      const timestamp = new Date().toLocaleTimeString();
      const enhanced = messages.map((msg) =>
        messageHelpers.addTimestampToMessage(msg, timestamp)
      );
      return { messages: enhanced };
    },
  },
});

// Your message: "What time is it?"
// LLM receives: "[14:30:45] What time is it?"
```
When It Runs
```ts
// 1. User sends message
await agent.generateText("Hello");

// 2. Memory loads previous messages
//    [previous messages...]

// 3. ✨ onPrepareMessages runs HERE
//    You can transform messages

// 4. Messages sent to LLM
//    [your transformed messages]
```
What You Need to Know
- Runs on every LLM call: generateText, streamText, generateObject, streamObject
- Gets all messages: Including system prompt and memory messages
- Return transformed messages: Or return nothing to keep original
- Access to context: userContext, operationId, agent reference
Your app just got smarter without changing any existing code!
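Another hedged sketch, this time for the sensitive-data use case mentioned above. The regex patterns are illustrative, not a built-in redaction feature; `messageHelpers.transformTextContent` is used as shown in the helper list earlier:

```ts
import { Agent, messageHelpers } from "@voltagent/core";

// Illustrative patterns only; tune them for your own data.
const SSN = /\b\d{3}-\d{2}-\d{4}\b/g;
const CARD = /\b(?:\d[ -]?){13,16}\b/g;

const redact = (text: string) =>
  text.replace(SSN, "[REDACTED-SSN]").replace(CARD, "[REDACTED-CARD]");

const agent = new Agent({
  name: "Assistant",
  hooks: {
    onPrepareMessages: async ({ messages }) => ({
      // Redact sensitive text in every message before it reaches the LLM.
      messages: messages.map((msg) => ({
        ...msg,
        content: messageHelpers.transformTextContent(msg.content, redact),
      })),
    }),
  },
});
```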
- #466 730232e Thanks @omeraplak! - fix: memory messages now return parsed objects instead of JSON strings

What Changed for You
Memory messages that contain structured content (like tool calls or multi-part messages) now return as parsed objects instead of JSON strings. This is a breaking change if you were manually parsing these messages.
Before - You Had to Parse JSON Manually
```ts
// ❌ OLD BEHAVIOR: Content came as JSON string
const messages = await memory.getMessages({ conversationId: "123" });

// What you got from memory:
console.log(messages[0]);
// {
//   role: "user",
//   content: '[{"type":"text","text":"Hello"},{"type":"image","image":"data:..."}]', // STRING!
//   type: "text"
// }

// You had to manually parse the JSON string:
const content = JSON.parse(messages[0].content); // Parse required!
console.log(content);
// [
//   { type: "text", text: "Hello" },
//   { type: "image", image: "data:..." }
// ]

// Tool calls were also JSON strings:
console.log(messages[1].content);
// '[{"type":"tool-call","toolCallId":"123","toolName":"weather"}]' // STRING!
```
After - You Get Parsed Objects Automatically
```ts
// ✅ NEW BEHAVIOR: Content comes as proper objects
const messages = await memory.getMessages({ conversationId: "123" });

// What you get from memory NOW:
console.log(messages[0]);
// {
//   role: "user",
//   content: [
//     { type: "text", text: "Hello" },       // OBJECT!
//     { type: "image", image: "data:..." }   // OBJECT!
//   ],
//   type: "text"
// }

// Direct access - no JSON.parse needed!
const content = messages[0].content; // Already parsed!
console.log(content[0].text); // "Hello"

// Tool calls are proper objects:
console.log(messages[1].content);
// [
//   { type: "tool-call", toolCallId: "123", toolName: "weather" } // OBJECT!
// ]
```
Breaking Change Warning
⚠️ If your code was doing this:
```ts
// This will now FAIL because content is already parsed
const parsed = JSON.parse(msg.content); // ❌ Error: not a string!
```
Change it to:
```ts
// Just use the content directly
const content = msg.content; // ✅ Already an object/array
```
What Gets Auto-Parsed
- String content → Stays as string ✅
- Structured content (arrays) → Auto-parsed to objects ✅
- Tool calls → Auto-parsed to objects ✅
- Tool results → Auto-parsed to objects ✅
- Metadata fields → Auto-parsed to objects ✅
Why This Matters
- No more JSON.parse errors in your application
- Type-safe access to structured content
- Cleaner code without try/catch blocks
- Consistent behavior with how agents handle messages
Migration Guide
- Remove JSON.parse calls for message content
- Remove try/catch blocks around parsing
- Use content directly as objects/arrays
Your memory messages now "just work" without manual parsing!