Releases: VoltAgent/voltagent
@voltagent/[email protected]
Patch Changes
- #552 89f3f37 Thanks @omeraplak! - fix: improve shutdown handlers to properly stop server and clean up resources - #528

What Changed
Fixed the shutdown handler to properly stop the VoltAgent server and clean up all resources when receiving SIGINT/SIGTERM signals. This ensures the process can exit cleanly when multiple signal handlers exist from other frameworks.
The Problem (Before)
When multiple SIGINT/SIGTERM handlers existed (from frameworks like Adonis, NestJS, etc.), the VoltAgent server would remain open after shutdown, preventing the process from exiting cleanly. The previous fix only addressed the `process.exit()` issue but didn't actually stop the server.

The Solution (After)
- Server Cleanup: The shutdown handler now properly stops the server using `stopServer()`
- Telemetry Shutdown: Added telemetry/observability shutdown for complete cleanup
- Public API: Added a new `shutdown()` method for programmatic cleanup
- Resource Order: Resources are cleaned up in the correct order: server → workflows → telemetry
- Framework Compatibility: Still respects other frameworks' handlers using the `isSoleSignalHandler` check
Usage
```ts
// Programmatic shutdown (new)
const voltAgent = new VoltAgent({ agents, server });
await voltAgent.shutdown(); // Cleanly stops server, workflows, and telemetry

// Automatic cleanup on SIGINT/SIGTERM still works
// Server is now properly stopped, allowing the process to exit
```
This ensures VoltAgent plays nicely with other frameworks while properly cleaning up all resources during shutdown.
@voltagent/[email protected]
Minor Changes
- #549 63d4787 Thanks @omeraplak! - feat: ai sdk v5 ModelMessage support across Agent + Workflow; improved image/file handling and metadata preservation.

What's new
- Agent I/O: `generateText`, `streamText`, `generateObject`, `streamObject` now accept `string | UIMessage[] | ModelMessage[]` (AI SDK v5) as input. No breaking changes for existing callers.
- Conversion layer: Robust `ModelMessage → UIMessage` handling with:
  - Image support: `image` parts are mapped to UI `file` parts; URLs and `data:` URIs are preserved, raw/base64 strings become `data:<mediaType>;base64,...`.
  - File support: string data is auto-detected as URL (`http(s)://`, `data:`) or base64; binary is encoded to a data URI (see the sketch after this list).
  - Metadata: `providerOptions` on text/reasoning/image/file parts is preserved as `providerMetadata` on UI parts.
  - Step boundaries: Inserts `step-start` after tool results when followed by assistant text.
- Workflow: the `andAgent` step and `WorkflowInput` types now also accept `UIMessage[] | ModelMessage[]` in addition to `string`.
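An illustrative sketch of the string-data auto-detection rule above, assuming nothing about the package internals; the `toDataUri` helper is hypothetical and not an exported API.

```ts
// Hypothetical helper mirroring the detection rule described above;
// not the actual @voltagent/core implementation.
function toDataUri(data: string | Uint8Array, mediaType: string): string {
  if (typeof data === "string") {
    // URLs and existing data: URIs are passed through unchanged.
    if (/^https?:\/\//.test(data) || data.startsWith("data:")) return data;
    // Any other string is treated as a base64 payload.
    return `data:${mediaType};base64,${data}`;
  }
  // Binary data is base64-encoded into a data URI.
  return `data:${mediaType};base64,${Buffer.from(data).toString("base64")}`;
}

toDataUri("https://example.com/cat.jpg", "image/jpeg"); // passed through as-is
toDataUri("iVBORw0KGgo=", "image/png"); // -> "data:image/png;base64,iVBORw0KGgo="
```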
Usage examples
- Agent with AI SDK v5 ModelMessage input (multimodal)
```ts
import type { ModelMessage } from "@ai-sdk/provider-utils";

const messages: ModelMessage[] = [
  {
    role: "user",
    content: [
      { type: "image", image: "https://example.com/cat.jpg", mediaType: "image/jpeg" },
      { type: "text", text: "What's in this picture?" },
    ],
  },
];

const result = await agent.generateText(messages);
console.log(result.text);
```
- Agent with UIMessage input
```ts
import type { UIMessage } from "ai";

const uiMessages: UIMessage[] = [
  {
    id: crypto.randomUUID(),
    role: "user",
    parts: [
      { type: "file", url: "https://example.com/cat.jpg", mediaType: "image/jpeg" },
      { type: "text", text: "What's in this picture?" },
    ],
  },
];

const result = await agent.generateText(uiMessages);
```
- Provider metadata preservation (files/images)
```ts
import type { ModelMessage } from "@ai-sdk/provider-utils";

const msgs: ModelMessage[] = [
  {
    role: "assistant",
    content: [
      {
        type: "file",
        mediaType: "image/png",
        data: "https://cdn.example.com/img.png",
        providerOptions: { source: "cdn" },
      },
    ],
  },
];

// Internally preserved as providerMetadata on the UI file part
await agent.generateText(msgs);
```
- Workflow `andAgent` with `ModelMessage[]` or `UIMessage[]`
```ts
import { z } from "zod";
import type { ModelMessage } from "@ai-sdk/provider-utils";

workflow
  .andAgent(
    ({ data }) =>
      [
        {
          role: "user",
          content: [{ type: "text", text: `Hello ${data.name}` }],
        },
      ] as ModelMessage[],
    agent,
    { schema: z.object({ reply: z.string() }) }
  )
  .andThen({
    id: "extract",
    execute: async ({ data }) => data.reply,
  });
```
Notes
- No breaking changes. Existing string/UIMessage inputs continue to work.
- Multimodal inputs are passed through correctly to the model after conversion.
@voltagent/[email protected]
Patch Changes
- #546 f12f344 Thanks @omeraplak! - chore: align Zod to ^3.25.76 and fix type mismatch with AI SDK

We aligned Zod versions across packages to `^3.25.76` to match AI SDK peer ranges and avoid multiple Zod instances at runtime.

Why this matters
- Fixes TypeScript narrowing issues in workflows when consuming `@voltagent/core` from npm with a different Zod instance (e.g., `ai` packages pulling a newer Zod).
- Prevents errors like "Spread types may only be created from object types" where `data` failed to narrow because `z.ZodTypeAny` checks saw different Zod identities.
What changed
- `@voltagent/server-core`, `@voltagent/server-hono`: dependencies.zod → `^3.25.76`.
- `@voltagent/docs-mcp`, `@voltagent/core`: devDependencies.zod → `^3.25.76`.
- Examples and templates updated to use `^3.25.76` for consistency (non-publishable).
Notes for consumers
- Ensure a single Zod version is installed (consider a workspace override to pin Zod to `3.25.76`).
- This improves compatibility with `[email protected]` packages that require `zod@^3.25.76 || ^4`.
@voltagent/[email protected]
Major Changes
- a2b492e Thanks @omeraplak! - Core 1.x — AI SDK native, Memory V2, pluggable server

Breaking but simple to migrate. Key changes and copy‑paste examples below.
Full migration guide: Migration Guide
Agent: remove `llm`, use ai‑sdk model directly

Before (0.1.x):
```ts
import { Agent } from "@voltagent/core";
import { VercelAIProvider } from "@voltagent/vercel-ai";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "app",
  instructions: "Helpful",
  llm: new VercelAIProvider(),
  model: openai("gpt-4o-mini"),
});
```
After (1.x):
```ts
import { Agent } from "@voltagent/core";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "app",
  instructions: "Helpful",
  model: openai("gpt-4o-mini"), // ai-sdk native
});
```
Note: `@voltagent/[email protected]` has a peer dependency on `ai@^5`. Install `ai` and a provider like `@ai-sdk/openai`.

Memory V2: use `Memory({ storage: <Adapter> })`

Before (0.1.x):
```ts
import { LibSQLStorage } from "@voltagent/libsql";

const agent = new Agent({
  // ...
  memory: new LibSQLStorage({ url: "file:./.voltagent/memory.db" }),
});
```
After (1.x):
```ts
import { Memory } from "@voltagent/core";
import { LibSQLMemoryAdapter } from "@voltagent/libsql";

const agent = new Agent({
  // ...
  memory: new Memory({
    storage: new LibSQLMemoryAdapter({ url: "file:./.voltagent/memory.db" }),
  }),
});
```
Default memory is in‑memory when omitted.
Server: moved out of core → use `@voltagent/server-hono`

Before (0.1.x):
```ts
import { VoltAgent } from "@voltagent/core";

new VoltAgent({ agents: { agent }, port: 3141, enableSwaggerUI: true });
```
After (1.x):
```ts
import { VoltAgent } from "@voltagent/core";
import { honoServer } from "@voltagent/server-hono";

new VoltAgent({
  agents: { agent },
  server: honoServer({ port: 3141, enableSwaggerUI: true }),
});
```
Abort: option renamed
```ts
// 0.1.x
await agent.generateText("...", { abortController: new AbortController() });

// 1.x
const ac = new AbortController();
await agent.generateText("...", { abortSignal: ac.signal });
```
Observability: OTel‑based, zero code required
Set keys and run:
```
VOLTAGENT_PUBLIC_KEY=pk_... VOLTAGENT_SECRET_KEY=sk_...
```
Remote export auto‑enables when keys are present. Local Console streaming remains available.
@voltagent/[email protected]
Patch Changes
- #532 dfbc2f2 Thanks @alfanzain! - fix(core): improve graceful shutdown by checking sole SIGINT/SIGTERM handler

What Changed
Fixed graceful shutdown calling `process.exit()` in a `SIGINT/SIGTERM` handler when other frameworks add their own `SIGINT/SIGTERM` handlers. The shutdown handler now detects whether VoltAgent is the only listener for the signal. The listener is also registered as a one-time listener to avoid duplicate logs when other `SIGINT/SIGTERM` handlers exist.

The Problem (Before)
Calling `process.exit(0)` directly in a `SIGINT/SIGTERM` handler, as is done in `setupShutdownHandlers`, can be problematic. It exits the process immediately and interrupts/short-circuits any subsequent user-defined process handlers or async cleanup, which is not desirable in library code. Other frameworks and dev servers add their own `SIGINT/SIGTERM` handlers that could get skipped or interrupted.

The Solution (After)
- The graceful shutdown handlers now detect whether VoltAgent is the sole listener for the signal before exiting the process
- Since the library no longer terminates the process itself, the `SIGINT/SIGTERM` handlers could receive the signal more than once and produce duplicate logs; the listeners are therefore registered as one-time listeners when other `SIGINT/SIGTERM` handlers exist
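A minimal sketch of the pattern described above, built on Node's `process.once` and `process.listenerCount`; the `setupGracefulShutdown` helper is an illustration under those assumptions, not VoltAgent's actual implementation.

```ts
// Illustrative sketch only; simplified from the behavior described above.
function setupGracefulShutdown(cleanup: () => Promise<void>) {
  const signals: NodeJS.Signals[] = ["SIGINT", "SIGTERM"];

  for (const signal of signals) {
    // `once` ensures the handler (and its logs) run at most one time per signal.
    process.once(signal, async () => {
      await cleanup();

      // `once` listeners are removed before they run, so a count of zero means
      // VoltAgent was the sole handler; only then is it safe to exit the process.
      const isSoleSignalHandler = process.listenerCount(signal) === 0;
      if (isSoleSignalHandler) {
        process.exit(0);
      }
    });
  }
}

// Usage with the shutdown() method added in @voltagent/[email protected]:
// setupGracefulShutdown(() => voltAgent.shutdown());
```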
@voltagent/[email protected]
Patch Changes
- #522 cba72d0 Thanks @omeraplak! - fix: sub-agent stream error handling and propagation - #521

What Changed
Fixed a critical issue where sub-agent stream errors were incorrectly reported as successful operations with empty responses. Supervisors now properly detect and handle sub-agent failures with configurable error handling behavior.
The Problem (Before)
When a sub-agent's `streamText` encountered an error event:

- ❌ Supervisor received `status: "success"` with an empty response
- ❌ No way to distinguish between empty success and failure
- ❌ Error details were lost, making debugging difficult
- ❌ Supervisors would continue as if the operation succeeded
```ts
// Before: Sub-agent fails but supervisor doesn't know
const result = await subAgentManager.handoffTask({
  task: "Process data",
  targetAgent: failingAgent,
});
// result.status === "success"  (WRONG!)
// result.result === ""         (No error info)
```
The Solution (After)
Sub-agent errors are now properly detected and reported:
- ✅ Stream errors return `status: "error"` with error details
- ✅ Error messages included in responses (configurable)
- ✅ Partial content preserved when errors occur after text generation
- ✅ New configuration options for flexible error handling
```ts
// After: Proper error detection and handling
const result = await subAgentManager.handoffTask({
  task: "Process data",
  targetAgent: failingAgent,
});
// result.status === "error"  (CORRECT!)
// result.result === "Error in FailingAgent: Stream processing failed"
// result.error === Error object with full details
```
New Configuration Options
Added `SupervisorConfig` options for customizable error handling:

```ts
const supervisor = new Agent({
  name: "Supervisor",
  subAgents: [agent1, agent2],
  supervisorConfig: {
    // Throw exceptions on stream errors instead of returning error results
    throwOnStreamError: false, // default: false
    // Include error messages in empty responses
    includeErrorInEmptyResponse: true, // default: true
    // Custom guidelines for error handling
    customGuidelines: ["When a sub-agent fails, provide alternative solutions"],
  },
});
```
@voltagent/[email protected]
Patch Changes
- #515 f87aa97 Thanks @omeraplak! - fix: properly separate subagent text streams in UI message conversion - #508

- Added agent transition detection in the `toUIMessageStream` adapter
- Text streams from different agents (subagents and supervisor) now have separate text part IDs
- Each agent transition ends the current text stream and starts a new one with a unique ID
- Emit `data-subagent` metadata events when switching to a subagent
- Reset subagent tracking on stream flush for clean state
This fix ensures that text-delta events from different agents can be distinguished in the UI, preventing text from multiple agents being merged into a single text part. The fullStream now properly separates each agent's response with unique IDs (e.g., id:"1" for MathAssistant, id:"2" for DateTimeAssistant, id:"3" for Supervisor).
Fixes the issue where parallel subagent execution would result in mixed text streams that couldn't be distinguished by source agent.
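A rough sketch of the transition-detection idea, assuming a simplified stream-part shape with a `subAgentName` field; this is not the adapter's actual code, only an illustration of the separation logic it describes.

```ts
// Simplified illustration: give each agent's text its own part ID and emit a
// data-subagent chunk on every agent switch.
type TextDelta = { type: "text-delta"; text: string; subAgentName?: string };

function* separateByAgent(parts: Iterable<TextDelta>) {
  let currentAgent: string | undefined;
  let textId = 0;

  for (const part of parts) {
    const agent = part.subAgentName ?? "Supervisor";
    if (agent !== currentAgent) {
      // Agent changed: announce the subagent and start a new text part.
      currentAgent = agent;
      textId += 1;
      yield { type: "data-subagent", data: { name: agent } };
      yield { type: "text-start", id: String(textId) };
    }
    yield { type: "text-delta", id: String(textId), delta: part.text };
  }
}

const chunks = [
  ...separateByAgent([
    { type: "text-delta", text: "2 + 2 = 4", subAgentName: "MathAssistant" },
    { type: "text-delta", text: "Here is the result." },
  ]),
];
// MathAssistant's delta gets id "1", the supervisor's delta gets id "2".
console.log(chunks);
```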
[email protected]
Patch Changes
- #462 23ecea4 Thanks @omeraplak! - Update Zod to v3.25.0 for compatibility with Vercel AI@5
- Updated Zod dependency to ^3.25.0 across all packages
- Maintained compatibility with [email protected]
- Fixed TypeScript declaration build hanging issue
- Resolved circular dependency issues in the build process
@voltagent/[email protected]
Patch Changes
- #462 23ecea4 Thanks @omeraplak! - Update Zod to v3.25.0 for compatibility with Vercel AI@5
- Updated Zod dependency to ^3.25.0 across all packages
- Maintained compatibility with [email protected]
- Fixed TypeScript declaration build hanging issue
- Resolved circular dependency issues in the build process
- Updated dependencies [23ecea4, 23ecea4]:
  - @voltagent/[email protected]
@voltagent/[email protected]
Major Changes
- #462 23ecea4 Thanks @omeraplak! - feat!: Migrate from AI SDK v4 to v5 with new adapter pattern

Breaking Changes
- Removed all v4 exports and utilities:
  - `convertToUIMessages()` - Message conversion utility
  - `toDataStream()`, `mergeIntoDataStream()`, `formatDataStreamPart()` - Data stream utilities
  - `filterUIMessageParts` (alias: `rejectUIMessageParts`) - Message filtering
  - `isSubAgent` (alias: `isSubAgentStreamPart`) - Guard utilities
  - Type exports: `UIMessage`, `UIMessagePart`, `ToolInvocationUIPart`, `DataStream`, `DataStreamString`, `DataStreamOptions`
- Upgraded from AI SDK v4 (^4.3.16) to v5 (^5.0.8)
New Features
- New adapter pattern following LangChain/LlamaIndex conventions:
  - `toUIMessageStream(stream, callbacks?)` - Converts VoltAgent's `StreamPart` to AI SDK v5's `UIMessageChunk`
  - `toDataStreamResponse(stream, options?)` - Creates HTTP Response for Next.js API routes
  - `StreamCallbacks` type for stream event handling
- Full AI SDK v5 compatibility:
  - Proper `UIMessageChunk` streaming
  - Support for reasoning-delta chunks (start/delta/end pattern)
  - Tool call and result streaming with dynamic tool names
  - Text streaming with proper start/end events
  - SubAgent metadata via `data-subagent` chunks
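As a quick illustration of the adapter pattern, here is a minimal sketch of a Next.js route handler; the `@/lib/agent` module and the `await` on `streamText` are assumptions, while `toDataStreamResponse(result.fullStream)` matches the migration examples below.

```ts
// app/api/chat/route.ts - sketch only; agent construction is assumed elsewhere.
import { toDataStreamResponse } from "@voltagent/vercel-ui";
import { agent } from "@/lib/agent"; // hypothetical module exporting a VoltAgent Agent

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Stream the agent's response and hand VoltAgent's fullStream to the adapter,
  // which returns an HTTP Response that AI SDK v5 UI hooks can consume.
  const result = await agent.streamText(prompt);
  return toDataStreamResponse(result.fullStream);
}
```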
Migration Guide
Data Stream Usage
Before (v4):
```ts
import { toDataStream, mergeIntoDataStream } from "@voltagent/vercel-ui";

// Direct stream conversion
const stream = toDataStream(result.fullStream, {
  sendUsage: true,
  sendReasoning: false,
});

// Or merging into existing stream
mergeIntoDataStream(writer, result.fullStream);
```
After (v5):
```ts
import { toDataStreamResponse } from "@voltagent/vercel-ui";

// Direct HTTP response
return toDataStreamResponse(result.fullStream);

// Or with manual stream handling
import { toUIMessageStream } from "@voltagent/vercel-ui";

const uiStream = toUIMessageStream(result.fullStream);
```
Message Conversion
Before (v4):
```ts
import { convertToUIMessages } from "@voltagent/vercel-ui";

const uiMessages = convertToUIMessages(operationContext);
```
After (v5):
```ts
// Message conversion is now handled by AI SDK v5's built-in utilities
import { convertToModelMessages } from "ai";

const modelMessages = convertToModelMessages(messages);
```
Type Imports
Before (v4):
```ts
import type { UIMessage, UIMessagePart } from "@voltagent/vercel-ui";
```
After (v5):
```ts
import type { UIMessage, UIMessagePart } from "ai";
```
Patch Changes
- #462 23ecea4 Thanks @omeraplak! - Update Zod to v3.25.0 for compatibility with Vercel AI@5
- Updated Zod dependency to ^3.25.0 across all packages
- Maintained compatibility with [email protected]
- Fixed TypeScript declaration build hanging issue
- Resolved circular dependency issues in the build process