The workflow runtime uses /.well-known/workflow/* endpoints internally. These MUST be public (not protected by auth middleware).
In proxy.ts, ensure this route is in the public routes list:
```typescript
const isPublicRoute = createRouteMatcher([
  // ... other routes
  "/.well-known/(.*)", // Workflow runtime endpoints - must be public
]);
```

Symptom if missing: `[embedded world] Queue operation failed: TypeError: fetch failed`, with workflow runs stuck in "pending" status.
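For context, here is a minimal sketch of how that matcher typically plugs into the middleware. This assumes Clerk (createRouteMatcher is Clerk's helper); the protect call is illustrative, not this project's exact file:

```typescript
// proxy.ts - sketch, assuming Clerk's middleware helpers
import { clerkMiddleware, createRouteMatcher } from '@clerk/nextjs/server';

const isPublicRoute = createRouteMatcher([
  '/.well-known/(.*)', // Workflow runtime endpoints - must be public
]);

export default clerkMiddleware(async (auth, req) => {
  // Anything not explicitly public requires an authenticated user
  if (!isPublicRoute(req)) {
    await auth.protect();
  }
});
```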
React Strict Mode double-invokes effects, which can abort the first stream request. Use a ref guard instead of disabling Strict Mode globally.
Pattern to prevent double-send:
```typescript
import { useCallback, useRef } from 'react';

// `sendMessage` comes from useChat (see the full wiring below)
const sendingRef = useRef(false);

const safeSendMessage = useCallback((message) => {
  if (sendingRef.current) {
    console.log('Skipping duplicate send (Strict Mode)');
    return;
  }
  sendingRef.current = true;
  sendMessage(message);
}, [sendMessage]);

// In the onFinish callback:
onFinish(data) {
  sendingRef.current = false; // Reset for next message
  if (data.isAbort) {
    console.log('Stream aborted, skipping save');
    return; // Don't save on aborted requests
  }
  // ... save chat history
}
```

Symptom without the guard: `isAbort: true` in the onFinish callback, an empty messages array, and ResponseAborted errors.
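For reference, a self-contained sketch of the guard wired into a chat component with AI SDK 5's useChat; the component name and the save step are placeholders:

```tsx
'use client';
import { useCallback, useRef } from 'react';
import { useChat } from '@ai-sdk/react';

// Illustrative component - ChatPanel and the save logic are placeholders
export function ChatPanel() {
  const sendingRef = useRef(false);

  const { sendMessage } = useChat({
    onFinish: (data) => {
      sendingRef.current = false; // Reset for the next message
      if (data.isAbort) return;   // Don't save on aborted requests
      // ... save chat history here
    },
  });

  const safeSendMessage = useCallback(
    (text: string) => {
      if (sendingRef.current) return; // Skip Strict Mode duplicate
      sendingRef.current = true;
      sendMessage({ text });
    },
    [sendMessage],
  );

  // Wire safeSendMessage into the input's submit handler
  return null;
}
```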
These versions are tested and working together:
```json
{
  "dependencies": {
    "@ai-sdk/react": "2.0.60",
    "@workflow/ai": "4.0.1-beta.19",
    "workflow": "4.0.1-beta.19",
    "ai": "5.0.104"
  },
  "overrides": {
    "ai": "5.0.104",
    "@ai-sdk/react": "2.0.60"
  }
}
```

Symptom if mismatched: `Cannot perform ArrayBuffer.prototype.slice on a detached ArrayBuffer` errors.
Mastra packages (@mastra/*) pull in conflicting ai SDK versions. If using Vercel Workflows, remove Mastra:
```bash
npm uninstall @mastra/ai-sdk @mastra/client-js @mastra/core @mastra/evals @mastra/libsql @mastra/memory @mastra/observability @mastra/pg @mastra/react mastra
```

The instrumentation.ts file with @vercel/otel can interfere with the workflow's embedded world runtime. If you run into issues, try renaming or removing it temporarily.
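For reference, that file is usually a standard @vercel/otel registration like the sketch below (the service name is illustrative); this is what to rename or comment out when testing:

```typescript
// instrumentation.ts - typical @vercel/otel setup (service name illustrative)
import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({ serviceName: 'my-app' });
}
```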
The reference implementation uses turbopack for development:
```json
{
  "scripts": {
    "dev": "next dev --turbopack"
  }
}
```

The workflows/ directory is organized as follows:

```
workflows/
├── AGENTS.md                      # This file
├── tools/                         # Shared tool definitions
│   ├── index.ts                   # Re-exports all tools
│   ├── legal-research.ts          # Step function + tool definition
│   ├── child-support-guidelines.ts
│   └── ...                        # Other tools
├── legal-chat/
│   ├── index.ts                   # Main workflow function with "use workflow"
│   └── prompts.ts                 # System prompts
├── onboarding/
│   ├── index.ts                   # Main workflow function
│   ├── types.ts                   # Type definitions
│   └── steps/
│       └── tools.ts               # Workflow-specific tools
└── [other-workflows]/
```
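The shared tools barrel (workflows/tools/index.ts) is then just re-exports; a sketch based on the tree above:

```typescript
// workflows/tools/index.ts - re-exports all shared tools
export * from './legal-research';
export * from './child-support-guidelines';
// ... other tools
```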
A workflow entry point looks like this:

```typescript
// workflows/[name]/index.ts
import { DurableAgent } from '@workflow/ai/agent';
import { convertToModelMessages, type UIMessage, type UIMessageChunk } from 'ai';
import { getWritable } from 'workflow';
import { SYSTEM_PROMPT, myTools } from './steps/tools';

export async function myWorkflow(messages: UIMessage[]) {
  'use workflow';

  const writable = getWritable<UIMessageChunk>();

  const agent = new DurableAgent({
    model: 'google/gemini-2.5-flash',
    system: SYSTEM_PROMPT,
    tools: myTools,
  });

  await agent.stream({
    messages: convertToModelMessages(messages),
    writable,
  });
}
```

Tools pair a step function with a tool definition for the LLM:

```typescript
// workflows/[name]/steps/tools.ts
import { z } from 'zod';

// Step function - runs in Node.js, auto-retries on failure
export async function myTool({ param }: { param: string }) {
  'use step';
  // Full Node.js access here - API calls, database, etc.
  return { result: 'success' };
}

export const myTools = {
  myTool: {
    description: 'Description for the LLM',
    inputSchema: z.object({
      param: z.string().describe('Parameter description'),
    }),
    execute: myTool,
  },
};

export const SYSTEM_PROMPT = `Your system prompt here`;
```

The API route starts the workflow and streams its output back to the client:

```typescript
// app/api/[name]/route.ts
import { createUIMessageStreamResponse, type UIMessage } from 'ai';
import { start } from 'workflow/api';
import { myWorkflow } from '@/workflows/[name]';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();
  const run = await start(myWorkflow, [messages]);
  return createUIMessageStreamResponse({
    stream: run.readable,
    headers: {
      'x-workflow-run-id': run.runId,
    },
  });
}
```

A second route lets clients resume an in-progress run's stream by id:

```typescript
// app/api/[name]/[id]/stream/route.ts
import { createUIMessageStreamResponse } from 'ai';
import { getRun } from 'workflow/api';

export async function GET(
  request: Request,
  { params }: { params: Promise<{ id: string }> }
) {
  const { id } = await params;
  const { searchParams } = new URL(request.url);
  const startIndex = searchParams.get('startIndex');

  const run = getRun(id);
  const stream = run.getReadable({
    startIndex: startIndex ? parseInt(startIndex, 10) : undefined,
  });

  return createUIMessageStreamResponse({ stream });
}
```
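This is the route that client-side stream resumption hits. A sketch assuming AI SDK 5's resume option and an illustrative /api/legal-chat endpoint (when resume is set, the default transport issues a GET to `<api>/<chat id>/stream`):

```tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

// Illustrative hook: reconnects to an in-progress run after a reload
export function useResumableChat(chatId: string) {
  return useChat({
    id: chatId,
    resume: true, // GETs /api/legal-chat/{chatId}/stream on mount
    transport: new DefaultChatTransport({ api: '/api/legal-chat' }),
  });
}
```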
Useful CLI commands for inspecting runs:

```bash
# List recent workflow runs
npx workflow inspect runs --limit 5 --json
# Inspect steps for a specific run
npx workflow inspect steps --run [runId] --json
# Get details for a specific step
npx workflow inspect step [stepId]
```

When workflows aren't working, check:
- Is `/.well-known/(.*)` in public routes?
- Is Strict Mode handled with the ref guard pattern? (see section 2)
- Are package versions correct, with overrides?
- Are there conflicting packages (Mastra)?
- Is instrumentation.ts interfering?
- Is the API route public in middleware?
- Check workflow run status: `npx workflow inspect runs --limit 1 --json`
- Are there stale generated files? (see below)
The workflow runtime generates route files in app/.well-known/workflow/. After refactoring (e.g., moving code from src/mastra/ to workflows/), these generated files can contain stale imports that break the workflow runtime.
Symptom: `[embedded world] Queue operation failed: TypeError: fetch failed` with `ECONNREFUSED`, even though the workflow runs show as "completed".
Fix: Delete the generated directories and restart the dev server:
```bash
rm -rf app/.well-known
rm -rf .next
# Then restart the dev server
```

The workflow runtime will regenerate fresh routes based on the current codebase.