API REFERENCE

Type signatures, descriptions, and examples for the most important exports across all FABRK packages. Grouped by category — find what you need fast.

[FABRK RUNTIME]

The fabrk package is the framework runtime. It owns Vite 7 configuration, file-system routing, SSR, and the agent infrastructure. Import from 'fabrk' for server-side code and from 'fabrk/client' for React hooks.

fabrk(options) · function

Create the Vite plugin that wires file-system routing, SSR, agent runtime, and dev dashboard. Pass to Vite's plugins array.

// vite.config.ts
import { fabrk } from 'fabrk'
export default { plugins: [fabrk({ appDir: 'src/app', port: 3000 })] }

FabrkRuntimeOptions · interface

Options accepted by fabrk(). Key fields: appDir (default "src/app"), port (default 3000), agents (agent config map), mcp (MCP client configs), voice (TTS/STT/realtime config).

fabrkPlugin() · function

Lower-level Vite plugin factory used internally by fabrk(). Prefer fabrk() unless you need to compose plugins manually.

handleAgentRequest(req, name, def) · function

Handle a POST request for an agent route. Validates messages, resolves tools, runs the agent loop, and returns a streaming SSE Response with security headers.

// app/api/assistant/route.ts
import { handleAgentRequest } from 'fabrk'
import { assistantAgent } from '@/agents/assistant'
export async function POST(req: Request) {
  return handleAgentRequest(req, 'assistant', assistantAgent)
}

registerTool(tool) · function

Register a ToolDefinition in the global tool registry so agent loops can resolve it by name. Call once at module load time.

import { registerTool } from 'fabrk'
import { searchDocs } from '@/tools/search'
registerTool(searchDocs)

[AGENTS]

An agent runs a ReAct loop: read the conversation, decide whether to call a tool, execute the call, feed the result back, and repeat until the model produces a final answer. defineAgent declares the agent. handleAgentRequest serves it over HTTP. runAgentLoop is the core primitive when you need direct control over the loop.

defineAgent(options): AgentDefinition · function

Declare an agent. Returns an AgentDefinition consumed by handleAgentRequest or createTestAgent.

import { defineAgent } from 'fabrk'
export const agent = defineAgent({
  model: 'claude-sonnet-4-5-20250929',
  systemPrompt: 'You are a helpful assistant.',
  tools: ['search_docs'],
  stream: true,
  auth: 'optional',
  budget: { daily: 10, perSession: 0.50 },
  memory: true,
})

AgentDefinition · interface

model: string; fallback?: string[]; systemPrompt?: string; tools: string[]; budget?: AgentBudget; stream: boolean; auth: "required"|"optional"|"none"; memory?: boolean|AgentMemoryConfig; inputGuardrails?: Guardrail[]; outputGuardrails?: Guardrail[]; handoffs?: string[]; outputSchema?: Record<string,unknown>

AgentBudget · interface

daily?: number; perSession?: number; alertThreshold?: number (0–1); perUser?: number; perTenant?: number — all values in USD.
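
For instance, a budget object setting all four caps (a sketch matching the field list above; the specific dollar amounts are illustrative):

```typescript
// All values in USD. alertThreshold is a fraction of a cap (0–1).
const budget = {
  daily: 10,            // total spend per day
  perSession: 0.5,      // spend per session
  perUser: 2,           // spend per user
  alertThreshold: 0.8,  // alert at 80% of a cap
}
```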

defineTool(options): ToolDefinition · function

Create a tool the agent can call. Accepts name, description, schema (JSON Schema object), handler(input) => Promise<ToolResult>, optional hooks, and a requiresApproval flag.

import { defineTool, textResult } from 'fabrk'
export const myTool = defineTool({
  name: 'my_tool',
  description: 'Does something useful',
  schema: { type: 'object', properties: { q: { type: 'string' } }, required: ['q'] },
  handler: async ({ q }) => textResult(`Result for ${q}`),
})

textResult(text): ToolResult · function

Wrap a string as a ToolResult. Returns { content: [{ type: "text", text }] }. Use this in most tool handlers.

runAgentLoop(options): AsyncGenerator<AgentLoopEvent> · function

Run the ReAct loop directly. Yields AgentLoopEvents and handles tool execution, budget checks, guardrail runs, stop conditions, and handoffs. Hard cap: 25 iterations.

for await (const event of runAgentLoop(opts)) {
  if (event.type === 'text-delta') process.stdout.write(event.content)
  if (event.type === 'done') break
}

AgentLoopEvent · type

Union: { type:"text-delta"; content:string } | { type:"text"; content:string } | { type:"tool-call"; name; input; iteration } | { type:"tool-result"; name; output; durationMs; iteration } | { type:"usage"; promptTokens; completionTokens; cost } | { type:"done"; structuredOutput? } | { type:"error"; message } | { type:"approval-required"; toolName; input; approvalId; iteration } | { type:"handoff"; targetAgent; input; iteration }

AgentLoopOptions · interface

messages, toolExecutor, toolSchemas, agentName, sessionId, model, budget?, budgetContext?, maxIterations?, stream, generateWithTools, streamWithTools?, calculateCost, inputGuardrails?, outputGuardrails?, stopWhen?, handoffs?, outputSchema?

createToolExecutor(tools, hooks?): ToolExecutor · function

Build a ToolExecutor from an array of ToolDefinitions. Validates required fields, enforces a 30s timeout, truncates output at 50K chars, and calls lifecycle hooks.

const executor = createToolExecutor([searchTool, calcTool], {
  onBefore: (name, input) => console.log('[tool]', name, input),
  onError:  (name, input, err) => logger.error(err),
})

ToolExecutorHooks · interface

onBefore?(name, input) | onAfter?(name, input, output, durationMs) | onTimeout?(name, input, timeoutMs) | onError?(name, input, error) | onApprovalRequired?(name, input, approvalId) => Promise<{ approved; modifiedInput? }>
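
A hooks object might look like the following sketch. The auto-approval policy (approving tools whose names start with read_) is invented for illustration, not a framework convention:

```typescript
const hooks = {
  onBefore: (name: string, input: unknown) => console.log('[tool:start]', name, input),
  onTimeout: (name: string, _input: unknown, timeoutMs: number) =>
    console.warn(`[tool:timeout] ${name} after ${timeoutMs}ms`),
  // Hypothetical policy: auto-approve read-only tools, reject everything
  // else so it surfaces in a human approval queue.
  onApprovalRequired: async (name: string, _input: unknown, _approvalId: string) => {
    return { approved: name.startsWith('read_') }
  },
}
```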

[MEMORY]

All stores implement MemoryStore, so you can swap them out without changing agent code. Use InMemoryMemoryStore during development and tests. Swap in a Prisma-backed store in production.

MemoryStore · interface

createThread(agentName): Promise<Thread>; getThread(id): Promise<Thread|null>; appendMessage(threadId, msg): Promise<ThreadMessage>; getMessages(threadId, opts?): Promise<ThreadMessage[]>; deleteThread(id): Promise<void>; replaceMessages?(threadId, messages): Promise<void> — optional, used by memory compression.

InMemoryMemoryStore · class

Default in-memory store. Caps at 1,000 threads (LRU eviction) and 500 messages per thread. Implements replaceMessages for compression support.

import { InMemoryMemoryStore } from 'fabrk'
const store = new InMemoryMemoryStore()
const thread = await store.createThread('assistant')
await store.appendMessage(thread.id, { threadId: thread.id, role: 'user', content: 'Hello' })

SemanticMemoryStore · class

Wrap any MemoryStore and add vector search. Embeds user and assistant messages asynchronously. Call search(query, opts?) to retrieve semantically similar ThreadMessages.

import { SemanticMemoryStore, InMemoryMemoryStore } from 'fabrk'
import { OpenAIEmbeddingProvider } from '@fabrk/ai'
const store = new SemanticMemoryStore(new InMemoryMemoryStore(), {
  embeddingProvider: new OpenAIEmbeddingProvider({ model: 'text-embedding-3-small' }),
  topK: 5,
  threshold: 0.7,
})
const hits = await store.search('user preference for dark mode', {
  agentName: 'assistant',
  messageRange: { before: 2, after: 2 },  // expand each match with context
})

InMemoryLongTermStore · class

Key-value store for persistent agent facts. set/get/delete/list per namespace. search() does exact and substring matching. Inject via AgentMemoryConfig.longTerm.

import { InMemoryLongTermStore } from 'fabrk'
const store = new InMemoryLongTermStore()
await store.set('user:123', 'theme', 'dark')
const val = await store.get('user:123', 'theme')  // 'dark'

buildWorkingMemory(messages, config): string · function

Render a working memory string from recent thread messages using the config template. The result is injected as a system message prefix before each LLM call.
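
To see what "render a working memory string" means, here is the template step applied by hand (self-contained sketch; the message fields are assumptions modeled on the WorkingMemoryConfig example below):

```typescript
type Msg = { role: string; content: string; metadata?: { isFact?: boolean } }

const recent: Msg[] = [
  { role: 'user', content: 'I prefer dark mode', metadata: { isFact: true } },
  { role: 'assistant', content: 'Noted!' },
  { role: 'user', content: 'My name is Ada', metadata: { isFact: true } },
]

// The same kind of template a WorkingMemoryConfig would supply:
const template = (msgs: Msg[]) =>
  msgs.filter(m => m.metadata?.isFact).map(m => m.content).join('\n')

console.log(template(recent))
// I prefer dark mode
// My name is Ada
```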

WorkingMemoryConfig · interface

template: (messages: ThreadMessage[]) => string; readOnly?: boolean — if true, working memory is computed once at session start and not updated mid-session.

const wm: WorkingMemoryConfig = {
  template: (msgs) => {
    const facts = msgs.filter(m => m.metadata?.isFact)
    return facts.map(m => m.content).join('\n')
  },
}

AgentMemoryConfig · interface

maxMessages?: number; semantic?: boolean | { topK?, threshold? }; compression?: { enabled?, triggerAt?, keepRecent?, summarize(messages) }; workingMemory?: WorkingMemoryConfig; longTerm?: { store, namespace?, autoInjectTool? }
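
A filled-in config might look like this sketch. The summarize implementation is a placeholder; in practice it would call an LLM:

```typescript
const memory = {
  maxMessages: 40,
  semantic: { topK: 5, threshold: 0.7 },
  compression: {
    enabled: true,
    triggerAt: 30,   // compress once the thread exceeds 30 messages
    keepRecent: 10,  // always keep the 10 newest verbatim
    // Placeholder summarizer; replace with an LLM call in production.
    summarize: async (messages: Array<{ content: string }>) =>
      `Summary of ${messages.length} earlier messages`,
  },
}
```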

[WORKFLOWS]

A workflow is a sequence of steps that run one after another — with branching, parallel execution, and the ability to pause mid-run for human input. A suspended workflow picks up exactly where it left off when you call resumeWorkflow.

WorkflowDefinition · interface

name: string; steps: WorkflowStep[]; maxSteps?: number (hard cap 50) — the top-level descriptor passed to runWorkflow.

WorkflowStep · type

Union of AgentStep | ToolStep | ConditionStep | ParallelStep | SuspendableAgentStep | SuspendableToolStep. Each has an id: string and a type discriminant.

const steps: WorkflowStep[] = [
  { type: 'agent', id: 'draft', run: async (ctx) => await draftContent(ctx.input) },
  { type: 'condition', id: 'check', condition: (ctx) => ctx.input.length > 100,
    then: [{ type: 'tool', id: 'shorten', run: shortenFn }],
    else: [] },
  { type: 'parallel', id: 'enrich', steps: [translateStep, tagsStep] },
]

WorkflowResult · type

{ status:"completed"; output:string; stepResults:StepResult[]; durationMs:number } | { status:"suspended"; suspendedAtStepId:string; suspendData:unknown; completedSteps:StepResult[]; durationMs:number }

runWorkflow(def, input, metadata?, opts?): Promise<WorkflowResult> · function

Run a WorkflowDefinition sequentially, with parallel steps running concurrently. Handles suspension via SuspendError. opts.onProgress emits step lifecycle events.

const result = await runWorkflow(myWorkflow, userInput, { userId })
if (result.status === 'suspended') {
  // persist result, wait for human approval, then:
  const final = await resumeWorkflow(myWorkflow, result, approvalPayload)
}

resumeWorkflow(def, partialResult, resumeData, opts?): Promise<WorkflowResult> · function

Continue a suspended workflow from the step that called suspend(). Skips already-completed steps and injects resumeData into WorkflowContext.metadata.

createWorkflowStream(): { stream, writer } · function

Create a ReadableStream and WritableStreamDefaultWriter pair. Pass writer to runWorkflow opts.writer; return stream in your HTTP response for real-time step output.

const { stream, writer } = createWorkflowStream()
const promise = runWorkflow(def, input, {}, { writer })
return new Response(stream, { headers: { 'Content-Type': 'text/event-stream' } })

SuspendableStepContext · interface

suspend(data: unknown): never — call inside a suspendable-agent or suspendable-tool step to pause execution. Resume later with resumeWorkflow().
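
A suspendable step might gate on prior approval (a sketch; the metadata field used for the check is an assumption about what resumeWorkflow injects):

```typescript
// Hypothetical suspendable-tool step: pause until a human approves.
const approvalGate = {
  type: 'suspendable-tool',
  id: 'approval-gate',
  run: async (ctx: { metadata?: { approved?: boolean }; suspend(data: unknown): never }) => {
    if (!ctx.metadata?.approved) {
      // Execution stops here; resumeWorkflow() re-enters this step with resumeData.
      ctx.suspend({ reason: 'awaiting human approval' })
    }
    return 'approved'
  },
}
```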

full workflow example
import { runWorkflow } from 'fabrk'
import type { WorkflowDefinition } from 'fabrk'

const pipeline: WorkflowDefinition = {
  name: 'content-pipeline',
  steps: [
    { type: 'agent',    id: 'draft',    run: async (ctx) => await draftAgent(ctx.input) },
    { type: 'condition', id: 'review',
      condition: (ctx) => ctx.input.length > 500,
      then: [{ type: 'tool', id: 'summarize', run: summarizeFn }],
    },
    { type: 'parallel', id: 'enrich', steps: [
      { type: 'tool', id: 'translate', run: translateFn },
      { type: 'tool', id: 'tag',       run: tagFn },
    ]},
  ],
}

const result = await runWorkflow(pipeline, 'Write about climate change')
if (result.status === 'completed') {
  console.log(result.output)
}

[STATEGRAPH]

A state graph is a directed graph where each node is an async function. Nodes return the next node name, the updated state, and optional output. Edges are static or conditional. The graph supports interrupt/resume, subgraphs, and state reducers.

defineStateGraph<S>(config): CompiledStateGraph<S> · function

Build a compiled state graph from a StateGraphConfig. Returns an object with a run() async generator. Use createStateGraph() for the fluent builder API.

createStateGraph<S>(initialState, reducers?): StateGraphBuilder<S> · function

Fluent builder. Chain addNode / addEdge / addConditionalEdges / addSubgraph / setInitial / setMaxCycles then call compile().

const graph = createStateGraph({ count: 0 })
  .addNode('inc', async (input, state) => ({
    nextNode: state.count < 3 ? 'inc' : 'END',
    state: { count: state.count + 1 },
    output: state.count + 1,
  }))
  .setInitial('inc')
  .compile()

for await (const event of graph.run(null)) {
  if (event.type === 'done') console.log('final count:', event.output)
}

StateGraphConfig<S> · interface

nodes: GraphNode<S>[]; edges: GraphEdge[]; initial: string; initialState: S; maxCycles?: number (default 50); reducers?: StateReducers<S>; interruptBefore?: string[]; interruptAfter?: string[]
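
The config form (as opposed to the fluent builder) looks like this sketch; the node mirrors the counter example above, and only the field names come from the interface:

```typescript
// A one-node counting graph in StateGraphConfig form (sketch).
type S = { count: number }

const config = {
  initial: 'inc',
  initialState: { count: 0 } as S,
  maxCycles: 10,  // default is 50
  nodes: [{
    name: 'inc',
    run: async (_input: unknown, state: S) => ({
      nextNode: state.count < 3 ? 'inc' : 'END',
      state: { count: state.count + 1 },
      output: state.count + 1,
    }),
  }],
  edges: [],
}
```

This object would be passed to defineStateGraph(config) instead of chaining builder calls.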

GraphNode<S> · interface

name: string; run(input: unknown, state: S): Promise<NodeResult<S>> where NodeResult = { nextNode: string|"END"; state: S; output?: unknown }

GraphEdge · interface

from: string; to: string | ((output: unknown, state: unknown) => string) — static or conditional router. The router return value is the next node name.

StateGraphEvent<S> · type

type: "node-enter"|"node-exit"|"edge"|"done"|"error"|"interrupt"; node?; nextNode?; state; output?; error?; cycles; interruptType?: "before"|"after"|"node"; value?

interrupt(value): never · function

Call inside any graph node to pause execution. The graph yields { type:"interrupt", value } and stops. Resume by calling graph.run(input, { resumeFrom: { node, command } }).

import { interrupt } from 'fabrk'
// Inside a node:
interrupt({ question: 'Approve this action?', pendingTool: state.tool })
// Resuming:
graph.run(input, { resumeFrom: { node: 'review', command: { goto: 'execute', update: { approved: true } } } })

[MCP]

MCP (Model Context Protocol) uses JSON-RPC 2.0 to connect tools and LLM clients. createMCPServer exposes your tools to any MCP-compatible client. connectMCPServer lets your agents consume tools from any MCP server, over HTTP or stdio.

createMCPServer(options): MCPServer · function

Expose your tools as a JSON-RPC 2.0 MCP server. Includes built-in rate limiting (60 req/min/IP), a 1MB request cap, and security headers on all responses.

import { createMCPServer } from 'fabrk'
const server = createMCPServer({
  name: 'my-mcp',
  version: '1.0.0',
  tools: [searchTool, calcTool],
  rateLimit: 30,  // req/min
})
// In your HTTP handler:
return server.httpHandler(req)

MCPServer · interface

name: string; version: string; handleRequest(jsonRpc): Promise<unknown>; httpHandler(req: Request): Promise<Response>

connectMCPServer(options): Promise<MCPConnection> · function

Connect to an external MCP server over HTTP or stdio. Discovers the server's tools and returns them as ToolDefinitions ready for an agent.

import { connectMCPServer } from 'fabrk'
// HTTP transport with bearer auth:
const conn = await connectMCPServer({
  transport: 'http',
  url: 'https://mcp.example.com/rpc',
  auth: { type: 'bearer', token: process.env.MCP_TOKEN! },
  timeout: 30_000,
})
// stdio transport for local subprocess:
const local = await connectMCPServer({
  transport: 'stdio',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
})

MCPClientOptions · interface

url?: string; command?: string; args?: string[]; transport: "http"|"stdio"; timeout?: number; auth?: { type:"bearer"; token } | { type:"oauth2"; clientId; clientSecret?; tokenUrl; scopes? }; elicitation?: (prompt, schema) => Promise<unknown>
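
An oauth2-authenticated HTTP client config might look like this sketch (URLs, client id, and scopes are placeholders):

```typescript
const mcpOptions = {
  transport: 'http' as const,
  url: 'https://mcp.example.com/rpc',
  timeout: 30_000,
  auth: {
    type: 'oauth2' as const,
    clientId: 'my-client',
    tokenUrl: 'https://auth.example.com/oauth/token',
    scopes: ['tools:read'],
  },
}
```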

MCPConnection · interface

tools: ToolDefinition[]; disconnect(): Promise<void>; listResources(): Promise<MCPResource[]>; readResource(uri): Promise<string>; listPrompts(): Promise<MCPPrompt[]>; getPrompt(name, args?): Promise<string>

[GUARDRAILS]

Guardrails are functions that inspect content before and after each LLM call. Write your own or compose the built-ins. The loop runs them in series. Return pass:false without a replacement and the loop halts with an error event.

Guardrail · type

(content: string, ctx: GuardrailContext) => GuardrailResult — synchronous gate. pass:false blocks; pass:true with replacement mutates content in place.

AsyncGuardrail · type

(content: string, ctx: GuardrailContext) => GuardrailResult | Promise<GuardrailResult> — async variant for external validation calls.

GuardrailResult · interface

pass: boolean; reason?: string; replacement?: string — if replacement is set, content is mutated even when pass:true (used by piiRedactor).
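
A guardrail that uses replacement without blocking, the same mechanics piiRedactor relies on (self-contained sketch):

```typescript
// Redact email addresses but let the request continue: pass stays true,
// and replacement carries the rewritten content.
const redactEmails = (content: string) => {
  const redacted = content.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[REDACTED]')
  return redacted === content
    ? { pass: true }
    : { pass: true, replacement: redacted }
}
```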

GuardrailContext · interface

agentName: string; sessionId: string; direction: "input"|"output"

maxLength(n): Guardrail · function

Block content longer than n characters.

maxLength(10_000)

denyList(patterns: RegExp[]): Guardrail · function

Block content matching any of the provided regular expressions.

denyList([/\bpassword\b/i, /\bsecret key\b/i])

piiRedactor(): Guardrail · function

Redact emails, US phone numbers, and SSNs in place with [REDACTED]. Returns pass:true so the request continues — the redacted content reaches the LLM.

requireJsonSchema(schema): Guardrail · function

Block content that is not valid JSON or does not match the required fields and property types in the schema.

requireJsonSchema({ required: ['action', 'payload'], properties: { action: { type: 'string' } } })

runGuardrails(guardrails, content, ctx) · function

Run guardrails in series. Returns { content, blocked, reason? }. The agent loop calls this internally.

runGuardrailsParallel(guardrails, content, ctx) · function

Run all guardrails concurrently via Promise.all. Returns the first blocked result by array order, or pass if all pass.

custom guardrail example
import type { Guardrail } from 'fabrk'

const noSQLInjection: Guardrail = (content) => {
  const patterns = [/drop\s+table/i, /union\s+select/i, /--;?$/m]
  for (const p of patterns) {
    if (p.test(content)) return { pass: false, reason: 'SQL injection pattern detected' }
  }
  return { pass: true }
}

[TESTING]

Test agents with no API keys and no network. MockLLM intercepts LLM calls and returns what you configure. createTestAgent runs a real agent loop so your tool handlers, guardrails, and stop conditions all execute exactly as they would in production. Add runEvals for dataset-driven regression testing.

mockLLM(): MockLLM · function

Create a MockLLM. Chain .onMessage(pattern).respondWith(text) or .callTool(name, input) to configure pattern-matched responses.

const mock = mockLLM()
  .onMessage(/weather/).callTool('get_weather', { city: 'SF' })
  .setDefault('I can help with that.')

MockLLM · class

onMessage(pattern) — set response for messages matching string or RegExp. onToolCall(name).returnResult(str) — set tool execution return value. setDefault(content) — fallback response. getCalls() — read call log. callCount — number of LLM invocations. reset() — clear call log.

createTestAgent(options) · function

Wire a full agent loop (real runAgentLoop, real tool executor) around a MockLLM. Returns { send(message): Promise<TestAgentResult> }.

const agent = createTestAgent({ tools: [myTool], mock, stream: false })
const result = await agent.send('search for docs on memory')
expect(result.toolCalls[0].name).toBe('search_docs')

TestAgentOptions · interface

name?: string; systemPrompt?: string; tools?: ToolDefinition[]; mock: MockLLM; stream?: boolean; maxIterations?: number

TestAgentResult · interface

content: string; toolCalls: Array<{ name, input }>; usage: { promptTokens, completionTokens, cost }; events: AgentLoopEvent[]

defineEval(suite): EvalSuite · function

Declare an eval suite with named cases, scorers, and a pass threshold. Identity function — used for type checking.

runEvals(suite, opts?): Promise<EvalSuiteResult> · function

Run all cases against a test agent, score each output, and return passRate. opts accepts: dataset, store, compareWith (regression detection), concurrency (max 20).

const result = await runEvals(mySuite, { store: fileStore, dataset: myDataset })
console.log(`Pass rate: ${(result.passRate * 100).toFixed(0)}%`)

EvalSuite · interface

name: string; agent: { systemPrompt?, tools?, mock, maxIterations? }; cases: EvalCase[]; scorers: Scorer[]; threshold?: number (default 1.0)
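
Putting the fields together, a suite declaration might look like this sketch (the agent, case, and scorer values are illustrative, not a verified end-to-end example):

```typescript
import { defineEval, mockLLM, containsAll } from 'fabrk'

const suite = defineEval({
  name: 'greeting-agent',
  agent: { mock: mockLLM().setDefault('Hello! How can I help?') },
  cases: [{ input: 'hi', expected: 'Hello! How can I help?' }],
  scorers: [containsAll(['Hello'])],
  threshold: 1.0,  // the default
})
```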

EvalCase · interface

input: string; expected?: string

Scorer / ScorerResult · type

Scorer = (ctx: { input, output, expected?, toolCalls }) => Promise<ScorerResult>. ScorerResult = { pass: boolean; score: number; reason?: string }. Built-in scorers: exactMatch, containsAll, toolCalled.

import { exactMatch, containsAll, toolCalled } from 'fabrk'
scorers: [
  exactMatch(),
  containsAll(['San Francisco', '°F']),
  toolCalled('get_weather'),
]
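
Custom scorers are just async functions. A self-contained sketch that scores outputs on length (the ctx shape follows the Scorer type above; the length policy itself is invented):

```typescript
type ScorerCtx = { input: string; output: string; expected?: string }
type ScorerResult = { pass: boolean; score: number; reason?: string }

// Hypothetical scorer: pass when the answer stays under a length budget.
const underLimit = (max: number) =>
  async ({ output }: ScorerCtx): Promise<ScorerResult> => {
    const pass = output.length <= max
    return {
      pass,
      score: pass ? 1 : 0,
      reason: pass ? undefined : `output exceeds ${max} chars`,
    }
  }
```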

[ROUTING & SSR]

The router reads your app/ directory at startup and builds a Route array. The matcher resolves a URL to a route and its params. Both are pure functions — you can test them without starting a server.

Route · interface

pattern: string; regex: RegExp; paramNames: string[]; filePath: string; layoutPaths: string[]; type: "page"|"api"; errorPath?; loadingPath?; notFoundPath?; catchAll?; optionalCatchAll?; slots?; islands?; ppr?; runtime?: "node"|"edge"

RouteMatch · interface

route: Route; params: Record<string, string>

scanRoutes(appDir): Route[] · function

Walk the app directory and return a sorted Route[] from file-system conventions. Handles dynamic segments, catch-alls, parallel slots, intercepting routes, and server islands.

import { scanRoutes, matchRoute } from 'fabrk'
const routes = scanRoutes('./src/app')
const match = matchRoute(routes, '/dashboard/settings')
console.log(match?.params)  // {}

matchRoute(routes, pathname, softNavigation?): RouteMatch|null · function

Find the first route matching pathname. softNavigation=true prefers intercepting routes.

handleRequest(req, routes, modules): Promise<Response> · function

Match an incoming request against routes, load the handler module, run middleware, and return a Response. Entry point for the production server and Vite dev middleware.

buildPageTree(routes): PageTree · function

Build a nested layout tree from a Route array for SSR rendering with nested layouts, loading boundaries, and error boundaries.

export const ppr = true · convention

Export from a route file to enable Partial Pre-Rendering. The static shell renders synchronously; dynamic holes stream via React 19 Suspense.

// app/dashboard/page.tsx
export const ppr = true
export default function DashboardPage() { return <Suspense fallback={<Shell />}><DynamicData /></Suspense> }

export const runtime = "edge" · convention

Export from a route file to run it in the Edge runtime (fetch API only, no Node.js built-ins). In production, the route compiles to a Request/Response fetch handler.
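
A minimal edge route sketch, using only fetch-API globals (the path is illustrative):

```typescript
// app/api/ping/route.ts
export const runtime = 'edge'

export async function GET() {
  // Only Request/Response/fetch are available in the edge runtime;
  // Node.js built-ins like fs or net would fail to compile here.
  return new Response('pong', { headers: { 'Content-Type': 'text/plain' } })
}
```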

[ROUTE FILE CONVENTIONS]
  • app/dashboard/page.tsx — page component, renders in layout
  • app/dashboard/layout.tsx — wraps all child pages
  • app/dashboard/loading.tsx — Suspense fallback for this segment
  • app/dashboard/error.tsx — error boundary for this segment
  • app/api/users/route.ts — API route, exports GET/POST/PUT/DELETE
  • app/blog/[slug]/page.tsx — dynamic segment
  • app/[...rest]/page.tsx — catch-all segment
  • app/@modal/page.tsx — parallel route slot
  • island.sidebar.tsx — server island (independent Suspense boundary)

[CLIENT HOOKS]

React hooks for the browser. Import from 'fabrk/client'. All hooks run client-side — they are marked 'use client' internally.

useAgent(agentName) · hook

Connect a React component to an agent SSE stream. Manages message history, streaming state, cost tracking, tool call state, and abort control.

import { useAgent } from 'fabrk/client'
const { send, stop, messages, isStreaming, cost, usage, error, toolCalls } = useAgent('assistant')
// messages: AgentMessage[] (max 50 history entries sent per request)
// cost: cumulative USD this session
// toolCalls: AgentToolCall[] — name, input, output?, durationMs?, iteration

AgentMessage · interface

role: "user"|"assistant"; content: string | AgentContentPart[]

AgentContentPart · type

{ type:"text"; text:string } | { type:"image"; url?:string; base64?:string; mimeType?:string }
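
A multimodal user message combining both part types (plain object sketch; the image URL is a placeholder):

```typescript
const message = {
  role: 'user' as const,
  content: [
    { type: 'text' as const, text: 'What is in this image?' },
    { type: 'image' as const, url: 'https://example.com/photo.png', mimeType: 'image/png' },
  ],
}
```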

AgentToolCall · interface

name: string; input: Record<string,unknown>; output?: string; durationMs?: number; iteration: number

InferChatMessages<T> · type

Extract the messages array type from a useAgent return value. T extends { messages: AgentMessage[] }.

type MyMessages = InferChatMessages<ReturnType<typeof useAgent>>
// = AgentMessage[]

useObject<T>(options) · hook

Stream a structured JSON object from an API endpoint. Updates progressively as JSON accumulates. Returns { submit, stop, object, isLoading, error }.

import { useObject } from 'fabrk/client'
const { submit, object, isLoading } = useObject<{ name: string; tags: string[] }>({
  api: '/api/generate-profile',
  onFinish: (obj) => console.log('done', obj),
})

UseObjectOptions<T> · interface

api: string; onFinish?: (object: T) => void

useViewTransition() · hook

Wrap state updates in document.startViewTransition() when available. Returns { startTransition } — a drop-in for React's useTransition that adds smooth page transitions.