COMPARISON
How FABRK stacks up against Next.js + Vercel AI SDK and LangGraph. An honest look at what each tool does well and where each falls short.
[FEATURE MATRIX]
| FEATURE | FABRK | NEXT.JS + AI SDK | LANGGRAPH |
|---|---|---|---|
| RUNTIME | | | |
| File-system routing | Yes — own Vite 7 router | Yes — Next.js App Router | No — bring your own framework |
| SSR / RSC streaming | Yes — built in | Yes — Next.js RSC | No |
| Edge runtime support | Yes — Cloudflare Workers fetch handler | Yes — Vercel Edge Functions | No |
| i18n routing | No — not yet implemented | Yes — next-intl, next/i18n | No |
| OG image generation | No — no next/og equivalent yet | Yes — next/og | No |
| Font optimization | No — no next/font equivalent yet | Yes — next/font | No |
| AI AGENTS | | | |
| Agent definition API | Yes — defineAgent() | Partial — manual wiring per endpoint | Yes — graph nodes |
| Tool calling | Yes — defineTool(), auto-executed loop | Partial — manual tool parsing required | Yes — node functions |
| Agent orchestration / supervisor | Yes — supervisor + agent-as-tool delegation | No — build it yourself | Yes — graph-based state machines |
| Streaming SSE responses | Yes — createSSEResponse(), useAgent() hook | Yes — useChat() from Vercel AI SDK | Partial — framework-agnostic, manual wiring |
| Agent memory (threads) | Yes — thread-per-session | No — build it yourself | Yes — state persistence |
| Skills system | Yes — composable prompt + tool bundles | No | No |
| RAG helper | Yes — ragTool() with pluggable vector store | No — use LangChain or LlamaIndex separately | Partial — manual integration |
| SQL query tool | Yes — read-only by default, parameterized | No | No |
| Agent testing framework | Yes — createTestAgent(), mockLLM(), assertion helpers | No — test manually | Partial — some test utilities |
| COST & BUDGET | | | |
| Per-call cost tracking | Yes — AICostTracker, model pricing table | No — build it yourself | No |
| Budget enforcement | Yes — per-agent, per-session, daily limits | No | No |
| Cost alerts | Yes — configurable thresholds | No | No |
| MCP | | | |
| MCP server (JSON-RPC) | Yes — HTTP + stdio transports | No | No |
| MCP client | Yes — built in | No | No |
| DEV EXPERIENCE | | | |
| Dev dashboard | Yes — /__ai: cost trends, tool stats, errors | No | Partial — LangSmith (separate product) |
| CLI scaffolding | Yes — create-fabrk-app | Yes — create-next-app | No |
| TypeScript-first | Yes — 24/24 type-check, 0 errors | Yes | Partial — Python-first, TS port less mature |
| UI & DESIGN | | | |
| Pre-built UI components | Yes — 109+ components | No — use shadcn/ui, Radix, etc. | No — framework-agnostic |
| Design system / themes | Yes — 18 themes, runtime switching | No — bring your own | No |
| Charts (built in) | Yes — 11 chart types | No | No |
| FULL-STACK | | | |
| Auth (NextAuth, API keys, MFA) | Yes — @fabrk/auth | Partial — NextAuth separate install | No |
| Payments (Stripe, Polar) | Yes — @fabrk/payments | No — manual integration | No |
| Email delivery | Yes — @fabrk/email (Resend) | No — manual integration | No |
| File storage (S3, R2) | Yes — @fabrk/storage | No — manual integration | No |
| Security (CSRF, CSP, rate limiting) | Yes — @fabrk/security | Partial — manual or third-party | No |
| ECOSYSTEM | | | |
| Production battle-testing at scale | No — early stage | Yes — massive scale | Yes — production use |
| Community / third-party plugins | No — early ecosystem | Yes — huge ecosystem | Yes — Python ecosystem |
| Test coverage | Yes — 1,832 tests | Yes | Yes |
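The COST & BUDGET rows above describe behavior FABRK handles for you. As a rough illustration of what per-call cost tracking with per-session and daily limits involves, here is a minimal self-contained sketch. This is not FABRK's actual AICostTracker; the model name and per-token prices are hypothetical placeholders.

```typescript
// Illustrative sketch of per-call cost tracking with budget limits.
// Not FABRK's implementation; pricing figures are made up.

type Usage = { inputTokens: number; outputTokens: number }

// USD per 1M tokens (hypothetical figures for illustration only)
const PRICING: Record<string, { input: number; output: number }> = {
  'example-model': { input: 3.0, output: 15.0 },
}

class CostTracker {
  private sessionSpend = new Map<string, number>()
  private dailySpend = 0

  constructor(private limits: { daily: number; perSession: number }) {}

  // Compute the cost of one call from its token usage
  cost(model: string, usage: Usage): number {
    const p = PRICING[model]
    return (usage.inputTokens * p.input + usage.outputTokens * p.output) / 1_000_000
  }

  // Record a call; returns false if it would breach a budget limit,
  // in which case the caller should refuse to make the LLM call.
  record(sessionId: string, model: string, usage: Usage): boolean {
    const c = this.cost(model, usage)
    const session = (this.sessionSpend.get(sessionId) ?? 0) + c
    if (session > this.limits.perSession || this.dailySpend + c > this.limits.daily) {
      return false
    }
    this.sessionSpend.set(sessionId, session)
    this.dailySpend += c
    return true
  }
}
```

A production version would also persist spend across restarts and fire the configurable alert thresholds mentioned above.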
[SAME TASK IN THREE TOOLS]
Defining an AI agent that can search documents and answer questions. This shows the API surface each tool exposes and how much wiring you write yourself.
FABRK
One file. Agent loop, tool calling, budget, and SSE streaming handled by the framework.
import { defineAgent, defineTool, textResult } from '@fabrk/framework'
// Define a tool (vectorStore is assumed to be created elsewhere,
// e.g. the pluggable vector store used by ragTool)
const searchDocs = defineTool({
name: 'search-docs',
description: 'Search the documentation for a query',
parameters: { query: { type: 'string', description: 'Search query' } },
async execute({ query }) {
const results = await vectorStore.search(query, { limit: 5 })
return textResult(results.map(r => r.content).join('\n\n'))
},
})
// Define the agent — framework handles the loop, streaming, and budget
export default defineAgent({
model: 'claude-sonnet-4-5-20250514',
systemPrompt: 'You are a docs assistant. Use search-docs to find relevant information.',
tools: [searchDocs],
budget: { daily: 5.0, perSession: 0.25 },
})
// Agent is automatically mounted at /agents/docs-assistant
// Client: import { useAgent } from '@fabrk/framework/client/use-agent'
NEXT.JS + VERCEL AI SDK
More files. You wire up the route handler yourself, define tool schemas with Zod, handle the stream on the client, and there is no built-in budget enforcement.
import { streamText, tool } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import { z } from 'zod'
// You define the route handler yourself
export async function POST(req: Request) {
const { messages } = await req.json()
// No built-in budget enforcement — add your own tracking
// No agent loop — stream ends after one turn unless you build multi-turn
const result = streamText({
model: anthropic('claude-sonnet-4-5-20250514'),
system: 'You are a docs assistant. Use searchDocs to find relevant information.',
messages,
tools: {
searchDocs: tool({
description: 'Search the documentation for a query',
parameters: z.object({ query: z.string().describe('Search query') }),
execute: async ({ query }) => {
const results = await vectorStore.search(query, { limit: 5 })
return results.map(r => r.content).join('\n\n')
},
}),
},
maxSteps: 5, // Tool loop — must opt in manually
})
return result.toDataStreamResponse()
}
// Client: useChat() from 'ai/react'
// Cost tracking: build your own middleware or use separate service
// Budget enforcement: build your own
LANGGRAPH (TYPESCRIPT)
Graph-based. Powerful orchestration primitives, but no web layer, no streaming HTTP handler, no UI. You build those yourself on top.
import { StateGraph, MessagesAnnotation } from '@langchain/langgraph'
import { ChatAnthropic } from '@langchain/anthropic'
import { tool } from '@langchain/core/tools'
import { z } from 'zod'
import { ToolNode } from '@langchain/langgraph/prebuilt'
// Define the tool
const searchDocs = tool(
async ({ query }) => {
const results = await vectorStore.search(query, { limit: 5 })
return results.map(r => r.content).join('\n\n')
},
{
name: 'search_docs',
description: 'Search the documentation for a query',
schema: z.object({ query: z.string().describe('Search query') }),
}
)
const model = new ChatAnthropic({ model: 'claude-sonnet-4-5-20250514' })
.bindTools([searchDocs])
// Build the graph — powerful but verbose for simple agents
function shouldContinue({ messages }: typeof MessagesAnnotation.State) {
const last = messages[messages.length - 1]
return last.tool_calls?.length ? 'tools' : '__end__'
}
async function callModel(state: typeof MessagesAnnotation.State) {
const response = await model.invoke(state.messages)
return { messages: [response] }
}
const workflow = new StateGraph(MessagesAnnotation)
.addNode('agent', callModel)
.addNode('tools', new ToolNode([searchDocs]))
.addEdge('__start__', 'agent')
.addConditionalEdges('agent', shouldContinue)
.addEdge('tools', 'agent')
export const graph = workflow.compile()
// Now you need to build: HTTP route, streaming, session management,
// budget enforcement, UI, and cost tracking — all separately
[WHEN TO USE FABRK]
Use FABRK when:
- AI agents are a core feature — not an afterthought bolted on with a separate library
- You want cost control baked in — budget limits, per-agent tracking, and alerts from day one
- You want one stack — routing, SSR, AI agents, UI components, auth, payments, all from one config
- You are building for AI coding agents — Claude Code, Cursor, Copilot can scaffold entire apps from AGENTS.md docs
- You need MCP — Model Context Protocol server and client built in
- You need fast iteration on a new product — 109+ components, 18 themes, and full-stack packages eliminate boilerplate
Use Next.js + Vercel AI SDK instead when:
- You need i18n routing — next-intl and next/i18n are mature, FABRK has none yet
- OG images matter — next/og is excellent, FABRK has no equivalent
- Ecosystem breadth is critical — thousands of community packages, large hiring pool, vast documentation
- You are already on Next.js — migrating a large production app is high risk with minimal gain
- AI is a minor feature — one chat box on a content site does not justify switching frameworks
Use LangGraph instead when:
- You need complex multi-agent graphs — LangGraph's state machine model handles branching orchestration better than a linear loop
- Your team is Python-first — the Python SDK is significantly more mature than the TypeScript port
- You need a framework-agnostic backend — you have a separate frontend and only need agent orchestration logic
- You want LangSmith observability — deep tracing and eval tooling in the LangChain ecosystem
[HONEST LIMITATIONS]
| LIMITATION | IMPACT | WORKAROUND |
|---|---|---|
| No i18n routing | Cannot do locale-prefixed URLs (/en/about, /fr/about) out of the box | Manual middleware or use Next.js instead |
| No OG image generation | Dynamic Open Graph images require custom implementation | Use a separate service (Cloudinary, Vercel OG) or build a handler |
| No next/font equivalent | Font optimization and subsetting must be done manually | Self-host fonts and configure Vite asset handling |
| App Router only — no Pages Router | Cannot adopt FABRK routing on an existing Pages Router codebase | Use FABRK component packages (@fabrk/components, etc.) without the framework runtime |
| Early ecosystem | Fewer third-party plugins, smaller community, less Stack Overflow coverage | The framework is open source — AGENTS.md docs help AI coding assistants fill gaps |
| Not production battle-tested at scale | Unknown behavior under extreme load or edge cases not yet encountered | Run load tests before production. Start with lower-risk workloads. |
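For the i18n limitation, the "manual middleware" workaround amounts to stripping a locale prefix from the URL before routing. A minimal sketch of that idea follows; this is not a FABRK API, and the supported-locale list and default locale are assumptions for illustration.

```typescript
// Hypothetical locale-prefix helper, not part of FABRK.
// Strips a supported locale segment from the pathname so a single
// route tree can serve /en/about and /fr/about.
const SUPPORTED = ['en', 'fr', 'de'] as const
type Locale = (typeof SUPPORTED)[number]

function resolveLocale(pathname: string): { locale: Locale; path: string } {
  const [, first, ...rest] = pathname.split('/')
  if ((SUPPORTED as readonly string[]).includes(first)) {
    // Prefixed URL: peel off the locale segment before route matching
    return { locale: first as Locale, path: '/' + rest.join('/') }
  }
  // Unprefixed URL: fall back to the default locale
  return { locale: 'en', path: pathname }
}
```

In a fetch-handler runtime you would apply this to `new URL(request.url).pathname` before matching routes, and thread the resolved locale through to your pages.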
[ARCHITECTURE SUMMARY]
The fundamental difference is where AI lives in each stack.
# Next.js + Vercel AI SDK
next — routing, SSR, RSC
ai (vercel AI SDK) — streaming text, useChat hook
langchain/llamaindex — RAG (separate install)
stripe / resend / … — payments, email (separate installs, manual wiring)
shadcn/ui — UI components (separate install, copy-paste)
[No built-in cost tracking, budget, MCP, or agent testing]
# LangGraph
@langchain/langgraph — agent orchestration (state machines)
@langchain/core — tools, messages
[No web layer, no UI, no routing — bring your own everything]
# FABRK
@fabrk/framework — Vite 7 runtime, routing, SSR, RSC streaming
defineAgent(), defineTool(), MCP server + client
agent loop, budget enforcement, cost tracking
dev dashboard (/__ai), agent testing framework
@fabrk/components — 109+ UI components, 11 chart types, 18 themes
@fabrk/auth — NextAuth, API keys (SHA-256), MFA (TOTP)
@fabrk/payments — Stripe, Polar, Lemon Squeezy adapters
@fabrk/email — Resend + console adapter, 4 templates
@fabrk/storage — S3, Cloudflare R2, local filesystem
@fabrk/security — CSRF, CSP, rate limiting, audit logging, GDPR
@fabrk/store-prisma — 7 Prisma store adapters for production persistence
[AI AGENT LAYER]
How FABRK's agent primitives compare against the dedicated agent frameworks — LangChain JS, Mastra, and Vercel AI SDK — on a feature-by-feature basis.
| FEATURE | FABRK | LANGCHAIN JS | MASTRA | VERCEL AI SDK |
|---|---|---|---|---|
| File-system routing + SSR | Yes — built-in Vite 7 router | No — no web layer | No — no web layer | Partial — Next.js only |
| Agent definition | Yes — defineAgent() | Yes — AgentExecutor | Yes — Agent class | No — no agent primitive |
| Built-in memory | Yes — thread + semantic + long-term | Yes — via LangChain memory | Yes — built-in memory layer | No — build your own |
| Workflows (linear) | Yes — defineWorkflow() | Yes — LCEL chains | Yes — workflow DSL | No — not supported |
| Cyclic workflows | Yes — defineStateGraph() | Yes — LangGraph state machines | No — linear only | No — not supported |
| Multi-agent orchestration | Yes — agentAsTool + supervisor + network | Yes — multi-agent graph | Yes — agent networks | No — not supported |
| MCP client/server | Yes — both, HTTP + stdio | Partial — client only | Yes — both | Partial — client only |
| Built-in evals | Yes — defineEval + scorers + MockLLM | No — LangSmith (separate product) | Yes — built-in eval framework | No — not supported |
| Guardrails | Yes — input + output + parallel async | No — manual implementation | Yes — built-in guardrails | No — not supported |
| Durable agents (checkpoint) | Yes — checkpoint/resume/rollback | No — not built in | Yes — durable execution | Yes — AI SDK telemetry only covers tracing, not durability |
| OTel tracing | Yes — auto-instrumented | No — LangSmith only | Yes — built-in OTel | Yes — AI SDK telemetry |
| UI components | Yes — 109+ components, 18 themes | No — no UI layer | No — no UI layer | No — no UI layer |
| Voice (TTS/STT/realtime) | Yes — built-in /__ai/tts, /__ai/stt, /__ai/realtime | No — not supported | No — not supported | No — not supported |
| A2A protocol | Yes — agent-to-agent via agentAsTool | No — not supported | Yes — supported | No — not supported |
FABRK is the only framework that combines a full-stack Vite 7 runtime — routing, SSR, file-system conventions — with a complete AI agent layer. LangChain JS and Mastra are powerful orchestration libraries, but they have no web layer: you still need to reach for Next.js or Express, wire up a UI library, and stitch cost tracking together yourself. Vercel AI SDK gives you great streaming primitives on top of Next.js, but has no agent primitive, no memory, no workflows, and no evals. With FABRK you do not glue three separate tools together — the runtime, the agent layer, and the UI components ship as one coherent stack.