COMPARISON

How FABRK stacks up against Next.js + Vercel AI SDK and LangGraph. An honest look at what each tool does well and where each falls short.

[SCOPE]
These three tools solve different problems. Next.js + Vercel AI SDK is a web framework with AI bolted on. LangGraph is an agent orchestration library with no web layer. FABRK is a full-stack framework designed around AI agents as a first-class primitive. This comparison is honest about what FABRK does not have yet.

[FEATURE MATRIX]

FEATURE | FABRK | NEXT.JS + AI SDK | LANGGRAPH

RUNTIME
File-system routing | Yes — own Vite 7 router | Yes — Next.js App Router | No — bring your own framework
SSR / RSC streaming | Yes — built in | Yes — Next.js RSC | No
Edge runtime support | Yes — Cloudflare Workers fetch handler | Yes — Vercel Edge Functions | No
i18n routing | No — not yet implemented | Yes — next-intl, next/i18n | No
OG image generation | No — no next/og equivalent yet | Yes — next/og | No
Font optimization | No — no next/font equivalent yet | Yes — next/font | No

AI AGENTS
Agent definition API | Yes — defineAgent() | Partial — manual wiring per endpoint | Yes — graph nodes
Tool calling | Yes — defineTool(), auto-executed loop | Partial — manual tool parsing required | Yes — node functions
Agent orchestration / supervisor | Yes — supervisor + agent-as-tool delegation | No — build it yourself | Yes — graph-based state machines
Streaming SSE responses | Yes — createSSEResponse(), useAgent() hook | Yes — useChat() from Vercel AI SDK | Partial — framework-agnostic, manual wiring
Agent memory (threads) | Yes — thread-per-session | No — build it yourself | Yes — state persistence
Skills system | Yes — composable prompt + tool bundles | No | No
RAG helper | Yes — ragTool() with pluggable vector store | No — use LangChain or LlamaIndex separately | Partial — manual integration
SQL query tool | Yes — read-only by default, parameterized | No | No
Agent testing framework | Yes — createTestAgent(), mockLLM(), assertion helpers | No — test manually | Partial — some test utilities

COST & BUDGET
Per-call cost tracking | Yes — AICostTracker, model pricing table | No — build it yourself | No
Budget enforcement | Yes — per-agent, per-session, daily limits | No | No
Cost alerts | Yes — configurable thresholds | No | No

MCP
MCP server (JSON-RPC) | Yes — HTTP + stdio transports | No | No
MCP client | Yes — built in | No | No

DEV EXPERIENCE
Dev dashboard | Yes — /__ai: cost trends, tool stats, errors | No | Partial — LangSmith (separate product)
CLI scaffolding | Yes — create-fabrk-app | Yes — create-next-app | No
TypeScript-first | Yes — 24/24 type-check, 0 errors | Yes | Partial — Python-first, TS port less mature

UI & DESIGN
Pre-built UI components | Yes — 109+ components | No — use shadcn/ui, Radix, etc. | No — framework-agnostic
Design system / themes | Yes — 18 themes, runtime switching | No — bring your own | No
Charts (built in) | Yes — 11 chart types | No | No

FULL-STACK
Auth (NextAuth, API keys, MFA) | Yes — @fabrk/auth | Partial — NextAuth separate install | No
Payments (Stripe, Polar) | Yes — @fabrk/payments | No — manual integration | No
Email delivery | Yes — @fabrk/email (Resend) | No — manual integration | No
File storage (S3, R2) | Yes — @fabrk/storage | No — manual integration | No
Security (CSRF, CSP, rate limiting) | Yes — @fabrk/security | Partial — manual or third-party | No

ECOSYSTEM
Production battle-testing at scale | No — early stage | Yes — massive scale | Yes — production use
Community / third-party plugins | No — early ecosystem | Yes — huge ecosystem | Yes — Python ecosystem
Test coverage | Yes — 1,832 tests | Yes | Yes

Legend: Yes = built in; Partial = possible with extra work; No = not available
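The budget rows above are easiest to picture as a small guard that tracks spend and refuses calls past a limit. This is a conceptual sketch only, not FABRK's AICostTracker: the class name, limit shape, and error messages are all illustrative.

```typescript
// Conceptual sketch of per-session and daily budget enforcement.
// Not FABRK's implementation; shapes and names are illustrative.
class BudgetGuard {
  private dailySpend = 0
  private sessionSpend = new Map<string, number>()

  constructor(private limits: { daily: number; perSession: number }) {}

  // Record a call's cost, throwing if either budget would be exceeded.
  charge(sessionId: string, costUsd: number): void {
    const session = this.sessionSpend.get(sessionId) ?? 0
    if (this.dailySpend + costUsd > this.limits.daily) {
      throw new Error('Daily budget exceeded')
    }
    if (session + costUsd > this.limits.perSession) {
      throw new Error('Per-session budget exceeded')
    }
    this.dailySpend += costUsd
    this.sessionSpend.set(sessionId, session + costUsd)
  }
}

const guard = new BudgetGuard({ daily: 5.0, perSession: 0.25 })
guard.charge('session-1', 0.1) // within both limits
```

A production version would also reset the daily counter on a schedule and persist spend across restarts.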

[SAME TASK IN THREE TOOLS]

Defining an AI agent that can search documents and answer questions. This shows the API surface each tool exposes and how much wiring you write yourself.

FABRK

One file. Agent loop, tool calling, budget, and SSE streaming handled by the framework.

agents/docs-assistant/agent.ts
import { defineAgent, defineTool, textResult } from '@fabrk/framework'

// Define a tool
const searchDocs = defineTool({
  name: 'search-docs',
  description: 'Search the documentation for a query',
  parameters: { query: { type: 'string', description: 'Search query' } },
  async execute({ query }) {
    const results = await vectorStore.search(query, { limit: 5 })
    return textResult(results.map(r => r.content).join('\n\n'))
  },
})

// Define the agent — framework handles the loop, streaming, and budget
export default defineAgent({
  model: 'claude-sonnet-4-5-20250514',
  systemPrompt: 'You are a docs assistant. Use search-docs to find relevant information.',
  tools: [searchDocs],
  budget: { daily: 5.0, perSession: 0.25 },
})

// Agent is automatically mounted at /agents/docs-assistant
// Client: import { useAgent } from '@fabrk/framework/client/use-agent'

NEXT.JS + VERCEL AI SDK

More files. You wire the route handler yourself, define tools with Zod, opt in to the tool loop via maxSteps, and there is no built-in budget enforcement.

app/api/chat/route.ts
import { streamText, tool } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import { z } from 'zod'

// You define the route handler yourself
export async function POST(req: Request) {
  const { messages } = await req.json()

  // No built-in budget enforcement — add your own tracking
  // No agent loop — stream ends after one turn unless you build multi-turn

  const result = streamText({
    model: anthropic('claude-sonnet-4-5-20250514'),
    system: 'You are a docs assistant. Use searchDocs to find relevant information.',
    messages,
    tools: {
      searchDocs: tool({
        description: 'Search the documentation for a query',
        parameters: z.object({ query: z.string().describe('Search query') }),
        execute: async ({ query }) => {
          const results = await vectorStore.search(query, { limit: 5 })
          return results.map(r => r.content).join('\n\n')
        },
      }),
    },
    maxSteps: 5, // Tool loop — must opt in manually
  })

  return result.toDataStreamResponse()
}

// Client: useChat() from 'ai/react'
// Cost tracking: build your own middleware or use separate service
// Budget enforcement: build your own
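The cost tracking you are told to build yourself is, at its core, token counts multiplied by a pricing table. A minimal sketch; the model name and per-million-token prices below are illustrative placeholders, not real provider rates.

```typescript
// Minimal per-call cost calculator. Prices per million tokens are
// illustrative placeholders: look up your provider's current rates.
const PRICE_PER_MTOK: Record<string, { input: number; output: number }> = {
  'example-model': { input: 3.0, output: 15.0 },
}

function callCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICE_PER_MTOK[model]
  if (!p) throw new Error(`No pricing for model: ${model}`)
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output
}

// 1,000 input tokens + 500 output tokens:
// (1000/1e6)*3.0 + (500/1e6)*15.0 = 0.003 + 0.0075 = 0.0105 USD
```

In the route handler above you would read token counts from the stream's usage callback and accumulate the result per user or per day.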

LANGGRAPH (TYPESCRIPT)

Graph-based. Powerful orchestration primitives, but no web layer, no streaming HTTP handler, no UI. You build those yourself on top.

agent/graph.ts
import { StateGraph, MessagesAnnotation } from '@langchain/langgraph'
import { ChatAnthropic } from '@langchain/anthropic'
import { AIMessage } from '@langchain/core/messages'
import { tool } from '@langchain/core/tools'
import { z } from 'zod'
import { ToolNode } from '@langchain/langgraph/prebuilt'

// Define the tool
const searchDocs = tool(
  async ({ query }) => {
    const results = await vectorStore.search(query, { limit: 5 })
    return results.map(r => r.content).join('\n\n')
  },
  {
    name: 'search_docs',
    description: 'Search the documentation for a query',
    schema: z.object({ query: z.string().describe('Search query') }),
  }
)

const model = new ChatAnthropic({ model: 'claude-sonnet-4-5-20250514' })
  .bindTools([searchDocs])

// Build the graph — powerful but verbose for simple agents
function shouldContinue({ messages }: typeof MessagesAnnotation.State) {
  // Messages are typed as BaseMessage; cast to AIMessage to read tool_calls
  const last = messages[messages.length - 1] as AIMessage
  return last.tool_calls?.length ? 'tools' : '__end__'
}

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages)
  return { messages: [response] }
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode('agent', callModel)
  .addNode('tools', new ToolNode([searchDocs]))
  .addEdge('__start__', 'agent')
  .addConditionalEdges('agent', shouldContinue)
  .addEdge('tools', 'agent')

export const graph = workflow.compile()

// Now you need to build: HTTP route, streaming, session management,
// budget enforcement, UI, and cost tracking — all separately
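To make the missing streaming piece concrete: Server-Sent Events, which createSSEResponse() and the AI SDK's data stream protocol wrap for you, is just "data:" lines separated by blank lines over a long-lived response. A rough sketch using standard Fetch APIs; the [DONE] sentinel is a common convention, not part of the SSE spec.

```typescript
// Minimal SSE framing: each event is a "data:" line followed by a blank line.
function sseEvent(payload: unknown): string {
  return `data: ${JSON.stringify(payload)}\n\n`
}

// Wrap an async iterator of graph outputs into a streaming Response
// (Fetch API globals, available in modern runtimes such as Node 18+).
function sseResponse(chunks: AsyncIterable<unknown>): Response {
  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    async start(controller) {
      for await (const chunk of chunks) {
        controller.enqueue(encoder.encode(sseEvent(chunk)))
      }
      controller.enqueue(encoder.encode('data: [DONE]\n\n'))
      controller.close()
    },
  })
  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' },
  })
}
```

You would feed graph.stream(...) output through sseResponse from whatever HTTP framework you choose, then still need session management and cost tracking on top.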

[WHEN TO USE FABRK]

[FABRK IS THE RIGHT CHOICE WHEN]
  • AI agents are a core feature — not an afterthought bolted on with a separate library
  • You want cost control baked in — budget limits, per-agent tracking, and alerts from day one
  • You want one stack — routing, SSR, AI agents, UI components, auth, payments, all from one config
  • You are building for AI coding agents — Claude Code, Cursor, Copilot can scaffold entire apps from AGENTS.md docs
  • You need MCP — Model Context Protocol server and client built in
  • Fast iteration on a new product — 109+ components, 18 themes, full-stack packages eliminate boilerplate
[NEXT.JS + AI SDK IS BETTER WHEN]
  • You need i18n routing — next-intl and next/i18n are mature, FABRK has none yet
  • OG images matter — next/og is excellent, FABRK has no equivalent
  • Ecosystem breadth is critical — thousands of community packages, large hiring pool, vast documentation
  • You are already on Next.js — migrating a large production app is high risk with minimal gain
  • AI is a minor feature — one chat box on a content site does not justify switching frameworks
[LANGGRAPH IS BETTER WHEN]
  • Complex multi-agent graphs — LangGraph's state machine model handles branching orchestration better than a linear loop
  • Python-first team — the Python SDK is significantly more mature than the TypeScript port
  • Framework-agnostic backend — you have a separate frontend and only need agent orchestration logic
  • LangSmith observability — deep tracing and eval tooling in the LangChain ecosystem

[HONEST LIMITATIONS]

[WHERE FABRK IS NOT READY]
This is an honest list of gaps, not spin: these are real limitations you will hit if you choose FABRK for the wrong use case.
LIMITATION | IMPACT | WORKAROUND
No i18n routing | Cannot do locale-prefixed URLs (/en/about, /fr/about) out of the box | Manual middleware, or use Next.js instead
No OG image generation | Dynamic Open Graph images require custom implementation | Use a separate service (Cloudinary, Vercel OG) or build a handler
No next/font equivalent | Font optimization and subsetting must be done manually | Self-host fonts and configure Vite asset handling
App Router only — no Pages Router | Cannot adopt FABRK routing on an existing Pages Router codebase | Use FABRK component packages (@fabrk/components, etc.) without the framework runtime
Early ecosystem | Fewer third-party plugins, smaller community, less Stack Overflow coverage | The framework is open source; AGENTS.md docs help AI coding assistants fill gaps
Not production battle-tested at scale | Unknown behavior under extreme load or edge cases not yet encountered | Run load tests before production; start with lower-risk workloads
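The i18n workaround in the first row boils down to middleware that strips a locale prefix before the router sees the path. A sketch of just the parsing step; the locale list is illustrative, and a real middleware would also negotiate Accept-Language and redirect unprefixed URLs.

```typescript
// Parse a locale prefix from a pathname, e.g. /fr/about.
// The locale list is illustrative, not a FABRK API.
const LOCALES = new Set(['en', 'fr', 'de'])
const DEFAULT_LOCALE = 'en'

function splitLocale(pathname: string): { locale: string; path: string } {
  const [, first, ...rest] = pathname.split('/')
  if (first && LOCALES.has(first)) {
    return { locale: first, path: '/' + rest.join('/') }
  }
  return { locale: DEFAULT_LOCALE, path: pathname }
}
```

The middleware would call this once per request, store the locale on the request context, and rewrite the URL to the unprefixed path before routing.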

[ARCHITECTURE SUMMARY]

The fundamental difference is where AI lives in each stack.

how each stack is assembled
# Next.js + Vercel AI SDK
next                 — routing, SSR, RSC
ai (Vercel AI SDK)   — streaming text, useChat hook
langchain/llamaindex — RAG (separate install)
stripe / resend / …  — payments, email (separate installs, manual wiring)
shadcn/ui            — UI components (separate install, copy-paste)
[No built-in cost tracking, budget, MCP, or agent testing]

# LangGraph
@langchain/langgraph  — agent orchestration (state machines)
@langchain/core       — tools, messages
[No web layer, no UI, no routing — bring your own everything]

# FABRK
@fabrk/framework      — Vite 7 runtime, routing, SSR, RSC streaming
                         defineAgent(), defineTool(), MCP server + client
                         agent loop, budget enforcement, cost tracking
                         dev dashboard (/__ai), agent testing framework
@fabrk/components     — 109+ UI components, 11 chart types, 18 themes
@fabrk/auth           — NextAuth, API keys (SHA-256), MFA (TOTP)
@fabrk/payments       — Stripe, Polar, Lemon Squeezy adapters
@fabrk/email          — Resend + console adapter, 4 templates
@fabrk/storage        — S3, Cloudflare R2, local filesystem
@fabrk/security       — CSRF, CSP, rate limiting, audit logging, GDPR
@fabrk/store-prisma   — 7 Prisma store adapters for production persistence

[AI AGENT LAYER]

How FABRK's agent primitives compare against the dedicated agent frameworks — LangChain JS, Mastra, and Vercel AI SDK — on a feature-by-feature basis.

FEATURE | FABRK | LANGCHAIN JS | MASTRA | VERCEL AI SDK
File-system routing + SSR | Yes — built-in Vite 7 router | No — no web layer | No — no web layer | Partial — Next.js only
Agent definition | Yes — defineAgent() | Yes — AgentExecutor | Yes — Agent class | No — no agent primitive
Built-in memory | Yes — thread + semantic + long-term | Yes — via LangChain memory | Yes — built-in memory layer | No — build your own
Workflows (linear) | Yes — defineWorkflow() | Yes — LCEL chains | Yes — workflow DSL | No — not supported
Cyclic workflows | Yes — defineStateGraph() | Yes — LangGraph state machines | No — linear only | No — not supported
Multi-agent orchestration | Yes — agentAsTool + supervisor + network | Yes — multi-agent graph | Yes — agent networks | No — not supported
MCP client/server | Yes — both, HTTP + stdio | Partial — client only | Yes — both | Partial — client only
Built-in evals | Yes — defineEval + scorers + MockLLM | No — LangSmith (separate product) | Yes — built-in eval framework | No — not supported
Guardrails | Yes — input + output + parallel async | No — manual implementation | Yes — built-in guardrails | No — not supported
Durable agents (checkpoint) | Yes — checkpoint/resume/rollback | No — not built in | Yes — durable execution | No — not supported
OTel tracing | Yes — auto-instrumented | No — LangSmith only | Yes — built-in OTel | Yes — AI SDK telemetry
UI components | Yes — 109+ components, 18 themes | No — no UI layer | No — no UI layer | No — no UI layer
Voice (TTS/STT/realtime) | Yes — built-in /__ai/tts, /__ai/stt, /__ai/realtime | No — not supported | No — not supported | No — not supported
A2A protocol | Yes — agent-to-agent via agentAsTool | No — not supported | Yes — supported | No — not supported

Legend: Yes = built in; Partial = possible with extra work; No = not available
[THE KEY DIFFERENTIATOR]

FABRK is the only framework that combines a full-stack Vite 7 runtime — routing, SSR, file-system conventions — with a complete AI agent layer. LangChain JS and Mastra are powerful orchestration libraries, but they have no web layer: you still need to reach for Next.js or Express, wire up a UI library, and stitch cost tracking together yourself. Vercel AI SDK gives you great streaming primitives on top of Next.js, but has no agent primitive, no memory, no workflows, and no evals. With FABRK you do not glue three separate tools together — the runtime, the agent layer, and the UI components ship as one coherent stack.