AGENTS.md: How Next.js 16.2 Gives AI Agents Perfect Framework Knowledge


TL;DR

  • Next.js 16.2 ships AGENTS.md by default — it points AI agents to version-matched docs bundled in node_modules/next/dist/docs/.
  • Vercel’s evals: 100% pass rate with AGENTS.md, vs 53% baseline and 79% with skills.
  • GPT 5.3, Gemini 3.1 Pro, Claude Opus 4.6, and Claude Sonnet 4.6 all hit 100%.
  • Already have a project? Add a short AGENTS.md to your root (steps below). On older versions: npx @next/codemod@latest agents-md.
  • Passive context beats on-demand retrieval — agents often don’t realize they should look things up.

The Problem

If you’ve used AI agents with Next.js recently, you’ve probably run into this: the agent generates code using APIs that don’t exist, or misses newer ones like 'use cache', connection(), or forbidden(). Training data goes stale, and agents don’t know what they don’t know.

I kept running into this myself. The agent would confidently write a getServerSideProps in an App Router project, or ignore proxy.ts entirely because it had never seen it.

Next.js 16.2 fixes this with AGENTS.md — and it’s almost ridiculously simple.

What Is AGENTS.md?

A markdown file at the root of your project. That’s it.

Cursor, Claude Code, GitHub Copilot, Codex — they all read AGENTS.md automatically when they start a session. In Next.js 16.2, the next package now bundles the full documentation as Markdown files at node_modules/next/dist/docs/. The docs always match your installed version.

AGENTS.md tells the agent one thing: read these local docs before writing any code. No network requests, no version mismatches, no stale training data.

For Claude Code users, you can also add a CLAUDE.md that imports it with @AGENTS.md.

The Numbers

Jude Gao at Vercel ran evals comparing two approaches — skills (on-demand retrieval) vs AGENTS.md (passive context). The test suite targeted Next.js 16 APIs not in model training data.

| Configuration | Pass Rate | vs Baseline |
| --- | --- | --- |
| Baseline (no docs) | 53% | |
| Skill (default behavior) | 53% | +0pp |
| Skill with explicit instructions | 79% | +26pp |
| AGENTS.md | 100% | +47pp |

The Build/Lint/Test breakdown tells the same story:

| Configuration | Build | Lint | Test |
| --- | --- | --- | --- |
| Baseline | 84% | 95% | 63% |
| Skill (default) | 84% | 89% | 58% |
| Skill with instructions | 95% | 100% | 84% |
| AGENTS.md | 100% | 100% | 100% |

The wildest part: in 56% of cases, the skill was available but the agent never used it. It had the docs right there and just… didn’t look.

That’s exactly why passive context wins. There’s no decision point — the agent doesn’t need to choose to look something up. The info is already in context every turn.

Cross-Agent Results

Vercel’s public eval dashboard tested 21 evals across 12 models:

| Model | Agent | Base Rate | With AGENTS.md |
| --- | --- | --- | --- |
| GPT 5.3 Codex | Codex | 86% | 100% |
| Cursor Composer 2.0 | Cursor | 76% | 95% |
| Gemini 3.1 Pro | Gemini CLI | 76% | 100% |
| Claude Opus 4.6 | Claude Code | 71% | 100% |
| Claude Sonnet 4.6 | Claude Code | 67% | 100% |
| Claude Sonnet 4.5 | Claude Code | 57% | 86% |

Four models from three providers all hit 100%. The eval suite is open source if you want to run it yourself.

Adding AGENTS.md to Your Existing Project

New create-next-app projects get this automatically. For existing projects, here’s what to do.

Next.js 16.2+

The docs are already bundled. You just need the file.

Step 1: Make sure you’re on 16.2+:

```
npx next --version
```

Step 2: Create AGENTS.md in your project root:

```
<!-- BEGIN:nextjs-agent-rules -->
# Next.js: ALWAYS read docs before coding

Before any Next.js work, find and read the relevant doc in
`node_modules/next/dist/docs/`. Your training data is outdated — the docs
are the source of truth.
<!-- END:nextjs-agent-rules -->
```

The comment markers delimit the Next.js section. You can add your own rules outside them — future updates only touch what’s between the markers.
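If you ever script updates to this file yourself, the marker pair makes the managed section easy to swap out mechanically. Here is a minimal TypeScript sketch of that idea — the `replaceManagedSection` helper is hypothetical, not part of Next.js tooling:

```typescript
// Hypothetical helper (not part of Next.js): swap out the managed
// section between the BEGIN/END markers while leaving any custom
// rules outside the markers untouched.
const BEGIN = "<!-- BEGIN:nextjs-agent-rules -->";
const END = "<!-- END:nextjs-agent-rules -->";

function replaceManagedSection(content: string, newSection: string): string {
  const start = content.indexOf(BEGIN);
  const end = content.indexOf(END);
  if (start === -1 || end === -1) {
    // No markers yet: prepend a fresh managed block above existing content.
    return `${BEGIN}\n${newSection}\n${END}\n\n${content}`;
  }
  // Keep everything outside the markers; replace only what is between them.
  return (
    content.slice(0, start + BEGIN.length) +
    `\n${newSection}\n` +
    content.slice(end)
  );
}
```

This is the same contract the official updates follow: everything between the markers is fair game, everything outside is yours.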

Step 3 (optional): Create CLAUDE.md for Claude Code:

```
@AGENTS.md
```

Step 4: Add project-specific instructions outside the markers:

```
<!-- BEGIN:nextjs-agent-rules -->
# Next.js: ALWAYS read docs before coding

Before any Next.js work, find and read the relevant doc in
`node_modules/next/dist/docs/`. Your training data is outdated — the docs
are the source of truth.
<!-- END:nextjs-agent-rules -->

## My project rules

- Tailwind CSS v4 with `@theme` configuration
- Data fetching in Server Components only
- API routes use Zod for validation
```

Done. Next time an agent opens your project, it reads the bundled docs first.

Next.js 16.1 and Earlier

Docs aren’t bundled in older versions, so use the codemod:

```
npx @next/codemod@latest agents-md
```

This detects your version, downloads matching docs to .next-docs/, and injects a compressed 8KB index into AGENTS.md. The agent reads the index and fetches specific files from .next-docs/ as needed.
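Conceptually, this is an index-then-fetch pattern: the small index lives in context at all times, and the full doc is read from disk only when needed. A rough TypeScript sketch of that lookup — the index shape and file paths here are invented for illustration, not the codemod's actual format:

```typescript
import * as path from "path";

// Hypothetical compressed index: topic slug → relative doc path.
// The real index the codemod injects into AGENTS.md may look different.
const docIndex: Record<string, string> = {
  "use-cache": "api-reference/use-cache.md",
  "connection": "api-reference/connection.md",
};

// Resolve a topic to a file under the downloaded docs directory;
// the agent would then read that file on demand.
function resolveDoc(topic: string, docsRoot = ".next-docs"): string | null {
  const rel = docIndex[topic];
  return rel ? path.join(docsRoot, rel) : null;
}
```

The trade-off vs the 16.2+ setup is clear: the index keeps the always-in-context footprint small, but reintroduces the decision point the eval data shows agents sometimes skip.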

Other AI Features in 16.2

AGENTS.md is the big one, but 16.2 also shipped:

  • Browser Log Forwarding — client-side errors now show up in the terminal. Agents can finally debug browser issues without a browser. Configure with `logging.browserToTerminal` in next.config.ts.
  • Dev Server Lock File — `.next/dev/lock` prevents agents from accidentally starting duplicate dev servers. They get an actionable error with the PID to kill.
  • Experimental Agent DevTools — `@vercel/next-browser` gives agents terminal access to React component trees, PPR shell analysis, screenshots, and network activity.
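For the log forwarding option, a minimal next.config.ts sketch — the option name comes from the release, but the exact value shape here is an assumption, so check the docs for your installed version:

```typescript
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  logging: {
    // Forward client-side console output to the dev server terminal.
    // Assumed boolean shape — verify against your version's docs.
    browserToTerminal: true,
  },
};

export default nextConfig;
```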

Plus the performance stuff: up to 350% faster RSC payload deserialization and ~87% faster next dev startup. Full details in the 16.2 release post.

My Take

I think AGENTS.md is going to become a standard across frameworks. The spec already exists as an open convention. The pattern is dead simple — bundle docs with your package, add a root-level file that points agents to them.

What I find interesting is the core finding from Vercel’s research: don’t make agents decide when to look things up. Just give them the context. It goes against the instinct to build clever retrieval systems, but the data is clear.

If you’re on Next.js 16.2, go add the file. It takes 30 seconds and every AI tool you use will immediately benefit from it.