
AGENTS.md: The Open Standard That's Changing How AI Coding Agents Work With Your Codebase

The Problem Nobody Was Talking About
Every developer using AI coding assistants has hit the same wall.
You open Claude Code, Cursor, or GitHub Copilot in a new project. The AI starts making suggestions — but they're generic. It doesn't know your naming conventions. It doesn't know you're using a custom authentication pattern. It doesn't know that the `/legacy` folder should never be touched. It doesn't know your team prefers functional components over class components, or that you have a strict rule about never committing directly to `main`.
So you explain it. Again. And again. Every session. Every new team member who onboards an AI assistant. Every time you switch tools.
The README exists — but it's written for humans. It explains what the project does, how to install it, how to contribute. It's not written to tell an AI agent how to behave inside your codebase.
AGENTS.md is the fix. And it's simpler than you'd expect.
What Is AGENTS.md?
AGENTS.md is an open-source standard format — a dedicated file that lives in your repository and gives AI coding agents the project-specific context they need to work effectively with your code.
Think of it as a README for AI assistants. While your README explains your project to humans, your AGENTS.md explains your project to the AI tools your team uses every day.
The format is:
- Plain Markdown — no special syntax, no new tooling to learn
- No required fields — you write what's relevant to your project
- Hierarchical — supports monorepos with nested AGENTS.md files per directory
- Open standard — stewarded by the Agentic AI Foundation under the Linux Foundation
It's already integrated with every major AI coding tool: Claude Code, GitHub Copilot, Cursor, Devin, and more. And it's been adopted by over 60,000 open-source projects.
Why This Matters: The Separation of Concerns
The core insight behind AGENTS.md is elegant: AI assistants and humans need fundamentally different types of documentation.
Your README answers: What does this project do? How do I install it? How do I contribute?
Your AGENTS.md answers: How should an AI agent behave inside this codebase? What are the rules? What are the patterns? What should it never do?
These are completely different questions. Mixing them into a single README creates a document that serves neither audience well. AGENTS.md separates the concerns cleanly.
Here's what a typical AGENTS.md might contain:
```markdown
# AGENTS.md

## Project Overview
This is a multi-tenant SaaS application built with Next.js 14 and PostgreSQL.
Each tenant has isolated data — never query across tenant boundaries.

## Code Style
- Use TypeScript strict mode — no `any` types
- Functional components only — no class components
- Use Zod for all input validation
- All API routes must include rate limiting middleware

## Architecture Rules
- Business logic lives in `/lib/services` — never in API routes or components
- Database queries go through `/lib/db` — never use Prisma directly in components
- The `/legacy` directory is read-only — do not modify these files

## Testing
- Write tests for all new service functions
- Use Vitest, not Jest
- Mock external APIs in tests — never make real HTTP calls

## What NOT to Do
- Never commit secrets or API keys
- Never modify migration files after they've been applied
- Never bypass the authentication middleware
```
That's it. Plain Markdown. No schema to learn. No tooling to install. Just write what your AI agent needs to know.
The Ecosystem: Who's Already Using It
The adoption story for AGENTS.md is impressive. Integration with the tools developers actually use is what makes a standard stick — and AGENTS.md has it.
Claude Code reads AGENTS.md automatically when you open a project. The instructions become part of Claude's context for every conversation in that workspace.
GitHub Copilot uses AGENTS.md to inform its suggestions across your entire codebase, not just the file you're currently editing.
Cursor picks up AGENTS.md as project-level rules that apply to all AI interactions in the workspace.
Devin (the autonomous AI software engineer) uses AGENTS.md to understand project constraints before it starts making changes.
The pattern is consistent: open the project, the AI reads AGENTS.md, the AI behaves according to your rules. No manual configuration. No copy-pasting instructions into every chat window.
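The tool-side behavior described above can be sketched in a few lines. This is an illustrative mock, not any tool's actual implementation; the function name and the idea of prepending the file to the system prompt are assumptions for the sake of the example:

```python
from pathlib import Path

def load_agent_context(repo_root: str) -> str:
    """Return the AGENTS.md contents to prepend to an agent's system prompt.

    Illustrative only: each tool injects this context in its own way.
    """
    agents_file = Path(repo_root) / "AGENTS.md"
    if not agents_file.is_file():
        return ""  # no project rules; the agent falls back to generic behavior
    return agents_file.read_text(encoding="utf-8")
```

The point is how little machinery is involved: the contract is just "a Markdown file at a known location," which is why so many tools could adopt it quickly.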
Monorepo Support: Hierarchical File Resolution
One of the more technically thoughtful aspects of AGENTS.md is how it handles complex project structures.
In a monorepo, you might have:
```
/AGENTS.md            ← Global rules for the entire repo
/packages/
  /api/
    AGENTS.md         ← API-specific rules (Node.js, REST conventions)
  /web/
    AGENTS.md         ← Frontend-specific rules (React, CSS conventions)
  /mobile/
    AGENTS.md         ← Mobile-specific rules (React Native, platform guidelines)
```
AI agents resolve these hierarchically — global rules apply everywhere, package-specific rules apply within their scope. This means your frontend team can document their React conventions without polluting the API team's context, and vice versa.
For large engineering organizations managing complex codebases, this is a significant quality-of-life improvement.
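Hierarchical resolution can be sketched as a directory walk: collect every AGENTS.md between the repo root and the file being edited, then order them from most general to most specific. This is a hypothetical illustration of the merge order, not a specification; actual precedence details vary by tool:

```python
from pathlib import Path

def resolve_agents_files(repo_root: str, target_file: str) -> list[str]:
    """Collect AGENTS.md contents from the repo root down to the directory
    of the file being edited, most general first.

    Hypothetical sketch of hierarchical resolution; precedence details
    vary between tools.
    """
    root = Path(repo_root).resolve()
    directory = Path(target_file).resolve().parent
    chain: list[str] = []
    # Walk upward from the target's directory to the repo root,
    # collecting any AGENTS.md files along the way.
    while True:
        agents = directory / "AGENTS.md"
        if agents.is_file():
            chain.append(agents.read_text(encoding="utf-8"))
        if directory == root or directory == directory.parent:
            break
        directory = directory.parent
    # Reverse so global rules come first and nested rules can refine them.
    return list(reversed(chain))
```

With the monorepo layout above, editing a file in `/packages/api` would yield the root rules followed by the API-specific rules, while the web and mobile rules never enter the context at all.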
Governance: Why the Linux Foundation Matters
Standards live or die by their governance. A standard controlled by a single company can be deprecated, changed arbitrarily, or abandoned. A standard with strong, neutral governance has staying power.
AGENTS.md is stewarded by the Agentic AI Foundation, operating under the Linux Foundation — the same organization that governs Linux, Kubernetes, Node.js, and dozens of other foundational open-source projects.
This matters for enterprise adoption. When a CTO asks “will this standard still exist in five years?”, the Linux Foundation affiliation is a credible answer. It signals that AGENTS.md is infrastructure, not a product.
The Simplicity Advantage
There's a temptation in software to over-engineer standards. Add a schema. Require specific fields. Build a validation layer. Create a DSL.
AGENTS.md resists all of this. It's just Markdown. And that's the right call.
The barrier to entry is essentially zero. If you can write a README, you can write an AGENTS.md. There's nothing to install, nothing to configure, no new syntax to learn. You create a file, write what your AI agent needs to know, and commit it.
This simplicity is also the standard's biggest risk — effectiveness depends entirely on:
- How well each AI agent actually reads and uses the file — implementation quality varies across tools
- How well developers write the instructions — a poorly written AGENTS.md is worse than none
- Keeping it updated — like all documentation, it becomes a liability if it drifts from reality
What Good Looks Like: Writing an Effective AGENTS.md
Not all AGENTS.md files are created equal. Here's what separates a useful one from a useless one.
Be Specific About Architecture
Weak: Follow our architecture patterns.
Strong:
```markdown
## Architecture
- All business logic lives in `/src/services` — never put business logic in controllers or routes
- Controllers should only handle HTTP concerns (parsing, validation, response formatting)
- Use the Repository pattern for all database access — never query the database directly from services
```
Explain the Why, Not Just the What
Weak: Don't use `any` in TypeScript.
Strong:
```markdown
## TypeScript Rules
- No `any` types — we have strict runtime validation requirements and `any` defeats our type safety guarantees
- Use `unknown` when the type is genuinely unknown, then narrow with type guards
- All external API responses must be validated with Zod before use
```
Document What NOT to Do
AI agents are optimistic — they'll try to help even when they shouldn't. Explicit prohibitions are valuable:
```markdown
## Do Not
- Modify files in `/legacy` — this code is frozen pending migration
- Add new npm dependencies without noting them in your PR description
- Use `console.log` in production code — use the logger in `/src/lib/logger`
- Write raw SQL — use the ORM query builder
```
Include Context About Your Team
```markdown
## Team Context
- We're a small team (4 engineers) — prefer simple, readable solutions over clever ones
- We do weekly code reviews — write code that's easy to review in small chunks
- We're migrating from REST to GraphQL — new endpoints should use GraphQL, existing REST endpoints should not be changed
```
Starter Templates by Stack
Python / Django
```markdown
# AGENTS.md

## Stack
Python 3.12, Django 5.0, PostgreSQL, Redis, Celery

## Code Style
- Follow PEP 8 strictly
- Use type hints everywhere — we run mypy in CI
- Docstrings for all public functions and classes (Google style)

## Django Conventions
- Fat models, thin views — business logic belongs in models or service classes
- Use Django's ORM — no raw SQL except in migrations
- All views must use class-based views
- Use Django REST Framework for all API endpoints

## Testing
- pytest, not unittest
- Minimum 80% coverage — CI will fail below this
- Use factory_boy for test fixtures, not fixtures files
- Mock external services with the responses library
```
Node.js / React
```markdown
# AGENTS.md

## Stack
Node.js 20, React 18, TypeScript 5, PostgreSQL, Prisma

## TypeScript
- Strict mode enabled — no `any`, no `@ts-ignore`
- Prefer `interface` over `type` for object shapes
- Use Zod for runtime validation of all external data

## React Conventions
- Functional components only
- Custom hooks for all stateful logic — no logic in components
- Use React Query for server state, Zustand for client state
- CSS Modules for styling — no inline styles, no styled-components

## API
- REST API following OpenAPI 3.0 spec
- All endpoints documented in `/docs/api`
- Use middleware for auth, rate limiting, and logging — never inline these
```
Rust
```markdown
# AGENTS.md

## Stack
Rust 1.77, Tokio async runtime, SQLx, Axum

## Code Style
- Run `cargo clippy` before committing — fix all warnings
- Use `thiserror` for error types — no `unwrap()` in library code
- Prefer the `?` operator over explicit `match` for error propagation

## Architecture
- Hexagonal architecture — domain logic has no dependencies on infrastructure
- All database queries in `/src/infrastructure/db`
- Use trait objects for dependency injection in tests

## Performance
- Profile before optimizing — include benchmark results in PRs that claim performance improvements
- Avoid unnecessary allocations in hot paths
- Document any unsafe code with a safety comment explaining the invariants
```
What's Still Missing: The Opportunity
AGENTS.md is a strong foundation, but the ecosystem around it is still early. Here's what would make it significantly more powerful:
A Validation Tool — A linter that checks your AGENTS.md for common issues: instructions that are too vague, contradictory rules, references to files that don't exist, missing sections that are commonly useful.
Quality Examples from Real Projects — A curated gallery of exceptional AGENTS.md files from popular open-source projects. Seeing how the React team or the Kubernetes team writes their AI agent instructions would be enormously valuable.
Impact Metrics — Case studies with real numbers: “After adding AGENTS.md, our AI-generated PRs required 40% fewer review cycles.” These numbers would accelerate adoption dramatically.
Integration Guides with Screenshots — Each supported AI tool should have a dedicated guide showing exactly how it reads and uses AGENTS.md, with screenshots and setup videos.
A Community Hub — A place where developers can share their AGENTS.md files, rate them, and learn from each other. The best practices for writing AI agent instructions are still being discovered.
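Of these gaps, the validation tool is the most tractable. A first pass could be built today with nothing more than string matching and filesystem checks. The heuristics below are invented for illustration; a real linter would need a community-agreed rule set:

```python
import re
from pathlib import Path

# Invented heuristics, illustrative only.
VAGUE_PHRASES = ("follow best practices", "write good code", "use common sense")

def lint_agents_md(path: str) -> list[str]:
    """Flag vague instructions and references to paths that don't exist."""
    agents = Path(path)
    text = agents.read_text(encoding="utf-8")
    problems = []
    lowered = text.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lowered:
            problems.append(f"vague instruction: '{phrase}'")
    # Backtick-quoted absolute paths like `/lib/services` should exist
    # relative to the directory containing the AGENTS.md file.
    for ref in re.findall(r"`(/[\w./-]+)`", text):
        if not (agents.parent / ref.lstrip("/")).exists():
            problems.append(f"referenced path does not exist: {ref}")
    return problems
```

Even this crude version would catch the most common failure mode: an AGENTS.md that references directories which were renamed or deleted months ago.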
Why This Standard Has Staying Power
Some standards are clever solutions looking for a problem. AGENTS.md is the opposite — it's a simple solution to a problem that every developer using AI coding tools experiences daily.
The timing is right. AI coding assistants have crossed the mainstream threshold. Claude Code, Copilot, and Cursor are no longer experimental tools — they're part of the daily workflow for millions of developers.
The governance is right. Linux Foundation stewardship means this isn't going away when a startup pivots or gets acquired.
The simplicity is right. Zero barrier to entry means adoption can happen bottom-up — individual developers can add AGENTS.md to their projects without waiting for organizational approval.
The ecosystem is right. Integration with all major AI coding tools means the file actually gets used, not just committed and forgotten.
Getting Started in 5 Minutes
- Create the file — add `AGENTS.md` to your project root
- Write your rules — start with architecture, code style, and what NOT to do
- Commit it — your AI coding tools will pick it up automatically
- Iterate — update it when you notice your AI agent making the same mistake twice
That's it. No installation. No configuration. No new tools.
The best AGENTS.md is the one that exists. Start simple, add more as you discover what your AI agent needs to know.
Bottom Line
AGENTS.md is one of those rare standards that's both technically sound and practically useful. It solves a real problem — the lack of a standard place for AI coding agents to find project-specific context — with the simplest possible solution: a Markdown file.
The 60,000+ project adoption and Linux Foundation governance suggest this isn't a flash in the pan. It's infrastructure for the AI-assisted development era.
If you're using any AI coding tool and you don't have an AGENTS.md in your project, you're leaving significant value on the table. Your AI assistant is working without the context it needs to be genuinely useful — and you're paying for that in repeated corrections, generic suggestions, and AI-generated code that doesn't fit your patterns.
Add the file. Write the rules. Let your AI agent actually understand your codebase.
Sources: AGENTS.md official documentation, Agentic AI Foundation, Linux Foundation, Claude Code documentation, GitHub Copilot documentation, Cursor documentation.