March 16, 2026

The Agent-Native Repo: Why AGENTS.MD is the New Standard | Harness Blog

This is part 1 of a five-part series on building production-grade AI engineering systems.

Across this series, we will cover:

  1. How to make your repository agent-native
  2. How to prevent context decay in long AI sessions
  3. How to orchestrate tools, subagents, and external systems
  4. How to survive the multi-model reality with gateway layers
  5. How to measure and enforce quality with AI evals

Most teams experimenting with AI coding agents focus on prompts.

That is the wrong starting point.

Before you optimize how an agent thinks, you must standardize what it sees.

AI agents do not primarily fail because of reasoning limits. They fail because of environmental ambiguity. They are dropped into repositories designed exclusively for humans and expected to infer structure, conventions, workflows, and constraints from scattered documentation.

If AI agents are contributors, then the repository itself must become agent-native.

The foundational step is introducing a standardized instruction layer that every agent can read.

That layer is AGENTS.md.

The Real Problem: Context Silos

Every coding agent needs instructions. Where those instructions live depends on the tool.

One IDE reads from a hidden rules directory.
Another expects a specific markdown file.
Another uses proprietary configuration.

This fragmentation creates three systemic problems.

1. Tool-dependent prompt locations

Instructions are locked into IDE-specific paths. Change tools and you lose institutional knowledge.

2. Tribal knowledge never gets committed

When a developer discovers the right way to guide an agent through a complex module, that guidance often lives in chat history. It never reaches version control. It never becomes part of the repository’s operational contract.

3. Inconsistent agent behavior

Two engineers working on the same codebase but using different agents receive different outputs because the instruction surfaces are different.

The repository stops being the single source of truth.

For human collaboration, we solved this decades ago with READMEs, contribution guides, and ownership files. For AI collaboration, we are only beginning to standardize.

What AGENTS.md Is

AGENTS.md is a simple, open, tool-agnostic format for providing coding agents with project-specific instructions. It is now stewarded within the open agentic ecosystem under the Agentic AI Foundation and has seen wide industry adoption.

It is not a replacement for README.md. It is a complement.

Design principle:

  • README.md is for humans.
  • AGENTS.md is for agents.

Humans need quick starts, architecture summaries, and contribution policies.

Agents need deterministic build commands, exact test execution steps, linter requirements, directory boundaries, prohibited patterns, and explicit assumptions.
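As a sketch, a minimal root AGENTS.md covering those needs might look like this (the commands and paths are illustrative, not prescriptive):

```markdown
# AGENTS.md

## Project Overview
- Monorepo for the billing service; core logic lives in `src/`, tests in `tests/`.

## Build and Test
- Build: `make build`
- Test: `make test` (must pass before any commit)
- Lint: `make lint`

## Constraints
- Do not modify files under `generated/`.
- Public API signatures in `src/api/` must not change without review.
```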

Separating these concerns provides:

  • A predictable location for agent instructions
  • Cleaner, human-focused READMEs
  • Reduced duplication
  • Cross-tool compatibility

Several major open source repositories have already adopted AGENTS.md. The pattern is spreading because it addresses a real structural gap.

Recent evaluations have also shown that explicit repository-level agent instructions outperform loosely defined “skills” systems in practical coding scenarios. The implication is clear. Context must be explicit, not implied.

A Real Example: OpenAI’s Agents SDK

A practical example of this pattern can be seen in the OpenAI Agents Python SDK repository.

The project contains a root-level AGENTS.md file that defines operational instructions for contributors and AI agents working on the codebase.

Instead of leaving workflows implicit, the repository encodes them directly into agent-readable instructions. For example, the file requires contributors to run verification checks before completing changes:

```
Run `$code-change-verification` before marking work complete.
```

It also explicitly scopes where those rules apply, such as changes to core source code, tests, examples, or documentation within the repository.

Rather than expecting an agent to infer these processes from scattered documentation, the project defines them as explicit instructions inside the repository itself.

This is the core idea behind AGENTS.md.

Operational guidance that would normally live in prompts, chat history, or internal knowledge becomes version-controlled infrastructure.

Designing an Effective Root AGENTS.md

A root AGENTS.md should be concise. Under 300 lines is a good constraint. It should be structured, imperative, and operational.

A practical structure includes four required sections.

1. Project Overview

This section establishes the mental model.

Include:

  • Project purpose and high-level architecture
  • Directory structure and key components
  • Technology stack and critical dependencies

Agents are pattern matchers. The clearer the structural map, the fewer incorrect assumptions they make.

2. Build, Test, and Push Instructions

This section must be precise.

Include:

  • Exact build commands
  • Test execution commands
  • Linter and formatting requirements
  • Pre-push validation steps

Avoid vague language. Replace “run tests” with explicit commands.

Agents execute what they are told. Precision reduces drift.
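For instance, instead of "run tests," the section can spell out exact commands. The sketch below assumes a Python project using pytest and ruff; substitute your own toolchain:

```markdown
## Build, Test, and Push
- Install dependencies: `pip install -e ".[dev]"`
- Run the full test suite: `pytest tests/ -x -q`
- Lint and format check: `ruff check . && ruff format --check .`
- Before pushing: run all three steps above; do not push if any step fails.
```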

3. Development Workflow

This section defines conventions.

Rather than bloating AGENTS.md, reference a separate coding standards document for:

  • Naming conventions
  • Logging patterns
  • Security requirements
  • Repository-specific architectural constraints

The root file should stay focused while linking to deeper guidance.

4. Common Pitfalls and Prohibited Patterns

This is where most teams underinvest.

Document:

  • Anti-patterns specific to the codebase
  • Deprecated APIs
  • Incorrect assumptions agents commonly make
  • Areas where public APIs must not change

Agents tend to repeat statistically common patterns. Your codebase may intentionally diverge from those patterns. This section is where you enforce that divergence.

Think of this as defensive programming for AI collaboration.
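A hypothetical pitfalls section might read as follows (all identifiers here are invented for illustration):

```markdown
## Common Pitfalls
- Do NOT use the deprecated `LegacyHttpClient`; use `HttpClientV2` instead.
- Database access goes through the repository layer only; never import the
  ORM directly in request handlers.
- The public API in `api/v1/` is frozen; additive changes go in `api/v2/`.
```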

Hierarchical AGENTS.md: Scaling Context Correctly

Large repositories require scoped context.

A single root file cannot encode all module-specific constraints without becoming noisy. The solution is hierarchical AGENTS.md files.

Structure example:

```
root/
├── AGENTS.md
├── module-a/
│   └── AGENTS.md
└── module-b/
    ├── AGENTS.md
    └── sub-feature/
        └── AGENTS.md
```

Agents automatically read nested AGENTS.md files when operating inside those directories. Context scales from general to specific.

Root defines global conventions.
Module-level files define local invariants.
Feature-level files encode edge-case constraints.

This reduces irrelevant context and increases precision.

It also mirrors how humans reason about codebases.
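A module-level file stays narrow, encoding only what is local to that directory. Again, the names below are illustrative:

```markdown
# module-b/AGENTS.md

- This module must remain dependency-free: no imports from `module-a/`.
- All functions here must be side-effect free; shared state lives in
  `module-b/store/`.
- Run `make test-module-b` after any change in this directory.
```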

Compatibility Across Tools

A standard file location matters.

Some agents natively read AGENTS.md. Others require simple compatibility mechanisms such as symlinks that mirror AGENTS.md into tool-specific filenames.

The key idea is a single source of truth.

Do not maintain multiple divergent instruction files. Normalize on AGENTS.md and bridge outward if needed.

The goal is repository-level portability. Change tools without losing institutional knowledge.
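One way to bridge outward is with symlinks, so that tool-specific filenames all resolve to AGENTS.md. The sketch below uses `CLAUDE.md` and `.cursorrules` as examples of tool-specific names; check what your tools actually read before adopting it:

```shell
#!/bin/sh
set -e
# Work in a throwaway directory for the demonstration.
cd "$(mktemp -d)"

# The single source of truth.
printf '# Project instructions\n' > AGENTS.md

# Mirror it into tool-specific filenames (example names only).
ln -s AGENTS.md CLAUDE.md
ln -s AGENTS.md .cursorrules

# Every name now resolves to the same content.
cat CLAUDE.md
```

Because these are symlinks, edits to AGENTS.md propagate automatically; there is never a second copy to drift.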

Best Practices for Agent Instructions

To make AGENTS.md effective, follow these constraints.

Write imperatively.
Use direct commands. Avoid narrative descriptions.

Avoid redundancy.
Do not duplicate README content. Reference it.

Keep it operational.
Focus on what the agent must do, not why the project exists.

Update it as the code evolves.
If the build process changes, AGENTS.md must change.

Treat violations as signal.
If agents consistently ignore documented rules, either the instruction is unclear or the file is too long and context is being truncated. Reset sessions and re-anchor.

AGENTS.md is not static documentation. It is part of the execution surface.

Ownership and Governance

If agents are contributors, then their instruction layer requires ownership.

Each module-level AGENTS.md should be maintained by the same engineers responsible for that module. Changes to these files should follow the same review rigor as code changes.
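On GitHub, for example, a CODEOWNERS entry can route AGENTS.md changes to the module's owners so review is enforced automatically (team names here are placeholders):

```
# .github/CODEOWNERS
/AGENTS.md              @org/platform-leads
/module-a/AGENTS.md     @org/module-a-team
/module-b/AGENTS.md     @org/module-b-team
```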

Instruction drift is as dangerous as code drift.

Version-controlled agent guidance becomes part of your engineering contract.

Why Teams Are Adopting AGENTS.md

Repositories across the industry have begun implementing AGENTS.md as a first-class artifact. Large infrastructure projects, developer tools, SDKs, and platform teams are standardizing on this pattern.

The motivation is consistent:

  • Eliminate tool lock-in
  • Preserve institutional knowledge
  • Reduce hallucination caused by ambiguous workflows
  • Enable predictable agent behavior across environments

AGENTS.md transforms prompt engineering from a personal habit into a shared, reviewable, versioned discipline.

Vercel published evaluation results showing that repository-level AGENTS.md context outperformed tool-specific skills in agent benchmarks.

Why This Matters Now

AI agents are rapidly becoming embedded in daily development workflows.

Without a standardized instruction layer:

  • Output quality varies by developer setup
  • Context decays across sessions
  • Hidden assumptions leak into production code
  • Scaling agent usage multiplies inconsistency

The repository must become the stable contract between humans and machines.

AGENTS.md is the first structural step toward that contract.

It shifts agent collaboration from ad hoc prompting to engineered context.

Foundation Before Optimization

In the next post, we will examine a different failure mode.

Even with a perfectly structured AGENTS.md, long AI sessions degrade. Context accumulates. Signal dilutes. Hallucinations increase. Performance drops as token counts rise.

This phenomenon is often invisible until it causes subtle architectural damage.

Part 2 will focus on defeating context rot and enforcing session discipline using structured planning, checkpoints, and meta-prompting.

Before you scale orchestration.
Before you add subagents.
Before you optimize cost across multiple model providers.

You must first stabilize the environment.

An agent-native repository is the foundation.

Everything else builds on top of it.

Dewan Ahmed

Dewan Ahmed is a Principal Developer Advocate at Harness, a company that aims to enable every software engineering team in the world to deliver code reliably, efficiently, and quickly to their users. Before joining Harness, he worked at IBM, Red Hat, and Aiven as a developer, QA lead, consultant, and developer advocate. For the last fifteen years, Dewan has worked to solve DevOps and infrastructure problems for small startups, large enterprises, and governments. He began public speaking at a Toastmasters club in 2016 and has been speaking at tech conferences and meetups ever since. His work is fueled by a passion for open source and a deep respect for the tech community. Dewan writes about app/data infrastructure, developer advocacy, and his thoughts on a career in tech on his personal blog. Outside of work, he's an advocate for underrepresented groups in tech and offers pro bono career coaching as his way of giving back.
