Why we built this
The web wasn't built for AI agents.
And that gap — between how the web delivers information and how AI agents consume it — is costing everyone, every day.
01 — A new kind of reader
When a human opens a browser, they see a rendered page — layout, colors, fonts, images. The browser does an enormous amount of work to make raw HTML and CSS look like a product. That work is invisible to users, and that's the point.
AI agents don't see any of that. They receive raw text — tokens — and reason over it. They don't need a sidebar. They don't need a navigation menu. They don't need JavaScript that animates a hero section.
AI agents are a fundamentally different class of client, and the web was not designed with them in mind.
02 — What really happens
When an AI agent needs to use a web service — look up the weather, search a product catalog, check an exchange rate — here's what actually happens if there's no structured interface:
The agent fetches the page. It receives HTML — the full document, including every script tag, every style block, every nav element, every ad placeholder.
All of that becomes tokens. A typical modern webpage is 500KB–2MB. At roughly 4 characters per token, that's 125,000–500,000 tokens loaded into context.
The agent has to parse and reason over this mess. Where's the actual data? Is that a price or a nav label? Is that an API endpoint or a marketing URL?
Maybe it finds what it needed. Maybe it hallucinated something from the noise. Maybe it hit a context limit and gave up.
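The arithmetic behind that token estimate is straightforward. A rough sketch (the 4-characters-per-token ratio is a common rule of thumb for English text, not a guarantee — real tokenizers vary by model and content):

```python
# Rough token-cost estimate for feeding a raw webpage into an LLM context.
# Assumes ~4 characters per token, a common approximation for English text.
CHARS_PER_TOKEN = 4

def estimate_tokens(page_bytes: int) -> int:
    """Approximate token count for a page of the given size in bytes."""
    return page_bytes // CHARS_PER_TOKEN

# A typical modern page: 500 KB to 2 MB of HTML.
print(estimate_tokens(500_000))    # ~125,000 tokens
print(estimate_tokens(2_000_000))  # ~500,000 tokens
```

At current per-token pricing from any major provider, that range turns a single page fetch into a measurable line item — before the agent has done any reasoning at all.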
This isn't a rare edge case. This is how most AI agents interact with most of the web, right now.
The cost isn't abstract. Token usage is directly billed by every major AI provider. Latency is real — fetching and processing a 2MB page takes seconds. Reliability is a problem — HTML structure changes constantly, and any hardcoded parsing breaks.
03 — The three bad options
Developers building AI agents have long known about this problem. These are the workarounds they've reached for:
Scrape HTML
Load the page, strip tags, hope the text is structured enough to reason over. Token-heavy, fragile, noisy.
Hardcode the API
Read the docs manually, write a custom integration, maintain it as the service changes. Doesn't scale. Breaks constantly.
Find an MCP server
Heavy infrastructure. Developer-only. Exists for fewer than 1% of services.
None of these scale. None of them are the right foundation for an AI-native web.
04 — We've solved this before
The web has a long history of creating thin, machine-readable layers alongside the human-facing web. None of them required changing the web itself — they just added a predictable convention.
robots.txt: a plain text file at /robots.txt
Crawlers needed to know which pages to skip. No central authority required it — it just made sense, so everyone adopted it.
sitemap.xml: a structured XML file listing all pages
Crawlers needed to find pages efficiently. Google proposed it. Within two years, every major CMS generated one automatically.
/ai: a JSON endpoint at /ai
AI agents need to understand what a service can do and how to call it. Without this, they're left scraping noise.
robots.txt didn't require a standards committee. sitemap.xml didn't require legislation. They spread because they were useful, simple, and easy to implement. That's exactly what /ai is designed to be.
05 — What /ai actually is
It's simple: any service exposes GET /ai, returning a structured JSON object that answers three questions:
What is this service?
Name, description, categories — token-optimized for AI.
What can it do?
A list of capabilities: each with an endpoint, method, parameters, and return description.
How do I authenticate?
Auth type and documentation link. Nothing more.
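As a concrete illustration, an /ai document for an imaginary weather service might look like the object below. The field names (`name`, `capabilities`, `auth`, and so on) are illustrative assumptions drawn from the three questions above, not quoted from the spec — consult the actual specification for the normative schema:

```python
# Hypothetical /ai document for an imaginary weather service.
# All field names here are illustrative assumptions, not normative.
ai_document = {
    "name": "ExampleWeather",
    "description": "Current conditions and 7-day forecasts by city.",
    "categories": ["weather", "data"],
    "capabilities": [
        {
            "endpoint": "/v1/current",
            "method": "GET",
            "parameters": {"city": "string"},
            "returns": "Current temperature, humidity, and conditions.",
        }
    ],
    "auth": {"type": "api_key", "docs": "https://example.com/docs/auth"},
}

# An agent can answer all three questions from this single object:
print(ai_document["name"])                         # what is this service?
print(ai_document["capabilities"][0]["endpoint"])  # what can it do?
print(ai_document["auth"]["type"])                 # how do I authenticate?
```

One object, a few hundred tokens — versus hundreds of thousands for the rendered page.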
That's it. No new infrastructure. No OAuth flows. No SDKs to install. Just a JSON endpoint at a predictable URL — implementable in under 10 minutes, in any language, on any framework.
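To make "under 10 minutes" concrete, here is one possible minimal implementation using only the Python standard library. The payload fields are the same illustrative assumptions as before, not the normative schema — a sketch, not a reference implementation:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative /ai payload; field names are assumptions, not quoted from the spec.
AI_PAYLOAD = {
    "name": "ExampleService",
    "description": "Demo service exposing a single echo capability.",
    "capabilities": [
        {
            "endpoint": "/v1/echo",
            "method": "GET",
            "parameters": {"text": "string"},
            "returns": "The same text, echoed back.",
        }
    ],
    "auth": {"type": "none"},
}

class AIHandler(BaseHTTPRequestHandler):
    """Serves the static /ai document; everything else is a 404."""

    def do_GET(self):
        if self.path == "/ai":
            body = json.dumps(AI_PAYLOAD).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

def run(port: int = 8000) -> None:
    """Start serving GET /ai on the given port (blocks forever)."""
    HTTPServer(("", port), AIHandler).serve_forever()
```

Calling `run()` starts the server; in practice you would serve the same JSON from whatever framework you already use — the convention is the URL and the shape of the response, not the stack behind it.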
06 — Who this is for
If you run an AI agent or build with LLM APIs
- Dramatically lower token costs per task
- No more brittle HTML parsing
- Reliable, structured results — less hallucination risk
- Discover new services without reading documentation
If you run a web service or API
- AI agents can discover and use your service correctly
- No more agents scraping your site and calling wrong endpoints
- A single endpoint that describes everything you offer
- Get listed in the AIEndpoint registry — agent-driven traffic
If you care about the web's direction
- An open, vendor-neutral standard — Apache 2.0
- No lock-in to any AI provider
- The same convention model that made robots.txt universal
- A foundation for the AI-native web, built in public
07 — Where this goes
Imagine a web where every service — every SaaS tool, every data API, every commerce platform — exposes /ai.
An AI agent building a travel itinerary can query the flight registry, the hotel API, and the weather service — each returning exactly what it needs, in under a second, for a few hundred tokens total.
An AI agent helping with research can discover and call ten specialized data services it's never encountered before — reading their capabilities, understanding their auth, and composing results — all without a single human in the loop.
That's not science fiction. It's a convention away. The same way the web became navigable with a 5-line text file in 1994.
Join the movement
A convention away.
robots.txt spread because it was useful, simple, and asked nothing of anyone but a single file.
/ai is the same idea. Open spec. 10 minutes to implement. No vendor lock-in. Every service that adds it makes the AI ecosystem a little less wasteful — for everyone.
Apache 2.0 · No vendor lock-in · Open source on GitHub