# OpenTools

A web standard for LLMs to discover and interact with any web app: no plugins, no setup, just a URL.

1. The user gives a URL to their LLM client.
2. The client discovers the available actions.
3. The LLM interacts with the app on behalf of the user.
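A minimal sketch of the client side of this flow, assuming the app serves an ordinary OpenAPI document annotated with `x-llm` extensions. The `/.well-known/opentools.json` manifest path and the field shapes are illustrative assumptions, since the spec is not finalized:

```typescript
// Sketch of client-side discovery: fetch the app's OpenAPI document,
// then collect the operations the app has exposed to LLMs via x-llm.
// The "/.well-known/opentools.json" path is an assumption for
// illustration, not a finalized part of the spec.

interface LlmAction {
  method: string;
  path: string;
  hint?: string;
}

// Pure extraction step: walk paths/operations and keep only those
// whose x-llm extension has enabled: true.
function extractActions(spec: any): LlmAction[] {
  const actions: LlmAction[] = [];
  for (const [path, item] of Object.entries<any>(spec.paths ?? {})) {
    for (const method of ["get", "post", "put", "patch", "delete"]) {
      const ext = item?.[method]?.["x-llm"];
      if (ext?.enabled) actions.push({ method, path, hint: ext.hint });
    }
  }
  return actions;
}

async function discoverActions(appUrl: string): Promise<LlmAction[]> {
  const res = await fetch(new URL("/.well-known/opentools.json", appUrl));
  if (!res.ok) throw new Error(`no OpenTools manifest at ${appUrl}`);
  return extractActions(await res.json());
}
```

Splitting the pure `extractActions` step from the network call keeps the filtering logic testable without a live app.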
## The Problem

Today, connecting LLMs to web apps is manual, proprietary, or fragile. There is no open, automatic way for an AI to discover what a web app can do.
## How It Works

### The `x-llm` Extension

OpenAPI extensions that give LLMs the context they need. Builds on top of OpenAPI, so there is no separate spec to learn.
#### Root Level

```yaml
x-llm:
  version: "0.1"
  name: "RecipeApp"
  description: "Save, organize, and discover recipes"
  defaultApproval: "per-call"
```
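For context, a sketch of how that root-level block might sit inside a full OpenAPI document. The `info` and `paths` contents here are illustrative, not taken from the spec:

```yaml
openapi: "3.1.0"
info:
  title: RecipeApp API
  version: "1.0.0"
x-llm:
  version: "0.1"
  name: "RecipeApp"
  description: "Save, organize, and discover recipes"
  defaultApproval: "per-call"
paths:
  /recipes:
    get:
      summary: List saved recipes
```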
#### Operation Level

```yaml
x-llm:
  enabled: true                    # expose this to LLMs
  approval: "auto" | "per-call"    # minimum approval level
  blanketApprovalAllowed: boolean  # can user opt into "always allow"
  destructive: boolean             # UI hint: show warning
  rateLimit: { max, window }       # per-user throttle for LLM calls
  hint: string                     # when to use this (richer than summary)
  costIndicator: "free" | "credits" | "paid"
```

## Approval Model
Three layers of consent. The app sets the minimum. The user can only make it stricter, never looser.
| Layer | Who Decides | Example |
|---|---|---|
| Site policy | App developer | "Delete always requires confirmation" |
| User preference | End user | "I want to approve all writes" |
| LLM client | Chat app / agent | Shows confirmation UI before calling |
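The "stricter, never looser" rule can be sketched as a simple merge: the effective approval level is whichever of the site policy and the user preference is stricter. The numeric strictness ordering here is an assumption based on the fields above, not spec-defined:

```typescript
// Approval levels from least to most strict. "auto" needs no
// confirmation; "per-call" means the client confirms every call.
type Approval = "auto" | "per-call";

const strictness: Record<Approval, number> = {
  "auto": 0,
  "per-call": 1,
};

// The app sets the minimum; the user can only tighten it, never loosen it.
function effectiveApproval(sitePolicy: Approval, userPreference: Approval): Approval {
  return strictness[userPreference] > strictness[sitePolicy]
    ? userPreference
    : sitePolicy;
}
```

So `effectiveApproval("per-call", "auto")` still yields `"per-call"`: a user preference cannot drop below the site's minimum.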
## Comparison
How OpenTools compares to existing approaches for connecting LLMs to web apps.
| Approach | Discovery | Auth | Approval | LLM-native | Open |
|---|---|---|---|---|---|
| OpenTools | Yes | Yes | Yes | Yes | Yes |
| MCP | No | Partial | Partial | Yes | Yes |
| ChatGPT Plugins | No | Yes | Partial | Partial | No |
| Plain REST | No | Partial | No | No | Yes |
## Status

- Core packages (spec, orpc, ai-sdk) have initial implementations.
- Demo apps are partially built.
- Not yet published, production-tested, or spec-finalized.