Early-Stage Proposal

OpenTools

A web standard for LLMs to discover and interact with any web app — no plugins, no setup, just a URL.

User gives a URL to their LLM client. Client discovers available actions. LLM interacts with the app on behalf of the user.

User: "Add 'Buy groceries' to my Todoist, due tomorrow"

1. Discovery — fetches todoist.com/.well-known/llm.json
   → finds OpenAPI spec with x-llm extensions
   → auto-generates tools: createTask, listTasks, ...

2. Auth — OAuth2 via CIMD (zero-registration)
   → agent authenticates as the user, no API keys

3. Execution — calls createTask, user approves
   → task created in Todoist on behalf of the user
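
The discovery step above can be sketched in a few lines. A minimal client, assuming a hypothetical manifest shape — the `openapiUrl` field name is illustrative, since the format is not finalized:

```typescript
// Sketch of step 1 (Discovery). The manifest shape is an assumption.
interface LlmManifest {
  version: string;
  openapiUrl: string; // pointer to the app's OpenAPI spec with x-llm extensions
}

// Resolve the well-known manifest URL for an app's origin.
function manifestUrl(origin: string): string {
  return new URL("/.well-known/llm.json", origin).toString();
}

// Fetch and parse the manifest from the app's domain.
async function discover(origin: string): Promise<LlmManifest> {
  const res = await fetch(manifestUrl(origin));
  if (!res.ok) throw new Error(`No llm.json at ${origin} (${res.status})`);
  return (await res.json()) as LlmManifest;
}
```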

The Problem

Today, connecting LLMs to web apps is either manual, proprietary, or fragile. There's no open, automatic way for an AI to discover what a web app can do.

MCP Servers
Require manual setup and configuration. Can't auto-discover.
ChatGPT Plugins
Were platform-locked to a single vendor, and then shut down entirely.
Plain REST APIs
No LLM-specific semantics — no approval model, hints, or rate limits.
Browser Automation
Screen scraping is fragile, slow, and breaks with every UI change.

How It Works

1. Discovery
   Client fetches /.well-known/llm.json from the app's domain. Gets a pointer to the OpenAPI spec.

2. Read Spec
   Parses the OpenAPI spec and finds operations with x-llm extensions — approval policies, hints, rate limits.

3. Generate Tools
   Dynamically converts each x-llm-enabled operation into an AI SDK tool. No hardcoded integrations.
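
Step 3 can be sketched as a filter-and-map over the spec's operations. The `Operation` and `ToolDef` shapes below are simplified stand-ins for illustration, not the real AI SDK types:

```typescript
// Sketch of tool generation from x-llm-enabled operations. Shapes assumed.
interface XLlm {
  enabled: boolean;
  approval: "auto" | "per-call";
  hint?: string;
  destructive?: boolean;
}

interface Operation {
  operationId: string;
  summary?: string;
  "x-llm"?: XLlm;
}

interface ToolDef {
  name: string;
  description: string;
  needsApproval: boolean;
}

function generateTools(ops: Operation[]): ToolDef[] {
  return ops
    .filter((op) => op["x-llm"]?.enabled) // only operations the app opted in
    .map((op) => ({
      name: op.operationId,
      // Prefer the richer x-llm hint over the plain OpenAPI summary.
      description: op["x-llm"]?.hint ?? op.summary ?? op.operationId,
      needsApproval: op["x-llm"]?.approval !== "auto",
    }));
}
```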

Pipeline: Zod → OpenAPI → LLM Tools

Zod schema + .describe()
        |
        v
   oRPC / Hono / FastAPI / NestJS
        |
        v
   OpenAPI spec (with x-llm extensions)
        |
        v
   /.well-known/openapi.json
        |
        v
   LLM client reads spec
        |
        v
   Dynamically generates AI SDK tools
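
The end of this pipeline is an ordinary OpenAPI document carrying the x-llm extensions. An illustrative fragment — the paths and operation details are assumed for the RecipeApp example, not taken from a finalized spec:

```yaml
# Illustrative output of the pipeline (shapes assumed, spec not finalized).
openapi: "3.1.0"
info:
  title: RecipeApp API
  version: "1.0.0"
x-llm:
  version: "0.1"
  name: RecipeApp
  defaultApproval: per-call
paths:
  /favorites:
    post:
      operationId: save_favorite
      summary: Save a recipe to favorites
      x-llm:
        enabled: true
        approval: per-call
        hint: Use when the user asks to save or bookmark a recipe
```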

The x-llm Extension

OpenAPI extensions that give LLMs the context they need. Builds on top of OpenAPI — no separate spec to learn.

Root Level

x-llm:
  version: "0.1"
  name: "RecipeApp"
  description: "Save, organize, and discover recipes"
  defaultApproval: "per-call"

Operation Level

x-llm:
  enabled: true                    # expose this to LLMs
  approval: "auto" | "per-call"   # minimum approval level
  blanketApprovalAllowed: boolean  # can user opt into "always allow"
  destructive: boolean             # UI hint: show warning
  rateLimit: { max, window }      # per-user throttle for LLM calls
  hint: string                     # when to use this (richer than summary)
  costIndicator: "free" | "credits" | "paid"

enabled
Whether this operation is exposed to LLMs
approval
"auto" or "per-call" — minimum approval level the app requires
destructive
UI hint to show a warning before executing
hint
Natural language guidance for when/why to use this action
costIndicator
"free", "credits", or "paid" — tells the LLM if this costs the user money
rateLimit
Per-user throttle so the LLM can self-limit API calls
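
One way a client could honor the rateLimit hint is a simple per-user sliding-window throttle. A sketch, assuming window is in seconds — both the strategy and the units are assumptions, not part of the extension:

```typescript
// Illustrative client-side throttle for the rateLimit hint ({ max, window }).
interface RateLimit {
  max: number;    // maximum calls allowed per window
  window: number; // window length in seconds (assumed unit)
}

class CallThrottle {
  private calls: number[] = []; // timestamps (ms) of recent calls

  constructor(private limit: RateLimit) {}

  // Returns true and records the call if it fits in the current window.
  tryAcquire(now: number = Date.now()): boolean {
    const cutoff = now - this.limit.window * 1000;
    this.calls = this.calls.filter((t) => t > cutoff); // drop expired entries
    if (this.calls.length >= this.limit.max) return false;
    this.calls.push(now);
    return true;
  }
}
```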

Approval Model

Three layers of consent. The app sets the minimum. The user can only make it stricter, never looser.

Layer            Who Decides        Example
Site policy      App developer      "Delete always requires confirmation"
User preference  End user           "I want to approve all writes"
LLM client       Chat app / agent   Shows confirmation UI before calling

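
The "stricter wins" rule can be expressed as a one-line merge of the site policy and the user preference. A minimal sketch:

```typescript
type Approval = "auto" | "per-call";

// The app sets a minimum; the user preference can only tighten it, never
// loosen it. "per-call" is stricter than "auto", so the stricter of the
// two wins.
function effectiveApproval(sitePolicy: Approval, userPref: Approval): Approval {
  return sitePolicy === "per-call" || userPref === "per-call"
    ? "per-call"
    : "auto";
}
```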
Example: User Flow
User: "Find me a good pasta recipe and save it"

LLM thinks:
1. I see recipes.app is connected
2. Read manifest → search_recipes (auto-approve) + save_favorite (per-call)
3. Call search_recipes → get results
4. Present results to user
5. User picks one → LLM requests save_favorite
6. Client shows: "RecipeApp wants to save 'Cacio e Pepe'
   to favorites. Allow? [Yes] [Always allow]"
7. User approves → call executes
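
Steps 5–7 of the flow above can be sketched as a client-side approval gate. `askUser` is a hypothetical UI callback, "always" corresponds to blanketApprovalAllowed, and a real client would be asynchronous; this version is kept synchronous for brevity:

```typescript
// Sketch of the client's approval gate before executing a tool call.
type Decision = "yes" | "always" | "no";

function approveAndRun<T>(
  toolName: string,
  needsApproval: boolean,
  remembered: Set<string>, // tools the user has blanket-approved
  askUser: (prompt: string) => Decision,
  run: () => T,
): T {
  if (needsApproval && !remembered.has(toolName)) {
    const decision = askUser(`${toolName} wants to run. Allow?`);
    if (decision === "no") throw new Error(`User denied ${toolName}`);
    if (decision === "always") remembered.add(toolName);
  }
  return run();
}
```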

Comparison

How OpenTools compares to existing approaches for connecting LLMs to web apps.

Approach         Discovery  Auth     Approval  LLM-native  Open
OpenTools        Yes        Yes      Yes       Yes         Yes
MCP              No         Partial  Partial   Yes         Yes
ChatGPT Plugins  No         Yes      Partial   Partial     No
Plain REST       No         Partial  No        No          Yes

Work in Progress

Core packages (spec, orpc, ai-sdk) have initial implementations

Demo apps are partially built

Not yet published, production-tested, or spec-finalized

Open Questions