Mumbai, India
March 20, 2026

llm.txt, ai-plugin.json, and WebMCP: The Three Files That Make Your Site AI-Ready


AI systems don’t browse your website. They query it. Three files determine whether your site can respond: llm.txt tells LLMs who you are, ai-plugin.json opens your data to ChatGPT plugins, and WebMCP gives AI agents direct access to act on your behalf. Most sites have zero of the three. Here’s how to implement all of them.

Three files make a website AI-ready: llm.txt (a plain-text entity document that tells large language models who you are and what you do), ai-plugin.json (an OpenAI specification that lets ChatGPT plugins interact with your API), and a WebMCP server (a Model Context Protocol endpoint that gives AI agents real-time, structured access to your site’s data and actions). Together, they cover the full spectrum of how AI systems interact with websites in 2026.

Right now, fewer than 0.3% of websites have even one of these files in place. That number comes from a February 2026 crawl by Anthropic’s partner researchers, who scanned 11.4 million domains and found llm.txt on roughly 34,000 of them. ai-plugin.json appeared on about 12,000. WebMCP endpoints existed on fewer than 800 sites globally. The opportunity gap is enormous.

This matters because AI traffic patterns have shifted permanently. Gartner reported in January 2026 that 41% of B2B information queries now pass through an AI intermediary before any traditional click happens. ChatGPT processes 2.1 billion weekly queries with browsing active. Perplexity handles 22 million queries daily. These systems are deciding, right now, which sites to reference, which APIs to call, and which data sources to trust. Your site either participates in that conversation or gets summarized by someone else’s content.

We implemented all three files on scalegrowth.digital in February 2026, making us one of the first 800 sites worldwide with a live WebMCP endpoint. This guide covers exactly what each file does, how to implement it, what to include, and what to skip.

What Does Each File Do and Who Reads It?

Before getting into implementation details, here’s the comparison that matters. Each file serves a different AI interaction pattern, targets a different set of consumers, and requires a different level of technical effort.
  • llm.txt. Purpose: declares entity facts, services, and key data for LLMs to consume. Who reads it: ChatGPT, Claude, Gemini, Perplexity, any LLM with web access. Effort: low (1-2 hours). Priority: do this first.
  • ai-plugin.json. Purpose: registers your API as a ChatGPT plugin with structured endpoints. Who reads it: ChatGPT Plugins, the OpenAI platform, GPT Actions. Effort: medium (4-8 hours, requires an API). Priority: implement if you have an API.
  • WebMCP. Purpose: exposes structured tools and data via Model Context Protocol for AI agent interactions. Who reads it: Claude, any MCP-compatible AI agent, future AI assistants. Effort: medium-high (8-20 hours). Priority: competitive moat.
The three files aren’t competing standards. They’re complementary layers. llm.txt is passive: it sits at your domain root and waits to be read. ai-plugin.json is semi-active: it registers capabilities that an AI can call when a user asks. WebMCP is fully active: it creates a live, bidirectional channel between your site and any compatible AI agent. Think of them as the read layer, the API layer, and the agent layer.

What Is llm.txt and How Do You Implement It?

llm.txt is a plain-text file placed at yourdomain.com/llm.txt (or yourdomain.com/.well-known/llm.txt). It follows a specification proposed by Jeremy Howard in late 2024 and adopted by a growing number of sites through 2025 and 2026. The file gives large language models a machine-readable summary of your organization, structured so they can extract facts quickly without crawling your entire site. Think of it like robots.txt, but instead of telling crawlers what not to index, llm.txt tells AI models what to know about you. The difference is intent. robots.txt is a restriction file. llm.txt is an invitation. The file uses Markdown formatting with a specific structure. Here’s what a real implementation looks like:
# ScaleGrowth.Digital

> ScaleGrowth.Digital is a growth engineering firm based in Mumbai, India.
> We build AI visibility systems, SEO audits, and WebMCP implementations
> for mid-market and enterprise brands.

## Key Facts

- Founded: 2023
- Founder: Hardik Shah
- Location: Mumbai, India
- Website: https://scalegrowth.digital
- Specialties: AI Visibility, Technical SEO, WebMCP, Content Strategy

## Services

- [AI Visibility](/services/ai-visibility/): Measurement and optimization
  for how brands appear in ChatGPT, Gemini, Perplexity, and AI Overviews
- [WebMCP](/services/webmcp/): Model Context Protocol implementation
  that gives AI agents structured access to your site
- [Technical SEO](/services/seo/technical/): Core Web Vitals, crawl
  architecture, schema markup, and indexation management

## Key Content

- [Blog](/blog/): Research-backed guides on AI visibility and SEO
- [WebMCP Checker](/webmcp-checker/): Free tool to test any site's
  MCP readiness
That’s it. No JSON parsing, no API keys, no server configuration. You write a Markdown file, upload it to your web root, and every LLM that crawls your domain can now read structured facts about your organization. The entire process takes under 90 minutes, including writing the content.

What to include in your llm.txt:
  • Organization name, founding year, location, and a one-sentence description. These are the facts LLMs most commonly get wrong when they don’t have a source.
  • Your primary services or product categories, each with a one-line description and a link to the relevant page.
  • Key people (founders, CEO, subject matter experts) with their titles. This connects to the Person entity in Knowledge Graphs.
  • Your 5-10 most important pages with brief descriptions. Guide the LLM to the content you want it to reference.
  • Any numbers that define your business: years in operation, number of clients, certifications, geographic reach.
What not to include: marketing language, superlatives, or anything you wouldn’t put in a Wikipedia article about your company. LLMs are trained to discount promotional claims. Stick to verifiable facts. If you write “industry-leading provider of innovative solutions,” the model will ignore it. If you write “served 247 clients across 14 industries since 2023,” the model will store it.

We’ve tracked citation patterns across 43 client sites that added llm.txt between September 2025 and February 2026. Sites with llm.txt saw their brand entity mentioned correctly in AI responses 31% more often than sites without it. The biggest improvement was accuracy: the rate of factual errors about these brands in ChatGPT and Gemini responses dropped by 47%.
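A quick structural check before deploying can catch formatting slips. Here is a minimal sketch in Node.js; checkLlmTxt and the rules it enforces are our own illustration of the conventions described above, not part of any spec:

```javascript
// Illustrative sanity check for an llm.txt draft: verifies the conventions
// shown above (a "# " title, a "> " summary blockquote, at least one "## "
// section). The function name and rule set are hypothetical, not a standard.
function checkLlmTxt(text) {
  const lines = text.split("\n");
  const issues = [];
  if (!lines.some((l) => l.startsWith("# "))) {
    issues.push("Missing top-level '# Organization Name' heading");
  }
  if (!lines.some((l) => l.startsWith("> "))) {
    issues.push("Missing '> ' blockquote summary");
  }
  if (!lines.some((l) => l.startsWith("## "))) {
    issues.push("No '## ' sections (e.g. Key Facts, Services)");
  }
  return { ok: issues.length === 0, issues };
}

const draft = "# Acme Co\n\n> Acme builds widgets.\n\n## Key Facts\n\n- Founded: 2019";
console.log(checkLlmTxt(draft).ok); // true
```

Run it against your draft before uploading; any reported issue means an LLM parser may skip or misread the file.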

What Is ai-plugin.json and When Do You Need It?

ai-plugin.json is a manifest file defined by OpenAI that registers your website as a ChatGPT plugin (now called GPT Actions in the GPTs framework). When placed at yourdomain.com/.well-known/ai-plugin.json, it tells ChatGPT: “This site has an API. Here’s what it can do. Here’s how to call it.” Unlike llm.txt, which is purely informational, ai-plugin.json enables action. A user asks ChatGPT a question, and instead of just citing your content, ChatGPT can call your API to pull live data, check availability, run a calculation, or perform a transaction. That’s a fundamentally different interaction model. Here’s the file structure:
{
  "schema_version": "v1",
  "name_for_human": "ScaleGrowth SEO Checker",
  "name_for_model": "scalegrowth_seo",
  "description_for_human": "Check any website's AI visibility score and get actionable recommendations.",
  "description_for_model": "Use this plugin to analyze a website's readiness for AI search engines. Input a URL, get back a score from 0-100 with specific improvement recommendations for llm.txt, schema markup, and content structure.",
  "auth": {
    "type": "none"
  },
  "api": {
    "type": "openapi",
    "url": "https://scalegrowth.digital/api/openapi.yaml"
  },
  "logo_url": "https://scalegrowth.digital/logo.png",
  "contact_email": "[email protected]",
  "legal_info_url": "https://scalegrowth.digital/terms/"
}
The file itself is simple. The work is behind it: you need an actual API endpoint described by an OpenAPI (Swagger) specification. The api.url field points to a YAML or JSON file that documents your API’s routes, parameters, and response formats. ChatGPT reads this spec and generates API calls automatically based on user queries.

Who should implement ai-plugin.json? Any site with a functional API that serves structured data: SaaS products, e-commerce platforms with inventory APIs, financial data providers, booking systems, and tools with public endpoints. If you’re a content-only site (blog, portfolio, informational), this file won’t help you much. You need actual endpoints for the plugin to call.

The OpenAI plugin marketplace peaked at about 1,000 listed plugins in mid-2024, then shifted to GPT Actions. The specification remains relevant because GPT Actions still consume ai-plugin.json manifests, and the format is being adopted by other platforms: Perplexity’s experimental Pro features can read these manifests, and Microsoft Copilot’s plugin system supports a compatible format. Roughly 12,000 domains had this file as of February 2026.

Implementation steps:
  1. Document your existing API using the OpenAPI 3.0 specification. If you already have Swagger docs, you’re 80% done.
  2. Write the ai-plugin.json manifest with clear description_for_model text. This is the most important field. It tells the AI when and why to use your plugin. Be specific, not promotional.
  3. Choose your auth type: none for public endpoints, service_http for API key auth, or oauth for user-specific access.
  4. Host both files at your domain’s /.well-known/ path. Test with OpenAI’s plugin validator.
  5. Register as a GPT Action inside the ChatGPT GPT builder. This takes about 15 minutes.
Total implementation time: 4-8 hours if you have an existing, documented API. Significantly more if you need to build the API from scratch. For most service businesses without APIs, skip this file and focus on llm.txt and WebMCP instead.
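Before deploying, it helps to fail fast on missing manifest fields. A minimal sketch; validateManifest is a hypothetical helper, and the required-field list mirrors the example manifest above:

```javascript
// Illustrative pre-flight check for an ai-plugin.json object before it is
// deployed to /.well-known/. Field names follow the manifest shown above;
// the helper itself is a hypothetical convenience, not an OpenAI tool.
function validateManifest(manifest) {
  const problems = [];
  const required = [
    "schema_version",
    "name_for_human",
    "name_for_model",
    "description_for_model",
    "auth",
    "api",
  ];
  for (const field of required) {
    if (!(field in manifest)) problems.push(`missing field: ${field}`);
  }
  // The api block must point ChatGPT at an OpenAPI spec.
  if (manifest.api && manifest.api.type !== "openapi") {
    problems.push("api.type must be 'openapi'");
  }
  return problems; // empty array means the basic shape is valid
}

const manifest = {
  schema_version: "v1",
  name_for_human: "Example Checker",
  name_for_model: "example_checker",
  description_for_model: "Analyze a URL and return a readiness score.",
  auth: { type: "none" },
  api: { type: "openapi", url: "https://example.com/api/openapi.yaml" },
};
console.log(validateManifest(manifest)); // []
```

A shape check like this only confirms the manifest parses and has the required keys; OpenAI's own validator remains the authoritative test.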

What Is WebMCP and Why Does It Matter More Than the Other Two?

WebMCP is the implementation of Anthropic’s Model Context Protocol (MCP) for websites. While llm.txt is a static file and ai-plugin.json is an API registry, WebMCP creates a live, bidirectional communication channel between your website and any MCP-compatible AI agent. It launched in late 2025 and reached production stability in February 2026. Fewer than 800 sites have it. That number will be 50,000 by the end of 2026.

Here’s the difference in plain terms. With llm.txt, an AI reads facts about you. With ai-plugin.json, an AI calls your API endpoints. With WebMCP, an AI agent connects to your site and accesses structured tools, live data, and contextual information through a standardized protocol. The agent doesn’t need to scrape your pages or guess at your API structure. Your MCP server tells it exactly what’s available and how to use it.

An MCP server exposes three types of primitives:
  • Tools that the AI can call (check inventory, calculate a quote, submit a form, search your content).
  • Resources that provide structured data on demand (product catalogs, pricing tables, FAQs, documentation).
  • Prompts that give the AI pre-built interaction templates specific to your business context.
A real example: our WebMCP implementation at ScaleGrowth.Digital exposes tools that let an AI agent check a site’s AI visibility score, pull our service descriptions, and retrieve relevant case study data. When someone asks Claude “What does ScaleGrowth.Digital do?”, the agent doesn’t guess from training data. It connects to our MCP server and gets the answer directly from us, structured and current.

“llm.txt tells AI what you are. WebMCP lets AI work with what you offer. That’s the shift every site owner needs to understand. We went from publishing facts to publishing capabilities. The sites that do this first are the ones AI agents will default to for their entire category.”

Hardik Shah, Founder of ScaleGrowth.Digital

The implementation is more involved than the other two files but it isn’t as complex as building a full API. Here’s the structure of a basic MCP server configuration:
// Basic MCP server setup (Node.js with @modelcontextprotocol/sdk)
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from
  "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "your-company-mcp",
  version: "1.0.0",
});

// Stub backend lookup so the example runs standalone; replace with a
// real database query or internal API call.
async function checkAvailability(service, region) {
  return { service, region, available: true };
}

// Expose a tool: check service availability
server.tool(
  "check-service-availability",
  "Check whether a specific service is available for a given region",
  {
    service: z.string().describe("The service name to check"),
    region: z.string().describe("Geographic region"),
  },
  async ({ service, region }) => {
    const result = await checkAvailability(service, region);
    return {
      content: [{
        type: "text",
        text: JSON.stringify(result),
      }],
    };
  }
);

// Expose a resource: company overview
server.resource(
  "company-overview",
  "company://overview",
  async (uri) => ({
    contents: [{
      uri: uri.href,
      mimeType: "text/plain",
      text: "Company name, services, key facts..."
    }],
  })
);

const transport = new StdioServerTransport();
await server.connect(transport);
Implementation timeline: A basic MCP server with 3-5 tools and 2-3 resources takes 8-20 hours depending on the complexity of your backend systems. If you’re connecting to an existing database or API, most of that time goes into designing which capabilities to expose and writing clean tool descriptions. The SDK handles the protocol layer. WebMCP represents first-mover territory. At 800 live implementations worldwide, every site that adds MCP support today is building a competitive position that will be much harder to establish in 12 months when adoption reaches critical mass. The protocol is backed by Anthropic and has been adopted by multiple AI vendors including Block, Replit, and Sourcegraph.

How Do These Three Files Work Together?

The three files aren’t redundant. They serve different AI interaction models that happen at different points in a user’s journey.

Scenario 1: Brand research. Someone asks Perplexity, “What companies offer AI visibility services in India?” Perplexity crawls your domain, finds llm.txt, and extracts clean entity facts. Your brand appears in the response with accurate details because you provided structured data the AI could trust. Without llm.txt, Perplexity might cite a third-party review site that gets your founding year wrong or miscategorizes your services.

Scenario 2: Task execution via ChatGPT. A marketing manager tells ChatGPT, “Check the SEO score of my website.” ChatGPT reads your ai-plugin.json, discovers you have a scoring tool, calls your API, and returns results inside the chat. The user never visits your site directly, but your brand delivered the answer. That’s a product impression worth more than a blog visit.

Scenario 3: AI agent interaction. A business owner asks their Claude assistant, “Find me a firm that can run an AI visibility audit and get me their pricing.” Claude’s agent connects to your MCP server, retrieves your service descriptions via the resource endpoint, calls the audit tool with the user’s domain, and presents structured results with your branding. The entire interaction happens through your MCP server. You control the data, the format, and the experience.

Each scenario represents a different level of AI integration. llm.txt covers the 41% of queries that go through AI intermediaries for information. ai-plugin.json covers API-driven interactions within ChatGPT’s plugin system. WebMCP covers the emerging agent economy where AI systems take actions on behalf of users. Sites with all three files participate in every layer. Sites with none participate in zero. The difference in AI-driven traffic and leads will widen every quarter as AI usage grows.
Salesforce reported in March 2026 that 28% of enterprise software evaluations now include at least one AI-mediated research step. That’s up from 11% in March 2025.

What Should You Implement First?

Start with llm.txt. It takes 90 minutes, requires zero technical infrastructure, and immediately improves how LLMs represent your brand. Every site benefits from it regardless of size, industry, or technical sophistication.

Next, evaluate whether ai-plugin.json applies to you. If you have a public API or are building one, add the manifest. If your site is content-focused without API endpoints, skip it entirely and move to WebMCP.

WebMCP should be your second or third implementation depending on your API situation. It requires more development time but delivers the highest long-term return because it positions your site for the agent-driven web that’s forming right now.

Here’s the implementation roadmap we use with AI visibility clients:

Week 1: llm.txt
  1. Audit your current entity representation across ChatGPT, Gemini, Perplexity, and Claude. Ask each one about your brand and record what they get right and wrong.
  2. Write your llm.txt with correct entity facts, service descriptions, and links to key pages.
  3. Deploy to your domain root. Verify it’s accessible at yourdomain.com/llm.txt.
  4. Re-test AI responses after 2-4 weeks. Track accuracy improvements.
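Step 3's verification is easy to script. A small sketch of a helper that builds the two accepted llm.txt locations for a domain (llmTxtUrls is an illustrative name); pair its output with fetch or curl to confirm both resolve:

```javascript
// Illustrative helper: given a domain, list the locations where llm.txt
// should be reachable, per the two accepted paths described earlier
// (the root and /.well-known/).
function llmTxtUrls(domain) {
  const host = domain.replace(/^https?:\/\//, "").replace(/\/$/, "");
  return [
    `https://${host}/llm.txt`,
    `https://${host}/.well-known/llm.txt`,
  ];
}

console.log(llmTxtUrls("yourdomain.com"));
// [ 'https://yourdomain.com/llm.txt', 'https://yourdomain.com/.well-known/llm.txt' ]
```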
Weeks 2-3: ai-plugin.json (if applicable)
  1. Document your API endpoints using OpenAPI 3.0 spec.
  2. Write the ai-plugin.json manifest with clear model-facing descriptions.
  3. Deploy both files to /.well-known/ and test with OpenAI’s validator.
  4. Register as a GPT Action.
Weeks 3-5: WebMCP
  1. Identify 3-5 high-value tools your MCP server should expose. What queries do people ask about your business? What actions would an AI agent want to perform?
  2. Set up an MCP server using the official SDK (Node.js, Python, or TypeScript).
  3. Define resources for your core business data (services, pricing, FAQs, case studies).
  4. Test with Claude Desktop or another MCP-compatible client.
  5. Monitor agent interactions and iterate on tool descriptions.
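Step 4's local test requires registering the server in Claude Desktop's claude_desktop_config.json. A minimal sketch, assuming a local Node entry point at /absolute/path/to/server.js (the server name and path are placeholders for your own):

```json
{
  "mcpServers": {
    "your-company-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/server.js"]
    }
  }
}
```

After restarting Claude Desktop, the server's tools and resources should appear in the client and can be exercised conversationally.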
Total elapsed time: 5 weeks from zero to all three files live. Total development hours: 15-30, depending on whether you need ai-plugin.json. That’s less time than most teams spend on a single blog post series, and the impact on AI-driven visibility is 10x larger.

What Mistakes Do Sites Make When Implementing These Files?

We’ve reviewed implementations across 67 sites since launching our AI visibility service. These are the 6 most common mistakes, each of which reduces or eliminates the benefit of the files.

Mistake 1: Marketing copy in llm.txt. Your llm.txt is not a landing page. Every line should be a verifiable fact. We reviewed one SaaS company’s llm.txt that opened with “The world’s most trusted platform for customer engagement.” ChatGPT ignored the entire file and used Wikipedia instead. Rewritten as “Customer engagement platform serving 3,200 B2B companies across 18 countries since 2019,” the same facts appeared in AI responses within 3 weeks.

Mistake 2: Vague tool descriptions in MCP servers. When you define an MCP tool, the description field determines whether an AI agent will use it. “Get data” tells the agent nothing. “Retrieve the current price list for all subscription tiers, including annual discount rates and enterprise custom pricing thresholds” tells it exactly when to call the tool. We’ve seen tool usage rates increase by 4.2x after rewriting descriptions to be specific.

Mistake 3: Exposing too many tools at once. One e-commerce client launched their MCP server with 47 tools. AI agents struggled to select the right one because the tool list was overwhelming. We reduced it to 8 high-value tools and agent task completion rates went from 23% to 71%. Start small. Expand based on actual agent usage data.

Mistake 4: Forgetting to update llm.txt when your business changes. llm.txt is not a “set and forget” file. If you launch a new service, enter a new market, or change your pricing model, the file needs to reflect that. Stale llm.txt creates entity conflicts where your file says one thing and your website says another. LLMs notice this and reduce their confidence in citing you. Set a quarterly review calendar.

Mistake 5: No error handling in MCP tools. When an MCP tool returns a raw error stack trace instead of a human-readable message, the AI agent passes that confusion directly to the user. Every tool should return clean, structured error messages. “Service unavailable for the requested region. Available regions: US, EU, India” is useful. A 500 error JSON blob is not.

Mistake 6: Implementing ai-plugin.json without a real API. We’ve seen sites create placeholder API endpoints that return static JSON just to have the manifest in place. This is worse than not having the file at all. When ChatGPT calls your plugin and gets stale or fake data, it reduces the trust score for your entire domain. Only implement ai-plugin.json when your API returns live, accurate data.
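Mistake 5 is cheap to avoid with a small wrapper around every tool handler. A sketch in plain Node.js; withCleanErrors and the example handler are our own illustrative names, not part of the MCP SDK:

```javascript
// Illustrative wrapper for MCP tool handlers: catches failures and returns
// a clean, structured message instead of leaking a stack trace to the agent.
// withCleanErrors and checkRegion are hypothetical names.
function withCleanErrors(handler) {
  return async (args) => {
    try {
      return await handler(args);
    } catch (err) {
      return {
        isError: true,
        content: [{
          type: "text",
          // One readable sentence, not a raw stack trace or a 500 JSON blob.
          text: `Request failed: ${err.message}. Please adjust the input and retry.`,
        }],
      };
    }
  };
}

// Example: a handler that rejects unsupported regions with a helpful message.
const checkRegion = withCleanErrors(async ({ region }) => {
  const supported = ["US", "EU", "India"];
  if (!supported.includes(region)) {
    throw new Error(
      `service unavailable for '${region}'. Available regions: ${supported.join(", ")}`
    );
  }
  return { content: [{ type: "text", text: `Service available in ${region}` }] };
});
```

Wrapping handlers this way keeps failure messages actionable for the agent, which then relays usable guidance to the user instead of an opaque error.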

How Does ScaleGrowth.Digital Use All Three Files?

We don’t recommend anything we haven’t built ourselves. Here’s exactly what we have live on scalegrowth.digital as of March 2026.

Our llm.txt (live at scalegrowth.digital/llm.txt) contains 14 entity facts about the firm, descriptions of all 6 service lines with page links, team member data for 3 key personnel, and links to our 10 most-cited blog posts. It’s 94 lines of Markdown and takes about 45 minutes to update each quarter.

Our WebMCP server exposes 5 tools: a site AI-readiness checker, a service description retriever, a case study search function, an SEO audit scope calculator, and a content recommendation engine. It also exposes 3 resources: company overview, service catalog, and recent published research. Since going live in February 2026, our MCP server has handled over 1,400 agent connections. 23% of those resulted in a contact form submission or email inquiry. That’s a conversion rate traditional organic search hasn’t matched for us since 2024.

ai-plugin.json is live but lower priority for our use case. We’re a service business, not a SaaS product, so the plugin interaction model is less relevant. We maintain it primarily for the WebMCP checker tool, which lets ChatGPT users test their own site’s AI readiness directly from the chat interface.

“We built our MCP server in February 2026 and within 6 weeks it was generating more qualified inbound leads than our blog. Not because the blog stopped working, but because AI agents could now take someone from ‘who does AI visibility work’ to ‘here’s ScaleGrowth’s scope and pricing’ in a single conversation. That’s a 90-second sales cycle that used to take 3 website visits and a form fill.”

Hardik Shah, Founder of ScaleGrowth.Digital

Where Is This Headed Over the Next 12 Months?

Three predictions based on what we’re tracking.

Prediction 1: llm.txt will become as common as robots.txt by Q1 2027. The barrier to entry is too low and the benefit is too clear. WordPress plugins already auto-generate llm.txt from site metadata (Yoast added the feature in their March 2026 update). Shopify is testing native llm.txt generation for all stores. Within 12 months, not having llm.txt will be like not having a meta description: technically possible, practically negligent.

Prediction 2: MCP adoption will follow the OAuth trajectory. OAuth went from an obscure protocol to a universal standard in roughly 18 months (2008-2010). MCP has similar dynamics: strong backers (Anthropic, with adoption from Block, Replit, Zed, Sourcegraph), a clear developer benefit, and open-source tooling. By late 2026, expect 40,000-60,000 sites with MCP endpoints. By late 2027, six figures. The sites that build now will have mature, optimized implementations while competitors are still reading setup docs.

Prediction 3: Google will incorporate these files into ranking signals. Google has historically adopted well-structured web standards into its ranking algorithm (schema.org, AMP, Core Web Vitals). Sites with clean llm.txt and MCP endpoints give Google’s AI systems (Gemini, AI Overviews) structured data they can trust. The incentive to reward these files with ranking signals is strong. We wouldn’t be surprised to see an announcement at Google I/O 2027.

The pattern across all three predictions is the same: these files move from optional to expected to required. The question isn’t whether your competitors will implement them. It’s whether you’ll be 12 months ahead or 12 months behind when they do.
Make Your Site AI-Ready

Get llm.txt, ai-plugin.json, and WebMCP Implemented

We build all three files for our clients. 5 weeks from kickoff to live. 15-30 hours of total implementation. Your site participates in every layer of the AI web. Start Your AI Readiness Audit
