Mumbai, India
March 15, 2026

How We Built AI Agents That Run Our Growth Engine

We don’t just build AI agents for clients. We run AI agents across every engine at ScaleGrowth.Digital. Our SEO Engine, Content Engine, Analytics Engine, AI Visibility Engine, PPC Engine, and WebMCP Engine all use AI agents for execution. This post explains what those agents do, how we built them, and what we learned from running them on our own growth system before deploying them for clients.

“We made a rule in 2025: never sell a client an agent we haven’t run ourselves first. Every agent type in our service catalog started as an internal tool. The SEO reporting agent ran our own reports for 4 months before we offered it to a client. The content brief agent produced our own briefs for 3 months. If the agent can’t run our growth engine reliably, it can’t run anyone else’s,” says Hardik Shah, Founder of ScaleGrowth.Digital.

How Is ScaleGrowth’s Growth Engine Structured?

The engine has six service-specific components, each running its own set of AI agents, all bound together by a Business Governance Engine that handles cross-channel data sharing and strategic alignment.

Engine               | What It Does                                               | AI Agents Running
SEO Engine           | Technical audits, keyword strategy, link building          | 4 agents
Content Engine       | Briefs, production, editorial calendar, refresh cycles     | 3 agents
AI Visibility Engine | GEO, AEO, AI Overviews, LLM optimization                   | 2 agents
Analytics Engine     | GA4, attribution, dashboards, conversion tracking          | 2 agents
PPC Engine           | Campaign architecture, bid optimization, ROAS tracking     | 2 agents
WebMCP Engine        | Tool declarations, API design, agent interaction monitoring | 1 agent

That’s 14 agents in total, running on daily or weekly cycles across our operations. Not all of them are fully autonomous. Some are human-in-the-loop (the agent recommends, a human approves). Others run unsupervised on proven, low-risk tasks. Below, I’ll cover the agents in the SEO, Content, and AI Visibility Engines and explain where each sits on the autonomy spectrum.

What Agents Run Inside the SEO Engine?

The SEO Engine has four agents, each handling a different phase of the SEO workflow.

Agent 1: Keyword Intelligence Agent

This agent pulls keyword data from DataForSEO, SEMrush, and Google Search Console. It processes the raw data into structured keyword clusters, assigns priority scores based on volume, difficulty, and business relevance, and produces keyword strategy documents for each client.

Before this agent, keyword research for a new client took 12-15 hours. Our analyst would pull data from three platforms, export to spreadsheets, manually cluster keywords, and write up the strategy. The agent now does the data pull and clustering in 22 minutes. The analyst spends 2-3 hours reviewing, adjusting clusters, and adding strategic context that requires industry knowledge.

Time saved: roughly 10 hours per client keyword research cycle. We run this monthly for 8 clients, which redirects about 80 hours of analyst time per month to higher-value strategy work.
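
For concreteness, here is a minimal sketch of the kind of priority scoring the agent applies. The field names, weights, and numbers are illustrative assumptions, not our production model.

```python
import math
from dataclasses import dataclass

@dataclass
class Keyword:
    term: str
    volume: int        # monthly search volume
    difficulty: float  # 0-100, higher is harder to rank for
    relevance: float   # 0-1 business relevance score

def priority_score(kw: Keyword) -> float:
    """Blend volume, difficulty, and relevance into one score.
    The log dampens huge head terms; weights are illustrative."""
    volume_signal = math.log10(kw.volume + 1)
    ease = (100 - kw.difficulty) / 100
    return round(volume_signal * ease * kw.relevance, 3)

queue = [
    Keyword("crm software", 40000, 78, 0.90),
    Keyword("crm for small law firms", 900, 31, 0.95),
]
for kw in sorted(queue, key=priority_score, reverse=True):
    print(kw.term, priority_score(kw))
```

Note how the long-tail term outranks the head term once difficulty and relevance enter the blend; that trade-off is the point of the scoring step.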

Agent 2: Technical Audit Agent

Runs weekly technical audits on client sites. Checks Core Web Vitals, crawl errors, broken links, schema validation, and mobile usability. Compares against the previous week’s baseline and flags changes.

This agent runs fully autonomously. It doesn’t need human approval to crawl a site and generate a report. It does flag issues that require human judgment (e.g., “new 301 redirect chain detected on 14 URLs, review needed”) and routes those to the responsible analyst.
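
As a rough illustration of the baseline comparison, consider the diff below. The metric names are placeholders; the real audit tracks far more signals.

```python
def diff_audits(baseline: dict[str, int], current: dict[str, int]) -> list[str]:
    """Flag any audit metric that changed since last week's baseline."""
    flags = []
    for metric, value in current.items():
        prev = baseline.get(metric)
        if prev is not None and value != prev:
            flags.append(f"{metric}: {prev} -> {value}")
    return flags

baseline = {"broken_links": 2, "crawl_errors": 0, "redirect_chains": 0}
current = {"broken_links": 2, "crawl_errors": 1, "redirect_chains": 14}
for flag in diff_audits(baseline, current):
    print("review needed:", flag)
```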

Agent 3: Content Gap Agent

Compares client keyword coverage against their top 5 competitors. Identifies keywords where competitors rank and the client doesn’t. Filters for business relevance and search volume. Outputs a prioritized list of content opportunities.

This agent runs monthly. It was the hardest to build because “business relevance” is subjective. We spent 6 weeks fine-tuning the relevance scoring. The first version flagged hundreds of keywords that were topically related but commercially irrelevant. Version 4 (current) gets it right about 88% of the time, with the remaining 12% caught during human review.
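
The core of the gap analysis is a set difference plus the relevance filter that took four versions to tune. A minimal sketch, with a made-up threshold:

```python
def content_gaps(client_kws: set[str], competitor_kws: set[str],
                 relevance: dict[str, float], threshold: float = 0.6) -> list[str]:
    """Keywords competitors rank for that the client doesn't,
    filtered by relevance score (the 0.6 cutoff is illustrative)."""
    gaps = competitor_kws - client_kws
    return sorted(k for k in gaps if relevance.get(k, 0.0) >= threshold)

client = {"crm software", "crm pricing"}
rivals = {"crm software", "crm pricing", "crm migration checklist", "what is erp"}
scores = {"crm migration checklist": 0.9, "what is erp": 0.2}
print(content_gaps(client, rivals, scores))  # ['crm migration checklist']
```

Everything hard in this agent lives in the relevance dictionary; the set difference itself was never the problem.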

Agent 4: Rank Tracking Reporter

Pulls daily ranking data, identifies significant movements (gains or drops of 5+ positions), correlates movements with known events (algorithm updates, content changes, link acquisition), and generates weekly ranking reports for clients.

This agent saved us from hiring a dedicated reporting person. Before the agent, report generation consumed 6 hours per week across the team. Now it takes 45 minutes of review time per week for all clients combined.
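
The movement detection itself is simple; correlating movements with known events is where the real work is. A sketch of the detection step, using the 5-position threshold from above:

```python
def significant_moves(previous: dict[str, int], today: dict[str, int],
                      min_delta: int = 5) -> list[tuple[str, int]]:
    """Return (keyword, delta) for moves of min_delta+ positions.
    Positive delta means the page dropped; negative means it gained."""
    moves = []
    for kw, pos in today.items():
        prev = previous.get(kw)
        if prev is not None and abs(pos - prev) >= min_delta:
            moves.append((kw, pos - prev))
    return moves

print(significant_moves({"crm software": 8}, {"crm software": 3}))
# [('crm software', -5)] -- gained five positions
```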

What Agents Run Inside the Content Engine?

Content is where AI agents deliver the most visible output, and where quality control is most important.

Agent 1: Content Brief Agent

Given a target keyword and client brief, this agent researches the top 10 ranking pages, extracts their content structure, identifies gaps and angles, and produces a detailed content brief. The brief includes: recommended headings, key points to cover, data to cite, internal links to include, and tone guidance based on the client’s brand guidelines.

The brief agent has been running since September 2025. It produces briefs that are 85% ready for a writer to start working from. The remaining 15% involves adding client-specific context, adjusting for industry nuance, and occasionally overriding the agent’s heading suggestions when they’re too generic.
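
For a sense of the output shape, here is roughly what a brief looks like as a data structure. The field names are illustrative, not our internal schema:

```python
from dataclasses import dataclass, field

@dataclass
class ContentBrief:
    target_keyword: str
    recommended_headings: list[str]
    key_points: list[str]
    data_to_cite: list[str] = field(default_factory=list)
    internal_links: list[str] = field(default_factory=list)
    tone_guidance: str = ""
    human_reviewed: bool = False  # flipped during the remaining 15% human pass
```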

Agent 2: First Draft Agent

Takes a finalized content brief and produces a first draft. This agent uses the client’s brand guidelines, existing content samples, and tone preferences to match the expected voice. For our own content (like this blog post you’re reading), the agent produces drafts that go through extensive human editing.

Accuracy note: first drafts from this agent require 30-40% editing on average. That sounds high, but a junior writer’s first draft typically requires 45-55% editorial revision. The agent’s drafts are structurally better (they follow the brief precisely) but need voice refinement and factual verification.

Agent 3: Content Refresh Agent

Monitors published content for decay. When a page’s rankings drop by more than 10 positions, traffic declines by more than 25% over 30 days, or the content references data older than 12 months, the agent flags it for refresh. It then produces a refresh brief showing what to update, what to add, and what data to replace.

This agent caught a client’s product comparison page that had dropped from position 3 to position 18 because a competitor had published an updated comparison with 2026 data while our client’s page still referenced 2024 numbers. The agent flagged it within 48 hours of the ranking drop. Without the agent, we’d have caught it in the next monthly review, possibly 3 weeks later.
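
The decay rules map directly to code. A minimal sketch of the three triggers described above:

```python
from datetime import date

def refresh_reasons(rank_drop: int, traffic_change_pct: float,
                    newest_cited_data: date, today: date) -> list[str]:
    """Return which of the three decay triggers fired, if any."""
    reasons = []
    if rank_drop > 10:
        reasons.append(f"rank dropped {rank_drop} positions")
    if traffic_change_pct < -25:
        reasons.append(f"traffic down {abs(traffic_change_pct):.0f}% over 30 days")
    if (today - newest_cited_data).days > 365:
        reasons.append("cited data is older than 12 months")
    return reasons

# The comparison-page case above: position 3 -> 18, still citing 2024 data.
print(refresh_reasons(15, -8.0, date(2024, 11, 1), date(2026, 2, 1)))
```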

What Agents Run Inside the AI Visibility Engine?

Agent 1: AI Citation Tracker

Monitors how AI platforms (ChatGPT, Gemini, Perplexity, Google AI Overviews) respond to queries relevant to each client. Tests 50-100 queries per client per week and records whether the client is cited, mentioned, or absent from each response.

This is our most compute-intensive agent. Running 100 queries across 4 platforms means 400 API calls per client per week. For 8 clients, that’s 3,200 weekly API calls just for monitoring, at approximately Rs 28,000 per month in API fees. The spend is worth it: we track AI visibility trends with weekly granularity that no manual process could match.
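
The per-response classification reduces to three buckets. A deliberately naive heuristic sketch (the production check also handles paraphrased brand names and unfurled links, which this does not):

```python
def classify_response(client_name: str, client_domain: str, response: str) -> str:
    """cited = the domain appears; mentioned = the brand name appears;
    absent = neither."""
    text = response.lower()
    if client_domain.lower() in text:
        return "cited"
    if client_name.lower() in text:
        return "mentioned"
    return "absent"

print(classify_response("Acme CRM", "acmecrm.com",
                        "Popular options include Acme CRM (acmecrm.com) and others."))
# cited
```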

Agent 2: Schema Compliance Agent

Checks client sites for schema markup compliance against our standards. Validates JSON-LD against the Schema.org spec, checks for FAQPage schema on all FAQ sections, verifies entity consistency across pages, and flags pages missing required schema types.

This agent runs weekly and has a 99.2% accuracy rate (measured against manual audit results). It’s fully autonomous. When it finds an issue, it creates a task in our project management system with the specific fix needed.
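
A cut-down version of one check: collecting @type values from a JSON-LD blob so a missing FAQPage type can be flagged. Full validation against the Schema.org spec is out of scope for this sketch.

```python
import json

def jsonld_types(blob: str) -> set[str]:
    """Collect @type values, handling single objects, arrays, and @graph."""
    data = json.loads(blob)
    nodes = data if isinstance(data, list) else data.get("@graph", [data])
    types: set[str] = set()
    for node in nodes:
        t = node.get("@type")
        if isinstance(t, list):
            types.update(t)
        elif t:
            types.add(t)
    return types

blob = '{"@context": "https://schema.org", "@type": "WebPage"}'
if "FAQPage" not in jsonld_types(blob):
    print("flag: FAQ section is missing FAQPage schema")
```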

What Did We Learn Building These Agents?

Fourteen agents, running for 6-18 months depending on when each was deployed. Here are the lessons that changed how we build agents for clients.

Lesson 1: Start simpler than you think. Our first Content Brief Agent tried to do everything: keyword research, competitor analysis, heading generation, tone matching, internal link suggestions, and schema recommendations. It was slow (8 minutes per brief) and unreliable (60% accuracy). We stripped it back to headings and key points only. Accuracy jumped to 91%. Then we added features back one at a time, testing accuracy at each step. The current version does everything the first version tried to do, but it took 4 iterations over 5 months to get there reliably.

Lesson 2: Cost monitoring is not optional. Our AI Citation Tracker ran up Rs 1,40,000 in API costs in its first month because we hadn’t set rate limits. A bug in the retry logic caused it to re-run failed queries up to 20 times. We now set hard cost caps on every agent: if it exceeds its daily API budget, it stops and alerts the team.
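
The cap itself is a few lines. A sketch of the guard every agent now runs behind (the amounts and the alert hook are illustrative):

```python
class BudgetGuard:
    def __init__(self, daily_cap_rs: float):
        self.daily_cap_rs = daily_cap_rs
        self.spent_today = 0.0

    def charge(self, cost_rs: float) -> None:
        """Record spend; halt the agent the moment the cap is crossed."""
        self.spent_today += cost_rs
        if self.spent_today > self.daily_cap_rs:
            print(f"ALERT: Rs {self.spent_today:.0f} exceeds "
                  f"daily cap of Rs {self.daily_cap_rs:.0f}")
            raise RuntimeError("daily API budget exceeded; agent halted")

guard = BudgetGuard(daily_cap_rs=1500)
guard.charge(900)
guard.charge(400)    # fine: Rs 1,300 so far
# guard.charge(300)  # would trip the cap and stop the agent
```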

Lesson 3: Agent-to-agent communication is fragile. We tried connecting the Content Gap Agent directly to the Content Brief Agent: find a gap, automatically generate a brief. It worked 70% of the time. The other 30%, the gap agent passed keywords to the brief agent with insufficient context, producing briefs for the wrong intent. We added a human checkpoint between the two agents. That 30% failure rate dropped to under 5%.
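
The checkpoint is deliberately boring: the gap agent’s output lands in a queue, an analyst confirms the intent, and only approved items reach the brief agent. A structural sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class GapCandidate:
    keyword: str
    inferred_intent: str    # what the gap agent guessed
    approved: bool = False  # set by the analyst, never by either agent

def to_brief_agent(queue: list[GapCandidate]) -> list[GapCandidate]:
    """Human-in-the-loop gate: only analyst-approved candidates flow downstream."""
    return [c for c in queue if c.approved]

queue = [
    GapCandidate("crm migration checklist", "informational", approved=True),
    GapCandidate("crm free", "transactional"),  # intent unclear, held back
]
print([c.keyword for c in to_brief_agent(queue)])
```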

Lesson 4: Version everything. Agents drift over time. Prompt changes, API updates, and data source modifications all affect output. We now version-control every agent’s configuration (prompts, parameters, data sources, guardrails) in Git. When something breaks, we can diff the current config against the last known working version and find the change that caused the regression.
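
In practice this means every agent’s configuration serializes to a file that lives in Git. A sketch of the idea (the fields and values are illustrative):

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AgentConfig:
    prompt_version: str
    model: str
    temperature: float
    data_sources: list[str] = field(default_factory=list)
    guardrails: dict[str, float] = field(default_factory=dict)

cfg = AgentConfig("brief-v4.2", "example-model", 0.3,
                  ["search_console", "semrush"],
                  {"max_daily_spend_rs": 1500})

# Committed to the agent's repo, so `git diff` against the last
# known-good version pinpoints whatever change caused a regression.
with open("agent_config.json", "w") as f:
    json.dump(asdict(cfg), f, indent=2, sort_keys=True)
```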

Lesson 5: Build the monitoring first. This seems backward, but for every new agent, we build the dashboard before we build the agent. Define what metrics to track, create the logging hooks, build the visualization. Then build the agent and connect it to the existing monitoring infrastructure. This ensures we have data from day one instead of realizing 3 weeks into a deployment that we have no idea how the agent is actually performing.
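
Concretely, monitoring-first means the logging hook exists before the agent does. A minimal sketch of the pattern:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def monitored(agent_name: str):
    """Wrap any agent entry point so every run logs duration and outcome."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                logging.info("%s ok in %.2fs", agent_name, time.monotonic() - start)
                return result
            except Exception:
                logging.info("%s FAILED in %.2fs", agent_name, time.monotonic() - start)
                raise
        return inner
    return wrap

@monitored("content_brief_agent")
def run_brief(keyword: str) -> str:
    return f"brief for {keyword}"

run_brief("crm for small law firms")
```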

How Does This Translate to Client Deployments?

Every agent we build for clients is a version of an agent we’ve already proven internally. The configurations differ (different data sources, different brand guidelines, different success criteria), but the architecture is tested.

When a client asks for a content production agent, we don’t start from scratch. We deploy the same Content Brief + First Draft agent architecture we’ve been running for 12 months. The customization is in the prompts (client voice, client topics, client competitors) and the integrations (client CMS, client analytics, client brand guidelines). The core agent logic is proven.

This is a significant advantage. Most firms building AI agents for clients are building from zero for each engagement. They’re learning on the client’s dime. We learned on our own. The client gets an agent that’s been iterated 4-6 times before they ever see it.

What’s Next for Our Agent Architecture?

Three things we’re building or testing right now:

Cross-engine intelligence. Currently, our agents operate within their engine boundaries. The SEO agent doesn’t know what the PPC agent is doing. We’re building the Governance Engine layer that shares context across all agents. When the SEO agent identifies a keyword opportunity, the PPC agent should know about it. When the Content agent publishes a new page, the Analytics agent should start tracking it automatically. (A toy sketch of this shared-context layer follows this list.)

Client-facing agent dashboards. Right now, agent outputs flow through our team to the client. We’re building dashboards where clients can see their agents’ activity, outputs, and metrics directly. Not to replace our human analysts, but to give clients real-time visibility into what the engine is doing between our weekly check-ins.

WebMCP agent monitoring. As more clients implement WebMCP, we need agents that monitor how AI agents interact with our clients’ tool declarations. Which agents are calling which tools, success rates per agent platform, and patterns that suggest tool definition improvements. This is the newest agent in development, scheduled for internal testing in April 2026.
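
On the cross-engine point above: architecturally, the Governance Engine layer behaves like a publish/subscribe bus between engines. A toy sketch of the shape, with made-up topic names (this is an assumption about the design, not the shipped system):

```python
from collections import defaultdict
from typing import Callable

class GovernanceBus:
    """Minimal pub/sub sketch of the cross-engine context layer."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subs[topic]:
            handler(payload)

bus = GovernanceBus()
bus.subscribe("keyword.opportunity", lambda p: print("PPC agent sees:", p))
bus.subscribe("page.published", lambda p: print("Analytics agent tracks:", p))
bus.publish("keyword.opportunity", {"keyword": "crm migration checklist"})
```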

Building AI agents that run a growth engine isn’t about replacing the team. It’s about giving the team capacity that would otherwise require 3-4x the headcount. Our 4-person strategy team manages work that would traditionally require 12-15 people, because agents handle the execution and the humans handle the judgment.

If your team is interested in how agent-powered growth engineering works in practice, we’re happy to walk through our architecture in detail. Book a consultation and ask for the “agent architecture deep dive.” It’s not a sales pitch. It’s a technical walkthrough of what we’ve built and what we’ve learned.
