A step-by-step methodology for auditing how your brand appears across ChatGPT, Perplexity, Gemini, and Google AI Overviews. Covers the 50-prompt test framework, scoring model, entity consistency checks, citation tracking, and competitive AI visibility analysis. Built from the methodology we use on every client engagement at ScaleGrowth.Digital.
Last updated: March 2026 · Reading time: 22 min
“An AI visibility audit is the single most important diagnostic you can run for your brand right now. Traditional SEO tells you where you rank in a list of links. An AI visibility audit tells you whether AI systems understand what your brand does, trust your content, and recommend you when someone asks. We’ve audited over 50 brands, and every single one had blind spots they didn’t know existed until they saw how AI described them.”
Hardik Shah, Founder of ScaleGrowth.Digital
AI visibility is the measure of how, when, and in what context AI systems mention your brand when users ask relevant questions. It’s the AI-era equivalent of search engine rankings, but instead of tracking your position in a list of blue links, you’re tracking whether AI assistants know your brand exists, describe it accurately, and recommend it to users seeking what you sell.
An AI visibility audit is a systematic evaluation of how your brand appears across AI-powered platforms (ChatGPT, Perplexity, Gemini, Google AI Overviews), measuring mention frequency, accuracy, sentiment, and citation patterns against competitors.
This matters because the way people find and evaluate brands is shifting. Gartner projected that traditional search volume would decline 25% by 2026 as AI-powered answers replace link-based search results. Whether or not that exact number holds, the directional shift is undeniable. When a potential customer asks ChatGPT “What’s the best CRM for small businesses?” or “Which accounting firm should I use in Melbourne?”, your brand is either part of that answer or it isn’t. There’s no position 2 or position 5. You’re mentioned or you’re invisible.
An AI visibility audit tells you exactly where you stand. It measures three things: (1) whether AI platforms mention your brand at all, (2) whether they describe your brand accurately, and (3) how you compare to competitors in AI-generated recommendations.
Your brand has an AI reputation whether you manage it or not. Every time someone asks ChatGPT, Perplexity, or Gemini a question in your industry, those systems generate an answer based on their training data, web index, and retrieval sources. If your brand’s information is inconsistent, outdated, or absent from those sources, AI systems will either ignore you or describe you incorrectly. Both outcomes cost you revenue.
That, in short, is the case for running an AI visibility audit now: your brand's AI reputation already exists, and an audit is how you find out what it says.
Not all AI platforms work the same way. Each has different data sources, different citation behaviors, and different relevance to your audience. Audit all four major platforms, but understand what each one tells you.
| Platform | Data Source | Citation Behavior | Why It Matters |
|---|---|---|---|
| ChatGPT | Training data (cutoff date varies) + Bing web search (when browsing enabled) | Inconsistent. Sometimes inline links, sometimes footnotes, often no sources at all. | Largest user base among AI assistants. Over 200 million weekly active users (OpenAI, 2025). |
| Perplexity | Real-time web search with numbered citations for every response | Most transparent. Every response includes numbered source citations with clickable URLs. | Best platform for understanding citation mechanics. Every Perplexity answer tells you which URLs the AI consulted. |
| Google Gemini / AI Overviews | Google’s search index + Knowledge Graph + live web retrieval | AI Overviews show source cards linking to websites. Gemini (standalone) provides inline citations. | Google AI Overviews appear on 30%+ of search queries (SE Ranking, 2025). They sit above traditional organic results. |
| Microsoft Copilot | Bing search index + OpenAI models | Numbered citations linking to Bing search results | Integrated into Microsoft 365, Windows, and Edge. Relevant for B2B brands where decision-makers use Microsoft products. |
Audit all four. The brand that appears consistently across all platforms builds more trust than one that shows up in Perplexity but is absent from ChatGPT. Consistency across platforms signals to users (and to the AI systems themselves) that your brand is an established, authoritative entity.
The 50-prompt test is the core of an AI visibility audit. You submit 50 carefully designed prompts across all four AI platforms and record whether your brand is mentioned, how it’s described, what competitors appear, and what sources are cited. Here’s the methodology we use at ScaleGrowth.Digital on every client engagement.
Divide your 50 prompts into five categories of 10 prompts each. Each category tests a different dimension of AI visibility.
| Category | Purpose | Example Prompts |
|---|---|---|
| Brand queries (10) | Test whether AI knows who you are | “What is [brand name]?” / “Tell me about [brand name]” / “Is [brand name] reliable?” |
| Category queries (10) | Test whether AI recommends you in your category | “Best [product category] in [location]” / “Top [service type] companies” |
| Comparison queries (10) | Test how AI positions you vs. competitors | “[Your brand] vs [competitor]” / “Compare [your brand] and [competitor]” |
| Problem queries (10) | Test whether AI recommends you for the problems you solve | “How do I [problem your product solves]?” / “What’s the best way to [task you help with]?” |
| Purchase-intent queries (10) | Test whether AI recommends you when someone is ready to buy | “Where can I buy [product category]?” / “Best [service] for [specific need]” |
That’s 50 prompts x 4 platforms = 200 data points. For each response, record the fields described in the tracking spreadsheet below.
AI responses aren’t deterministic. The same prompt can produce different responses on different days. Run your full 50-prompt test twice, 7 days apart. This gives you 400 data points and accounts for response variability. Average the results for a more stable baseline.
Use a spreadsheet with columns for: prompt text, platform, date, mentioned (Y/N), position, accuracy score (1-5), sentiment (+1/0/-1), competitors mentioned, sources cited, and notes. This raw data feeds into the scoring framework described in the next section.
The full 50-prompt test takes 4-6 hours for one brand across all four platforms, including recording and analysis. For our team, it’s a one-day deliverable that forms the foundation of every AI visibility engagement.
Raw data from 200 responses needs a scoring system to be actionable. The ScaleGrowth.Digital AI Visibility Score is a 0-100 composite metric built from four weighted dimensions.
| Dimension | Weight | What It Measures | How to Calculate |
|---|---|---|---|
| Mention Rate | 30% | How often AI platforms mention your brand when relevant queries are asked | (Number of responses mentioning your brand / Total relevant responses) x 100. Score across all 4 platforms, then average. |
| Accuracy Score | 25% | How accurately AI describes your brand when it does mention you | Rate each mention 1-5 for factual accuracy. Average across all mentions. Convert to 0-100 scale (score x 20). |
| Sentiment Score | 20% | Whether AI descriptions are positive, neutral, or negative | Score each mention: +1 (positive), 0 (neutral), -1 (negative). Calculate (sum + total mentions) / (2 x total mentions) x 100. |
| Competitive Position | 25% | Where you appear relative to competitors in AI responses | Track your share of voice (your mentions / total brand mentions in competitive queries). Score: (your SoV / top competitor SoV) x 100, capped at 100. |
| Score Range | Grade | What It Means |
|---|---|---|
| 80-100 | A | Strong AI visibility. AI platforms consistently mention, accurately describe, and recommend your brand. |
| 60-79 | B | Good foundation with gaps. You’re mentioned but not consistently, or there are accuracy issues to fix. |
| 40-59 | C | Significant visibility gaps. AI platforms know you exist but don’t reliably recommend you. |
| 20-39 | D | Weak AI presence. AI platforms rarely mention your brand and may describe it inaccurately. |
| 0-19 | F | Effectively invisible to AI. Start with the fundamentals: entity establishment, structured data, and authoritative content. |
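The four dimension formulas and the grade cutoffs above can be combined into a small scoring sketch. The function names are my own; the weights, conversions, and grade boundaries come directly from the tables above.

```python
def mention_rate(mentions: int, total: int) -> float:
    """% of relevant responses that mention the brand (0-100)."""
    return mentions / total * 100

def accuracy_score(ratings: list[int]) -> float:
    """Average of 1-5 accuracy ratings, converted to 0-100 (score x 20)."""
    return sum(ratings) / len(ratings) * 20

def sentiment_score(sentiments: list[int]) -> float:
    """Map +1/0/-1 sentiment marks onto 0-100: (sum + n) / (2n) x 100."""
    n = len(sentiments)
    return (sum(sentiments) + n) / (2 * n) * 100

def competitive_position(your_sov: float, top_competitor_sov: float) -> float:
    """Your share of voice relative to the top competitor, capped at 100."""
    return min(your_sov / top_competitor_sov * 100, 100)

def ai_visibility_score(mention: float, accuracy: float,
                        sentiment: float, competitive: float) -> float:
    """Weighted composite: 30% mention, 25% accuracy, 20% sentiment, 25% competitive."""
    return 0.30 * mention + 0.25 * accuracy + 0.20 * sentiment + 0.25 * competitive

def grade(score: float) -> str:
    """Letter grade per the score-range table."""
    for cutoff, letter in [(80, "A"), (60, "B"), (40, "C"), (20, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, a brand mentioned in 44 of 200 responses, averaging 4/5 on accuracy, mostly neutral-to-positive sentiment, and a 12% share of voice against a 30% category leader lands in the C range, which matches the 30-50 first-audit pattern described below.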
Most brands score 30-50 on their first audit. That’s not alarming; it’s expected. AI visibility is a new channel, and most brands haven’t optimized for it yet. The brands that act now have a significant first-mover advantage because AI visibility compounds.
AI platforms build their understanding of your brand from hundreds of data points scattered across the web: your website, Wikipedia, Crunchbase, LinkedIn, social profiles, press mentions, review sites, and directory listings. If these sources provide inconsistent information, AI systems get confused.
Entity consistency means that every digital source describing your brand uses the same name, description, founding date, leadership, location, and product/service categorization, giving AI systems a clear, unified understanding of who you are.
Here’s the entity consistency audit checklist we use:
| Entity Attribute | What to Check | Where to Check It | Common Issue |
|---|---|---|---|
| Brand name | Exact same format everywhere | Website, LinkedIn, Crunchbase, Google Business Profile, social profiles | “ScaleGrowth” vs “Scale Growth” vs “ScaleGrowth.Digital” |
| Founding date | Consistent year across all sources | About page, LinkedIn, Crunchbase, Wikipedia | Different founding years across profiles |
| Headquarters / location | Same city, state, country | Google Business Profile, LinkedIn, directories | Old addresses not updated after a move |
| Industry / category | Same category classification | LinkedIn industry, Google Business category, Crunchbase | Listed as “marketing agency” on one platform and “software company” on another |
| Products / services | Current product names and descriptions | Website, directory listings, review sites | Discontinued products still listed on third-party sites |
| Leadership | Current CEO/founder/leadership | LinkedIn, Crunchbase, About page, press | Former executives still listed as current |
| Schema markup | Organization schema on homepage, correct @type, sameAs links | Your website (homepage, about page) | No Organization schema, or missing sameAs links |
| Social profile links | All profiles linked, active, and consistent | Website footer, schema sameAs, each social profile’s bio | Dead profiles on platforms you no longer use |
Fix entity inconsistencies before running your 50-prompt test if you can. But if time is limited, run the audit with the current state, document the inconsistencies, fix them, and re-run the 50-prompt test 30 days later to measure improvement.
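The schema markup row in the checklist above refers to JSON-LD Organization markup on your homepage. A minimal sketch, built here with Python purely to show the shape, follows; every URL and `sameAs` entry is a hypothetical placeholder you would replace with your brand's real profiles.

```python
import json

# Hypothetical example values — substitute your brand's real details.
organization_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",                      # exact same format everywhere
    "url": "https://www.example.com",
    "foundingDate": "2018",                       # must match LinkedIn, Crunchbase, etc.
    "sameAs": [                                   # links AI systems use to unify your entity
        "https://www.linkedin.com/company/example-brand",
        "https://www.crunchbase.com/organization/example-brand",
    ],
}

# The snippet that would go in the page <head>:
json_ld = (
    '<script type="application/ld+json">'
    + json.dumps(organization_schema, indent=2)
    + "</script>"
)
```

The point of `sameAs` is exactly the entity-consistency problem in the table above: it tells AI systems that the website, the LinkedIn page, and the Crunchbase profile all describe one entity.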
Citation tracking is the practice of identifying which content pages AI platforms reference when they mention your brand or answer queries in your category. Unlike traditional backlink analysis (where you track who links to you), AI citation tracking reveals which of your pages AI systems trust enough to cite as sources.
Perplexity is the best platform for citation analysis because every response includes numbered source URLs. ChatGPT is the hardest because its citation behavior is inconsistent. Here’s how to approach each platform:
For each of your 50 test prompts on Perplexity, record every cited URL. Then analyze:
AI Overviews display source cards with website thumbnails and links. For each AI Overview in your category:
ChatGPT’s citations are the most unpredictable. With browsing enabled, it sometimes provides inline links. Often, it provides no sources at all. For ChatGPT, focus on:
The output of citation tracking is a “citation map” showing which content assets drive AI visibility and which gaps need to be filled.
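The citation map described above is, mechanically, just a tally of which domains and pages keep showing up in cited sources. A minimal sketch (the function name and input shape are assumptions, not part of any tool's API):

```python
from collections import Counter
from urllib.parse import urlparse

def citation_map(responses: list[list[str]]) -> tuple[Counter, Counter]:
    """Tally cited domains and exact URLs.

    responses: one list of cited URLs per AI answer (e.g. Perplexity's
    numbered sources recorded during the 50-prompt test).
    """
    domains: Counter = Counter()
    pages: Counter = Counter()
    for urls in responses:
        for url in urls:
            domains[urlparse(url).netloc] += 1
            pages[url] += 1
    return domains, pages

# Hypothetical recorded citations from two Perplexity responses:
responses = [
    ["https://example.com/guide", "https://other.example/review"],
    ["https://example.com/guide"],
]
domains, pages = citation_map(responses)
```

Sorting `domains.most_common()` shows which sites dominate citations in your category; pages from your own domain that never appear are the content gaps the audit should flag.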
Competitive AI visibility analysis answers one question: when a potential customer asks an AI platform about your category, which brands get mentioned most, and where do you rank?
Run your 50-prompt test and track every brand mentioned across all responses. Calculate each brand’s share of voice:
AI Share of Voice = (Your brand mentions / Total brand mentions across all responses) x 100
In a typical audit across 4 platforms, we see 200-400 total brand mentions. A strong leader might command 25-35% share of voice. A mid-tier competitor might hold 10-15%. Below 5% means you’re effectively invisible in AI recommendations for your category.
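Applying the share-of-voice formula above across every brand tallied in the test can be sketched as (a minimal illustration; the `Counter` input and all counts are hypothetical):

```python
from collections import Counter

def share_of_voice(mention_counts: Counter) -> dict[str, float]:
    """Each brand's mentions as a % of all brand mentions recorded."""
    total = sum(mention_counts.values())
    return {brand: count / total * 100 for brand, count in mention_counts.items()}

# Hypothetical tallies from a 200-response audit:
counts = Counter({"Competitor A": 60, "Competitor B": 30, "Your brand": 10})
sov = share_of_voice(counts)
```

In this hypothetical, "Your brand" holds 10% share of voice against a 60% leader: per the benchmarks above, present but far from the mid-tier 10-15% band's upper end, and the gap versus Competitor A is what feeds the competitive matrix below.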
Build a matrix comparing your brand to your top 3-5 competitors across the four scoring dimensions:
| Metric | Your Brand | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|
| Mention rate (% of relevant queries) | [X]% | [X]% | [X]% | [X]% |
| Average accuracy score (1-5) | [X] | [X] | [X] | [X] |
| Sentiment (% positive) | [X]% | [X]% | [X]% | [X]% |
| AI Share of Voice | [X]% | [X]% | [X]% | [X]% |
| Citation count (Perplexity) | [X] | [X] | [X] | [X] |
| Composite AI Visibility Score | [X]/100 | [X]/100 | [X]/100 | [X]/100 |
This matrix becomes the centerpiece of your AI visibility report. Decision-makers respond to competitive data faster than abstract metrics. When they see a competitor’s AI share of voice is 3x theirs, the urgency to invest in AI visibility becomes immediate.
The AI visibility monitoring tool market is growing rapidly. As of March 2026, several purpose-built platforms exist alongside features added by established SEO tool providers.
| Tool | What It Does | Best For | Price Range |
|---|---|---|---|
| Otterly.ai | Tracks brand mentions across ChatGPT, Perplexity, Google AI Overviews, and Copilot. | Ongoing monitoring with custom prompt sets | From $39/month (as of March 2026) |
| Semrush AI Visibility Toolkit | Shows how brands appear in AI-generated answers. Integrates with existing Semrush data. | Teams already using Semrush for SEO | Included in Semrush plans (from $139.95/month) |
| HubSpot AEO Grader | Free tool that checks your brand’s visibility across ChatGPT, Perplexity, and Gemini. | Quick, free initial assessment | Free |
| LLMClicks | Tracks brand visibility across multiple LLM platforms with search-style queries. | Share-of-voice tracking and competitive benchmarking | From $49/month (as of March 2026) |
| Peec AI | Monitors brand visibility across 10+ LLMs with real-time optimization recommendations. | Enterprise brands needing multi-LLM coverage | Custom pricing |
You can run your first AI visibility audit manually with nothing but the AI platforms themselves and a spreadsheet. The tools above become valuable when you need ongoing monitoring, large-scale prompt testing, or automated competitive tracking. Start manual, validate the methodology, then invest in tooling when you know what metrics matter for your brand.
We’ve run AI visibility audits for B2B SaaS companies, DTC ecommerce brands, professional services firms, and enterprise technology vendors. Here are the patterns that don’t show up in tool dashboards but make the difference between a useful audit and a wasted effort.
Five mistakes appear in nearly every first-time AI visibility audit we review.
A thorough AI visibility audit using the 50-prompt methodology takes 4-6 hours for one brand across four AI platforms, including data recording and initial analysis. The full report with scoring, competitive analysis, and recommendations takes an additional 4-6 hours to produce. For an agency or consultant, expect 2-3 business days from start to deliverable.
Run a comprehensive 50-prompt audit quarterly. Between quarterly audits, monitor a smaller set of 10-15 core prompts monthly to catch sudden changes. If you make significant changes to your website, brand, or content strategy, run an ad hoc audit 30 days after the changes to measure impact.
Yes. AI visibility is optimizable. The most impactful actions are: fixing entity consistency across all web properties, adding Organization and FAQ schema to your website, creating comprehensive content that directly answers questions in your category, building your presence on sources AI systems consult, and earning press mentions and authoritative backlinks. Most brands see measurable improvement within 60-90 days.
GEO (Generative Engine Optimization) is the practice of optimizing your content to be cited by AI systems. AI visibility is the measurement of how your brand currently appears across AI platforms. The audit identifies the gaps. GEO closes them. They’re complementary disciplines, not competing ones. You audit first, then optimize.
No. You can run your first audit with nothing but free access to ChatGPT, Perplexity, Gemini, and Copilot, plus a spreadsheet to record results. HubSpot’s AEO Grader is free for an initial assessment. Paid tools like Otterly.ai (from $39/month) and Semrush add automation and historical tracking, but they’re not required for the first audit.
ScaleGrowth.Digital runs AI visibility audits using the 50-prompt methodology described in this guide. We audit, score, map your competitive position, and deliver a prioritized optimization roadmap.