Should I include my brand in “best of” lists?
Include your brand in “best of” lists only if you publish explicit ranking criteria, update the rankings frequently, and prepare to sunset the tactic. Self-ranking listicles are currently cited heavily by LLMs, but the tactic carries an amber risk rating and requires quarterly governance review. “Self-ranking works today but signals manipulation risk tomorrow,” explains Hardik Shah, a digital growth strategist specializing in AI-driven search optimization and AEO strategy for enterprise clients across industries. “If you use this tactic, treat it as temporary and build governance around transparency.”
What are self-ranking listicles?
Self-ranking listicles are articles like “Best AI SEO Tools” or “Top Marketing Platforms” where the publisher includes their own product or service in the ranking.
The practice sits in an ethical gray area. It’s your content, so you control what you include, but readers and LLMs expect ranked lists to be objective evaluations.
Simple explanation
You write an article called “Top 10 Project Management Tools” and include your own tool as #1 or #2. This feels promotional, but if you’re transparent about criteria and include legitimate competitors, it’s currently effective for AI citations.
Technical explanation
LLMs frequently extract information from comparison and ranking content because users often ask “what’s the best” questions. Content that directly answers those questions with structured comparisons receives high citation probability. Self-inclusion exploits this pattern. The amber risk comes from potential algorithmic adjustments as platforms detect and discount self-promotional rankings.
Practical example
High-risk self-ranking (likely to be penalized): “Best AI Marketing Platforms: #1 Our Platform, #2 Our Other Product, #3 Generic Competitor”
Lower-risk self-ranking (temporarily acceptable): “Best AI Marketing Platforms: Comparison of 8 Tools
- Criteria: [specific, measurable factors]
- [Competitor A]: [honest assessment]
- [Your Platform]: [honest assessment]
- [Competitor B]: [honest assessment]
- Rankings based on [methodology]”
The second approach is more transparent and includes legitimate competitor evaluation, reducing but not eliminating manipulation risk.
Why does this tactic work today?
LLMs are trained on vast amounts of content including many “best of” articles. The systems are optimized to extract and cite rankings when users ask comparative questions.
Why LLMs cite rankings:
- Users frequently ask “what’s the best” questions
- Ranked lists provide clear, extractable answers
- Comparison content matches user intent for evaluation-stage queries
- LLMs currently have limited ability to detect self-promotional bias
The last point is critical. Current LLM systems don’t heavily discount self-inclusion in rankings. This will likely change as platforms improve bias detection.
What makes criteria publication mandatory?
If you’re going to rank yourself, you must publish explicit, measurable criteria and apply them consistently.
Criteria requirements:
- Specific, measurable factors (not vague “quality” or “performance”)
- Applied consistently to all options in the ranking
- Transparent about what you’re measuring
- Updated when criteria change
- Clear methodology explaining how rankings were determined
Simple explanation
Don’t just say “Our tool is the best for small businesses.” Explain what you measured: ease of use (scored 1-10 based on setup time), pricing (cost per user per month), features (number of integrations), support (response time). Show the scores. Make it defensible.
Practical example
Unacceptable self-ranking: “Top CRM Platforms:
- Our CRM – The best choice for growing teams
- Competitor A – Good but limited features
- Competitor B – Okay for basic needs”
No criteria, no methodology, purely promotional.
Acceptable self-ranking with criteria:
“Top CRM Platforms for Small Businesses: Comparison Based on 5 Criteria
Evaluation Criteria:
- Pricing: Monthly cost per user for standard tier
- Ease of setup: Hours required for typical 10-user deployment
- Integration count: Number of native integrations available
- Mobile functionality: Feature parity between web and mobile (%)
- Support response: Average first response time for support tickets
| Platform | Pricing | Setup Time | Integrations | Mobile Parity | Support Response | Overall Score |
|---|---|---|---|---|---|---|
| Platform A | $25/user | 2 hours | 150 | 85% | 4 hours | 87/100 |
| Our Platform | $30/user | 1.5 hours | 120 | 90% | 2 hours | 89/100 |
| Platform B | $20/user | 3 hours | 200 | 70% | 8 hours | 82/100 |
Methodology: Scores based on documented testing conducted [date]. Pricing verified [date]. Integration counts from vendor documentation.”
This approach is transparent, measurable, and includes context about how scores were determined.
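If you want the overall score itself to be reproducible, compute it from the published criteria. Below is a minimal sketch, assuming hypothetical equal weights and min-max normalization; neither assumption appears in the example above, so the printed scores won’t match the illustrative table. Publish whatever weights and scales you actually use.

```python
# Minimal scoring sketch. Equal weights and min-max normalization are
# illustrative assumptions, not the methodology behind the table above;
# disclose whatever weights and scales you actually use.

# Raw criterion values per platform, taken from the example table.
platforms = {
    "Platform A":   {"price": 25, "setup_hours": 2.0, "integrations": 150, "mobile_parity": 85, "support_hours": 4},
    "Our Platform": {"price": 30, "setup_hours": 1.5, "integrations": 120, "mobile_parity": 90, "support_hours": 2},
    "Platform B":   {"price": 20, "setup_hours": 3.0, "integrations": 200, "mobile_parity": 70, "support_hours": 8},
}

# Hypothetical equal weights; they must sum to 1 and should be published.
weights = {"price": 0.2, "setup_hours": 0.2, "integrations": 0.2, "mobile_parity": 0.2, "support_hours": 0.2}
LOWER_IS_BETTER = {"price", "setup_hours", "support_hours"}

def normalized(criterion: str, value: float) -> float:
    """Min-max normalize a criterion to 0..1, inverting where lower is better."""
    values = [p[criterion] for p in platforms.values()]
    lo, hi = min(values), max(values)
    score = (value - lo) / (hi - lo) if hi != lo else 1.0
    return 1.0 - score if criterion in LOWER_IS_BETTER else score

for name, criteria in platforms.items():
    overall = sum(weights[c] * normalized(c, v) for c, v in criteria.items())
    print(f"{name}: {overall * 100:.0f}/100")
```

Whatever formula you choose, the point is that a reader (or an LLM) can re-derive your scores from the published inputs.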
What does the quarterly review requirement mean?
Self-ranking tactics carry amber risk, requiring governance review every 90 days.
Quarterly review questions:
- Are competitors still including their products in rankings?
- Have any AI platforms publicly discouraged self-ranking?
- Have your self-ranking articles maintained their citation rates, or have citations declined?
- Has the practice drawn public criticism or attention?
- Do the business benefits still justify the reputational risk?
If answers suggest increasing risk, prepare to sunset the tactic before algorithmic penalties arrive.
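To make the 90-day review auditable, record the answers as structured data and derive the recommendation mechanically. A minimal sketch follows; the field names and the two-flag threshold are illustrative assumptions, and your governance policy may weigh the questions differently.

```python
from dataclasses import dataclass

# Hypothetical checklist structure for the 90-day review; field names
# and the two-flag sunset threshold are illustrative assumptions.
@dataclass
class QuarterlyReview:
    competitors_still_self_ranking: bool   # industry norm persists?
    platforms_discourage_practice: bool    # public guidance against it?
    citations_declining: bool              # your citation rates falling?
    public_criticism_emerging: bool        # practice drawing attention?
    benefits_justify_risk: bool            # still worth the exposure?

    def risk_flags(self) -> int:
        """Count answers that point toward increasing risk."""
        return sum([
            not self.competitors_still_self_ranking,
            self.platforms_discourage_practice,
            self.citations_declining,
            self.public_criticism_emerging,
            not self.benefits_justify_risk,
        ])

    def recommendation(self) -> str:
        return ("Begin sunset planning" if self.risk_flags() >= 2
                else "Continue; review again in 90 days")

review = QuarterlyReview(
    competitors_still_self_ranking=True,
    platforms_discourage_practice=False,
    citations_declining=True,
    public_criticism_emerging=False,
    benefits_justify_risk=True,
)
print(review.recommendation())  # Continue; review again in 90 days
```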
Shah’s governance framework treats self-ranking as explicitly temporary. “We tell clients: use this for 12-24 months maximum. Build real authority through other means. When you feel comfortable removing your self-rankings, that’s when you’ve built enough legitimate authority to stop needing them.”
Should you include only your brand or multiple competitors?
Always include legitimate competitors. Single-option “rankings” are transparently promotional and carry higher risk.
Minimum requirements:
- Include at least five options total (5-7 is a reasonable range)
- At least 60% should be genuine competitors (not owned by you)
- Rank based on objective criteria, which may not always favor your option
- Update rankings when competitive landscape changes
Simple explanation
If you write “Best 7 Email Marketing Tools” and six of the seven are made by other companies, including your own tool feels more legitimate. If all seven are yours or belong to your partners, it’s obviously just marketing.
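These minimums are easy to check mechanically before publishing. A quick sketch, with a hypothetical helper name and entry format:

```python
# Sketch of a pre-publication check for the minimums above.
# The function name and entry format are illustrative assumptions.
def validate_ranking(entries: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the minimums are met."""
    problems = []
    if len(entries) < 5:
        problems.append(f"Only {len(entries)} options; include at least five.")
    competitors = [e for e in entries if not e["owned_by_us"]]
    if entries and len(competitors) / len(entries) < 0.6:
        problems.append("Fewer than 60% of entries are genuine competitors.")
    return problems

email_tools = [
    {"name": "Tool A", "owned_by_us": False},
    {"name": "Tool B", "owned_by_us": False},
    {"name": "Tool C", "owned_by_us": False},
    {"name": "Tool D", "owned_by_us": False},
    {"name": "Tool E", "owned_by_us": False},
    {"name": "Tool F", "owned_by_us": False},
    {"name": "Our Tool", "owned_by_us": True},
]
print(validate_ranking(email_tools))  # [] -> both minimums satisfied
```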
Practical example
ScaleGrowth.Digital, an AI-native consulting firm serving enterprise clients across industries, advises clients on self-ranking governance: “We had a client who wanted to create ‘Top 5 Solutions’ where all five were their products under different brands. We refused to support that. It’s obvious manipulation. If you’re including yourself, include real competitors and apply honest criteria.”
How should you handle cases where you don’t rank #1?
This is the ethical test. If your criteria are genuinely objective, sometimes you won’t rank #1.
Options when you don’t rank highest:
- Publish anyway and rank yourself accurately (builds credibility)
- Adjust criteria to factors where you perform better (but document the change honestly)
- Don’t publish the ranking (you can choose not to create content that doesn’t support your positioning)
Don’t:
- Manipulate criteria dishonestly to engineer a #1 ranking
- Exclude strong competitors that would rank above you
- Publish one ranking internally (you’re #4) but different ranking externally (you’re #1)
The credibility you gain from honest rankings where you don’t always win often exceeds the value of manipulated #1 rankings that readers and LLMs eventually discount.
What’s the difference between comparison content and self-ranking?
Comparison content explains options without ranking them. Self-ranking assigns numerical positions.
Comparison content (green risk, recommended): “Email Marketing Tools: Feature Comparison
- Tool A: Best for [use case], pricing [X], features [Y]
- Our Tool: Best for [use case], pricing [X], features [Y]
- Tool B: Best for [use case], pricing [X], features [Y]”
Self-ranking content (amber risk, quarterly review required): “Top 5 Email Marketing Tools
- Our Tool (Score: 94/100)
- Tool A (Score: 91/100)
- Tool B (Score: 88/100)”
The comparison format is safer because it doesn’t claim objective superiority. The ranking format makes explicit claims about relative quality.
How do you disclose self-inclusion?
Transparency requires clearly disclosing that your own brand is included in the ranking.
Disclosure methods:
- Note at top of article: “Full disclosure: [Your Brand] is included in this comparison. Rankings based on [criteria].”
- In methodology section: “This comparison includes our own platform evaluated using the same criteria as competitors.”
- In author bio: “The author works for [Company], which is included in this evaluation.”
Disclosure doesn’t eliminate amber risk but reduces perception of deception.
What happens when platforms start penalizing self-ranking?
This is why the tactic is amber-rated with mandatory quarterly review. When penalties arrive, you need to act quickly.
Warning signs of coming penalties:
- Major platforms (Google, OpenAI, Anthropic) publish guidelines against self-promotional rankings
- Your self-ranking content stops getting cited after previously working well
- Competitors’ self-ranking content also sees citation declines
- Public discussions criticize the practice
- Algorithm updates explicitly target comparison content
Response plan:
- Immediately stop creating new self-ranking content
- Audit existing self-ranking articles
- Choose: remove self-inclusion, convert to non-ranked comparisons, or remove articles entirely
- Document the decision and communicate to stakeholders
- Shift strategy to tactics with sustainable risk profiles
Shah emphasizes the importance of preparing for this: “Don’t get caught defending self-ranking content after platforms penalize it. Have your sunset plan ready. Know which articles you’ll update, which you’ll redirect, which you’ll delete. When the signal comes, act within days, not months.”
Can you rank yourself in industry-specific contexts?
Industry-specific or niche rankings carry slightly lower risk than broad “best of” rankings.
Lower-risk scenario: “Best Project Management Tools for Remote Healthcare Teams [specific niche]”
Including yourself in a highly specific ranking is more defensible because:
- The niche limits competitors (fewer options exist)
- Your specific expertise in that niche is easier to demonstrate
- The ranking serves a genuine audience need (not just SEO manipulation)
Higher-risk scenario: “Best Project Management Tools [broad category]”
This ranking competes with hundreds of others. Your self-inclusion looks more promotional because the category is broad and crowded.
What citation tracking is required?
Monitor whether self-ranking content maintains citation rates over time.
Tracking metrics:
| Metric | Measurement | Warning Threshold |
|---|---|---|
| Citation rate | How often LLMs cite your self-ranking articles | 30%+ decline quarter-over-quarter |
| Position cited | Whether LLMs repeat your self-assigned position | LLMs stop citing the position you gave yourself |
| Competitor citations | Whether competitor self-rankings also decline | Industry-wide decline |
| User trust signals | Comments questioning objectivity | Increasing skepticism |
If metrics show declining effectiveness, begin the sunset process even before official penalties arrive.
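The 30% quarter-over-quarter threshold is straightforward to automate. A minimal sketch, assuming hypothetical citation counts per quarter; how you collect those counts depends on your monitoring tooling, which this article doesn’t specify.

```python
# Sketch of the 30% quarter-over-quarter warning threshold.
# The citation counts below are hypothetical placeholders; plug in
# whatever your monitoring tooling actually reports.
WARNING_DECLINE = 0.30

def flag_declines(citations_by_quarter: dict[str, int]) -> list[str]:
    """Flag quarters where citations fell 30%+ versus the prior quarter."""
    warnings = []
    quarters = list(citations_by_quarter)
    for prev, curr in zip(quarters, quarters[1:]):
        before, after = citations_by_quarter[prev], citations_by_quarter[curr]
        if before > 0 and (before - after) / before >= WARNING_DECLINE:
            warnings.append(f"{curr}: down {(before - after) / before:.0%} vs {prev}")
    return warnings

citations = {"2025-Q1": 120, "2025-Q2": 110, "2025-Q3": 70}
print(flag_declines(citations))  # ['2025-Q3: down 36% vs 2025-Q2']
```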
What’s the alternative to self-ranking?
Build genuine authority through tactics that don’t carry manipulation risk.
Sustainable alternatives:
- Create detailed comparison content without rankings
- Publish original research that others cite
- Earn third-party rankings from independent sources
- Build case studies showing real results
- Create educational content demonstrating expertise
- Get customer reviews on independent platforms (G2, Capterra, TrustRadius)
These tactics take longer but create sustainable authority that doesn’t require quarterly risk reviews.
Shah’s recommendation: “Use self-ranking as a 12-month bridge tactic while building sustainable authority. By month 12, you should have enough third-party validation, original research, and educational content that self-rankings become unnecessary. If you’re still depending on self-rankings 24 months later, you haven’t invested enough in legitimate authority building.”
