Methodology

DecaGEO measures which brands AI platforms actually recommend within a category — and how often they come out on top.

Overview

DecaGEO sends recommendation-seeking questions — modeled on how real users ask AI — to AI platforms every week, then analyzes which brands get recommended and in what order.

  • AI Platform: Currently ChatGPT (GPT-5.4), with more engines coming soon
  • Region & Language: United States, English
  • Update Cycle: Weekly — every Sunday (ET)
  • Chart Scope: 10 SaaS categories at launch, expanding to more industries

What We Measure

Every week, we send hundreds of recommendation-seeking questions to AI platforms for each category — then count which brands show up in the answers, and where they rank.

What counts as a mention

When a brand appears in an AI response to a category-specific question, we call that a mention. But not all appearances count equally. We only count mentions where AI recommends the brand as a solution — simple name-drops, negative references, and comparison filler are excluded. Every metric below is built on this definition.
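The counting rule above can be sketched in a few lines. This is an illustrative sketch only: the mention-type labels are ours, not DecaGEO's internal taxonomy.

```python
# Minimal sketch of the counting rule: only mentions where AI recommends
# the brand as a solution are counted. The mention-type labels below are
# illustrative, not DecaGEO's actual classification scheme.
COUNTED = {"recommendation"}

def counts_as_mention(mention_type: str) -> bool:
    """True only when the brand is recommended as a solution."""
    return mention_type in COUNTED

mentions = [
    ("HubSpot", "recommendation"),
    ("Salesforce", "comparison_filler"),
    ("Zoho", "negative_reference"),
]
counted = [brand for brand, kind in mentions if counts_as_mention(kind)]
# counted == ["HubSpot"]
```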

What counts as a position

Position is the order in which a brand appears within a single AI response. If AI recommends five brands, the first brand listed is position 1, the second is position 2, and so on.

Using mentions and positions, we track three core metrics.

DECA Score (0–100)

The question it answers: “How strongly does AI recommend this brand in this category?”

DECA Score is a 0–100 score based on how often a brand is mentioned in AI responses and how high it ranks when it appears.

A brand mentioned in first position 40% of the time will score higher than a brand mentioned in last position 70% of the time — because where you appear matters as much as whether you appear.

Score scale

Range      What it means
80–100     Dominant AI visibility — consistently recommended at or near the top
60–79      Strong presence — frequently recommended with solid positioning
40–59      Moderate — appears regularly but often in mid-to-lower positions
20–39      Low visibility — mentioned occasionally, rarely in top positions
0–19       Minimal or absent — rarely or never recommended by AI

These ranges are operational guidelines. Actual score distributions vary by category, time period, and segment — interpret scores relative to the competitive context, not as absolute benchmarks.


How it's calculated

Each time a brand appears in an AI response, it earns points based on its position. First place earns the most points; last place earns the fewest. Brands not mentioned earn zero. We then sum these points across all responses and normalize to a 0–100 scale, where 100 means the brand was recommended first in every single response.

Example: If an AI platform lists 5 brands in a response and HubSpot appears first, HubSpot earns 5 points. If it appears third, it earns 3 points. If it doesn't appear at all, it earns 0. Repeat across hundreds of responses, normalize, and you get the DECA Score.
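The scoring described above can be sketched as follows. This is one plausible implementation consistent with the worked example (position 1 of an N-brand list earns N points, and 100 means first place in every response); the exact weighting DecaGEO uses internally may differ.

```python
def deca_score(responses, brand):
    """Sum position points across responses, normalized to 0-100.

    Each response is an ordered list of recommended brands. In an
    N-brand response, position 1 earns N points, position 2 earns N-1,
    and so on; absent brands earn 0. The score normalizes against the
    points a brand would earn by ranking first in every response.
    """
    earned = 0
    maximum = 0
    for brands in responses:
        n = len(brands)
        maximum += n  # first position in this response is worth n points
        if brand in brands:
            earned += n - brands.index(brand)  # position 1 -> n points
    return 100 * earned / maximum if maximum else 0.0

responses = [
    ["HubSpot", "Salesforce", "Pipedrive", "Zoho", "Monday"],  # 5 points
    ["Salesforce", "Zoho", "HubSpot", "Pipedrive", "Close"],   # 3 points
]
# HubSpot earns 5 + 3 = 8 of a possible 10 points -> score 80.0
```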

Mention Rate (%)

The question it answers: “What percentage of AI responses mention this brand for the category?”

Mention Rate is the percentage of AI responses that mention a brand, regardless of position. A 72% Mention Rate means the brand was mentioned in 72 out of every 100 relevant AI responses.
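The Mention Rate definition maps directly to a one-line calculation. A minimal sketch, assuming each response is represented as the list of brands it recommends:

```python
def mention_rate(responses, brand):
    """Percentage of responses that mention the brand at any position."""
    if not responses:
        return 0.0
    hits = sum(1 for brands in responses if brand in brands)
    return 100 * hits / len(responses)

responses = [["HubSpot", "Zoho"], ["Salesforce"], ["HubSpot"], ["Pipedrive"]]
# HubSpot appears in 2 of 4 responses -> 50.0
```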

Rank

The question it answers: “Which brand is recommended most in this category?”

The overall chart position, determined by DECA Score. Rank 1 = highest DECA Score in the category.


How It Works

Step 1 — Research the category

For each category, we research the real-world questions buyers are asking. We analyze search trends, community discussions, and people-also-ask patterns to understand what matters most when choosing a product or service in that category.

Step 2 — Build buyer personas and queries

This is where DecaGEO differs from other AI visibility tools.

Rather than asking AI generic questions like “What's the best CRM?”, we design queries that reflect how real buyers actually think. We define the key decision conditions for each category — such as business size, primary use case, or budget priority — and create buyer personas that represent specific combinations of these conditions.

Each persona generates a set of natural, recommendation-seeking questions. This means our data captures not just who AI recommends overall, but who it recommends for different types of buyers.

Example: For a CRM category, instead of one generic question, we might ask:

  • “What CRM tools work best for a small marketing team on a tight budget?”
  • “What enterprise CRM platforms offer advanced automation for large sales teams?”

Different conditions, different personas, different answers — and that's the point.
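Generating one question per combination of decision conditions can be sketched like this. The condition values and query template are illustrative; DecaGEO's real condition sets and phrasings are not public.

```python
from itertools import product

# Illustrative decision conditions for a CRM category. The real
# condition dimensions, values, and query templates are internal.
conditions = {
    "size": ["small marketing team", "large enterprise sales team"],
    "priority": ["a tight budget", "advanced automation"],
}

def persona_queries(conditions):
    """Yield one recommendation-seeking question per condition combination."""
    for size, priority in product(conditions["size"], conditions["priority"]):
        yield f"What CRM tools work best for a {size} with {priority}?"

queries = list(persona_queries(conditions))
# 2 sizes x 2 priorities -> 4 persona-specific queries
```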

  • No brand names in prompts: We do not include specific brand names in our queries. AI recommends brands entirely on its own — ensuring the data reflects organic AI preference, not prompted recall.

Step 3 — Collect AI responses

Every Sunday (ET), we send the full query set to our tracked AI platforms (currently ChatGPT, GPT-5.4) and collect the responses. We use a single, fixed model version to ensure measurement consistency week over week.

  • What counts as a “recommendation”: We only count brands that AI explicitly recommends as solutions in a given category. Simple mentions — such as comparisons, background context, or negative references — are excluded. DECA Score reflects whether AI positions a brand as a choice, not just whether it names it.

Step 4 — Score, rank, and publish

For every response, we extract the recommended brands, record their positions, calculate DECA Scores, and update the charts. We standardize brand names across common variations (e.g., “HubSpot CRM” and “HubSpot” are counted as the same brand) to ensure accurate aggregation. The full process — from data collection to chart refresh — happens within the same weekly cycle.
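The brand standardization step can be sketched as an alias lookup. The alias table below is hypothetical; DecaGEO's actual mapping is not public.

```python
# Illustrative alias table: common name variations map to one canonical
# brand so mentions aggregate correctly. Entries here are examples only.
BRAND_ALIASES = {
    "hubspot crm": "HubSpot",
    "hubspot": "HubSpot",
    "salesforce sales cloud": "Salesforce",
}

def canonical_brand(name: str) -> str:
    """Map a raw brand string to its canonical form."""
    return BRAND_ALIASES.get(name.strip().lower(), name.strip())

# "HubSpot CRM" and "HubSpot" are counted as the same brand
```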


Condition Filters

DecaGEO's segment filters don't just narrow results — they simulate different user contexts. Because every query is generated from a specific buyer persona with defined conditions (business size, primary use case, or budget priority), filtering by a segment replays only the queries that match that context. You're not slicing a single dataset differently — you're seeing what AI recommends when the question comes from a different type of buyer.
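Conceptually, the replay works because each collected response carries the persona conditions that generated its query, so a filter selects responses rather than re-slicing pooled data. A minimal sketch with hypothetical field names:

```python
# Each response is tagged with the conditions of the persona whose query
# produced it. Filtering by segment replays only matching responses.
# Field names ("segment", "brands") are illustrative.
responses = [
    {"segment": {"size": "enterprise"}, "brands": ["Salesforce", "HubSpot"]},
    {"segment": {"size": "small team"}, "brands": ["HubSpot", "Zoho"]},
    {"segment": {"size": "enterprise"}, "brands": ["Salesforce", "Zoho"]},
]

def filter_segment(responses, **conditions):
    """Keep only responses whose generating persona matches all conditions."""
    return [
        r for r in responses
        if all(r["segment"].get(k) == v for k, v in conditions.items())
    ]

enterprise = filter_segment(responses, size="enterprise")
# 2 of 3 responses came from the enterprise persona
```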

This is where the most valuable insights live: a brand that ranks #1 overall might drop to #5 for enterprise buyers — and vice versa.

Segment filters are available on every category chart.


What Makes Our Data Different

Persona-driven, not prompt-driven

Most AI visibility tools ask generic prompts like “best CRM software.” We design queries from the buyer's perspective — factoring in real decision conditions like team size, budget, and use case. This produces richer, more actionable rankings.

Recommendation-only counting

We don't count every time a brand name appears in an AI response. We only count instances where AI recommends a brand as a solution. Passing mentions, negative references, and comparison filler are excluded.

One model, one version, every week

AI responses vary across models, versions, and access methods. We fix our measurement to a single model per platform (currently ChatGPT, GPT-5.4) and a single collection window (weekly, Sunday ET). This means week-to-week changes in DECA Score reflect real shifts in AI behavior — not measurement noise.


What's Next

We're continuously expanding DecaGEO's coverage and depth:

  • Multi-engine support — Adding Claude, Gemini, Perplexity, AI Overviews, and more
  • More categories and industries — Expanding beyond the initial 10 categories into new verticals
  • Sentiment layer — Measuring whether AI recommends a brand enthusiastically or with caveats
  • Trend alerts — Notifications when a brand's DECA Score shifts significantly


See Where Your Brand Ranks in AI

Explore your category's AI brand ranking now — or talk to us about improving your AI visibility.