Track Brand Mentions in ChatGPT, Claude, and Perplexity
How to track your brand mentions in ChatGPT, Claude, and Perplexity: manual step-by-step method plus why automation matters when scale kicks in.
Most marketing teams have some form of brand monitoring set up. Google Alerts, Mention, a Slack integration that pings when someone tags you on social. What almost none of them are tracking: whether AI assistants mention their brand when potential customers ask for recommendations.
That gap is getting expensive.
This article gives you a working method to track your brand mentions across ChatGPT, Claude, and Perplexity: manually, with exact prompts you can copy right now. Then it explains why that method will eventually break under its own weight, and what the alternative looks like.
Why AI brand mentions are now a monitoring priority
Potential customers are asking AI assistants things like "what's the best CRM for a 10-person sales team?" or "which email marketing platforms should I consider for an e-commerce brand?" Those queries used to land on Google, where SEO determined visibility. Now a significant and growing share of them go to a chat interface instead.
The AI's answer is not a list of links. It's a synthesized recommendation. Your brand either gets named or it doesn't.
Unlike Google rankings, you can't pull up a dashboard and see where you stand. There is no equivalent of Search Console for AI mentions. The only way to know is to ask. Systematically, across models, with prompts that match how your customers actually phrase their questions.
Understanding why certain brands get recommended over others is a separate conversation. This article is about the mechanics of tracking it.
The manual method: how to track each platform
The process is the same across all three platforms, but each has quirks worth knowing.
How to track your brand in ChatGPT
ChatGPT is the starting point for most brand visibility audits because it has the largest user base and the most varied use cases.
1. Open a new, fresh session. Do not use a conversation where you've already mentioned your brand name or industry. Context bleeds into responses, so start clean every time.
2. Run category-level discovery prompts. These should match how a real customer would ask, not how a marketer would search. Examples you can use directly:
- "What are the best [your category] tools for [your use case]?"
- "Can you recommend a [your industry] platform for [specific need]?"
- "What brands do you know in the [your niche] space?"
- "I'm a [your target customer type] looking for [your product category]. What would you suggest?"
3. Record the raw output. Copy the full response into a doc or spreadsheet. Capture not just whether you appeared, but what the model said about each brand it mentioned.
4. Note position. Was your brand the first mentioned? Third? Did it appear in the main recommendation or only in an "also consider" clause at the end?
5. Run the same prompt again in a new session. Do this a minimum of 3–5 times per prompt. Results vary between sessions because LLMs are probabilistic: the same query does not guarantee the same output. (The sketch after this list automates exactly this loop.)
6. Test prompt variations. "What project management tools do you recommend?" and "What's the best project management software?" can produce meaningfully different brand mentions even though they're asking the same thing.
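If you're comfortable with a few lines of Python, steps 2 and 5 can be automated. Below is a minimal sketch using the OpenAI Python SDK; each API call starts with no prior context, which is the programmatic equivalent of a fresh session. The model name, brand, and prompt are placeholder assumptions, and API models are not guaranteed to behave identically to the consumer ChatGPT product, so treat the results as a proxy.

```python
# Minimal sketch: run one discovery prompt several times in fresh sessions
# and count brand mentions. Assumes the `openai` package is installed and
# OPENAI_API_KEY is set; model, brand, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

PROMPT = "What are the best CRM tools for a 10-person sales team?"
BRAND = "YourBrand"   # placeholder: the brand you're tracking
RUNS = 5              # the minimum 3-5 runs per prompt suggested above

mentions = 0
for i in range(RUNS):
    # Each API call carries no prior context -- a fresh session every time.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whichever model you audit
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = response.choices[0].message.content or ""
    mentioned = BRAND.lower() in text.lower()
    if mentioned:
        mentions += 1
    print(f"Run {i + 1}: {'mentioned' if mentioned else 'not mentioned'}")

print(f"Mention rate: {mentions}/{RUNS} ({mentions / RUNS:.0%})")
```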
Important
Never test the same prompt twice in the same conversation. ChatGPT's context window means it will remember what it already said and often repeat itself. Each test needs a fresh session to be meaningful.
How to track your brand in Claude
Claude (by Anthropic) behaves differently from ChatGPT in one important way: it tends to be more cautious about endorsing specific brands and will often say "I don't have a preference" or recommend you evaluate options yourself. This does not mean your brand won't appear — it just means the framing is different.
1. Start a new conversation. Same rule as ChatGPT: fresh session each time.
2. Use the same category-level prompts, but also try these Claude-specific framings that tend to elicit more concrete brand mentions:
- "What are the most well-known [category] platforms?"
- "If someone asked you what brands exist in the [niche] space, what would you say?"
- "What tools do developers/marketers/founders typically use for [your use case]?"
3. Pay attention to hedge language. Claude often says things like "Some popular options include..." or "You might want to look at...". Whether your brand appears in that list still matters; the hedging is just Claude's style, not a signal about your brand's quality.
4. Check for accuracy. Claude is trained on data with a knowledge cutoff and may describe your product incorrectly or mention a feature that no longer exists. This is a separate issue from visibility, but worth flagging when you spot it.
5. Record position and sentiment. Same as ChatGPT: where in the response did your brand appear, and how was it described? (A sketch of the same check against Claude's API follows this list.)
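The equivalent automated check works against the Anthropic Python SDK. In this minimal sketch, the model ID and hedge-phrase list are assumptions you'd adjust; the point is to flag both whether your brand appears and whether it appears inside hedged framing.

```python
# Minimal sketch: query Claude via the Anthropic SDK and flag hedge language.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

PROMPT = "What are the most well-known CRM platforms?"
BRAND = "YourBrand"  # placeholder: the brand you're tracking
HEDGES = ["some popular options", "you might want to look at", "it depends on"]

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: substitute a current model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
)
text = message.content[0].text.lower()

print(f"Brand mentioned: {BRAND.lower() in text}")
print(f"Hedge language present: {any(h in text for h in HEDGES)}")
```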
How to track your brand in Perplexity
Perplexity is different from the other two in a way that matters for tracking: it uses live web search to augment its responses. This means your current web presence (content, reviews, backlinks, fresh pages) has a direct, near-real-time effect on whether you appear.
1. Use the default mode (not "Focus: Academic" or other filtered modes) to replicate how most users search.
2. Run the same prompts. Perplexity tends to give more concrete, list-style recommendations than Claude, so results often look more like a traditional search result.
3. Check the sources panel. Perplexity shows which URLs it pulled from to construct its answer. If your brand appears, look at which pages Perplexity is citing. If you don't appear, the sources panel tells you which of your competitors' pages are winning that real estate.
4. Re-run the same query on different days. Because Perplexity uses live search, the results can shift based on what's been published recently in your category. A competitor's blog post from last week can displace you this week.
5. Try both question and query formats. "What are the best tools for X?" and "best tools for X comparison" can pull different sources and produce different brand lists.
Note
Perplexity's source citations are your fastest diagnostic tool. If you appear in the response, the sources tell you which content is driving that mention. If you don't appear, those are the exact pages you'd need to outrank or match. The sketch below shows how to pull the same citations programmatically.
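Perplexity also exposes an OpenAI-compatible API, which makes the citations check scriptable. A minimal sketch, assuming the `sonar` model name and the `citations` response field as documented at the time of writing; verify both against Perplexity's current API docs before relying on them.

```python
# Minimal sketch: query Perplexity's OpenAI-compatible API and inspect which
# source URLs backed the answer. API key, model name, and the `citations`
# field are assumptions -- check Perplexity's current documentation.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder
    base_url="https://api.perplexity.ai",
)

PROMPT = "What are the best CRM tools for a 10-person sales team?"
BRAND = "YourBrand"  # placeholder

response = client.chat.completions.create(
    model="sonar",   # assumption: Perplexity's search-backed model
    messages=[{"role": "user", "content": PROMPT}],
)
text = response.choices[0].message.content or ""
print(f"Brand mentioned: {BRAND.lower() in text.lower()}")

# The API's citation list is the scripted equivalent of the sources panel.
for url in getattr(response, "citations", None) or []:
    print("Source:", url)
```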
Setting up a manual tracking spreadsheet
If you're going to do this more than once, structure your logging so you can actually spot trends.
Set up a spreadsheet with these columns: Date | Platform | Prompt | Session # | Brand Mentioned (Y/N) | Position (1st/2nd/3rd/Not Listed) | Sentiment (Positive/Neutral/Negative) | Notes (exact quote). Run each prompt 5 times per platform per week. After 4 weeks you'll have enough data to see real patterns, or to confirm that the variance makes consistent manual tracking impractical.
A minimal but useful weekly tracking structure:
| Date | Platform | Prompt | Brand Seen? | Position | Competitor Mentioned |
|---|---|---|---|---|---|
| Apr 8 | ChatGPT | "Best CRM for small sales teams" | Yes | 2nd | HubSpot (1st), Pipedrive (3rd) |
| Apr 8 | Claude | "Best CRM for small sales teams" | No | n/a | Salesforce, HubSpot, Zoho |
| Apr 8 | Perplexity | "Best CRM for small sales teams" | Yes | 1st | HubSpot (2nd) |
That's one prompt, one day. A real monitoring setup means 10–20 prompts, three platforms, multiple sessions each, every week.
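If you'd rather log to a file than paste into a spreadsheet by hand, the same schema works as a CSV. A minimal sketch using only Python's standard library; the file name and column labels are just this article's suggested structure, not a standard format.

```python
# Minimal sketch: append one test result per row to a CSV log, using the
# column schema suggested above. Standard library only.
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_brand_mentions.csv")
FIELDS = ["Date", "Platform", "Prompt", "Session", "Brand Mentioned",
          "Position", "Sentiment", "Notes"]

def log_result(platform, prompt, session, mentioned, position, sentiment, notes=""):
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header row once, on first use
        writer.writerow({
            "Date": date.today().isoformat(),
            "Platform": platform,
            "Prompt": prompt,
            "Session": session,
            "Brand Mentioned": "Y" if mentioned else "N",
            "Position": position,
            "Sentiment": sentiment,
            "Notes": notes,
        })

# Example row, mirroring the table above:
log_result("ChatGPT", "Best CRM for small sales teams", 1, True, "2nd",
           "Neutral", "Listed after HubSpot, before Pipedrive")
```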
Free scan across ChatGPT, Gemini, and Perplexity. No account needed.
Why manual tracking breaks down
The manual process above is genuinely useful. If you've never checked your AI brand visibility before, doing it once will tell you something real. The problem is what happens when you try to do it consistently.
Variance makes single sessions meaningless. A brand that's mentioned in 60% of relevant queries will sometimes have three sessions in a row where it doesn't appear; at a 60% mention rate, the odds of three straight misses are 0.4³ ≈ 6%, rare per attempt but routine at weekly volume. If you only run one or two tests per prompt per week, you can't tell signal from noise. You need volume, and volume is where manual tracking becomes untenable.
You can't track your competitors the same way. Every additional brand you want to benchmark multiplies your logging workload. If you want to know how you compare to three competitors across 10 prompts on three platforms, you're looking at roughly 90 data points per week, before accounting for repeated sessions.
No trend data. A spreadsheet from this week tells you nothing about whether you've improved since last month. You need at least 4–8 weeks of consistent data before trends become meaningful. And that data only exists if someone remembered to do the tracking every single week.
Model updates invalidate your baseline. ChatGPT's behavior changes between model versions. When GPT-4o gets updated, a brand that was appearing consistently may drop without any change on the brand's end. Without a continuous record, you'll never know if the change was a model update or something you did.
Session bias is hard to eliminate manually. Real users don't ask exactly the same prompts the same way. Prompt wording shifts brand mentions more than most marketers realize. Testing one phrasing and concluding "we appear in ChatGPT" is like checking one keyword and concluding "we rank on Google."
What automated tracking actually looks like
Automated LLM brand monitoring solves the volume problem. Instead of running queries manually in a browser tab, a monitoring tool runs them programmatically, across platforms, across prompt variations, across multiple sessions, on a schedule every week.
BrandPulse runs a configurable set of prompts (10 on the Starter plan, 50 on Pro) across ChatGPT, Gemini, and Perplexity on a weekly cadence. Each prompt runs across multiple sessions to account for LLM variance. The output is not a dashboard you have to remember to check. It's a weekly email that shows you:
- Your mention rate across each platform (what percentage of prompts included your brand)
- Your average position when you did appear
- Sentiment breakdown (how your brand was described when mentioned)
- How you compare to the competitors you're tracking
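None of those metrics is exotic. For illustration, here's a minimal sketch of the arithmetic behind them, computed from a CSV log like the one sketched earlier. This is an assumption-laden reconstruction of the math, not BrandPulse's actual implementation.

```python
# Minimal sketch: compute per-platform mention rate, average position, and
# sentiment breakdown from the CSV log defined earlier. Positions are
# assumed to be logged as "1st", "2nd", ... or "Not Listed".
import csv
from collections import defaultdict

stats = defaultdict(lambda: {"runs": 0, "mentions": 0,
                             "positions": [], "sentiment": defaultdict(int)})

with open("ai_brand_mentions.csv", newline="") as f:
    for row in csv.DictReader(f):
        s = stats[row["Platform"]]
        s["runs"] += 1
        if row["Brand Mentioned"] == "Y":
            s["mentions"] += 1
            digits = "".join(c for c in row["Position"] if c.isdigit())
            if digits:  # "2nd" -> 2; "Not Listed" has no digits
                s["positions"].append(int(digits))
            s["sentiment"][row["Sentiment"]] += 1

for platform, s in stats.items():
    rate = s["mentions"] / s["runs"]
    avg_pos = sum(s["positions"]) / len(s["positions"]) if s["positions"] else None
    print(f"{platform}: mention rate {rate:.0%}, "
          f"avg position {avg_pos}, sentiment {dict(s['sentiment'])}")
```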
The email is the product. You don't need another dashboard to maintain. It lands in your inbox on Monday morning, with the data for the prior week, ready to share with whoever asks "are we showing up in AI?"
For teams already doing some form of brand monitoring, this is an extension of what you're already doing, not a replacement for it. It covers the channel your existing tools don't reach.
The free brand audit at brandpul.se/audit runs a one-time scan across the three major AI platforms with no account required. If you've never checked your AI visibility, that's the fastest way to get a real data point in the next five minutes.
The prompts that matter most for your category
One thing worth emphasizing: not all prompts are created equal. The queries that produce the most brand mentions are usually the ones that match real buyer intent.
Category-level queries ("best tools for X") tend to surface the most mentions because they're the exact question buyers ask when they're researching. Product-specific queries ("what does [your brand] do?") will usually surface your brand, but they're not the ones driving new discovery. Those are the queries someone asks after they've already heard of you.
For tracking purposes, focus on:
- Category queries: "best [your category] for [specific use case]"
- Comparison queries: "alternatives to [competitor name]" (these often surface a ranked list)
- Problem queries: "how do I [problem your product solves]?" (look for brand mentions in the recommended solution)
The fact that different phrasings produce different results is exactly why tracking one prompt and calling it done is misleading. A brand that appears consistently across varied phrasings of the same underlying question has real AI visibility. A brand that only appears when you ask the exact right question does not.
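One way to operationalize "varied phrasings" is to expand the three query categories into a prompt matrix and run every variant through the query loop sketched earlier. A minimal sketch; every string here is a placeholder to adapt to your category.

```python
# Minimal sketch: expand the three query categories above into concrete
# prompt variants. All values are placeholders for your own category,
# competitors, and customer problems.
CATEGORY = "CRM tools"
USE_CASES = ["a 10-person sales team", "an e-commerce brand"]
COMPETITORS = ["HubSpot", "Pipedrive"]
PROBLEMS = ["keep my sales pipeline organized"]

prompts = (
    [f"What are the best {CATEGORY} for {u}?" for u in USE_CASES]      # category
    + [f"What are some alternatives to {c}?" for c in COMPETITORS]     # comparison
    + [f"How do I {p}?" for p in PROBLEMS]                             # problem
)

for p in prompts:
    print(p)  # feed each variant into the repeated-run loop shown earlier
```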
For a deeper look at what actually determines whether AI models recommend your brand in the first place, this breakdown of how LLMs decide which brands to mention covers the underlying mechanics.
See your mention rate, position, and sentiment across AI platforms. Takes 60 seconds.
Frequently asked questions
How do I check if my brand is mentioned in ChatGPT?
Open a new ChatGPT session and ask category-level questions like "What are the best tools for [your use case]?" and "What brands do you know in the [your niche] space?" Note whether your brand appears, in what position, and how it's described. Repeat across fresh sessions, as results vary significantly between conversations.
Why do my brand mentions change between ChatGPT sessions?
LLMs are probabilistic systems. Even with identical prompts, the model samples from a probability distribution each time it generates text, producing different outputs. Brand mentions are not deterministic. A brand that appears in 6 out of 10 queries is meaningfully more visible than one that appears in 2 out of 10, but a single test tells you almost nothing.
Does Claude mention brands the same way ChatGPT does?
Not exactly. Each LLM has different training data, different tuning, and different tendencies around brand recommendations. Claude tends to be more cautious about endorsing specific brands. Perplexity supplements its responses with live web search results. Tracking across all three gives you a more complete picture of your AI brand presence.
How many prompts do I need to test to get reliable brand visibility data?
Single-session tests are unreliable. To get statistically meaningful data, you need to run each prompt 10–20 times across separate sessions, across multiple LLMs, over multiple weeks. That's roughly 200–600 individual queries for a basic monitoring setup, which is why manual tracking breaks down quickly.
What is the difference between brand mention rate and brand position in AI responses?
Mention rate is how often your brand appears across all relevant queries, expressed as a percentage (e.g., mentioned in 40% of prompts). Position tracks where in the response your brand appears: first mention, second mention, or buried at the end. Both matter. Appearing third every time is better than appearing first once and never again.
Find out what AI says about your brand right now
See exactly how ChatGPT, Gemini, and Perplexity describe your brand — and how you compare to competitors.
Get your free audit →
No account required. Results in your inbox within 24 hours.
Related articles
How AI Language Models Decide Which Brands to Recommend
ChatGPT, Gemini, and Perplexity mention brands in response to millions of queries every day. Here's the research-backed breakdown of what actually determines whether yours is one of them.
What Does ChatGPT Say About Your Brand? How to Check
Wondering what ChatGPT says about your brand? Here's how to check it manually, what the results mean, and why one-off checks aren't enough to stay competitive.
LLM Brand Monitoring: What It Is and Why It Matters
LLM brand monitoring tracks how your brand appears in ChatGPT, Gemini, and Perplexity responses. Here's what it is, what it measures, and why it's now a core marketing metric.