How to Run a B2B AI Visibility Check in 30 Minutes

44% of AI prompts return zero brand mentions. This 30-minute manual check tells you if your B2B brand is in that invisible 44%. Ten buyer queries, four AI engines (ChatGPT, Perplexity, Gemini, Google AI Overviews), no tools required.


Most B2B marketing teams have no idea whether ChatGPT, Perplexity, or Gemini mentions their brand. Not because the check is hard. Because nobody’s told them how to do it.

This post is the 30-minute audit I run before any AI visibility engagement. It’s manual, it’s simple, and it produces a baseline you can actually act on. No tools required. No spend required. Just a browser, a spreadsheet, and the discipline to run it the same way each time.

If you want to know whether your brand is showing up in the AI answers your buyers are reading, this is the fastest way to find out.

Why this matters before you spend anything

73% of B2B buyers now use AI tools like ChatGPT and Perplexity in their research process (Averi, March 2026). Only 22% of marketing teams track AI visibility (Yext, 2025). That gap is the single biggest unmeasured channel in B2B marketing right now.

Before you buy a tool, hire a consultant, or kick off an AI visibility project, you need a baseline. You need to know where you stand today. The audit below gives you that in 30 minutes flat.

BrightEdge’s 2025 analysis of millions of AI search responses found that 44% of all AI prompts return zero brand mentions (via Yozigo, 2026). You might be in that 44%. Most B2B companies are. The audit tells you.

Practitioner note: Most teams that hire me have never done this audit. They assume their organic traffic is fine because Google Analytics says so. Google Analytics can’t see AI visibility. The audit fills that blind spot.

What you’ll need

  • A spreadsheet (Google Sheets, Excel, Airtable, whatever)
  • Access to ChatGPT, Perplexity, Gemini, and Google (all free tiers work)
  • 30 minutes of focused time
  • An honest list of the queries your actual buyers run

That’s it. No AI visibility platform subscription. No Semrush licence. No Ahrefs API. You can optionally upgrade to paid tracking tools later, but the manual audit gives you 80% of what matters.

Step 1: Build your query set (5 minutes)

Most teams skip this step and try to run queries off the top of their heads. That’s the fastest way to get a useless audit.

Write 10 queries that real buyers actually type. Not what you wish they’d type. Not marketing-team fantasy queries. The actual questions your sales team hears on discovery calls.

Group them into three categories:

Category-level queries (4 of the 10)

Questions a buyer asks when they’re building a shortlist. They don’t know your brand yet. They’re asking the AI to name vendors.

Examples for an MSSP:

  • Best MSSP for financial services in North America
  • Top managed SOC providers for mid-market companies
  • Best MSSPs with SOC 2 compliance for SaaS companies
  • Which MSSP is best for healthcare HIPAA requirements

Use-case queries (4 of the 10)

Questions buyers ask when they have a specific problem and want a vendor that solves it.

Examples:

  • Best managed detection and response for detecting ransomware early
  • Vendors that offer 24/7 SOC monitoring for Canadian businesses
  • MSSPs that provide compliance reporting for PCI-DSS audits
  • Managed security providers that support Microsoft E5 environments

Brand-direct queries (2 of the 10)

Questions buyers ask after they’ve heard your name somewhere and want to verify.

Examples:

  • Is [Your Company] a good choice for managed security services
  • What does [Your Company] do
  • [Your Company] vs [Competitor]

Write these 10 queries in the first column of your spreadsheet. Label the columns to the right: ChatGPT, Perplexity, Gemini, Google AI Overviews. That’s your audit grid.

Practitioner note: If you can’t come up with 10 queries that feel real, your positioning is too vague. Spend 10 extra minutes talking to your sales team before you run the audit. Their discovery call notes are gold for this.

Step 2: Run your queries through four AI engines (15 minutes)

This is the bulk of the audit. You’re going to paste each query into each platform and note what happens.

ChatGPT (3-4 minutes)

Open ChatGPT. Make sure web browsing or search is enabled (depends on your account tier). Paste each of your 10 queries. For each response, note:

  • Is your brand mentioned at all?
  • If yes, is it a citation (with a source link) or just a mention (no source)?
  • Which competitors are cited?
  • How many total vendors does ChatGPT name in its answer?

ChatGPT citations appear as numbered footnotes with source links. A citation means ChatGPT pulled content from your site and presented it as authoritative. A mention without citation means ChatGPT knows your brand exists but isn’t citing you as a source (Yozigo, 2026).

Perplexity (3-4 minutes)

Open Perplexity. Paste the same 10 queries. Perplexity shows citations inline with numbered source pills after each claim, which makes auditing easier than ChatGPT.

For each query, note:

  • Is your brand mentioned?
  • Is your domain in the source pills?
  • Which competitor domains appear in the source pills?
  • Which third-party sources (review sites, publications, Reddit threads) get cited?

Perplexity tends to cite more sources per response than ChatGPT, so your share-of-voice calculations will look different here. Only 11% of domains are cited by both ChatGPT and Perplexity (Averi, March 2026). Absence in one does not predict absence in the other.

Gemini (3-4 minutes)

Open Google’s Gemini consumer app. Paste the same 10 queries. Gemini inherits heavily from Google’s index and Knowledge Graph, so entity signals matter disproportionately here.

For each response, note:

  • Brand mention present or absent
  • Source panel contents (Gemini shows source links below responses)
  • Whether the answer describes your brand accurately or conflates you with others

Gemini feeds both the standalone Gemini chat AND Google AI Overviews. Strong Gemini presence often correlates with AI Overview presence, which is why this platform matters for traditional SEO teams paying attention to AIO impact.

Google AI Overviews (4-5 minutes)

Open Google. Use an incognito window to avoid personalisation bias. Paste each of your 10 queries directly into Google search.

For each query, note:

  • Does an AI Overview appear at all? (Not every query triggers one)
  • If yes, is your brand named in the Overview text?
  • Is your domain in the cited source panel?
  • Which competitors appear in the Overview?

AI Overviews now trigger on approximately 13-48% of queries depending on the query type, with category-level B2B queries often landing at the higher end of that range (theStacc, April 2026). If an AI Overview doesn’t appear for your category queries, that’s still useful data: it means Google hasn’t rolled AI Overviews out for your specific vertical yet.

Step 3: Score what you found (5 minutes)

Now you have 40 data points (10 queries × 4 platforms). Convert them into four simple scores.

Visibility Score

Count the number of cells where your brand is mentioned or cited. Divide by 40. Multiply by 100.

  • 0-25%: You’re mostly invisible. This is where most B2B companies live.
  • 25-50%: Partial visibility. You appear in some queries but not consistently.
  • 50-75%: Strong visibility. You’re being cited across most of your buyer intent queries.
  • 75-100%: Category leader. Rare, usually the Gartner Magic Quadrant leaders.
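The arithmetic is simple division, which makes it easy to check your sheet maths. A minimal Python sketch; the grid values below are hypothetical, standing in for your own 10×4 sheet:

```python
def visibility_score(grid):
    """grid: one row per query, one code per platform.
    Codes: 'C' = cited, 'M' = mentioned only, 'N' = not mentioned."""
    cells = [code for row in grid for code in row]
    hits = sum(code in ("C", "M") for code in cells)
    return 100 * hits / len(cells)

# Hypothetical grid: 8 hits across 40 cells.
grid = [
    ["C", "N", "M", "N"], ["N", "N", "N", "N"],
    ["M", "C", "N", "N"], ["N", "N", "N", "N"],
    ["N", "M", "N", "N"], ["N", "N", "N", "N"],
    ["C", "C", "N", "M"], ["N", "N", "N", "N"],
    ["N", "N", "N", "N"], ["N", "N", "N", "N"],
]
print(visibility_score(grid))  # 20.0, the "mostly invisible" band
```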

Citation Quality Score

Of the cells where your brand IS mentioned, count how many are actual citations (with source links back to your domain) vs just mentions.

High citation quality means AI engines trust your content enough to cite it directly. High mention-only rate means the AI knows you exist but doesn’t treat your site as an authoritative source. That’s a structural problem with your content or entity signals, not a visibility problem per se.

Competitor Gap

For each query where you’re absent, note which competitors are cited. Count the top 3 competitors that appear most often across your query set.

This tells you who you’re actually losing to in AI answers, which may be different from who you track in Google rankings. I regularly see clients whose Google competitors and AI competitors are two different lists.
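At 10 queries a hand tally is fine, but if you keep competitor names in your Notes column, a few lines of Python do the same count. The vendor names here are hypothetical placeholders:

```python
from collections import Counter

# One list per query where your brand was absent: the competitors
# the AI cited instead. Names are placeholders, not real vendors.
absent_query_competitors = [
    ["Vendor A", "Vendor B"],
    ["Vendor A", "Vendor C"],
    ["Vendor B"],
    ["Vendor A", "Vendor B", "Vendor C"],
]
counts = Counter(name for row in absent_query_competitors for name in row)
print(counts.most_common(3))  # [('Vendor A', 3), ('Vendor B', 3), ('Vendor C', 2)]
```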

Platform Fragmentation

Look at your visibility scores platform-by-platform.

  • Strong on Perplexity, weak on ChatGPT? You have a Reddit/community presence problem, because Perplexity cites Reddit heavily (46.7% of top citations) but ChatGPT favours Wikipedia and encyclopedic content (47.9% of top citations) (Averi, 2026).
  • Strong on Gemini, weak on Perplexity? Your Google rankings are doing the work, but your entity and third-party signals are thin.
  • Strong on ChatGPT, weak on everything else? You probably have decent Wikipedia or encyclopedic presence but weak third-party validation.
  • Weak on all four? Full AI visibility gap. Most common pattern I see.
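Fragmentation is the same C/M/N grid sliced by column instead of by cell. A sketch with hypothetical values, showing the "strong on Perplexity, weak elsewhere" pattern:

```python
PLATFORMS = ["ChatGPT", "Perplexity", "Gemini", "Google AIO"]

def platform_scores(grid):
    """Per-platform visibility: % of queries with a C or M on that platform."""
    return {
        platform: 100 * sum(row[i] in ("C", "M") for row in grid) / len(grid)
        for i, platform in enumerate(PLATFORMS)
    }

# Hypothetical grid: 8 of 10 queries hit on Perplexity, almost nothing elsewhere.
grid = [["N", "C", "N", "N"], ["N", "M", "N", "N"], ["N", "C", "N", "M"],
        ["N", "N", "N", "N"], ["M", "C", "N", "N"], ["N", "M", "N", "N"],
        ["N", "C", "N", "N"], ["N", "N", "N", "N"], ["N", "C", "N", "N"],
        ["N", "M", "N", "N"]]
print(platform_scores(grid))
```

A spread that lopsided points you at the Reddit/community gap described above rather than a general content problem.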

Step 4: Interpret and decide what to do (5 minutes)

Based on your scores, you have four common outcomes. Here’s what each means.

Outcome 1: Low visibility, high-quality citations where they appear

You’re invisible most of the time, but when the AI does mention you, it links to your site. This is a volume problem, not a quality problem. You need your brand mentioned on more of the surfaces that feed AI training and retrieval: Reddit threads, review platform profiles, earned media, and industry publication mentions.

Outcome 2: Decent visibility, mostly mentions without citations

The AI knows you exist but doesn’t cite you as a trusted source. This is an entity and content structure problem. You need better schema markup, clearer content hierarchies, and stronger brand entity consolidation across Wikidata, Wikipedia, Crunchbase, and your About page.

Outcome 3: Strong on one platform, absent on others

Platform-specific gap. You’ve inadvertently optimised for one AI engine’s citation preferences while ignoring others. Needs platform-specific work on the weak engines.

Outcome 4: Weak everywhere, competitors named in every query

Full visibility crisis. You’re losing shortlist positions you don’t know you’re competing for. This needs a full AI visibility engagement — entity audit, schema deployment, citation engineering, and ongoing tracking.

When to upgrade from manual to automated

Manual monthly audits work for most B2B teams starting out. You’d upgrade to automated tools (Topify, Peec AI, SE Ranking, HubSpot AEO, or similar) when one of these becomes true:

  • You need to track more than 30 queries across multiple competitors regularly
  • You want sentiment analysis (how AI describes your brand, not just whether it mentions you)
  • You want daily or weekly tracking cadence rather than monthly
  • You need dashboards for stakeholder reporting
  • You’ve moved past baseline and are running active optimisation work

The market for AI visibility tracking tools was valued at $848 million in 2025 and is projected to grow rapidly (Topify, 2026). Most paid tools start around $99/month and scale from there. Worth paying for when manual methods become operational bottlenecks. Not worth paying for before you’ve run at least one manual baseline.

The results sheet template

Here’s the simple structure for your audit spreadsheet. Ten rows for queries, four columns for platforms, plus a scoring summary.

Query          | ChatGPT | Perplexity | Gemini | Google AIO | Notes
[Your query 1] | C/M/N   | C/M/N      | C/M/N  | C/M/N      | Which competitors appeared
[Your query 2] |         |            |        |            |
[Your query 3] |         |            |        |            |

Use the codes:

  • C = Cited (brand mentioned with source link to your domain)
  • M = Mentioned only (brand named but not cited as source)
  • N = Not mentioned

At the bottom, calculate:

  • Total citations (count of C): ___
  • Total mentions (C + M): ___
  • Visibility score ((C + M) ÷ 40 × 100): ___%
  • Top competitors cited against you: ___

That’s your baseline.
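Those bottom-of-sheet tallies are a single pass over the 40 cells. A sketch in Python, where the codes list stands in for a hypothetical filled sheet read row by row:

```python
# All 40 codes read off the sheet; values are hypothetical.
codes = ["C", "N", "M", "N"] + ["N"] * 8 + ["M", "C", "N", "M"] + ["N"] * 24

citations = codes.count("C")             # Total citations (C)
mentions = citations + codes.count("M")  # Total mentions (C + M)
visibility = 100 * mentions / len(codes)
print(citations, mentions, f"{visibility:.1f}%")  # 2 5 12.5%
```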

What to do after the audit

You now have data most of your competitors don’t have. Three practical next steps.

1. Repeat the audit monthly. The same 10 queries, the same 4 platforms, the same sheet. Score the changes. AI models update constantly, so visibility drifts. Monthly cadence catches the drift.

2. Share the results with your team. Your sales team will find gaps you missed. Your CS team will know which buyer queries matter most. Your content team will know what to write next.

3. Decide whether the gap is worth fixing. If you’re in Outcome 4 (invisible everywhere) and your buyers are moving to AI research, the gap is worth fixing. If you’re already at Outcome 3 and just need platform-specific optimisation, smaller engagement. The audit tells you which.

The short version

30 minutes, 10 queries, 4 platforms, 4 scores. That’s an AI visibility baseline most B2B companies don’t have.

If the audit reveals gaps and you want help closing them, the $397 AI Visibility Spot-Check runs this same methodology plus an entity audit, schema review, and top-5 prioritised fixes. Delivered in five business days. No sales call required.

If you want the broader context on how AI search is reshaping B2B visibility, the full 2026 AI SEO statistics guide has the data behind the shift. And if you’re a cybersecurity vendor or MSSP specifically, the cybersecurity SEO implementation guide covers the vertical-specific framework.

Anurag Pareek
Anurag Pareek is an SEO, AEO, and GEO consultant helping B2B companies rank in Google search and get cited in AI answers. Based in Toronto and Dubai. Specialising in B2B SaaS, cybersecurity, MSSP, MSP, and manufacturing. 15+ years of experience, managed directly, no handoffs.

