we built the
observatory.
BotScope is a scientific instrument for watching how generative AI represents your brand. We made it because the existing tools — SEO trackers, brand monitors, social listeners — were not built for a world where the answer is generated, not retrieved.
SECT.01 The thesis.
When customers Google your category, ranking position is a proxy for visibility — but it is also visible to you. You can read the SERP. You can see who outranks you. You can build the page that wins.
When customers ask Claude or ChatGPT or Gemini, the answer is generated in private and read in private. There is no SERP to inspect. There is no DOM to scrape. The model decides who gets named, what gets claimed, and which URLs get cited — and unless you are watching the model, you have no idea what it is saying about you.
BotScope watches. Every model. Every layer. Every day.
SECT.02 The framework.
We score AI visibility in four layers, drawn from the L1–L4 framework that emerged from Aaron Haynes' 2025 research and refined through our own observations.
- L1 — Entity Establishment. Does the model recognise the brand name?
- L2 — Entity Depth. What does the model claim about you when prompted?
- L3 — Recommendation Visibility. Are you surfaced in answers to category questions?
- L4 — Informational Citation. Which of your URLs does the model cite when grounding its answer?
Each layer fails differently. L1 failures look like the model not knowing you exist. L4 failures look like a competitor's blog post being treated as the canonical source for facts about your product. The fixes are different. The framework is the map.
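As an illustration only (BotScope's internal scoring is not published here), the four layers can be sketched as a small lookup from layer code to the diagnostic question it answers. The layer names and questions come from the list above; the structure and function names are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of the L1-L4 visibility layers described above.
# Names and questions are from the text; the code itself is illustrative.
@dataclass(frozen=True)
class Layer:
    code: str
    name: str
    question: str

LAYERS = [
    Layer("L1", "Entity Establishment", "Does the model recognise the brand name?"),
    Layer("L2", "Entity Depth", "What does the model claim about you when prompted?"),
    Layer("L3", "Recommendation Visibility", "Are you surfaced in answers to category questions?"),
    Layer("L4", "Informational Citation", "Which of your URLs does the model cite?"),
]

def describe(code: str) -> str:
    """Return the name and diagnostic question for a layer code, e.g. 'L3'."""
    for layer in LAYERS:
        if layer.code == code:
            return f"{layer.name}: {layer.question}"
    raise KeyError(code)
```

The point of the structure is the one made above: each layer is a distinct diagnosis with a distinct fix, so they are tracked as separate named things rather than folded into one score.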
SECT.03 What we believe.
Measurement before strategy. Most AI visibility advice is folklore. We start with what we observe, run it daily, and let the longitudinal data tell us what works. The folklore is often wrong.
Median, not mean. Single-shot AI responses are noisy. We aggregate across runs, models, and prompts using medians so the number you see is robust to the worst response and the best one — not anchored to either.
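A minimal sketch of that aggregation idea (hypothetical function and variable names, not BotScope's actual pipeline): score each run per model, take the median within each model, then the median across models, so neither one bad run nor one outlier model anchors the headline number.

```python
from statistics import median

def aggregate_visibility(scores_by_model: dict[str, list[float]]) -> float:
    """Collapse noisy per-run scores into one robust number.

    scores_by_model maps a model name to the scores from its individual
    runs. Median-of-medians means a single extreme response cannot drag
    the result the way a mean would. Illustrative sketch only.
    """
    per_model = [median(runs) for runs in scores_by_model.values() if runs]
    return median(per_model)

# One terrible run for "model-a" barely moves the result.
result = aggregate_visibility({
    "model-a": [0.8, 0.82, 0.05],   # one noisy outlier run
    "model-b": [0.7, 0.75, 0.72],
})
```

With a mean, the 0.05 outlier would pull model-a's score down to roughly 0.56; with the median it stays at 0.8, which is the robustness the paragraph above describes.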
Honest dashboards. If a model didn't respond, we say so. If a citation is dubious, we flag it. If a score moved because of a sample-size shift, we surface it. There is no value in dashboards that hide their own caveats.
Built like an instrument. The aesthetic is not decoration. The Observatory's grid, monospace labels, ref codes, and signal colours are functional — they make it easier to read precise numbers, follow citations, and track deltas over time. Soft SaaS aesthetics make precision data harder to read.
SECT.04 The team.
BotScope is built by a small team of operators, engineers, and researchers who have spent the last decade running marketing, building data products, and studying how models work. We are based in London and ship from a quiet building with too many monitors.
If you want to talk to us, write to us at hello@botscope.ai — every email is read by a person, usually within a working day.
join the observation.
Free trial. No credit card. Daily scans of your real brand, up and running within ninety seconds.
OPEN THE SCOPE →