Backed by Customer Data: What Makes Brands Stand Out in AI Search

Marketing teams, and even a lot of SEO veterans, are still confused about what actually works in AI search: what helps you show up, what helps you get cited, what matters technically, and what is just old SEO advice with new packaging. There is a lot of speculation and a lot of noise. We do not claim to know everything, but we do have something many people in this space do not: real data from customer websites. We look at logs, bot behavior, and recurring patterns, and we try to separate facts from assumptions while being honest about what we can and cannot prove. So here is a short recap of what we are seeing so far and how we interpret it.

Question-shaped pages seem to matter more

One of the clearest patterns in our data is that pages framed around direct questions get fetched more often than broader generic pages. In a related analysis of roughly 6 million bot requests, Q&A-style endpoints attracted a very large share of structured fetches across major bots. Meta AI sent about 87% of extracted requests to Q&A pages, Claude about 81%, ChatGPT about 75%, and Gemini about 63%. This does not prove rankings or citations by itself, but it does suggest that when bots start consuming content, they are strongly drawn to clear question-and-answer structures.
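If you want to see what a question-shaped structure looks like in machine-readable form, here is a minimal Python sketch that emits schema.org FAQPage JSON-LD. The questions and answers are placeholders, and this markup is one common convention for expressing question-and-answer structure, not the only way to do it.

```python
import json

# Hypothetical Q&A pairs; in practice these would mirror the questions
# your customers (and, apparently, the bots) actually ask.
qa_pairs = [
    ("How long does shipping take?", "Orders ship within 2 business days."),
    ("Do you offer refunds?", "Yes, within 30 days of purchase."),
]

# Build schema.org FAQPage JSON-LD, a standard way to mark up
# question-and-answer content so machines can parse it unambiguously.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in qa_pairs
    ],
}

# Embed in the page head as a JSON-LD script tag.
print(f'<script type="application/ld+json">{json.dumps(faq_jsonld, indent=2)}</script>')
```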

Structured content still beats "LLM-friendly" writing

We also found that structured, machine-readable content seems to matter more than trying to make copy sound optimized for language models. In a 30-day crawl experiment across a few dozen sites and about 5 million bot requests, structured endpoints outperformed unstructured versions of the same content by roughly 14% on a composite measure of bot behavior. Extraction success rate improved by 12%, crawl depth by 17%, and crawl rate by 13%. The simple interpretation is that when information is easier for machines to understand, bots fetch it more reliably and go deeper.
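To make those metrics concrete, here is a rough Python sketch of how similar numbers could be computed from ordinary access logs. The log format, the bot grouping by user agent, and the metric definitions in the comments are simplified assumptions for illustration, not our exact methodology.

```python
import csv
from collections import defaultdict
from datetime import datetime

# Simplified definitions, for illustration only:
#   crawl rate         = requests per day
#   crawl depth        = mean number of path segments per request
#   extraction success = share of 2xx responses with a non-empty body

def bot_metrics(log_path: str) -> dict:
    stats = defaultdict(lambda: {"requests": 0, "days": set(),
                                 "depth_sum": 0, "successes": 0})
    with open(log_path, newline="") as f:
        # Assumed columns: timestamp, user_agent, path, status, bytes
        for row in csv.DictReader(f):
            s = stats[row["user_agent"]]
            s["requests"] += 1
            s["days"].add(datetime.fromisoformat(row["timestamp"]).date())
            s["depth_sum"] += len([p for p in row["path"].split("/") if p])
            if row["status"].startswith("2") and int(row["bytes"]) > 0:
                s["successes"] += 1

    return {
        ua: {
            "crawl_rate_per_day": s["requests"] / max(len(s["days"]), 1),
            "avg_crawl_depth": s["depth_sum"] / s["requests"],
            "extraction_success": s["successes"] / s["requests"],
        }
        for ua, s in stats.items()
    }
```

Comparing these numbers between a structured endpoint and an unstructured version of the same content is the basic shape of the experiment described above.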

Bots may consume less of a page than people assume

Across our dataset, LLM bots often extracted a limited amount of data from the first page they accessed, averaging roughly 25 KB to 30 KB per page. We also measured average response payload sizes by bot family and saw clear differences. Meta AI averaged 4.9 KB per request, ChatGPT 8.5 KB, Gemini 9.2 KB, Claude 13.9 KB, and Perplexity 14.6 KB. There are a few possible reasons for those gaps, but the practical takeaway is straightforward: if a bot only ingests the first 25 KB or so, an answer buried deep in a long page may never be read at all, so clarity near the top of the page likely matters more than most teams think.
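Measuring this on your own site is straightforward. The sketch below averages response bytes per bot family from an access-log export; the user-agent tokens are illustrative and should be checked against each vendor's current documentation.

```python
import csv
from collections import defaultdict

# Illustrative user-agent tokens only; check each vendor's documentation
# for current values. (Gemini is trickier: Google-Extended is a robots.txt
# token, and Gemini-related fetches arrive via Google's own crawler UAs.)
FAMILY_TOKENS = {
    "GPTBot": "ChatGPT",
    "ClaudeBot": "Claude",
    "PerplexityBot": "Perplexity",
    "meta-externalagent": "Meta AI",
}

def avg_payload_kb(log_path: str) -> dict:
    totals = defaultdict(lambda: [0, 0])  # family -> [total bytes, requests]
    with open(log_path, newline="") as f:
        # Assumed columns: user_agent, bytes
        for row in csv.DictReader(f):
            ua = row["user_agent"].lower()
            for token, family in FAMILY_TOKENS.items():
                if token.lower() in ua:
                    totals[family][0] += int(row["bytes"])
                    totals[family][1] += 1
                    break
    return {fam: round(b / n / 1024, 1) for fam, (b, n) in totals.items()}
```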

Classic FAQ pages do not seem very important

In a sample of 6.2 million AI-bot requests across a few dozen sites, URLs containing /faq represented just 1.1% of requests on average. That is very different from question-shaped Q&A pages, which drew much more attention. At least in our data, a traditional FAQ hub does not look nearly as important as content that directly mirrors the kinds of questions bots appear to be looking for.
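The underlying arithmetic is simple; a check like the one below, run over your own request paths, will tell you how much attention your FAQ hub is really getting.

```python
from collections import Counter

def faq_share(paths: list[str]) -> float:
    """Share of requests whose URL path contains '/faq' (case-insensitive)."""
    counts = Counter("/faq" in p.lower() for p in paths)
    return counts[True] / max(sum(counts.values()), 1)

# Example: 2 of 4 sampled paths hit a /faq URL -> 0.5
print(faq_share(["/faq", "/products/1", "/faq/shipping", "/about"]))
```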

Skills changed behavior for some bots

One of the most interesting patterns came after we rolled out a skills manifest across customer websites on March 2, 2026. We wanted to see whether bots change behavior when a website clearly tells them what they can do: for example, search the site, browse products, read FAQs, or pull business information. The clearest example was ChatGPT. In the seven days after skills went live, ChatGPT traffic rose from 2,250 to 6,870 hits, and Q&A hits went from 534 to 2,736. It fetched the manifest 434 times, and path diversity dropped from 51.6% to 30.0%. To us, that suggests the bot stopped wandering and started repeatedly using the endpoints it found useful. In plain English, it behaved less like a crawler and more like a tool user.
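To give a feel for the idea, here is a hypothetical manifest structure sketched in Python. The field names and endpoint paths are illustrative only; they are not the exact format we deployed, and no public standard is implied.

```python
import json

# Hypothetical manifest describing what a bot is allowed and able to do
# on the site. Every field and path here is a made-up placeholder.
skills_manifest = {
    "version": "1.0",
    "skills": [
        {"name": "search_site", "description": "Full-text search across the site",
         "endpoint": "/api/search?q={query}"},
        {"name": "browse_products", "description": "List products by category",
         "endpoint": "/api/products?category={category}"},
        {"name": "read_qa", "description": "Question-and-answer content",
         "endpoint": "/api/qa"},
        {"name": "business_info", "description": "Hours, location, contact details",
         "endpoint": "/api/business"},
    ],
}

# Served at a stable, discoverable URL so bots can fetch it repeatedly.
print(json.dumps(skills_manifest, indent=2))
```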

Access is still a very basic problem

In a separate scan of 2,870 websites, 27% were blocking at least one major LLM crawler. In most cases the issue was not in the CMS or robots.txt; it was happening at the CDN, WAF (web application firewall), or hosting layer. So some brands are investing in content and AI visibility while certain crawlers cannot reliably access their sites in the first place.
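A quick first check is to request your homepage with each crawler's user-agent string and compare status codes, as in the sketch below. This only approximates real bot access, since many CDNs and WAFs also verify source IP ranges, but a 403 here is already a red flag.

```python
import urllib.error
import urllib.request

# Illustrative user-agent strings; check each vendor's documentation for
# the current values. A spoofed UA only approximates the real bot: a 200
# here does not guarantee the genuine crawler gets through, but a block
# is still telling.
BOT_UAS = {
    "ChatGPT": "GPTBot/1.0",
    "Claude": "ClaudeBot/1.0",
    "Perplexity": "PerplexityBot/1.0",
}

def check_access(url: str) -> dict:
    results = {}
    for name, ua in BOT_UAS.items():
        req = urllib.request.Request(url, headers={"User-Agent": ua})
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                results[name] = resp.status
        except urllib.error.HTTPError as e:
            results[name] = e.code  # e.g. 403 from a CDN or WAF rule
        except urllib.error.URLError as e:
            results[name] = f"error: {e.reason}"
    return results

print(check_access("https://example.com/"))
```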

Where the facts seem to point

We do not see strong evidence that brands improve AI search visibility simply by making copy sound more "LLM-friendly." The stronger signals in our data point somewhere more practical: clear question-based structures, stronger machine-readable signals, better defined endpoints, and technically accessible websites. That is not a complete answer, and it is definitely not hype. It is simply where the facts are pointing us right now.

Related Resources

For a personalized review, schedule a free AI visibility audit with the LightSite team.