B2B: AI Search + Build

200 pages shipped in three months. Now dominant in Perplexity answers.

Services

GEO, AEO, Content strategy, Programmatic content build, Structured data

Vertical

B2B

Engagement

Six months

Outcome

60% citation share on comparison queries.

The challenge

The client was a B2B SaaS company operating in the developer tools space — a sector where comparison and "best of" content is the primary discovery mechanism. Developers researching tools don't start on Google and click blue links. They ask Perplexity. They ask ChatGPT. They read the AI answer, pick from the two or three options named there, and go to those sites.

The client had a strong product and decent traditional SEO, but they were invisible in AI-generated comparisons. Their competitors — several of which were inferior products with better-known brand names — were appearing consistently. The client's brand appeared in fewer than 5% of monitored comparison queries.

The brief: build a content programme that dominates the AI-generated answer layer for developer tool comparison queries.

Our approach

We started by auditing the AI search landscape for 60 target comparison queries — "best [category] tools," "[competitor] alternatives," "[use case] tool for developers," and similar patterns. The audit confirmed the problem: competitors with higher name recognition were being cited regardless of product quality. The signal AI systems were reading was brand familiarity and content coverage, not product merit.
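The query-expansion side of an audit like this is straightforward to automate. Below is a minimal sketch of how the three patterns named above can be expanded into a monitorable query list; the category, competitor, and use-case values are placeholders, not the client's actual targets.

```python
# Expand the three audited query patterns into a flat list of queries
# to monitor across AI answer engines. All inputs here are illustrative.
def expand_queries(categories, competitors, use_cases):
    """Build monitorable queries from the three audited patterns."""
    queries = []
    queries += [f"best {c} tools" for c in categories]
    queries += [f"{c} alternatives" for c in competitors]
    queries += [f"{u} tool for developers" for u in use_cases]
    return queries

monitored = expand_queries(
    categories=["API testing", "CI/CD"],
    competitors=["ToolA", "ToolB"],
    use_cases=["contract testing"],
)
# 2 + 2 + 1 = 5 monitored queries
```

Each resulting query is then run against the target AI systems on a schedule and the answers checked for brand mentions.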

Two observations shaped the programme:

Coverage gap. The client had almost no content targeting the specific question forms that AI systems were answering. No comparison pages. No "why choose us over [competitor]" content. No use-case-specific landing pages. The content architecture was built for direct navigation, not for the question-and-answer layer that LLMs are trained on.

Entity clarity gap. The brand's entity footprint was weak. Third-party sources that LLMs rely on for brand understanding — developer community sites, press coverage, structured directories — had incomplete or inconsistent descriptions of what the product actually did and who it was for.

The work

We ran a two-track programme concurrently: entity work to fix the recognition problem, and a content build to fix the coverage problem.

Entity programme. We audited brand presence across the sources that feed developer-focused LLM training data — Stack Overflow, GitHub, developer-focused press, product directories, comparison platforms. We seeded consistent brand descriptions and co-citations across these sources, working with the client's developer relations team, which had existing community relationships. We also implemented comprehensive SoftwareApplication and Product schema across the site.
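A SoftwareApplication payload of the kind deployed site-wide looks roughly like the following. This is a minimal illustrative example built as JSON-LD; the product name, category, and offer values are hypothetical stand-ins, not the client's real data.

```python
import json

# Minimal schema.org SoftwareApplication markup, serialised as JSON-LD.
# "ExampleTool" and all field values are illustrative placeholders.
schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleTool",
    "applicationCategory": "DeveloperApplication",
    "operatingSystem": "Web",
    "description": "A hosted API testing platform for development teams.",
    "offers": {
        "@type": "Offer",
        "price": "0",
        "priceCurrency": "USD",
    },
}

# Embedded in each page inside a <script type="application/ld+json"> tag.
jsonld = json.dumps(schema, indent=2)
```

The same description string is the one seeded across third-party sources, so the on-site markup and the off-site entity footprint reinforce each other.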

Content build. We designed a programmatic content architecture targeting three query types: direct comparisons ("[client] vs [competitor]"), alternatives pages ("[competitor] alternatives"), and use-case pages ("[use case] tool for [developer persona]"). We wrote the brief frameworks, built the page templates, and worked with a writer to produce 200 pages over three months. Each page was structured to answer the query directly and completely — optimised for the citation pattern, not just the click.

Quality control was the hard part of the build. Programmatic content at scale is easy to do badly. We ran every brief through entity coverage checks before writing started, reviewed every draft against a citation-readiness rubric, and validated structured data on every page before publish.
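The pre-publish checks can be sketched as a simple gate function. The three checks below mirror the ones described above (entity coverage, answer-first structure, structured data present); the thresholds and page shape are illustrative assumptions, not the actual rubric.

```python
# Simplified pre-publish gate: returns a list of failed checks.
# Field names and the 300-character threshold are illustrative.
def citation_ready(page, required_entities, max_answer_offset=300):
    """Check a draft page against a minimal citation-readiness rubric."""
    failures = []
    text = page["body"].lower()
    # Entity coverage: every entity from the brief must appear in the draft.
    missing = [e for e in required_entities if e.lower() not in text]
    if missing:
        failures.append(f"missing entities: {missing}")
    # Answer-first structure: the direct answer should open the page.
    if page["direct_answer"].lower() not in text[:max_answer_offset]:
        failures.append("direct answer not in opening section")
    # Structured data: every page carries schema markup before publish.
    if not page.get("schema_jsonld"):
        failures.append("structured data missing")
    return failures

draft = {
    "body": "ExampleTool is a hosted API testing platform for teams.",
    "direct_answer": "ExampleTool is a hosted API testing platform",
    "schema_jsonld": '{"@type": "SoftwareApplication"}',
}
```

A draft only moves to publish when the returned list is empty.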


The outcome

Within six weeks of the first pages going live, citations began appearing on Perplexity, the platform most responsive to content changes in this sector. By month four, the client was appearing in 60% of monitored comparison queries on Perplexity and 41% on ChatGPT.
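The citation-share figures reported here are the fraction of monitored queries in which the brand appears in the AI-generated answer. A minimal version of that calculation, with illustrative data:

```python
# Citation share: fraction of monitored queries where the brand
# was cited in the AI answer. The query results below are made up.
def citation_share(results):
    """results: mapping of query -> bool (brand cited in the answer)."""
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

perplexity_results = {
    "best api testing tools": True,
    "toolx alternatives": True,
    "ci tool for platform teams": False,
}
share = citation_share(perplexity_results)
```

Run per platform on a recurring schedule, this gives the trend lines behind the 60% and 41% figures.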

The comparison pages drove the bulk of the movement. The "[competitor] alternatives" content format proved particularly effective — these pages now routinely appear when users ask AI systems to compare options in the category.

"Six months ago we were invisible in every AI answer that mattered. Now we're in most of them. The content programme was the catalyst, but the entity work upstream was what made it stick."

Traditional organic traffic also grew — 34% over the six-month engagement — as the comparison content picked up conventional Google rankings alongside its AI citation performance.


Related disciplines

This engagement combines AI Search strategy, programmatic content production, structured data implementation, and large-scale build work. It is a good example of the AI Search and Build practices running in parallel on the same programme.

Working on something similar?

Talk to us.

Tell us what you're working on. We'll tell you honestly whether we can help.