GEO & AEO · Generative Engine Optimisation

Answer engine optimisation for the brands AI keeps forgetting.

Your customers have started asking ChatGPT, Claude, Gemini and Perplexity the questions they used to type into Google. If the model has never heard of you, or has heard of you and got it confidently wrong, you have lost the recommendation before the shortlist even exists. I measure what the machines currently say about your brand, work out why, and rebuild the evidence until the answer changes.

4 · Answer engines measured, because your customers do not all use the same one

17 yrs · Of evidence-trail SEO, the unglamorous discipline GEO is quietly built on

2 wks · From first audit to a measurable AI visibility baseline

0 · Tolerance for AEO services that are just SEO at a higher day rate

What you'd actually buy

The work, not the buzzword.

GEO has been around just long enough to attract a crowd selling it as alchemy. It is not alchemy. It is measurement, entity hygiene and evidence, pointed at a new set of machines. Here is what an engagement actually involves.

A · Audit

AI visibility audit

A fixed-fee two-week engagement. I run your brand and your competitors through ChatGPT, Claude, Gemini and Perplexity with a repeatable set of real buyer questions, record where you appear, where you do not, and where the model states something about you that is simply untrue. You leave with a measured baseline instead of a vague worry.

  • Repeatable query sets across four answer engines
  • Citation share measured against named competitors
  • A log of every factual error the models make about you
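
In practice that baseline is a short script rather than a spreadsheet. A minimal sketch of the citation-share idea in Python: the brand names and canned answers are invented for illustration, and a real audit would call each engine's API or interface where the canned text sits.

```python
from collections import Counter

def citation_share(answers: list[str], brands: list[str]) -> dict[str, float]:
    """Share of answers in which each brand is mentioned at least once."""
    counts = Counter()
    for answer in answers:
        text = answer.lower()
        for brand in brands:
            if brand.lower() in text:
                counts[brand] += 1
    total = len(answers) or 1
    return {brand: counts[brand] / total for brand in brands}

# Canned answers stand in for live engine output so the sketch runs offline;
# a real run would put each engine's response here instead.
answers = [
    "For UK agencies, Acme and Beacon are the usual recommendations.",
    "Beacon is a solid choice; Clearwater also comes up often.",
    "Acme is probably the best-known option in this space.",
]
share = citation_share(answers, ["Acme", "Beacon", "Clearwater"])
```

Run the same questions every month and the share becomes a trend rather than an anecdote.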

B · Fix

Entity & evidence repair

The models are not guessing at random. They are reflecting the evidence trail your brand has left across the web, and that trail is usually thinner and messier than you would like. I rebuild it, so the picture the machines hold of you matches the one you would actually choose.

  • Schema and structured data implementation
  • Entity disambiguation, for when AI confuses you with someone else
  • llms.txt and the crawler-facing fundamentals
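
The schema end of that work is mostly well-formed JSON-LD. A minimal Python sketch of schema.org Organization markup, with every name and URL invented; the `sameAs` links are what give a crawler something to cross-check when disambiguating you from a similarly named rival.

```python
import json

# Hypothetical brand details; swap in your own before publishing.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency Ltd",
    "url": "https://www.example.com",
    "sameAs": [  # disambiguation links the crawlers can cross-check
        "https://www.linkedin.com/company/example-agency",
        "https://en.wikipedia.org/wiki/Example_Agency",
    ],
    "description": "Answer engine optimisation consultancy.",
}

# Rendered into a <script type="application/ld+json"> block in the page head.
json_ld = json.dumps(organization, indent=2)
```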

C · Earn

Content built to be cited

Ranking earns a click. Being cited earns a recommendation, which is a different and considerably better thing. I build the content that answer engines lift from: clear, specific, well-evidenced, and structured the way a model likes to quote.

  • Content scaffolding designed for citation, not just clicks
  • Question-shaped pages that match how people actually prompt
  • Evidence and sourcing the models have reason to trust

D · Monitor

Visibility monitoring

Models change weekly, and so does what they say about you. Monthly re-measurement tracks your citation share over time, so AI visibility becomes a number on a chart rather than a thing you anxiously check yourself at 11pm.

  • Monthly re-measurement across all four engines
  • Citation-share trend reporting
  • An alert when a model starts getting you wrong again
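
The alerting step is simple once the monthly numbers exist. A minimal sketch with invented figures: flag any engine whose citation share falls by more than a chosen threshold between the two most recent runs.

```python
def visibility_alerts(history: dict[str, list[float]], threshold: float = 0.10) -> list[str]:
    """Flag engines whose citation share dropped by more than `threshold`
    between the two most recent monthly measurements."""
    alerts = []
    for engine, shares in history.items():
        if len(shares) >= 2 and shares[-2] - shares[-1] > threshold:
            alerts.append(f"{engine}: citation share fell "
                          f"{shares[-2]:.0%} -> {shares[-1]:.0%}")
    return alerts

# Invented monthly citation-share figures, oldest first, for illustration.
history = {
    "ChatGPT":    [0.40, 0.45, 0.50],
    "Claude":     [0.30, 0.35, 0.20],   # a drop worth an alert
    "Gemini":     [0.25, 0.25, 0.24],   # within normal month-to-month noise
    "Perplexity": [0.10, 0.15, 0.18],
}
alerts = visibility_alerts(history)
```

The threshold is a judgment call: set it above the noise you observe between runs, or every good day and bad day becomes an alarm.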

Four phases

How an engagement actually runs.

No mystique. GEO is the evidence-trail discipline of SEO pointed at a new set of readers. The method is the same four honest steps.

01 · Measure

Find out what the machines say

I ask ChatGPT, Claude, Gemini and Perplexity the questions your customers ask, and write down the answers. Not impressions of the answers. The actual answers, logged and repeatable.

02 · Diagnose

Work out why they say it

Every answer has a cause somewhere in your evidence trail. I trace the citations, the structured data and the third-party mentions, and find where the model is getting its picture of you, flattering or otherwise.

03 · Rebuild

Fix the evidence trail

Structured data, entity disambiguation, citable content, and consistent facts in the sources the crawlers weight. The unglamorous engine-room work that quietly moves what the model believes.

04 · Re-measure

Track citation share over time

Back through the same query set, every month. Visibility becomes a measured trend, and we can tell the difference between a real gain and a model simply having a good day.

Where it pays off

What the work looks like in practice.

I run this methodology on my own brand, which is a useful incentive to keep it honest. Some of the engagements it covers:

Four-engine visibility audits

A brand and its competitors run through ChatGPT, Claude, Gemini and Perplexity with repeatable queries, scored on presence, accuracy and citation share.

Entity disambiguation

For brands the models cheerfully confuse with someone else. Common names, merger history, several products wearing similar hats. I make the machines tell you apart.

Structured data & llms.txt

The crawler-facing plumbing: schema, clean markup, llms.txt. Unglamorous, and exactly the sort of thing that decides whether a model trusts your page.
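
For reference, llms.txt is a plain markdown file served at the site root, per the llmstxt.org proposal: an H1 with the site name, a blockquote summary, then sections of links to pages worth a model's attention. A minimal sketch, with every name, date and URL invented:

```markdown
# Example Agency

> Answer engine optimisation consultancy. Key facts the models should
> get right: founded 2008, based in London, four-engine audits.

## Services

- [AI visibility audit](https://www.example.com/audit.md): what the audit covers
- [Monthly monitoring](https://www.example.com/monitoring.md): citation-share reporting

## Optional

- [Full blog archive](https://www.example.com/blog.md)
```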

Citation-earning content

Pages built to be quoted, not merely ranked. Clear claims, real evidence, and the question-shaped structure answer engines lift from.

Competitor citation analysis

Who the models recommend in your category, how often, and what their evidence trail has that yours does not yet.

Monthly visibility reporting

Citation share tracked over time across all four engines, so AI visibility is a chart you can put in front of a board, not a hunch.

Honest comparison

GEO done properly vs the rebadged version.

GEO attracts a lot of confident vagueness. The honest version is measurable, and cheerfully admits where it overlaps with the SEO you already know.

What you're actually buying
  • Most "AEO" services: A familiar SEO checklist with "AI" written on the cover
  • Dog on the Table: Measurement against the real outputs of four answer engines

How success is measured
  • Most "AEO" services: "Improved AI presence", conveniently unquantified
  • Dog on the Table: Citation share, scored on repeatable queries, every month

Honesty about SEO overlap
  • Most "AEO" services: Sold as a brand-new discipline at a brand-new price
  • Dog on the Table: GEO and SEO overlap a lot. I tell you exactly where, and don't bill twice.

Who does the work
  • Most "AEO" services: A junior running a tool they bought last month
  • Dog on the Table: A senior operator with 17 years of tracking evidence trails, from before it was fashionable

Engine coverage
  • Most "AEO" services: Usually just ChatGPT, because it is the one people have heard of
  • Dog on the Table: ChatGPT, Claude, Gemini and Perplexity. Your customers are not all in the same place.

What you leave with
  • Most "AEO" services: A slide reassuring you that things are better now
  • Dog on the Table: A measured baseline, a repaired evidence trail, and a monthly number
FAQ

The questions I get asked most.

What is GEO, and what is AEO?

Generative engine optimisation (GEO) and answer engine optimisation (AEO) are two names for the same job: making sure your brand shows up, accurately, when someone asks an AI assistant a question instead of typing it into Google. GEO tends to be the term for the broad discipline; AEO for the answer-shaped end of it. The work is the same either way, and I will not charge you twice for owning two acronyms.

Is this just SEO with a new name?

Partly, and anyone telling you otherwise is selling something. GEO leans heavily on SEO fundamentals: structured data, clean content, a credible evidence trail. What is genuinely new is the measurement, where you are scoring model outputs rather than rankings, and the goal, which is being cited and recommended rather than merely listed.

I will always tell you which parts of an engagement are classic SEO and which are not. That honesty is, oddly, quite rare in this corner of the market.

Which AI engines do you optimise for?

ChatGPT, Claude, Gemini and Perplexity, plus Google's AI Overviews. They draw on different sources and behave differently, so I measure all of them rather than optimising for one and quietly hoping the rest follow along.

How do you actually measure AI visibility?

With a repeatable set of real buyer questions, run through each engine on a schedule. I record whether you appear, whether what the model says is accurate, and how your citation share compares to named competitors. Same questions every month, so the trend is real rather than anecdotal.

How quickly do results show?

The audit gives you a measured baseline in about two weeks. Moving the models themselves takes longer, because evidence trails need to be recrawled and reflected. Expect early movement in one to three months, with the compounding gains arriving the way they always have in search: slowly, and then noticeably.

Do I still need traditional SEO?

Almost certainly yes. Classic search is not going anywhere quickly, and a strong SEO foundation is also one of the best things you can do for GEO, since the answer engines lean on the same evidence. The two reinforce each other. Anyone urging you to drop SEO entirely for AI is overselling, probably loudly.

Find out what the machines think

See what AI actually says about your brand.

The first step is never a strategy deck. It is a measured audit of what ChatGPT, Claude, Gemini and Perplexity currently tell people about you. Sometimes it is reassuring. Sometimes it is the useful kind of alarming. Either way, you stop guessing.