Content experiments: bite-sized FAQs

I’ve spent the last six months testing a theory, and it has resulted in stable weekly leads and coveted citations in LLMs. The secret: bite-sized FAQs.

By structuring content into concise, pre-packaged answers, I’ve helped two clients achieve consistent visibility in Google’s AI Overviews, and in one case it took just two weeks to see results. So today I’m explaining my vending machine theory for LLM visibility and the exact testing methodology I used to see whether it works.


Key Takeaways: 

  • My clients’ goals were demo requests from LLMs, and citations or mentions in AI Overviews and LLMs

  • I implemented a strict structure for informational content, including bite-sized FAQs

  • The approach generated two leads per week through LLMs for one client over six months, and a primary-term citation for another client in just two weeks

  • I believe it works because FAQs align with LLM chunking, and satisfy both the ‘path of least resistance’ (vectoring) and authority requirements

What’s the theory? 

The theory is: integrating bite-sized FAQs directly into informational content increases LLM visibility. Essentially, by pre-packaging key information into concise, question-and-answer formats, you’re making it incredibly easy for an LLM to identify and extract the precise answer it needs to respond to a user query. Easy access, better visibility. 

Traditional long-form content is like a 5-course meal; it takes time to consume. But this FAQ strategy turns your content into a vending machine for insights. The LLM inputs a query, and the website 'drops' a perfectly packaged, bite-sized answer instantly.

 

Here’s what I mean: an example of an FAQ and how I’d structure its answer

Here is an example of a topic, an FAQ and how I’d structure the answer to make it ‘LLM-friendly’.

My chosen topic: What is Making Tax Digital? 

Here’s my workflow for the FAQs:  

  1. Google the primary keyword: making tax digital.

  2. Scroll to ‘people also ask’ 

  3. Note each of the questions and disqualify any that are not relevant for the audience I’m writing for, and any that overlap with the H2s I’ve already written in the piece

  4. Head to an SEO platform (my chosen one is SEMRush for now) and compare the search volume of each query - these are likely to align with the LLM query volume

  5. Choose the 3 or 4 highest search volume queries (see the rough sketch after this list)

  6. Write the answers
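To make steps 3 to 5 concrete, here’s a rough sketch of the selection logic in Python. The questions, search volumes and existing headings are all made up for the illustration; in practice the volumes come from whichever SEO platform you use.

```python
# Hypothetical 'People also ask' questions with made-up monthly search volumes.
candidate_questions = {
    "Who will be affected by making tax digital?": 1900,
    "When does making tax digital start?": 1300,
    "What is making tax digital?": 2400,
    "Is making tax digital compulsory?": 590,
    "How do I register for making tax digital?": 480,
}

# Questions already covered by the H1 or existing H2s are disqualified.
existing_headings = {
    "what is making tax digital?",
    "when does making tax digital start?",
}

# Keep questions that don't duplicate an existing heading,
# then take the three or four with the highest search volume.
remaining = {
    question: volume
    for question, volume in candidate_questions.items()
    if question.lower() not in existing_headings
}
chosen = sorted(remaining, key=remaining.get, reverse=True)[:4]

for question in chosen:
    print(f"{question}  (volume: {remaining[question]})")
```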

 
 

For example, if I chose: Who will be affected by making tax digital?  

The answer would look something like: 

Making Tax Digital currently affects all VAT-registered businesses, and will extend to self-employed individuals and landlords based on their annual income. From April 2026, those with a qualifying income over £50,000 (gross) must comply, followed by those earning over £30,000 in April 2027 and over £20,000 by April 2028.

No storytelling, no clever metaphors. Just a simple, quick-to-find answer.
It’s important to avoid duplication, so I wouldn’t include an FAQ for ‘what is making tax digital?’ if the H1 title is already ‘What is Making Tax Digital?’.

 

How did I test the theory?

I was involved in two sets of tests to see whether FAQs had an impact on LLM visibility:

  1. I added FAQs to the structure of each piece for my long-standing SEO content client

  2. I included FAQs as part of the strategy from day one in a completely new informational content project

Let’s dive into each of these situations in a little more detail.

 
Long-standing SEO iteration: Project #1

The first test was in changing the structure of pieces for a long-standing SEO content project that I work on. To give some context, I’ve been working with this client for over three years, and before we made any changes, we had already scaled the site by over 650% year-on-year into tens of thousands of visitors each month. 

Towards the end of 2025, we changed the structure of the content to include FAQs at the end of each piece. Generally speaking, there are three to four FAQs per piece, and we aim to replicate an LLM’s own ‘semantic chunking’ structure in the answers. This means short sentences, clear and concise answers, and no fluff. The FAQs sit at an H4 level.

It’s now been over six months. My content output is one piece per week, and the topics relate to fraud, regulations and payments: all themes I’m very familiar with after over seven years of writing in the fintech space. 

To be fully transparent, we kept our other SEO best practices, which may also impact LLM visibility. These include:

  • Using the BLUF (bottom line up front) method to answer the H1 query immediately in the introduction, rather than opening with storytelling or metaphors

  • Including key takeaways after the intro, which ties into BLUF again

  • Ensuring each H2 is in question format so that we align with the semantic chunking and clustering preferred by LLMs 

 

New informational content: Project #2

I am responsible for the content function at this second client, and (spoiler) after seeing the success of the brief structure for Project 1, I adapted it slightly and brought it in for Project 2. 

This client has changed a lot of their positioning recently, so we are starting virtually from zero for the terms we want to rank for. One of the goals is specifically to improve LLM visibility, so I created this ‘stream’ of content mainly for that goal.

I call them glossaries. They’re ‘what is’ informational blogs that give a clear and concise overview of topics in the banking, pensions, AI and data spaces, and they include FAQs at the end:

  • I started with three glossaries.

  • Two are part of the same cluster / pillar page set, and one is the start of a new cluster.

  • To decide what the FAQs should be, I went with a ‘so what’ approach. Since LLMs and AI overviews pre-empt a user’s next steps once they’ve read the previous section, my FAQs are essentially predictions of what else the user might like to know about the topic. One really helpful resource here was a member of our internal sales team, who told me about a prospect conversation relating to the topic, and a case where the prospect had a question that Google couldn’t answer. That gave us a bit of originality in our content, which LLMs also happen to love.

  • The FAQ answers follow the same structure as Project #1: a clear and concise answer in the first sentence, with the second sentence adding context if required. Sidenote: this structure also makes a really good prompt for Gemini if you’re using AI to help you write good content more efficiently.

  • FAQs are their own content type rather than a heading level: our web developers came up with this to give us proper breadcrumb navigation. I think it has also brought the benefit of stronger clustering, as we can effectively embed and ‘re-use’ FAQs on multiple glossaries where appropriate (without cannibalising the content). A rough sketch of the idea follows this list.
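As a loose illustration of that last point (a hypothetical sketch, not our actual CMS set-up), you could model each FAQ as a standalone record that glossary pages reference by ID, so the same answer can be reused across pages without duplicating the copy:

```python
from dataclasses import dataclass, field

@dataclass
class FAQ:
    # A standalone, reusable question-and-answer record.
    faq_id: str
    question: str
    answer: str  # concise first sentence, optional second sentence for context

@dataclass
class GlossaryPage:
    # A 'what is' glossary post that references FAQs by ID instead of copying
    # the text, so re-use across pages doesn't create duplicate content.
    slug: str
    title: str
    faq_ids: list[str] = field(default_factory=list)

# A shared FAQ library (the entry below is purely illustrative).
faq_library = {
    "mtd-who-affected": FAQ(
        faq_id="mtd-who-affected",
        question="Who will be affected by Making Tax Digital?",
        answer=(
            "Making Tax Digital currently affects all VAT-registered businesses, "
            "and will extend to self-employed individuals and landlords based on their annual income."
        ),
    ),
}

# A glossary page references the FAQ by ID; another page could reuse the same ID.
page = GlossaryPage(
    slug="what-is-making-tax-digital",
    title="What is Making Tax Digital?",
    faq_ids=["mtd-who-affected"],
)

for fid in page.faq_ids:
    faq = faq_library[fid]
    print(faq.question)
    print(faq.answer)
```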

 

Did it work?

In short: yes. In both sets of experiments, with different clients, we achieved LLM citations for the specific content we littered with FAQs.

I work directly with the content manager at Project 1, and she tells me that they are generating a minimum of two (high-value) demo requests per week attributed to LLMs. Still ‘low’ vs the rest of our inbound channels but a good signal nevertheless. 

The platform they’re using to measure it is called Meteoria. It means they’re able to track visibility on chosen prompts and see the pages a) crawled and b) used as sources. Pretty cool that we’re at this stage in the AI journey. (not sponsored.) 

And it’s these results that inspired me to include the FAQ strategy in Project 2, although I’ve started off higher up the funnel. Here, two of the three glossary posts achieved coveted LLM citations for the primary target query, with one notably appearing directly within a Google AI Overview. This took just two weeks from publication (and the start of distribution efforts) to measure.

What’s more, in these two weeks, our FAQs have become the sixth-most-viewed content type on the site, behind the homepage and some of the product pages.

Obviously these aren’t ‘proper’ experiments with scientific methodology, control variables and well-defined results. However, I can confidently say that I think bite-sized FAQs have played a key part in helping us gain LLM visibility, and, in turn, the leads generated through LLMs. I’ll explain why in the section below.

 

Why the theory has legs for LLM visibility

Here is why this strategy carries so much weight from a technical SEO perspective:

  1. It aligns with chunking and vector embeddings

When an LLM ‘reads’ your page, it breaks the content into chunks and converts them into vector embeddings, which are numerical representations of meaning. Think of these like a map of the SEO universe.

Large, rambling paragraphs are hard to vectorise cleanly because they contain too many competing ideas. A bite-sized FAQ, however, is a ‘clean’ chunk. Because the question and answer are tightly coupled, the LLM can easily map that specific block of text to a user’s specific intent.
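Here’s a toy demonstration of that idea using the open-source sentence-transformers library; the model choice and the example text are mine for illustration, not anything from the projects above. The tightly coupled question-and-answer chunk should land much closer to the user’s query in embedding space than the rambling paragraph:

```python
from sentence_transformers import SentenceTransformer, util

# A small, general-purpose embedding model (illustrative choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

query = "Who will be affected by Making Tax Digital?"

# A 'clean' FAQ chunk: the question and its answer are tightly coupled.
faq_chunk = (
    "Who will be affected by Making Tax Digital? "
    "Making Tax Digital currently affects all VAT-registered businesses, "
    "and will extend to self-employed individuals and landlords based on their annual income."
)

# A rambling chunk that mixes several competing ideas.
rambling_chunk = (
    "Tax has changed a lot over the years, and digital transformation is reshaping how "
    "businesses handle compliance, software, bookkeeping, payroll and reporting, with "
    "various deadlines, exemptions and thresholds depending on your circumstances."
)

embeddings = model.encode([query, faq_chunk, rambling_chunk], convert_to_tensor=True)

# Cosine similarity between the query and each chunk: the FAQ chunk should score
# higher, meaning it maps more cleanly onto the user's intent.
print("FAQ chunk:     ", util.cos_sim(embeddings[0], embeddings[1]).item())
print("Rambling chunk:", util.cos_sim(embeddings[0], embeddings[2]).item())
```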

 

2. FAQs satisfy the reward function

LLMs are trained to be helpful and concise, and when sourcing an answer, they look for the path of least resistance. If a model has to synthesise a 2,000-word article to find a single definition, there’s a higher computational cost and a higher risk of hallucination.

By providing a pre-packaged FAQ, you are essentially doing the heavy lifting for the model. You’re providing a high-confidence snippet. The LLM recognises that your content is already in the format it wants to output to the user, making you the path of least resistance for a citation.

 

3. You signal authority through semantic clustering

LLMs don't just want an answer; they want the right answer from a source that sounds like it knows what it’s talking about. When you include three to four related FAQs at the bottom of a post, you are building a semantic cluster to satisfy this requirement.

By answering the primary question and the most likely follow-up questions as FAQs, you’re proving to the model that your page is a comprehensive node of information. It helps to create authority and relevance across the entire sub-topic.

 

What’s next for FAQs in my content experiments? 

I know that the way we search will continue to change, so having success in this experiment has been helpful in confirming that FAQs are a strong element to include. But I want to move beyond informational content and test whether FAQs can be effective in LLM seeding: the practice of influencing what LLMs know and report about your brand through the information you ‘feed’ into them.

Citations and mentions are really helpful for long-term brand awareness, but I also want to double down on leads. So testing FAQs in product-based and comparison content could be a huge cheat code for LLM seeding here.

I’ll continue to experiment and I’m sure the algorithm will continue to change, but if you’d like to follow along, you can connect with me on LinkedIn here. And if you have the same lofty goals of becoming visible or generating leads in LLMs, I have some availability coming up.

You can get in touch below to discuss your project and see if we’re a good fit.

 
 
 