Why this matters now
In 2026, most teams will publish with help from AI—but audiences and platforms are raising the bar. Google’s guidance explicitly allows AI‑assisted content so long as it’s people‑first, accurate and original; attempts to manipulate rankings with mass‑produced copy fall under spam policies such as “scaled content abuse”. developers.google.com
Google has also tightened its “site reputation abuse” policy, clarifying that even when a publisher oversees third‑party content, gaming rankings via that publisher’s reputation can trigger enforcement. In short: quality and intent matter more than ever. developers.google.com
On the advertising side, the UK’s CAP Code says marketing communications must be “obviously identifiable”. If you’re publishing an advertorial, sponsorship or affiliate feature, the label needs to be clear. This applies across digital formats. asa.org.uk
Bottom line for UK SMEs and charities: AI can speed you up, but only disciplined editorial standards will protect trust, search visibility and conversions.
The 9 tests: a practical scorecard
Run these tests before you hit publish; a lightweight scoring sketch follows the list below. Aim for at least 8 of 9 passes. If you fail a test, fix the issue or hold the piece.
1) Intent match test
- Do the first 120–160 characters answer the user’s core question and set expectations for the page? Search snippets and social previews rely on this. gov.uk
- Is the page built around a single task or decision the reader must make?
2) E‑E‑A‑T signals test
- Is there a named author or accountable team, and a clear publication date?
- Are first‑hand experience, examples, or data included—rather than generic tips? Google advises focusing on accuracy, quality, relevance and signalling Who/How/Why. developers.google.com
3) Factual accuracy and sources test
- For every key claim or stat, can you cite an authoritative source (government guidance, standards body, manufacturer, or first‑party data)?
- For topics prone to errors (e.g., laws, safety, medical), add a plain‑English caveat and link to the primary source.
4) Originality and value‑add test
- Does the piece add something new—process diagrams, a checklist, a UK‑specific angle, or proprietary metrics—not just a reworded summary?
- Avoid “scaled content” patterns like mass templated pages with thin edits. searchenginejournal.com
5) Brand voice test
- Define three voice pillars (for example: practical, calm, UK‑centric). Does every paragraph reflect them?
- Remove filler adjectives and buzzwords; GOV.UK style research shows users prefer plain English—even experts. gov.uk
6) Transparency and labelling test
- If the piece is sponsored, an advertorial, or includes affiliate links, mark it clearly and early to meet CAP Code expectations. asa.org.uk
- If you’ve used AI in a material way, consider a short “How we created this” note—Google encourages giving readers context. developers.google.com
7) Structure and accessibility test
- Use clear headings, short sentences, and explain technical terms at first use. Provide descriptive alt text for images. gov.uk
- Check reading level with a simple tool; aim for plain English unless your audience demands specialist language. design.education.gov.uk
8) Conversion clarity test
- One primary call‑to‑action per page, plus a secondary low‑friction option (for example: download the checklist, or book a call).
- Make next steps visible without scrolling on desktop and within a few swipes on mobile.
9) Risk and sensitivity test
- Scan for comparative or superlative claims that might need evidence. If the piece promotes a product or service, check that claims are substantiated and not misleading.
- Where a topic affects vulnerable users (for example, debt advice), add signposting to authoritative help. CAP guidance favours clarity and responsibility. asa.org.uk
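If it helps to operationalise the scorecard, here is a minimal Python sketch of how a team might record pass/fail results and run two quick automated helpers: a snippet-length check for test 1 and a readability score for test 7. The test names, the 8-of-9 threshold and the third-party textstat dependency are illustrative assumptions, not part of any guidance cited above.

```python
# Minimal sketch of a 9-test scorecard for one draft.
# Assumes the third-party `textstat` package is installed (pip install textstat);
# test names and the 8-of-9 threshold mirror the checklist above but are otherwise illustrative.
import textstat

TESTS = [
    "intent_match", "eeat_signals", "factual_accuracy", "originality",
    "brand_voice", "transparency_labelling", "structure_accessibility",
    "conversion_clarity", "risk_sensitivity",
]

def automated_hints(draft: str, meta_description: str) -> dict:
    """Quick machine checks that support, but never replace, the human review."""
    return {
        "meta_length_ok": 120 <= len(meta_description) <= 160,       # helper for test 1
        "flesch_reading_ease": textstat.flesch_reading_ease(draft),  # helper for test 7; roughly 60+ reads as plain English
    }

def score(results: dict) -> tuple[int, bool]:
    """Tally the editor's pass/fail decisions; hold the piece below 8 of 9 passes."""
    passes = sum(1 for test in TESTS if results.get(test, False))
    return passes, passes >= 8

# Example: an editor records pass/fail for each test after reviewing the draft.
editor_results = {test: True for test in TESTS}
editor_results["originality"] = False  # e.g. the piece still reads like a reworded summary

hints = automated_hints(
    draft="Plain-English body copy of the article goes here.",
    meta_description=(
        "This guide walks UK SMEs and charities through nine pre-publish tests "
        "that keep AI-assisted content accurate, clearly labelled and genuinely useful."
    ),
)
passes, publish_ready = score(editor_results)
print(hints)
print(f"{passes}/9 passes; publish-ready: {publish_ready}")
```

The automated hints only flag likely problems; every test remains a human judgement call recorded by the reviewing editor.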
A 60‑minute workflow your team can actually run
Minutes 0–15: Brief and outline
- Define user intent in one sentence: “This page helps [persona] decide/do [task].”
- List the proof you’ll include: examples, prices, screenshots, UK‑specific rules, or your own data.
- Draft an outline with 4–6 headings that mirror the decision steps.
Minutes 15–35: Draft with AI, inject experience
- Generate a first draft, then insert your lived experience: “What surprised us”, “Mistakes to avoid”, “Costs in London vs regions”.
- Add sources to high‑risk claims and link them inline.
Minutes 35–55: Run the 9 tests
- Cut clichés, shorten sentences, label sponsorships, add alt text, ensure a single primary CTA. gov.uk
- Check for scaled content patterns; avoid thin, templated sections. searchenginejournal.com
Minutes 55–60: Publish and monitor
- Set a 14‑day review to refresh facts, add early performance notes, and capture reader questions.
What could derail this—and how to avoid it
- “Publish at scale” pressure. Resist mass generation without human value‑add; Google’s policies target high‑volume, low‑value patterns regardless of whether a human or AI typed the words. searchenginejournal.com
- Blurry lines between editorial and advertising. If money or reciprocal value changed hands, label it clearly to meet UK expectations. asa.org.uk
- Trust shocks from AI errors. Public research shows users are sensitive to inaccuracies and poor sourcing in AI‑mediated answers—quality control and clear attribution matter. reuters.com
- Weak plain‑English discipline. Busy expert readers also prefer clear, direct language. gov.uk
KPIs to review weekly
| Area | Leading indicators (week 1–2) | Lagging indicators (week 3–6) |
|---|---|---|
| Quality | Average score on the 9‑test checklist; % of pieces scoring 8+; number of authoritative sources per article. | Bounce rate on informational pages; time on page; scroll depth to CTA. |
| Trust & transparency | % of sponsored/affiliate content with clear labelling and “How we created this” notes where appropriate. asa.org.uk | Complaints/clarifications requested; editorial corrections rate. |
| Search health | New pages indexed; Search Console coverage with no manual actions. developers.google.com | Impressions, clicks and query diversity for new pages. |
| Conversion | CTA view rate; click‑through to enquiry or download. | Qualified enquiries; demo bookings; assisted revenue. |
| Team efficiency | Time from brief to publish; edits per piece. | Cost per quality article (including reviews and sourcing). |
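As a rough illustration of the quality row above, a weekly roll-up can be computed in a few lines. The piece records and field names below are hypothetical; substitute whatever your CMS or spreadsheet actually exports.

```python
# Minimal sketch of a weekly quality roll-up for the KPI table above.
# The records and field names are hypothetical placeholders.
from statistics import mean

pieces = [
    {"title": "VAT guide", "checklist_score": 9, "sources": 6},
    {"title": "Grant funding explainer", "checklist_score": 7, "sources": 3},
    {"title": "Donor FAQ", "checklist_score": 8, "sources": 4},
]

avg_score = mean(p["checklist_score"] for p in pieces)
pct_passing = 100 * sum(p["checklist_score"] >= 8 for p in pieces) / len(pieces)
avg_sources = mean(p["sources"] for p in pieces)

print(f"Average 9-test score: {avg_score:.1f}")
print(f"Pieces scoring 8+: {pct_passing:.0f}%")
print(f"Authoritative sources per article: {avg_sources:.1f}")
```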
Procurement questions for tools and agencies
Use these when shortlisting AI writing platforms or content partners:
- Quality controls: Show us your editorial checklist. How do you evidence originality and avoid scaled‑content patterns? Provide two examples with sources. searchenginejournal.com
- Search safety: What safeguards do you apply to avoid “site reputation abuse” dynamics when publishing guest or partner content? How do you respond to a manual action? developers.google.com
- Transparency: How do you handle advertorial and affiliate labelling across web and social? Share a screenshot pack of live examples. asa.org.uk
- Plain‑English standard: What’s your readability target, and how do you enforce it for specialist audiences? gov.uk
- Sourcing: Who is accountable for fact‑checking? Do you link to primary sources as a rule?
- Post‑publish: What’s your 14‑day refresh process and your correction SLAs?
Cost and risk guardrails (lightweight)
- Monthly cap: Set a content budget and stick to it. For a deeper and more technical view on AI budgets and model tiering, read our 90‑day AI cost guardrail playbook.
- Two‑step review: Subject‑matter owner plus an editor who wasn’t involved in drafting.
- Policy alignment: Keep a one‑pager summarising CAP Code labelling rules and Google’s AI and spam guidance, and link to it in each brief. asa.org.uk
What “good” looks like in 30 days
- Week 1: Agree your 9‑test checklist, define voice pillars, and run two pilot pieces end‑to‑end.
- Week 2: Scale to three pieces. Introduce a “How we created this” note where relevant. Build a light briefing template.
- Week 3: Publish a pillar page plus two supporting articles, each with a single, clear CTA. For inspiration on no‑click formats that still convert, see our playbook Zero‑click content that converts.
- Week 4: Review KPIs, prune or improve anything scoring under 8, and identify one topic to repurpose into email and social. For trust‑building content ideas, see our earlier guidance Trust‑first AI content.
Related playbooks for your next step
- Want acceptance criteria for on‑brand AI outputs? See Ship AI that behaves.
- Need to control spend while you scale production? Use the 90‑day AI cost guardrail.
FAQs (for directors and trustees)
Do we need to label AI‑assisted content?
Google Search doesn’t require an AI label, but Google encourages giving readers context on how content was created. If money changed hands (sponsorship, advertorial or affiliate links), CAP Code labelling is expected. developers.google.com
Will AI content hurt our rankings?
No—poor quality content will. Google rewards original, people‑first material and considers abuse at scale to be spam regardless of the tool used. developers.google.com
How often should we refresh pages?
Set a 14‑day check for new articles to amend facts, improve sources and tighten CTAs; then move to quarterly reviews for evergreen pages.