[Image: a product manager choosing between a chat interface, a form and a simple action button]

Stop Forcing Chat: When AI Should Be a Button, Not a Bot

Over the past year, “let’s add a chatbot” has become the default answer to every AI question. Yet most customers don’t choose chat first, and when they do, many still leave frustrated. A 2023 Gartner survey found only 8% of customers used a chatbot in their most recent service interaction, and just a quarter of those would use the same bot again. The key driver of adoption was whether the bot actually moved the issue forward — not the novelty of conversation. gartner.com

For UK SMEs and charities, this matters. You don’t have the time or budget for experiments that confuse users or generate avoidable support tickets. The GOV.UK Service Manual is blunt: only use AI where there is a proven user need, be clear about how AI affects the information users receive, and always offer a route to a human. gov.uk

This article gives you a simple, repeatable way to decide whether a task should be delivered as chat, a structured form, or a single action button — plus metrics, procurement questions and rollout tactics you can put to work this month.

The decision: chat, form, or button?

Use this quick decision tree in backlog grooming or during discovery (a short code sketch follows the list):

  1. Is the task a frequent, well-defined action with a clear outcome? If yes, make it a button (with a short confirmation step if risky). Examples: “Summarise this document”, “Create invoice from draft”, “Translate to UK English”. Buttons reduce cognitive load and set crisp expectations.
  2. Does the task need a few specific inputs to get a good answer? If yes, use a form with helpful defaults and validation. Examples: “Draft complaint response” with fields for tone, deadline and key facts; “Generate job ad” with role, location and salary range. GOV.UK’s writing and error-message guidance shows how small wording choices improve completion and reduce errors. gov.uk
  3. Is the task open-ended, ambiguous or multi-turn by nature? If yes, use chat — but set expectations, show examples, and provide escape hatches to a person or a different channel. The UK guidance on chatbots emphasises clear scope and easy human escalation. gov.uk
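
To make the triage concrete, here is a minimal sketch of the same logic in TypeScript. The field names (wellDefinedOutcome, requiredInputs, openEnded) are our own shorthand for the three questions above, not part of any product’s API.

```typescript
// Shorthand for the three triage questions (names are illustrative only).
type TaskProfile = {
  wellDefinedOutcome: boolean; // frequent action with a clear, predictable result
  requiredInputs: number;      // how many specific inputs a good answer needs
  openEnded: boolean;          // ambiguous or naturally multi-turn
};

type InterfaceChoice = "button" | "form" | "chat";

function chooseInterface(task: TaskProfile): InterfaceChoice {
  if (task.wellDefinedOutcome && task.requiredInputs === 0) return "button";
  if (!task.openEnded && task.requiredInputs > 0) return "form";
  return "chat"; // reserve chat for genuinely open-ended work
}

// "Summarise this document": clear outcome, no extra inputs needed
console.log(chooseInterface({ wellDefinedOutcome: true, requiredInputs: 0, openEnded: false })); // "button"
```

In practice you would score tasks from support-ticket data rather than hard-coding booleans, but the ordering of the checks is the point: button first, form second, chat last.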

Why “chat everywhere” backfires

  • Expectation drift: People assume chat is flexible and human-like. If your bot can’t reliably handle deviations or complex intents, experience deteriorates fast. Microsoft’s Human–AI Interaction guidelines start with “make clear what the system can do” and “how well it can do it”. Build this into the UI, not just docs. microsoft.com
  • Hidden inputs: With chat, users can’t see which parameters matter. Forms and buttons externalise those parameters, making outcomes more repeatable and improving quality control.
  • Search and query tasks are often faster with structured controls: Decades of UX research show autocomplete, filters and suggested queries help people form better searches faster. For example, copying the highlighted suggestion into the search field so users can edit it is a proven best practice that many sites still miss. baymard.com
  • Accessibility and inclusion: Not everyone is comfortable writing prompts. Clear labels, examples and constraints reduce effort and errors, aligning with GOV.UK’s design guidance. gov.uk

If you do choose chat, follow human-centred patterns from the People + AI Guidebook (PAIR): set expectations, show confidence appropriately, provide controls, and design graceful failure. pair.withgoogle.com

A 15‑minute triage for any AI flow

Run this checklist on an existing bot or AI feature before your next release. Fixes at this level typically cut avoidable support calls within a week and lift completion rates, with no changes to the model itself.

1) Scope and expectation

  • Does the entry screen state, in plain English, what the AI can and can’t do?
  • Is example input/output shown, with a link to “what works best” tips?
  • Is the typical quality range or confidence conveyed where it helps, rather than as a blanket percentage?

These map to Microsoft’s first two HAI guidelines and are often the biggest wins; a small sketch of how to hold this content in one place follows. microsoft.com
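
One lightweight way to satisfy the first two checks is to treat the “what this can do” statement and examples as content the team owns, rather than copy buried in the front end. A minimal sketch, using a hypothetical CapabilityCard shape of our own:

```typescript
// Hypothetical content model for the entry screen; field names are illustrative.
type CapabilityCard = {
  canDo: string[];       // plain-English statements of scope
  cannotDo: string[];    // explicit exclusions to prevent expectation drift
  examples: { input: string; output: string }[];
  tipsUrl?: string;      // link to "what works best" guidance
};

const complaintDrafting: CapabilityCard = {
  canDo: ["Draft a reply to a customer complaint in your house style"],
  cannotDo: ["Decide whether compensation is due", "Send anything on your behalf"],
  examples: [
    {
      input: "Late delivery, customer wants refund, order #1042",
      output: "A polite draft acknowledging the delay and setting out next steps",
    },
  ],
  tipsUrl: "/help/what-works-best",
};
```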

2) Inputs and shortcuts

  • Are the important parameters visible as fields, toggles or presets?
  • Are there “fast paths” for common jobs, effectively buttons that pre-fill a well-structured prompt for the user? (See the sketch after this list.)
  • Is there good autocomplete and typo‑tolerance where users search or select? baymard.com
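
A “fast path” can be as simple as a preset that turns a few visible fields into a well-structured prompt, so the user clicks a button instead of writing one. A minimal sketch, with an illustrative template of our own:

```typescript
// A preset gathers visible parameters and builds the prompt on the user's behalf.
type JobAdFields = {
  role: string;
  location: string;
  salaryRange: string;
  tone?: "formal" | "friendly";
};

function buildJobAdPrompt(f: JobAdFields): string {
  return [
    `Write a UK job advert for a ${f.role} based in ${f.location}.`,
    `Salary range: ${f.salaryRange}.`,
    `Tone: ${f.tone ?? "friendly"}. Keep it under 300 words and avoid jargon.`,
  ].join("\n");
}

// The "Generate job ad" button calls this with validated form values.
console.log(buildJobAdPrompt({
  role: "Office Manager",
  location: "Leeds",
  salaryRange: "£28,000 to £32,000",
}));
```

The same template can back both the button and a pre-filled chat message, so quality control lives in one place.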

3) Recovery and escalation

  • After two unhelpful responses in a row, does the UI offer an alternative path (form, knowledge article, or human)? (A sketch of this rule follows the list.)
  • Is a handover to a person prominent and available out of hours via email or call-back? GOV.UK explicitly recommends easy alternatives to avoid looped conversations. gov.uk
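
The two-strikes rule is easy to enforce in code: track whether each turn helped, and surface alternatives once two in a row have not. A minimal sketch, assuming the UI records a simple “was this helpful?” signal per turn:

```typescript
type Turn = { helpful: boolean };

// Alternatives the UI can offer instead of another chat turn.
type EscalationOption = "form" | "knowledge-article" | "human";

function escalationOptions(turns: Turn[]): EscalationOption[] {
  const lastTwo = turns.slice(-2);
  const stuck = lastTwo.length === 2 && lastTwo.every((t) => !t.helpful);
  // Out of hours, "human" should resolve to email or call-back rather than live chat.
  return stuck ? ["form", "knowledge-article", "human"] : [];
}

console.log(escalationOptions([{ helpful: true }, { helpful: false }, { helpful: false }]));
// -> ["form", "knowledge-article", "human"]
```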

4) Microcopy and error messages

  • Are instructions short, direct and non-technical? gov.uk
  • Do error messages quote the field or step users are on and tell them how to fix it? design-system.service.gov.uk

5) Controls over time

  • Can users see and change what’s remembered about their session?
  • Are there global controls such as “turn off personalisation” and “reset context”? These align with HAI guidance on learning and user control; a small sketch follows this list. microsoft.com
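
These controls map naturally onto a small piece of session state that the user can inspect and reset. A minimal sketch, with illustrative names:

```typescript
// Illustrative session state the user can see and change.
type SessionMemory = {
  personalisationEnabled: boolean;
  rememberedFacts: string[]; // shown to the user verbatim
};

function resetContext(memory: SessionMemory): SessionMemory {
  return { ...memory, rememberedFacts: [] };
}

function setPersonalisation(memory: SessionMemory, enabled: boolean): SessionMemory {
  return { ...memory, personalisationEnabled: enabled };
}

let memory: SessionMemory = { personalisationEnabled: true, rememberedFacts: ["Prefers UK English"] };
memory = resetContext(setPersonalisation(memory, false)); // "turn off personalisation" + "reset context"
```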

KPIs that fit the interface you chose

Measure success by task, not channel. Track a small set of KPIs per flow and report them weekly. Our AI Quality Scoreboard article explains how to set thresholds and make a go/no‑go decision. Here’s a quick pack:

Interface | Primary KPI | Good (first month) | Notes
Button | Task Completion Rate | ≥ 90% | Shows the action is predictable and helpful.
Form | First‑Attempt Success | ≥ 75% | Low rework means your inputs are well chosen and explained.
Chat | Resolution with Satisfaction | ≥ 55% in month 1 | Count only sessions where users confirm the outcome or do not re‑contact within 72 hours.
Any | Escalation Success | ≥ 80% | When users switch to a human or another path, do they get a resolution?
Any | Time to Outcome | −20% vs baseline | Must not degrade accessibility or accuracy.

For chat specifically, beware of “containment” as a vanity metric; your best indicator is whether the bot progressed the issue. That’s also what drove future use in Gartner’s survey. gartner.com
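
The 72-hour rule from the table is straightforward to compute from session logs. A minimal sketch, assuming each session records whether the user confirmed the outcome and when (if ever) they re-contacted:

```typescript
type ChatSession = {
  endedAt: Date;
  userConfirmedOutcome: boolean;
  recontactedAt?: Date; // next contact on the same issue, if any
};

const HOURS_72 = 72 * 60 * 60 * 1000;

// "Resolution with Satisfaction": confirmed outcome, or no re-contact within 72 hours.
function resolutionWithSatisfaction(sessions: ChatSession[]): number {
  const resolved = sessions.filter(
    (s) =>
      s.userConfirmedOutcome ||
      s.recontactedAt === undefined ||
      s.recontactedAt.getTime() - s.endedAt.getTime() > HOURS_72
  );
  return sessions.length === 0 ? 0 : resolved.length / sessions.length;
}
```

A purely contained session (one that never reached a person) only counts here if the user confirmed the outcome or stayed away for 72 hours, which is exactly the distinction between containment and progress.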

Cost and operational guardrails

Whichever interface you pick, instrument the cost per successful outcome and set weekly caps. For a pragmatic starting point (sketched in code after the list):

  • Button: aim for a clear ceiling per action (for example, generating a document or summary). Batch long tasks and cache repeat outputs. See also our unit‑economics primer: turn tokens into pounds.
  • Form: validate inputs client‑side where possible and prevent obviously unanswerable requests from reaching the model.
  • Chat: cap session length, offer “convert to form” after a few turns, and auto‑suggest frequent tasks as buttons mid‑chat.
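
Cost per successful outcome and a weekly cap need nothing more than a running tally per flow. A minimal sketch, with illustrative numbers rather than recommended thresholds:

```typescript
type FlowCosts = {
  weeklySpend: number;        // £ spent on the model this week
  weeklyCap: number;          // hard ceiling before the flow degrades gracefully
  successfulOutcomes: number; // completions that met the flow's KPI
};

function costPerSuccessfulOutcome(f: FlowCosts): number | null {
  return f.successfulOutcomes === 0 ? null : f.weeklySpend / f.successfulOutcomes;
}

function overCap(f: FlowCosts): boolean {
  return f.weeklySpend >= f.weeklyCap;
}

// For chat: suggest switching to a form once a session runs long.
function shouldOfferFormInstead(turnCount: number, maxTurns = 6): boolean {
  return turnCount >= maxTurns;
}

const invoiceButton: FlowCosts = { weeklySpend: 12.4, weeklyCap: 25, successfulOutcomes: 310 };
console.log(costPerSuccessfulOutcome(invoiceButton)); // roughly £0.04 per successful action
console.log(overCap(invoiceButton)); // false
```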

Roll out in small slices behind flags so you can switch off or revert without drama — see our feature‑flags playbook.
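
The flag itself can stay very simple: a per-flow switch consulted at render time, so rollback is a configuration change rather than a deploy. A minimal sketch (flag names are illustrative):

```typescript
// Illustrative per-flow flags; in practice these come from your flag service or config.
const flags: Record<string, boolean> = {
  "ai-summary-button": true,
  "ai-complaint-form": false, // still limited to the pilot group
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false; // default off: unknown flags never expose new AI flows
}

if (isEnabled("ai-summary-button")) {
  console.log("Render the AI summary button");
} else {
  console.log("Fall back to the existing manual path");
}
```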

Procurement questions that separate hype from help

Before you buy a chatbot or AI authoring tool, ask vendors:

  1. Channel fit: Which tasks do you recommend as buttons or forms instead of chat, and how does your product support those patterns out‑of‑the‑box?
  2. Expectation setting: How do you implement “make clear what the system can do” and “how well” within the UI? Ask for live examples, not slides. microsoft.com
  3. Alternatives and escalation: Show us how a user exits loops and reaches a human or another channel within two clicks. This aligns with UK guidance. gov.uk
  4. Design assets: Do you provide PAIR‑style patterns and worksheets, or similar materials we can adapt? pair.withgoogle.com
  5. Instrumentation: Which KPIs are first‑class in your dashboard (e.g., Resolution with Satisfaction, Time to Outcome)? How do you attribute outcomes across channels?
  6. Build vs buy: How does your offer align with the Technology Code of Practice principles on purchasing strategy and integration? UK public bodies — and many SMEs selling into them — are expected to consider these. gov.uk

Run a short UAT before go‑live to check the UX basics. Our 5‑day UAT plan keeps it lightweight and evidence‑based.

Playbook: switch the default from “chat” to “choose”

  1. Map your top 10 tasks by volume and value (support tickets, time taken, or revenue impact).
  2. Classify each task as Button, Form or Chat using the decision tree above. If uncertain, prototype two versions and run a 1‑week A/B with a hard stop.
  3. Design for expectation: add a short “What this can do” statement and 2–3 examples before people start. HAI guidelines make this a non‑negotiable. microsoft.com
  4. Add visible parameters for quality control: tone, length, audience, policy links. If you keep chat, surface these as chips or toggles.
  5. Wire in alternatives: visible “Try the form” or “Talk to a person” links from turn two onwards, per GOV.UK guidance. gov.uk
  6. Ship behind flags and progressively widen access; if KPIs don’t hit your guardrails after a fortnight, roll back and fix. See our go‑live gate for a one‑page checklist.
  7. Review weekly: 10 sample sessions, the KPI pack, and 3 qualitative findings. Make one change that reduces time to outcome next week.

When chat is the right answer

There are genuine sweet spots for conversational UX:

  • Exploratory tasks where users don’t know the exact question yet (for example, “Turn these notes into a clear plan”). PAIR’s patterns for mental models and feedback help here. pair.withgoogle.com
  • Expert guidance when the steps vary by context and the AI can reason across documents or systems.
  • Assisted search with good suggestions and editable queries, ideally paired with filters and previews. Baymard’s research shows small details in typeahead make a large difference to success. baymard.com

But even in these cases, offer a “convert to form” option and clear routes to a human if the conversation stalls. gov.uk

Book a 30‑minute call, or email team@youraiconsultant.london.

Further reading

  • Design patterns for human‑centred AI: Microsoft’s 18 HAI guidelines. microsoft.com
  • Google’s People + AI Guidebook: patterns, worksheets and case studies. pair.withgoogle.com
  • GOV.UK guidance on using AI in services and using chatbots/webchat tools. gov.uk
  • Autocomplete UX research that improves search‑like experiences. baymard.com