
Policies should reduce risk without slowing people down. This copy-and-adapt pack gives UK charities and SMEs four lightweight policies—Acceptable Use, Human-in-the-Loop (HITL), Vendor Risk, and Publishing Rules—plus simple checklists and a 90-day rollout plan. No legalese, no PDFs to download—just paste, adapt, and ship.
What “good” looks like
- Clarity: staff can tell in 10 seconds what’s allowed.
- Auditability: prompts/outputs are reviewable; risky work is escalated.
- Proportionality: tighter controls only for higher-risk tasks or data.
- Maintainability: a one-pager per policy; versioned; owners named.
How to use the templates
- Paste into your policy wiki or intranet.
- Change the bracketed bits [like this].
- Assign owners & review dates.
- Train once; remind quarterly.
1) Acceptable Use Policy (AUP) — one page
Title: AI Acceptable Use Policy (AUP)
Owner: [Role, e.g., COO] | Version: 1.0 | Review: Quarterly
Purpose
Give simple rules for using AI tools (Copilots, chatbots, internal apps) safely.
Scope
All staff, volunteers and contractors using AI for [Organisation Name].
Allowed, low risk
• Brainstorming, meeting notes, summaries, rewriting for clarity.
• Drafting generic emails, web copy, SOPs (no personal/special-category data).
• Data-free code examples, formulas, regex, or documentation.
Restricted (approval required)
• Any personal data beyond work contact info.
• Special-category data, safeguarding, health, financial or legal advice.
• Decisions that materially affect people or funding.
Prohibited
• Uploading client records, identifiers, or confidential datasets to public tools.
• Bypassing security controls, storing API keys in prompts, or sharing outputs externally without review.
• Synthetic impersonation or deepfakes of real people.
Operational rules
• Use only approved tools: [list your tools].
• Sign in via SSO; keep data retention to [X days] where configurable.
• Store prompts/outputs in [approved system], with links to sources when used (a logging sketch follows this list).
• Flag risky outputs to [security/contact] and do not publish without HITL review.
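If your [approved system] is a wiki or shared drive, a structured log entry keeps prompts and outputs auditable. Below is a minimal sketch in Python; the field names and the `ai_prompt_log.jsonl` file are illustrative assumptions, not a prescribed format:

```python
import json
from datetime import datetime, timezone

def log_ai_use(prompt, output, tool, author, sources,
               logfile="ai_prompt_log.jsonl"):
    """Append one auditable record per AI interaction (JSON Lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,            # must be on the approved tool list
        "author": author,
        "prompt": prompt,
        "output": output,
        "sources": sources,      # links to originals, per the AUP
        "hitl_reviewed": False,  # flipped by the reviewer at sign-off
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use(
    prompt="Summarise our Q3 volunteer survey themes",
    output="Three themes emerged: ...",
    tool="[Approved tool]",
    author="a.jones",
    sources=["https://intranet.example.org/q3-survey"],
)
```

A flat JSON Lines file is deliberately boring: it appends safely, greps easily, and imports into a spreadsheet when an auditor asks.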
Questions
Ask [policy@org.org] or post in the [#ai-governance] channel.
2) Human-in-the-Loop (HITL) — checks before you ship
Title: Human-in-the-Loop (HITL) Policy
Owner: [Role] | Version: 1.0 | Review: Quarterly
When HITL is REQUIRED
• Anything public-facing (website, press, policy papers).
• Content citing facts, numbers, or legal/medical claims.
• Outputs that may affect a person’s access to services or funding.
Reviewer requirements
• Reviewer must be subject-matter competent and not the original author.
• Reviewer confirms: factual accuracy, source attribution, tone, accessibility.
Checklist (tick all before approval; a machine-checkable sketch follows the list)
[ ] Sources are linked or archived; no blind claims.
[ ] Sensitive data removed or minimised.
[ ] Claims and reasoning match the cited originals; no hallucinated citations.
[ ] Accessibility: plain English, inclusive language, alt text where applicable.
[ ] Outcome logged in [location] with version/date.
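If approvals run through a script or ticket template, the checklist can be enforced in code. A minimal sketch, assuming a dict keyed by the five checks above; `approve_for_publishing` and the check names are hypothetical:

```python
REQUIRED_CHECKS = [
    "sources_linked",
    "sensitive_data_minimised",
    "no_hallucinated_citations",
    "accessibility_checked",
    "outcome_logged",
]

def approve_for_publishing(checks: dict[str, bool],
                           reviewer: str, author: str) -> bool:
    """Approve only if every check is ticked and reviewer != author."""
    if reviewer == author:
        raise ValueError("Reviewer must not be the original author")
    missing = [c for c in REQUIRED_CHECKS if not checks.get(c, False)]
    if missing:
        print(f"Blocked; unticked checks: {', '.join(missing)}")
        return False
    return True

approve_for_publishing({c: True for c in REQUIRED_CHECKS},
                       reviewer="b.khan", author="a.jones")  # True
```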
Escalation
If safety, safeguarding or legal concerns arise, escalate to [Role/Committee].
3) Vendor Risk Mini-Policy — quick diligence
Title: AI Vendor Risk & Procurement (Lite)
Owner: [Procurement/DPO] | Version: 1.0 | Review: 6-monthly
Minimum controls for approval
• Data location & retention: documented; customer data not used for training (or opt-out is enforced).
• Access: SSO/SCIM available; admin logs exportable.
• Security: SOC 2/ISO 27001 (or equivalent) or public security whitepaper.
• Regionality: processing in [UK/EEA/Designated regions] or Standard Contractual Clauses (SCCs) in place.
• Exit: data export & deletion within [30] days of request.
Diligence questions (record answers)
1) Where is data processed and for how long?
2) Sub-processors and their regions?
3) Training usage of customer data, toggles, and defaults?
4) What logs/telemetry can we export?
5) Private endpoint or regional options available?
6) Data deletion SLA & process?
Approved vendor list
Keep a current list in [location], with owner and review date (a machine-readable sketch follows).
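If the list lives in a repo rather than a spreadsheet, each entry can hold the diligence answers alongside the owner and review date, and a few lines of code can flag overdue reviews. A sketch, assuming a hypothetical record layout:

```python
from datetime import date

# Hypothetical vendor record capturing the diligence answers above.
VENDORS = [
    {
        "name": "ExampleAI Ltd",
        "owner": "dpo@org.org",
        "review_due": "2026-01-31",
        "data_region": "UK",
        "retention_days": 30,
        "trains_on_customer_data": False,  # or opt-out enforced
        "sso": True,
        "certifications": ["ISO 27001"],
        "deletion_sla_days": 30,
    },
]

def overdue_reviews(vendors, today=None):
    """Names of vendors whose scheduled review date has passed."""
    today = today or date.today()
    return [v["name"] for v in vendors
            if date.fromisoformat(v["review_due"]) < today]

print(overdue_reviews(VENDORS))
```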
4) Publishing Rules — what we put into the world
Title: AI Publishing & Attribution Rules
Owner: [Comms/Policy Lead] | Version: 1.0 | Review: Quarterly
Attribution
• If AI materially contributed to drafting or analysis, note “assisted by AI” in the acknowledgements or metadata.
• Cite primary sources for facts and data; link to originals.
Copyright
• Do not reproduce proprietary content unless licensed.
• Respect image rights; use licensed or created assets; keep proof of licence.
No impersonation
• No synthetic media representing a real person without written consent.
Record-keeping
• Store final text, sources, prompt templates and approval records in [location] for [X years] (see the manifest sketch below).
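One simple way to keep those records together is a small manifest stored next to each published asset. A sketch with hypothetical file names and fields:

```python
# Hypothetical publication manifest kept alongside the final asset.
manifest = {
    "title": "Winter fundraising appeal",
    "final_text": "appeal_v3_final.docx",
    "sources": ["https://example.org/source-stats"],
    "prompt_template": "prompts/fundraising_appeal.md",
    "ai_assisted": True,              # drives the attribution note
    "approved_by": "b.khan",
    "approved_on": "2025-11-02",
    "retain_until": "2032-11-02",     # [X years] from approval
}
```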
Roles & responsibilities (who does what)
| Role | Core duties |
|---|---|
| Policy Owner | Maintains the policy; runs quarterly reviews; tracks incidents. |
| DPO/Compliance | Assures DPIA where needed; records lawful basis; monitors vendor risk. |
| Team Leads | Ensure staff training; assign reviewers; maintain prompt libraries. |
| Reviewers | Run HITL checks; sign off high-risk outputs. |
| All Staff | Follow AUP; report issues; use approved tools only. |
Risk tiers (apply proportionate controls; a decision-function sketch follows the list)
- Low: internal drafts, brainstorming → AUP only.
- Medium: external comms, public website → HITL + Publishing Rules.
- High: policy positions, safeguarding, special-category data → HITL + DPO review + tighter vendor controls.
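The tiers can also be expressed as a small decision function, so an intake form or ticket template applies exactly the same logic as the policy text. A minimal sketch; the flag names are illustrative:

```python
def risk_tier(public_facing: bool, special_category: bool,
              affects_people: bool) -> tuple[str, list[str]]:
    """Map task attributes to a tier and its required controls."""
    if special_category or affects_people:
        return "high", ["HITL", "DPO review", "tighter vendor controls"]
    if public_facing:
        return "medium", ["HITL", "Publishing Rules"]
    return "low", ["AUP"]

print(risk_tier(public_facing=True, special_category=False,
                affects_people=False))
# ('medium', ['HITL', 'Publishing Rules'])
```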
90-day rollout plan
- Days 1–14: pick policy owners; paste these templates into your wiki; set SSO and data-retention defaults; publish approved tool list.
- Days 15–45: pilot HITL on two high-impact workflows (e.g., policy briefs and grant copy). Start a prompt library in your repo/wiki.
- Days 46–90: add vendor-risk answers for each AI tool; run a one-hour red-team test; record metrics: time saved, error rate, rework (a logging sketch follows this plan).
Outcome: clear guardrails, auditable outputs, and faster, safer content shipping.
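For the Days 46–90 metrics, a flat CSV is enough to spot trends. A minimal sketch, with hypothetical column names:

```python
import csv

# Hypothetical metrics log: date, workflow, minutes_saved,
# errors_found, items_reworked.
with open("ai_metrics.csv", "a", newline="", encoding="utf-8") as f:
    csv.writer(f).writerow(["2025-12-01", "grant copy", 45, 2, 1])
```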
FAQs (quick answers for staff)
Can I paste client or beneficiary data into AI tools?
Only if explicitly approved, processed in an environment with the right controls (e.g., a private endpoint or vetted vendor), and with a documented lawful basis.
Do I have to cite sources?
Yes—if an output contains facts, numbers, or quotes, cite and link them. If no reliable source exists, do not publish.
What if an AI output looks risky or biased?
Stop and escalate to your reviewer or policy owner. Save the prompt/output and note the issue. Do not publish.
Need a hand?
I help charities and SMEs roll out AI safely with policies that people actually use—plus the technical setup to back them.