GOV.UK AI Guardrails – Adapted for Nursing

Source

This page adapts content from the UK Government AI Engineering Lab, an open-source repository maintained by GDS/DSIT that defines how public sector teams should safely adopt AI tools.

🔗 Source: github.com/govuk-digital-backbone/aiengineeringlab – Open Government Licence v3.0

Why This Matters for Nursing

The GOV.UK AI Engineering Lab provides the official governance framework for AI adoption across UK public services, including the NHS. As nurses increasingly use AI in education, documentation, and clinical decision support, these guardrails provide a structured, government-endorsed approach to doing so safely.

This page translates the government's technical guardrails into nursing-specific language and scenarios, connecting them to the AI Nursing Constitution and NMC professional standards.


1. Data Handling – What You Must Never Share with AI

GOV.UK Reference: G-DH-01, G-DH-02

The government classifies data and prohibits certain types from being shared with any AI tool. For nursing, this means:

| ❌ Never Share with AI | Nursing Examples |
|---|---|
| Patient-identifiable data (PII) | Names, NHS numbers, dates of birth, addresses |
| Health information | Medical records, diagnoses, test results, care plans with patient details |
| Authentication credentials | NHS login passwords, smartcard PINs, API keys |
| Biometric data | Patient photographs with identifiable features |
| Case details | Safeguarding referrals, incident reports with names |
Clinical Scenario

Wrong: Pasting a patient's discharge summary into ChatGPT and asking it to "write a care plan."

Right: Writing a de-identified clinical scenario (no names, no NHS numbers, no dates of birth) and asking for care plan structure guidance.

Prompt Hygiene for Nurses

Before entering any clinical text into an AI tool:

  1. Remove all identifiers – names, NHS numbers, DOB, addresses
  2. Generalise the scenario – "A 72-year-old female", not "Mrs Smith"
  3. Never paste directly from clinical systems – always rewrite in your own words
  4. Start a new chat when switching between patient scenarios
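As a minimal illustration of step 1, a script can flag obvious identifiers in a draft prompt before it leaves your machine. The patterns below are hypothetical examples, not an approved NHS de-identification tool: a regex sweep catches only the most obvious identifiers and never replaces rewriting the scenario in your own words.

```python
import re

# Hypothetical example patterns only: a regex sweep can flag obvious
# identifiers, but it is not an approved de-identification tool and does
# not replace rewriting the scenario in your own words.
PATTERNS = {
    "NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "title and surname": re.compile(r"\b(?:Mr|Mrs|Ms|Miss|Dr)\.?\s+[A-Z][a-z]+\b"),
}

def flag_identifiers(text: str) -> list:
    """Return warnings for obvious identifiers found in a draft prompt."""
    return [
        f"possible {label}: {match.group()!r}"
        for label, pattern in PATTERNS.items()
        for match in pattern.finditer(text)
    ]

warnings = flag_identifiers(
    "Mrs Smith, NHS number 943 476 5919, DOB 12/03/1952, reports dizziness."
)
```

A prompt that triggers any warning should be rewritten and generalised before it is sent to an AI tool; an empty result does not mean the text is safe to share.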

2. Usage Boundaries – Where AI Should and Shouldn't Be Used

GOV.UK Reference: G-UB-01, G-UB-02

✅ Appropriate Uses in Nursing Education

| Use Case | Notes |
|---|---|
| Learning about clinical conditions | Use AI to explain pathophysiology and pharmacology |
| Generating practice scenarios | Create de-identified case studies for teaching |
| Exploring care planning frameworks | Ask AI about ADPIE, SBAR, ABCDE structures |
| Drafting reflective writing | Use as a starting point, then personalise |
| Understanding research papers | Summarise evidence-based practice articles |
| Exploring NMC proficiency standards | Query which competencies relate to a topic |

❌ Prohibited Uses

| Use Case | Rationale |
|---|---|
| Making autonomous clinical decisions | AI cannot replace registered nurse judgment |
| Submitting AI-generated work as your own | Academic integrity / NMC professional standards |
| Sharing patient data with AI tools | Data protection, Caldicott principles |
| Using AI for safeguarding decisions | Requires qualified human professional assessment |
| Bypassing clinical protocols | AI must support, not circumvent, safety processes |

3. Ethical Use – Human Judgment Remains Essential

GOV.UK Reference: G-ET-01, G-ET-02, G-ET-03

The government framework is clear: "AI may assist, humans must decide." This directly mirrors the AI Nursing Constitution's core principle.

Clinical Safety Review Triggers

The government requires ethical review for healthcare and clinical systems (G-ET-01). In nursing education, this means:

| System Type | Review Required | Who Reviews |
|---|---|---|
| AI tools used in clinical placements | Yes – clinical safety assessment | Practice supervisor / assessor |
| AI-assisted care planning | Yes – nursing judgment verification | Registered Nurse |
| AI-generated patient education materials | Yes – accuracy review | Clinical educator |
| AI tools processing student performance data | Yes – GDPR assessment | Data Protection Officer |
| AI in simulation/OSCE scenarios | Yes – educational validity check | Programme lead |

The "AI Didn't Write This" Rule

The government requires transparency about AI use in public services (G-ET-02). For nursing students:

  • Always declare when AI has been used in assessed work
  • Understand what the AI generated β€” you must be able to explain it
  • Verify clinical accuracy against authoritative sources (BNF, NICE, NMC)
  • Take professional responsibility for any AI-assisted output you submit or act on
NMC Alignment

The NMC Code (2018) states: "You are personally accountable for your actions and omissions in your practice." This applies equally to AI-assisted work. If you use AI to help write a care plan and the AI hallucinates an incorrect drug dose, you are accountable, not the AI.


4. Agentic AI – When AI Acts Autonomously

GOV.UK Reference: G-AG-01 to G-AG-08

The government classifies AI tools by their autonomy level:

| Level | Description | Nursing Example | Control Required |
|---|---|---|---|
| L1 – Suggestive | Shows suggestions you accept/reject | Autocomplete in clinical documentation | Minimal |
| L2 – Assistive | Generates content you review | AI drafts SBAR handover, you review | Review before use |
| L3 – Collaborative | Multi-step with checkpoints | AI generates a full lesson plan | Checkpoint reviews |
| L4 – Autonomous | Extended operation | AI monitors patient deterioration alerts | Continuous oversight |
| L5 – Fully Autonomous | Hours/days without human input | Not appropriate for clinical settings | N/A |
Caution

For nursing and healthcare contexts, L4 and L5 autonomy levels require explicit clinical governance approval before deployment. Most nursing AI use cases should operate at L1–L3 with human-in-the-loop review.
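The level table above can be encoded as a simple deployment gate. This is a sketch with hypothetical names, showing how a procurement or deployment checklist might enforce the governance-approval rule for L4 and L5 tools:

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Hypothetical encoding of the L1-L5 autonomy levels described above."""
    L1_SUGGESTIVE = 1
    L2_ASSISTIVE = 2
    L3_COLLABORATIVE = 3
    L4_AUTONOMOUS = 4
    L5_FULLY_AUTONOMOUS = 5

def needs_governance_approval(level: AutonomyLevel) -> bool:
    # L4 and L5 require explicit clinical governance approval
    # before deployment; L1-L3 operate with human-in-the-loop review.
    return level.value >= AutonomyLevel.L4_AUTONOMOUS.value
```

Encoding the threshold once, rather than re-deciding it per tool, keeps the rule auditable and consistent across a department.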

Kill Switch Principle

The government mandates that autonomous AI must always have an accessible kill switch – a way to immediately stop the AI. In nursing terms:

  • You must always be able to override AI recommendations
  • AI must never lock you out of manual clinical decision-making
  • If an AI system fails, you must be able to deliver care without it

5. Output Validation – Never Trust, Always Verify

GOV.UK Reference: G-OV-01 to G-OV-04

The VERIFY Framework for Nursing AI Output

| Step | Action | Nursing Application |
|---|---|---|
| V – Verify facts | Check clinical accuracy | Cross-reference with BNF, NICE guidelines |
| E – Examine bias | Check for demographic bias | Does the output account for diverse skin tones, ages, cultures? |
| R – Review logic | Trace the reasoning | Does the care plan follow ADPIE? Is the triage category logical? |
| I – Identify hallucinations | Spot invented information | Does that drug dose exist? Is that NICE guideline real? |
| F – Flag uncertainty | Note where you're unsure | Escalate to a supervisor or mentor |
| Y – Your judgment | Apply professional reasoning | Does this align with your clinical experience? |

Hallucination Risk

AI models can confidently state incorrect information. In one study, an LLM hallucinated a non-existent NICE guideline number when asked about diabetes management. Always verify guideline references against the original source.
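The six VERIFY steps work as a hard gate: an output is usable only when every step has been explicitly confirmed by the nurse. A sketch of that logic (the step list mirrors the table above; the function name is hypothetical):

```python
# The six VERIFY steps from the table above
VERIFY_STEPS = [
    "Verify facts",
    "Examine bias",
    "Review logic",
    "Identify hallucinations",
    "Flag uncertainty",
    "Your judgment",
]

def output_is_usable(confirmed: dict) -> bool:
    """True only if every VERIFY step was explicitly confirmed.
    A missing or False entry fails the whole check: partial
    verification counts as no verification."""
    return all(confirmed.get(step, False) for step in VERIFY_STEPS)
```

The deliberate choice is `all(...)` with a default of `False`: skipping a step silently is treated the same as failing it.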


6. Self-Assessment Checklist

Before using an AI tool in your nursing education or practice, work through this checklist, adapted from the GOV.UK AI Engineering Lab readiness assessment:

✅ Pre-Use Checklist

  • I understand what data I can and cannot share with this AI tool
  • I have removed all patient-identifiable information from my prompt
  • I know the AI tool's limitations and potential for error
  • I have an authoritative source to verify the AI's output against
  • I understand my professional accountability for any AI-assisted output
  • I am using the AI to support (not replace) my learning and clinical judgment
  • I will declare AI use where required by my institution's academic integrity policy
  • I have a plan for what to do if the AI gives incorrect or harmful output

✅ Post-Use Checklist

  • I have verified the clinical accuracy of the AI's output
  • I have checked for potential bias (age, gender, ethnicity, skin tone)
  • I can explain the AI's output in my own words
  • I have documented my AI use where required
  • I would be comfortable defending this output to a practice assessor

How This Connects to Our AI Nursing Constitution

The GOV.UK AI Engineering Lab guardrails and our AI Nursing Constitution share the same foundational principles:

| GOV.UK Guardrail | AI Nursing Constitution Equivalent |
|---|---|
| G-ET-03: Human judgment requirements | The Nurse Decides – AI advises, the nurse acts |
| G-DH-02: Prohibited data types | Patient Privacy – never share identifiable data |
| G-ET-02: Transparency and documentation | Transparency – always declare AI use |
| G-AG-07: Meaningful human review | Professional Accountability – understand what the AI generated |
| G-UB-01: Prohibited use cases | Safety First – AI must never make autonomous clinical decisions |
| G-OV-01: Factual verification | Evidence-Based – verify against authoritative sources |

The AI Nursing Constitution is, in effect, a nursing-specific extension of the GOV.UK guardrails – exactly the type of "department-specific variant" the government framework encourages organisations to create.


Further Reading