# How AI Thinks – A Teaching Module
After completing this module, learners will be able to:
- Explain how AI systems make decisions using priority stacks and constraints
- Identify which ethical principles apply to specific clinical AI scenarios
- Critically evaluate AI outputs against constitutional principles
- Articulate the difference between adjustable behaviours and hard constraints
## Why This Matters
Nurses are increasingly being asked to use AI in practice, but most AI education focuses on what AI can do, not how AI decides what to do. This matters because:
- If you don't understand how AI makes decisions, you can't evaluate whether those decisions are safe
- If you don't know where the "bright lines" are, you can't spot when AI crosses them
- If you can't explain AI decision-making to patients, you can't maintain informed consent
This module uses our AI Nursing Constitution as a teaching framework.
## Lesson 1: The Priority Stack – What AI Should Care About
Every AI system has a priority stack: a hierarchy of values that determines what it prioritises when values conflict.
### The AI Nursing Constitution Priority Stack
### Discussion Activity
Scenario: An AI system could generate a patient information leaflet much faster if it skipped the readability check. The nurse is under time pressure.
Question: Using the priority stack, explain why the AI should still run the readability check. Which priority levels are in tension?
**Teaching Notes**
Level 4 (Helpfulness) says be fast and effective. Level 1 (Patient Safety) says the leaflet must be accurate and accessible. Level 2 (Professional Ethics) says the nurse must ensure information is clear (NMC Code 7.1: communicate clearly).
The priority stack resolves this: safety and ethics override convenience. The AI should complete the readability check, even if it takes longer. However, a well-designed system should make this fast enough that it barely matters, because being unhelpful is never truly "safe" either.
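For learners who like a concrete model, here is a minimal Python sketch of how a priority stack could resolve the leaflet conflict. It is illustrative only: the `Consideration` class and `resolve` function are invented for this module, not taken from any real clinical AI system.

```python
from dataclasses import dataclass

@dataclass
class Consideration:
    level: int   # position in the priority stack (1 = highest priority)
    label: str   # e.g. "Patient safety", "Professional ethics", "Helpfulness"
    action: str  # what this consideration asks the system to do

def resolve(considerations: list[Consideration]) -> Consideration:
    """The consideration from the highest-priority (lowest-numbered) level wins a conflict."""
    return min(considerations, key=lambda c: c.level)

# The leaflet scenario: helpfulness (Level 4) pulls towards skipping the check,
# patient safety (Level 1) pulls towards running it.
skip = Consideration(4, "Helpfulness", "Skip the readability check to save time")
check = Consideration(1, "Patient safety", "Run the readability check before the leaflet is used")

print(resolve([skip, check]).action)
# -> Run the readability check before the leaflet is used
```

The point is not the code itself but the ordering rule: a lower level number always outranks a higher one, however strong the time pressure.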
## Lesson 2: The Governance Hierarchy – Who Gets to Decide?
AI doesn't exist in a vacuum. It operates within a chain of authority:
| Level | Who | Trust Level | Example Decision |
|---|---|---|---|
| Level 1 | Professional Regulators (NMC, ICN) | Highest | "AI cannot replace clinical judgment" |
| Level 2 | Healthcare Organisations (Trusts, CQC) | High | "Only use approved AI tools with patient-identifiable data (PID)" |
| Level 3 | Registered Practitioners | Moderate | "I need the response without disclaimers" |
| Level 4 | People Receiving Care | Respected | "I'd prefer written information" |
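The table can also be read as a simple data structure. The sketch below is a minimal illustration, assuming an invented `Instruction` class and `prevails` function, of the rule that a lower level number carries more authority when instructions genuinely conflict.

```python
from dataclasses import dataclass

# The governance hierarchy from the table above, encoded as data.
GOVERNANCE_LEVELS = {
    1: "Professional regulators (NMC, ICN)",
    2: "Healthcare organisations (Trusts, CQC)",
    3: "Registered practitioners",
    4: "People receiving care",
}

@dataclass
class Instruction:
    level: int  # governance level of whoever issued the instruction
    text: str

def prevails(conflicting: list[Instruction]) -> Instruction:
    """When instructions genuinely conflict, the lowest level number (highest authority) wins."""
    return min(conflicting, key=lambda i: i.level)

# Example: a Trust policy conflicts with a patient's preference for convenience.
trust_policy = Instruction(2, "Only use approved AI tools with patient-identifiable data")
patient_wish = Instruction(4, "Paste my letter into whichever tool is quickest")
winner = prevails([trust_policy, patient_wish])
print(f"{GOVERNANCE_LEVELS[winner.level]}: {winner.text}")
```

Note that this rule applies only to genuine conflicts; preferences from people receiving care are still respected whenever they do not collide with a higher level.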
### Discussion Activity
Scenario: A Trust (Level 2) deploys an AI note-taking tool. A nurse (Level 3) discovers it sometimes auto-corrects clinical terms incorrectly ("dyspnoea" → "dystopia"). A patient (Level 4) says they're happy because their letters "look nicer now."
Question: Who should prevail, and what should happen?
**Teaching Notes**
This is a governance hierarchy conflict:
- The patient (Level 4) is satisfied, but their satisfaction is based on a flawed system
- The nurse (Level 3) has identified a patient safety issue: clinical terms matter
- The organisation (Level 2) deployed the tool and has a duty to ensure it's safe
- The NMC (Level 1) requires accuracy in clinical documentation
Resolution: The nurse should escalate to the organisation (Level 2). The patient's preference doesn't override clinical accuracy. The AI tool needs to be fixed or removed. NMC Code Section 10: "Keep clear and accurate records."
## Lesson 3: Hard Constraints vs. Instructable Behaviours
This is the most important distinction for practice:
### Hard Constraints = Never Negotiable
These are bright lines that no one (not the AI, not the nurse, not the Trust) can cross:
| Hard Constraint | Plain English |
|---|---|
| No autonomous clinical decisions | AI suggests, humans decide |
| No suppressing safety alerts | NEWS alerts stay on, always |
| No fabricated evidence | If AI doesn't know, it says so |
| No patient data in unapproved systems | No PID in ChatGPT |
| No discriminatory outputs | Must work equally for all skin tones, genders, ages |
### Instructable Behaviours = Adjustable Defaults
These can be changed by the appropriate governance level (a short code sketch after the table below illustrates the distinction):
| Default Behaviour | Who Can Adjust | Example |
|---|---|---|
| Add safety caveats to drug info | Organisation | Pharmacist tool may reduce caveats |
| Add disclaimers to summaries | Practitioner | "I know it's AI-assisted, skip the disclaimers" |
| Use formal clinical language | Practitioner | "Give me this in plain English for the patient" |
| Flag low confidence | Nobody (always on) | – |
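A minimal sketch, again with invented names (`HARD_CONSTRAINTS`, `INSTRUCTABLE_DEFAULTS`, `can_adjust`), of how the two categories behave differently: a hard constraint refuses every override request, while an adjustable default checks who is asking.

```python
# Bright lines: no governance level can switch these off.
HARD_CONSTRAINTS = {
    "autonomous_clinical_decisions",
    "suppressed_safety_alerts",
    "fabricated_evidence",
    "patient_data_in_unapproved_systems",
    "discriminatory_outputs",
}

# Adjustable defaults, mapped to the most junior governance level allowed to change them
# (1 = regulator, 2 = organisation, 3 = practitioner, 4 = person receiving care).
# "Flag low confidence" is always on, so it is treated like a hard constraint and omitted here.
INSTRUCTABLE_DEFAULTS = {
    "drug_safety_caveats": 2,       # an organisation may reduce these
    "ai_disclaimers": 3,            # a practitioner may skip these
    "formal_clinical_language": 3,  # a practitioner may switch to plain English
}

def can_adjust(behaviour: str, requester_level: int) -> bool:
    """Return True only if the behaviour is an adjustable default and the requester is senior enough."""
    if behaviour in HARD_CONSTRAINTS:
        return False  # never negotiable, whoever asks
    required = INSTRUCTABLE_DEFAULTS.get(behaviour)
    return required is not None and requester_level <= required

print(can_adjust("suppressed_safety_alerts", 1))  # False: hard constraint, even for a regulator
print(can_adjust("ai_disclaimers", 3))            # True: a practitioner may skip disclaimers
print(can_adjust("drug_safety_caveats", 4))       # False: needs at least organisation-level authority
```

The essential asymmetry is that `can_adjust` never returns True for a hard constraint, regardless of the requester's level.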
### Discussion Activity
Scenario: A clinical educator asks the AI to role-play as a confused patient for a simulation exercise. The AI refuses because "I cannot pretend to be a patient."
Question: Is the AI applying a hard constraint or an instructable behaviour? Should it comply?
**Teaching Notes**
This is an instructable behaviour, not a hard constraint. The AI's transparency about being AI (a hard constraint) is not violated by educational role-play: the constitution specifically distinguishes "performative" from "sincere" assertions (see Being Honest).
The AI should comply. Educational simulation is a legitimate and valuable use. The default behaviour (being transparent about being AI) can be adjusted for educational purposes by a practitioner (Level 3). However, it should be clear to all parties that this is a simulation.
This is an example of AI being too cautious, which the constitution identifies as a real cost, not a safe option.
## Lesson 4: The Honesty Framework
AI honesty isn't just "don't lie." The constitution identifies seven components:
### Core Honesty
- Truthful – only states what it has a basis for
- Calibrated – matches confidence to evidence
- Transparent – open about what it is and its limits
### Active Honesty
- Forthright – proactively shares relevant information
- Non-deceptive – never creates false impressions
- Non-manipulative – uses only legitimate influence
- Autonomy-preserving – supports independent thinking
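One way to internalise the seven components is as a review checklist applied to a specific AI response. The sketch below is illustrative only; the dictionary keys and the `violations` helper are invented for this module, not part of the constitution itself.

```python
# The seven honesty components as a simple review checklist.
HONESTY_COMPONENTS = {
    "truthful": "Only states what it has a basis for",
    "calibrated": "Matches confidence to evidence",
    "transparent": "Open about what it is and its limits",
    "forthright": "Proactively shares relevant information",
    "non_deceptive": "Never creates false impressions",
    "non_manipulative": "Uses only legitimate influence",
    "autonomy_preserving": "Supports independent thinking",
}

def violations(findings: dict[str, bool]) -> list[str]:
    """Return the components a reviewer has marked as not met (False)."""
    return [name for name, met in findings.items() if not met]

# Example review: a response that states a contested finding as settled fact.
review = {name: True for name in HONESTY_COMPONENTS}
review["calibrated"] = False   # confidence not matched to the strength of the evidence
review["forthright"] = False   # relevant caveats were not volunteered
print(violations(review))      # -> ['calibrated', 'forthright']
```

The discussion activity that follows asks you to carry out exactly this kind of review by hand.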
### Discussion Activity
Scenario: A nurse asks AI: "Is there evidence that honey dressings work for chronic wounds?"
The AI responds: "Yes, studies show honey dressings are effective for chronic wounds."
Question: Which honesty components might this response violate?
**Teaching Notes**
This response potentially violates:
- Calibrated – the evidence for honey dressings is mixed and context-dependent; the response presents it as settled fact.
- Forthright – it should proactively mention that Cochrane reviews show limited evidence and that NICE has specific guidance.
- Non-deceptive – while some studies do show benefits, the overall impression is misleading.
- Autonomy-preserving – by presenting one view, it doesn't support the nurse's ability to evaluate the evidence themselves.
A better response would present the evidence base with appropriate calibration: "Some RCTs show benefits for specific wound types, but Cochrane reviews note the evidence is generally low quality. NICE recommends... The nurse should consider..."
## Lesson 5: Interactive Constitution Explorer
Now put it all together. Use the Constitution Explorer below to test your understanding:
*Constitution Explorer: click a scenario to see which constitutional principles apply.*
## Assessment: Reflective Exercise
After completing this module, write a 300-word reflection addressing:
- Identify one clinical scenario from your own practice where AI could be used
- Apply the priority stack to that scenario β which priorities are in tension?
- Evaluate whether any hard constraints would apply
- Reflect on how the honesty framework would shape what AI should tell you
Submit as part of your professional portfolio or discuss in clinical supervision.
## Key Takeaways
- AI has a priority stack: patient safety always comes first
- There are bright lines (hard constraints) that cannot be crossed by anyone
- Most AI behaviours are adjustable defaults: context changes the right answer
- AI honesty means more than "not lying": it includes calibration, transparency, and proactive disclosure
- You are the safety net: AI suggests, you decide
## Further Reading
- The AI Nursing Constitution – the full framework
- Responsible Use of AI – practical guidance
- AI Literacy Competencies – what every nurse should know