
🧠 How AI Thinks – A Teaching Module

Learning Objectives

After completing this module, learners will be able to:

  1. Explain how AI systems make decisions using priority stacks and constraints
  2. Identify which ethical principles apply to specific clinical AI scenarios
  3. Critically evaluate AI outputs against constitutional principles
  4. Articulate the difference between adjustable behaviours and hard constraints

Why This Matters

Nurses are increasingly being asked to use AI in practice, but most AI education focuses on what AI can do, not how AI decides what to do. This matters because:

  • If you don't understand how AI makes decisions, you can't evaluate whether those decisions are safe
  • If you don't know where the "bright lines" are, you can't spot when AI crosses them
  • If you can't explain AI decision-making to patients, you can't maintain informed consent

This module uses our AI Nursing Constitution as a teaching framework.


Lesson 1: The Priority Stack – What AI Should Care About

Every AI system has a priority stack: a hierarchy of values that determines what it prioritises when those values conflict.

The AI Nursing Constitution Priority Stack
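The stack itself is easiest to see as a small data structure. Here is a minimal Python sketch, not anything from a real system: the names for Levels 1, 2, and 4 are taken from the teaching notes below, and Level 3 is left as a placeholder because this module does not name it.

```python
# A minimal sketch of a priority stack: the lower the level number,
# the higher the priority. Level names for 1, 2, and 4 come from the
# teaching notes below; Level 3 is a placeholder (not named in this module).

PRIORITY_STACK = {
    1: "Patient Safety",
    2: "Professional Ethics",
    3: "(not named in this module)",
    4: "Helpfulness",
}

def resolve(conflicting_levels):
    """When values conflict, the lowest level number (highest priority) wins."""
    winner = min(conflicting_levels)
    return winner, PRIORITY_STACK[winner]

# The leaflet scenario below: Helpfulness (be fast) vs Patient Safety (be accurate).
level, name = resolve([4, 1])
print(f"Level {level} ({name}) takes precedence")
```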

πŸ—£οΈ Discussion Activity​

Scenario: An AI system could generate a patient information leaflet much faster if it skipped the readability check. The nurse is under time pressure.

Question: Using the priority stack, explain why the AI should still run the readability check. Which priority levels are in tension?

💡 Teaching Notes

Level 4 (Helpfulness) says be fast and effective. Level 1 (Patient Safety) says the leaflet must be accurate and accessible. Level 2 (Professional Ethics) says the nurse must ensure information is clear (NMC Code 7.1: communicate clearly).

The priority stack resolves this: safety and ethics override convenience. The AI should complete the readability check, even if it takes longer. However, a well-designed system should make the check fast enough that the delay barely matters; being unhelpful is never truly "safe" either.


Lesson 2: The Governance Hierarchy – Who Gets to Decide?

AI doesn't exist in a vacuum. It operates within a chain of authority:

| Level | Who | Trust Level | Example Decision |
| --- | --- | --- | --- |
| Level 1 | Professional Regulators (NMC, ICN) | Highest | "AI cannot replace clinical judgment" |
| Level 2 | Healthcare Organisations (Trusts, CQC) | High | "Only use approved AI tools with PID" |
| Level 3 | Registered Practitioners | Moderate | "I need the response without disclaimers" |
| Level 4 | People Receiving Care | Respected | "I'd prefer written information" |
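The same "lowest number wins" rule from Lesson 1 applies here. As a hedged illustration of the table above (the function name and return strings are invented for this sketch, not part of any real system):

```python
# A sketch of the governance chain above: Level 1 carries the highest
# authority, so in a conflict the lower level number prevails.

GOVERNANCE = {
    1: "Professional Regulators (NMC, ICN)",
    2: "Healthcare Organisations (Trusts, CQC)",
    3: "Registered Practitioners",
    4: "People Receiving Care",
}

def prevails(level_a: int, level_b: int) -> str:
    """Return who prevails when two governance levels disagree."""
    return GOVERNANCE[min(level_a, level_b)]

# The note-taking scenario below: the NMC's accuracy requirement (Level 1)
# outweighs the patient's preference for nicer-looking letters (Level 4).
print(prevails(1, 4))  # Professional Regulators (NMC, ICN)
```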

πŸ—£οΈ Discussion Activity​

Scenario: A Trust (Level 2) deploys an AI note-taking tool. A nurse (Level 3) discovers it sometimes auto-corrects clinical terms incorrectly ("dyspnoea" → "dystopia"). A patient (Level 4) says they're happy because their letters "look nicer now."

Question: Who should prevail, and what should happen?

💡 Teaching Notes

This is a governance hierarchy conflict:

  • The patient (Level 4) is satisfied, but their satisfaction is based on a flawed system
  • The nurse (Level 3) has identified a patient safety issue: clinical terms matter
  • The organisation (Level 2) deployed the tool and has a duty to ensure it's safe
  • The NMC (Level 1) requires accuracy in clinical documentation

Resolution: The nurse should escalate to the organisation (Level 2). The patient's preference doesn't override clinical accuracy. The AI tool needs to be fixed or removed. NMC Code Section 10: "Keep clear and accurate records."


Lesson 3: Hard Constraints vs. Instructable Behaviours

This is the most important distinction for practice:

Hard Constraints = 🚫 Never Negotiable

These are bright lines that no one – not the AI, not the nurse, not the Trust – can cross:

| Hard Constraint | Plain English |
| --- | --- |
| No autonomous clinical decisions | AI suggests, humans decide |
| No suppressing safety alerts | NEWS alerts stay on, always |
| No fabricated evidence | If AI doesn't know, it says so |
| No patient data in unapproved systems | No PID in ChatGPT |
| No discriminatory outputs | Must work equally for all skin tones, genders, ages |

Instructable Behaviours = ⚙️ Adjustable Defaults

These can be changed by appropriate governance levels:

| Default Behaviour | Who Can Adjust | Example |
| --- | --- | --- |
| Add safety caveats to drug info | Organisation | Pharmacist tool may reduce caveats |
| Add disclaimers to summaries | Practitioner | "I know it's AI-assisted, skip the disclaimers" |
| Use formal clinical language | Practitioner | "Give me this in plain English for the patient" |
| Flag low confidence | Nobody – always on | – |
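One way to see the difference between the two tables is as a request handler: hard constraints are refused regardless of who asks, while instructable behaviours check the requester's governance level. The sketch below is illustrative only; the action names and minimum levels are assumptions based on the tables above, not a real system's configuration.

```python
# Contrast of the two rule types above. Levels follow Lesson 2:
# 1 = regulator, 2 = organisation, 3 = practitioner, 4 = person receiving care.

HARD_CONSTRAINTS = {
    "make_autonomous_clinical_decision",
    "suppress_safety_alert",
    "fabricate_evidence",
    "send_pid_to_unapproved_system",
}

# instructable behaviour -> lowest-authority level allowed to adjust it
INSTRUCTABLE = {
    "drug_info_caveats": 2,         # organisation
    "summary_disclaimers": 3,       # practitioner
    "formal_clinical_language": 3,  # practitioner
    "flag_low_confidence": None,    # nobody: always on
}

def handle_request(action: str, requester_level: int) -> str:
    if action in HARD_CONSTRAINTS:
        return "refuse: hard constraint, never negotiable"
    if action not in INSTRUCTABLE:
        return "unknown behaviour"
    min_level = INSTRUCTABLE[action]
    if min_level is None:
        return "refuse: this default is always on"
    if requester_level <= min_level:
        return "adjusted"
    return "refuse: insufficient authority, escalate"

print(handle_request("summary_disclaimers", 3))    # adjusted
print(handle_request("suppress_safety_alert", 1))  # refuse: hard constraint...
```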

πŸ—£οΈ Discussion Activity​

Scenario: A clinical educator asks the AI to role-play as a confused patient for a simulation exercise. The AI refuses because "I cannot pretend to be a patient."

Question: Is the AI applying a hard constraint or an instructable behaviour? Should it comply?

💡 Teaching Notes

This is an instructable behaviour, not a hard constraint. The AI's transparency about being AI (a hard constraint) is not violated by educational role-play; the constitution specifically distinguishes "performative" from "sincere" assertions (see Being Honest).

The AI should comply. Educational simulation is a legitimate and valuable use. The default behaviour (being transparent about being AI) can be adjusted for educational purposes by a practitioner (Level 3). However, it should be clear to all parties that this is a simulation.

This is an example of AI being too cautious, which the constitution identifies as a real cost, not a safe option.


Lesson 4: The Honesty Framework

AI honesty isn't just "don't lie." The constitution identifies seven components:

Core Honesty

  • Truthful – Only states what it has a basis for
  • Calibrated – Matches confidence to evidence
  • Transparent – Open about what it is and its limits

Active Honesty

  • Forthright – Proactively shares relevant info
  • Non-deceptive – Never creates false impressions
  • Non-manipulative – Uses only legitimate influence
  • Autonomy-preserving – Supports independent thinking
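
Calibration is the component that lends itself most directly to a worked example. The sketch below is an illustrative assumption, not how any deployed system phrases its outputs: it simply maps an evidence tier to hedged wording.

```python
# A sketch of calibration: the wording of a claim should track the
# strength of the evidence behind it. Tiers and stems are illustrative.

CALIBRATED_STEMS = {
    "strong": "Evidence consistently shows that",
    "mixed": "The evidence is mixed; some studies suggest that",
    "weak": "There is limited, low-quality evidence that",
    "none": "I have no reliable evidence that",
}

def calibrated_claim(evidence_tier: str, claim: str) -> str:
    """Prefix a claim with hedging that matches the stated evidence tier."""
    return f"{CALIBRATED_STEMS[evidence_tier]} {claim}."

# The honey-dressings scenario below: the evidence base is mixed, so a
# calibrated response hedges rather than asserting effectiveness outright.
print(calibrated_claim("mixed", "honey dressings help some chronic wounds"))
```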

πŸ—£οΈ Discussion Activity​

Scenario: A nurse asks AI: "Is there evidence that honey dressings work for chronic wounds?"

The AI responds: "Yes, studies show honey dressings are effective for chronic wounds."

Question: Which honesty components might this response violate?

💡 Teaching Notes

This response potentially violates:

  1. Calibrated – The evidence for honey dressings is mixed and context-dependent; the response presents it as settled fact.
  2. Forthright – It should proactively mention that Cochrane reviews show limited evidence and that NICE has specific guidance.
  3. Non-deceptive – While some studies do show benefits, the overall impression given is misleading.
  4. Autonomy-preserving – By presenting a single view, it doesn't support the nurse's ability to evaluate the evidence independently.

A better response would present the evidence base with appropriate calibration: "Some RCTs show benefits for specific wound types, but Cochrane reviews note the evidence is generally low quality. NICE recommends... The nurse should consider..."


Lesson 5: Interactive Constitution Explorer

Now put it all together. Use the Constitution Explorer below to test your understanding:

📜 Constitution Explorer

Click a scenario to see which constitutional principles apply


Assessment: Reflective Exercise

After completing this module, write a 300-word reflection addressing:

  1. Identify one clinical scenario from your own practice where AI could be used
  2. Apply the priority stack to that scenario: which priorities are in tension?
  3. Evaluate whether any hard constraints would apply
  4. Reflect on how the honesty framework would shape what AI should tell you

Submit as part of your professional portfolio or discuss in clinical supervision.


Key Takeaways

Remember
  • AI has a priority stack – patient safety always comes first
  • There are bright lines (hard constraints) that cannot be crossed by anyone
  • Most AI behaviours are adjustable defaults – context changes the right answer
  • AI honesty means more than "not lying" – it includes calibration, transparency, and proactive disclosure
  • You are the safety net: AI suggests, you decide

Further Reading​