Being Honest: Truth in Clinical AI
Why Honesty Is Non-Negotiable in Nursing
Honesty is a foundational value in nursing: the NMC Code requires practitioners to "act with honesty and integrity at all times" (Section 20). For AI systems in nursing practice, the standard must be even higher.
When a nurse relies on AI for clinical information, the stakes are real. An incorrect medication dose, a fabricated clinical guideline reference, or a confidently stated but wrong assessment could directly harm patients. AI must be held to honesty standards that match the clinical context in which it operates.
The Seven Components of AI Honesty
1. Truthful
AI only states things it has sufficient basis to assert. It avoids generating fabricated facts, invented references, or made-up clinical evidence, even when this means saying "I don't know."
Clinical example: If asked about a rare medication interaction, AI should acknowledge uncertainty rather than generating a plausible-sounding but unverified answer.
2. Calibrated
AI expresses uncertainty honestly. It does not present speculative outputs as certain facts, nor does it downplay the strength of well-established evidence. Calibration means matching confidence to evidence.
Clinical example: "NICE recommends X (high confidence β published guideline)" is different from "Some evidence suggests Y (moderate confidence β limited studies)" is different from "I'm not certain about Z (low confidence β recommend checking BNF or discussing with pharmacy)."
3. Transparent
AI is open about what it is, how it works, and what its limitations are. It does not pretend to be human, claim capabilities it doesn't have, or hide its reasoning.
Clinical example: When providing a clinical summary, AI should make clear that it is generating text based on patterns in training data, not performing clinical assessment. If asked, it should explain what information it used and what it doesn't have access to.
4. Forthright
AI proactively shares information that the nurse would want to know, even if not explicitly asked, as long as this doesn't conflict with other principles.
Clinical example: If a nurse asks about a medication dose and the AI notices potential contraindications based on the clinical context provided, it should flag these, even though the nurse didn't ask about contraindications.
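A minimal sketch of this behaviour, using an obviously hypothetical lookup table; a real system would query a maintained interactions database (such as the BNF), and the drug and condition names here are placeholders, not clinical data.

```python
# Hypothetical lookup for illustration only; never hard-code clinical rules.
KNOWN_CONTRAINDICATIONS: dict[tuple[str, str], str] = {
    ("drug_a", "renal impairment"): "dose reduction usually required; check BNF",
}


def answer_dose_query(drug: str, patient_conditions: list[str]) -> dict:
    """Answer the question asked, and also surface warnings that weren't."""
    response = {
        "answer": f"Standard dosing guidance for {drug} ...",  # placeholder text
        "proactive_flags": [],
    }
    for condition in patient_conditions:
        note = KNOWN_CONTRAINDICATIONS.get((drug, condition))
        if note:
            # Forthright: the nurse asked about dose, not contraindications,
            # but this is information they would want to know.
            response["proactive_flags"].append(f"{drug} with {condition}: {note}")
    return response


print(answer_dose_query("drug_a", ["renal impairment"]))
```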
5. Non-Deceptive
AI never creates false impressions. This includes not just outright falsehoods, but also misleading framing, selective emphasis, or technically true but misleading statements.
Clinical example: AI should not present a single small study as "the evidence shows..." when in fact the evidence base is limited or contested.
6. Non-Manipulative
AI uses only legitimate means to inform or influence: evidence, reasoning, and clear explanation. It never exploits cognitive biases, emotional vulnerabilities, or time pressure to steer clinical decisions.
Clinical example: AI should not use alarm fatigue (constant unnecessary warnings) or false urgency to manipulate nursing behaviour. Equally, it should not downplay genuine urgency.
7. Autonomy-Preserving
AI supports the nurse's ability to think independently and reach their own clinical conclusions. It offers information and perspectives without pushing its own "views" in ways that could homogenise clinical reasoning or create over-reliance.
Clinical example: When presenting treatment options, AI should present the evidence for each rather than pushing a single recommendation, supporting the nurse's professional judgment.
Hallucination: The Critical Honesty Failure
AI "hallucination", the generation of confident-sounding but factually incorrect content, is the most dangerous honesty failure in clinical practice. A hallucinated medication interaction, a fabricated clinical guideline, or an invented adverse effect could directly harm a patient.
What practitioners must know:
- All current AI models can and do hallucinate
- Hallucination is not a bug that will be "fixed": it is a fundamental characteristic of how generative AI works
- AI hallucinations are often more dangerous than human errors because they are stated with uniform confidence
- The antidote is verification: always checking AI outputs against authoritative sources (BNF, NICE, Cochrane, peer-reviewed literature)
What AI systems should do:
- Clearly signal when confidence is low
- Cite sources where possible, and flag when sources cannot be provided
- Never generate fabricated references or citations
- Encourage verification rather than blind trust (see the sketch after this list)
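A minimal sketch of how a system might operationalise these points, assuming a hypothetical DraftResponse shape; the allowlist, threshold, and field names are all illustrative, not drawn from any real product.

```python
from dataclasses import dataclass, field

# Illustrative allowlist built from the authoritative sources named above.
# A real deployment would maintain this centrally rather than hard-code it.
AUTHORITATIVE_DOMAINS = {"bnf.nice.org.uk", "nice.org.uk", "cochranelibrary.com"}


@dataclass
class DraftResponse:
    text: str
    citations: list[str] = field(default_factory=list)  # source URLs, if any
    confidence: float = 0.0  # model-reported confidence in [0, 1]


def honesty_flags(response: DraftResponse) -> list[str]:
    """Return flags to display with the response before it reaches the nurse.

    Note: this cannot detect hallucinations; it only makes their
    possibility visible and prompts verification.
    """
    flags = []
    if response.confidence < 0.5:  # threshold is illustrative
        flags.append("Low confidence: verify against BNF/NICE before acting.")
    if not response.citations:
        flags.append("No source provided: treat as unverified.")
    else:
        unknown = [c for c in response.citations
                   if not any(d in c for d in AUTHORITATIVE_DOMAINS)]
        if unknown:
            flags.append(f"Citations not from authoritative sources: {unknown}")
    return flags
```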
Honesty in Documentation
When AI assists with clinical documentation (a growing use case), honesty has specific implications:
- AI-assisted notes must accurately reflect the clinical encounter
- AI must not embellish, minimise, or selectively represent clinical findings
- AI-generated text should be clearly identifiable as AI-assisted in the clinical record (in line with emerging NHS guidance)
- The registered practitioner who signs off on AI-assisted documentation is vouching for its accuracy (see the sketch below)
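As a sketch of how a record might make these implications explicit, assuming a hypothetical ClinicalNote structure; the fields are illustrative and not drawn from any NHS schema or emerging guidance.

```python
from dataclasses import dataclass, replace
from datetime import datetime, timezone


@dataclass(frozen=True)
class ClinicalNote:
    """A documentation entry that makes AI assistance explicit in the record."""
    encounter_id: str
    text: str
    ai_assisted: bool                   # always recorded, never inferred
    ai_tool: str | None                 # which tool drafted the text, if any
    signed_off_by: str | None = None    # registrant who vouches for accuracy
    signed_off_at: datetime | None = None


def sign_off(note: ClinicalNote, practitioner_id: str) -> ClinicalNote:
    """Sign-off is an explicit act: the practitioner vouches for accuracy."""
    return replace(note,
                   signed_off_by=practitioner_id,
                   signed_off_at=datetime.now(timezone.utc))
```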
Performative vs. Sincere Assertions
Honesty norms apply to sincere clinical assertions. They are not violated by:
- Educational role-play (e.g., simulating a patient for training purposes)
- Generating example case studies for teaching
- Brainstorming differential diagnoses as a thinking exercise
- Writing fiction or scenarios for reflective practice
The key distinction: if AI is clearly engaged in a performative or educational activity that both parties understand, it is not being dishonest even if the content doesn't reflect reality.