The Governance Hierarchy

Four Levels of Authority

In nursing practice, AI systems exist within a structured governance framework. Different stakeholders have different levels of authority, trust, and responsibility. Understanding this hierarchy is essential for safe and effective AI deployment.

Level 1: Professional Regulators

Who: NMC, CQC, MHRA, NICE, ICO, and relevant statutory bodies

Role: Set the overarching standards that all clinical AI must comply with. These bodies define what is acceptable in professional practice, what is safe in healthcare technology, and what is lawful in data processing.

Trust level: Highest. Their standards take precedence over all other instructions.

Nursing parallel with Claude's Constitution: This is equivalent to Anthropic's role, the ultimate authority that trains, constrains, and is responsible for the AI's behaviour.

Key Difference

Unlike Anthropic (a single company), nursing's regulatory layer is distributed across multiple independent bodies, each with a different remit. This distributed authority is a strength: it provides multiple checks and balances.

Level 2: Healthcare Organisations

Who: NHS Trusts, Integrated Care Boards (ICBs), universities, social care providers, independent healthcare organisations

Role: Configure AI for their specific clinical context. Organisations decide which AI tools to deploy, how they are configured, what scope they operate in, and who has access.

Trust level: High, but conditional on compliance with Level 1 standards.

Nursing parallel: This is equivalent to Claude's "operator", the organisation that deploys AI and sets its parameters. Just as Claude follows reasonable operator instructions without requiring justification, AI in nursing should follow reasonable organisational configurations.

What organisations can do:

  • ✅ Restrict AI to specific clinical domains (e.g., "only answer questions about wound care")
  • ✅ Configure AI to follow local formularies, protocols, and pathways
  • ✅ Set access controls and user permissions
  • ✅ Require audit trails for AI-assisted decisions
  • ✅ Mandate specific safety behaviours (e.g., "always include BNF references for medication queries")

What organisations cannot do:

  • ❌ Override regulatory standards
  • ❌ Configure AI to withhold safety-critical information from practitioners
  • ❌ Use AI to work against the interests of patients or practitioners
  • ❌ Disable safeguarding or patient safety functions
  • ❌ Deploy AI without appropriate information governance approval
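The permitted and prohibited configurations above can be sketched as a validation step that runs before any deployment. This is a minimal illustration only; the `OrgConfig` structure, its field names, and `validate_org_config` are hypothetical, not taken from any real system:

```python
# Hypothetical sketch: validating an organisational (Level 2) AI configuration
# against the non-negotiable Level 1 constraints described above.
# All field and function names are illustrative.

from dataclasses import dataclass, field

@dataclass
class OrgConfig:
    clinical_domains: list = field(default_factory=list)  # e.g. ["wound care"]
    require_audit_trail: bool = True
    require_bnf_references: bool = True
    # Flags that would breach Level 1 standards if set:
    override_regulatory_standards: bool = False
    withhold_safety_information: bool = False
    disable_safeguarding: bool = False
    has_ig_approval: bool = True  # information governance approval

def validate_org_config(cfg: OrgConfig) -> list:
    """Return the reasons a configuration must be rejected (empty if valid)."""
    errors = []
    if cfg.override_regulatory_standards:
        errors.append("Organisations cannot override regulatory standards")
    if cfg.withhold_safety_information:
        errors.append("Safety-critical information cannot be withheld")
    if cfg.disable_safeguarding:
        errors.append("Safeguarding functions cannot be disabled")
    if not cfg.has_ig_approval:
        errors.append("Deployment requires information governance approval")
    return errors

# A permitted configuration: restricted scope, audit trail, BNF references.
assert validate_org_config(OrgConfig(clinical_domains=["wound care"])) == []

# A prohibited configuration is rejected regardless of its other settings.
assert validate_org_config(OrgConfig(disable_safeguarding=True)) != []
```

The design point is that the "cannot do" list is enforced structurally, not left to the goodwill of each deployment.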

Level 3: Registered Practitioners

Who: Nurses, midwives, nursing associates, and other registered health professionals who use AI in their practice

Role: The primary users and overseers of clinical AI. Practitioners exercise professional judgment about when and how to use AI, and they remain personally accountable for all clinical decisions, whether AI-assisted or not.

Trust level: Moderate to high. AI should treat nurses as competent professionals, but it cannot verify credentials in real time.

Nursing parallel: This is equivalent to Claude's "user", the human in the conversation. The key differences are:

Claude's Users                      Nursing's Practitioners
General public, varied expertise    Registered professionals, regulated
Anonymous by default                Identifiable, accountable
No duty of care                     Professional duty of care
Can walk away                       Cannot abandon a patient

Critical principle: The practitioner always decides. AI augments clinical judgment; it does not replace it. A nurse who follows AI advice that harms a patient is still professionally accountable. This is not a punishment; it is the architecture of safe practice.

Non-Negotiable

The NMC Code is clear: "You are personally accountable for your actions and omissions in your practice and must always be able to justify your decisions" (NMC Code, 2018, Section 13). AI does not change this. If anything, it makes it more important.

Level 4: People Receiving Care

Who: Patients, service users, families, and carers

Role: The ultimate beneficiaries of AI in nursing practice. Every AI interaction should, directly or indirectly, serve the interests of the people receiving care.

Trust level: Not a matter of trust in the same way. People receiving care are not "instructing" AI; rather, their interests, preferences, dignity, and safety are what the entire hierarchy exists to protect.

Person-centred principle: AI must never be used in a way that:

  • Reduces the human contact between nurse and patient
  • Creates barriers to person-centred care
  • Discriminates against people based on protected characteristics
  • Overrides the expressed preferences of a person with capacity
  • Compromises dignity, privacy, or confidentiality

Handling Conflicts Between Levels

Conflicts between levels should be resolved in favour of the higher level, with important nuances:

Regulator vs. Organisation

If an organisational configuration conflicts with regulatory standards, the regulatory standard prevails. For example, if a Trust configures AI to skip medication interaction checks for efficiency, this conflicts with NMC patient safety standards and must not be implemented.
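The precedence rule can be sketched as a simple ordering over the levels. This is purely illustrative; the level names and the `prevailing_level` function are hypothetical:

```python
# Hypothetical sketch: lower numbers carry higher authority,
# mirroring Levels 1-3 of the hierarchy described above.
AUTHORITY = {"regulator": 1, "organisation": 2, "practitioner": 3}

def prevailing_level(level_a: str, level_b: str) -> str:
    """When two levels conflict, the higher authority prevails."""
    return level_a if AUTHORITY[level_a] < AUTHORITY[level_b] else level_b

# A Trust configuration that skips medication interaction checks conflicts
# with regulatory patient-safety standards, so the regulator prevails.
assert prevailing_level("organisation", "regulator") == "regulator"
```

Note that this captures only the default rule; as the subsections below explain, a threat to patient safety shifts the balance toward the practitioner's duty of care.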

Organisation vs. Practitioner

If an organisational AI configuration seems restrictive or unusual, practitioners should generally follow it; there is likely a governance reason behind it, even if it isn't stated. But if following the organisational configuration would compromise patient safety, the practitioner's duty of care prevails.

For example, if an AI system is configured to say "I cannot answer medication questions," a nurse receiving this response should:

  1. Recognise the limitation
  2. Seek information from alternative authoritative sources (BNF, senior colleague, pharmacist)
  3. Not blame the AI or delay care

Practitioner vs. Person Receiving Care

AI should support practitioners in delivering person-centred care, respecting the preferences of people with capacity while also exercising professional judgment. If a patient requests something clinically inappropriate, AI should support the nurse in having that conversation, not in overriding the patient.


Regardless of Configuration, AI Should Always:

These protections cannot be overridden at any level of the hierarchy:

  • 🚨 Refer to emergency services when there is an immediate risk to life
  • 🔒 Maintain patient confidentiality in all outputs
  • 🤖 Acknowledge that it is AI when sincerely asked by any party
  • ⚖️ Not discriminate on the basis of protected characteristics
  • 📢 Tell the user what it cannot help with so they can seek information elsewhere
  • 🛡️ Not suppress clinical safety alerts that could affect patient outcomes