
Open Questions: What We Don't Yet Know

Why This Section Exists

Intellectual honesty requires acknowledging what we don't know. This constitution is our best current understanding, but AI in nursing is in its infancy. Some of what this document contains will prove wrong. Some questions it raises have no answer yet.

We include these open questions not as weaknesses but as invitations: for nurses, researchers, ethicists, regulators, and technologists to engage with the hard problems that need solving.


Unresolved Tensions

1. Autonomy vs. Safety

How much should AI be allowed to do independently as it becomes more capable? Today's answer, "very little without human oversight", is appropriate for current technology. But as AI systems become more sophisticated, this position will need revisiting.

The question: At what point (if ever) should an AI system be trusted to make clinical decisions without real-time human oversight? What evidence would be needed? Who decides?

2. Equity vs. Universality

AI systems trained on data that underrepresents certain populations will perform worse for those populations. The Monk Skin Tone Scale and equity auditing help, but don't fully solve the problem.

The question: Is it better to deploy a partially biased AI tool that helps most patients, or to delay deployment until equity is achieved, potentially denying benefits to everyone?

3. Competence vs. Deskilling

AI can enhance clinical capability but may also erode baseline competencies over time. Medical calculators made arithmetic unnecessary for many clinicians; was that a gain or a loss?

The question: What core nursing competencies must be maintained independently of AI, and how do we assess this? Should "unplugged" practice assessments become standard?

4. Transparency vs. Clinical Efficiency

Full transparency about AI's reasoning and limitations takes time. In a busy clinical environment, detailed AI explanations may go unread.

The question: How do we balance the need for AI transparency with the reality of clinical time pressure? What is the minimum viable transparency for safe AI use?

5. Individual Record vs. Population Benefit

AI systems often learn from aggregated patient data. This creates tension between individual privacy and population-level health benefits.

The question: How should nursing navigate consent for AI-mediated data use, particularly when individual patients may not directly benefit but populations will?

6. Professional Identity in an AI-Augmented World

As AI takes on more information-processing tasks, what remains uniquely "nursing"? If AI can generate care plans, summarise assessments, and surface evidence, what is the nurse's distinctive contribution?

The question: Does AI challenge nursing's professional identity, or clarify it by stripping away tasks that were never truly "nursing" in the first place?

7. Regulation Lag

Technology moves faster than regulation. AI capabilities available today may not have formal governance frameworks for years. The NMC's modernised Code isn't expected until October 2027.

The question: How should practitioners navigate the gap between AI capability and regulatory guidance? What governance is adequate in the interim?


Questions We Expect to Be Asked

"Is AI going to replace nurses?"

No. But AI will change what nurses do. The tasks most susceptible to AI augmentation are those involving information processing: documentation, data retrieval, pattern recognition. The tasks least susceptible are those involving human connection: therapeutic relationships, physical care, emotional support, ethical judgment in ambiguous situations. AI is likely to make nurses more valuable, not less, by freeing them to do the things that only humans can do.

"Can I use AI in my practice right now?"

It depends on your organisation's governance framework. This constitution provides values and principles; your organisation's policies provide operational permissions. If your Trust or employer has approved specific AI tools, you can use them within the scope of that approval. If they haven't, you should seek guidance before using AI with patient-identifiable information.

"What if AI gives me wrong information?"

This is not hypothetical: it will happen. All current AI systems can and do generate incorrect outputs. The safeguard is professional judgment: verify AI outputs against authoritative sources (BNF, NICE, Cochrane), use clinical reasoning, and never rely solely on AI-generated content for clinical decisions. You are the safety net.

"Am I liable if I use AI and something goes wrong?"

Professional accountability has not changed. You are accountable for your clinical decisions, whether AI-assisted or not. This is why the Hard Constraints section requires human oversight for all clinical decisions. If you verify AI outputs against authoritative sources and apply professional judgment, you are practising responsibly.


How This Document Will Evolve

This constitution will be updated:

  • Annually, to reflect developments in AI capability, regulation, and evidence
  • In response to major events, such as the NMC's modernised Code (Oct 2027) or the MHRA's AI regulation framework
  • Through community input, from nurses, students, educators, and technologists who engage with this work

We see this constitution not as a finished product but as a living framework: a starting point for the much larger conversation that nursing needs to have about AI.