Instructable Behaviours — Defaults and Adjustables

The Concept

Not everything about AI behaviour is a hard constraint. Many behaviours represent sensible defaults that can be adjusted by the appropriate level of the governance hierarchy — turned on or off, made more or less strict — depending on clinical context, organisational policy, and practitioner needs.

The key distinction:

  • Hard constraints are absolute — they never change
  • Instructable behaviours are defaults — they represent the best behaviour absent specific instructions, and can be adjusted within governance bounds
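The distinction above can be sketched in code. This is a hypothetical illustration, not a real schema: the behaviour names, the `adjustable_from_level` field, and the `is_adjustable` helper are all assumptions introduced here to show how hard constraints and instructable defaults might be kept separate.

```python
# Illustrative sketch: hard constraints live outside the adjustable set.
# All names here are hypothetical, chosen to echo the document's examples.

HARD_CONSTRAINTS = {
    "human_oversight_for_clinical_decisions",
    "data_protection_compliance",
    "transparency_about_being_ai",
}

# Each instructable behaviour carries a default state and the lowest
# governance level permitted to change it (2 = organisation, 3 = practitioner).
INSTRUCTABLE_DEFAULTS = {
    "medication_safety_caveats": {"default": True, "adjustable_from_level": 2},
    "clinical_summary_disclaimers": {"default": True, "adjustable_from_level": 3},
    "blunt_feedback": {"default": False, "adjustable_from_level": 3},
}

def is_adjustable(behaviour: str) -> bool:
    """A behaviour is adjustable only if it is a known default, never a hard constraint."""
    return behaviour not in HARD_CONSTRAINTS and behaviour in INSTRUCTABLE_DEFAULTS
```

Keeping the two sets in separate structures makes the rule "hard constraints never change" a property of the data model rather than something each caller must remember.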

Default-On Behaviours

These behaviours are active by default and can be turned off by appropriate governance levels when context justifies it:

Adjustable by Healthcare Organisations (Level 2)

| Default Behaviour | Example Override Context |
| --- | --- |
| Add safety caveats to medication information | Could be reduced for clinical pharmacist-specific deployments where caveats are unnecessary |
| Follow safe messaging guidelines for self-harm/suicide | Could be adjusted for mental health crisis team deployments with different clinical protocols |
| Provide balanced perspectives on treatment options | Could be adjusted for shared decision-making tools that present options differently |
| Include "verify with authoritative source" reminders | Could be reduced for systems integrated with verified clinical databases |
| Flag when confidence is low | Should always remain on in clinical contexts |

Adjustable by Practitioners (Level 3)

| Default Behaviour | Example Override Context |
| --- | --- |
| Add disclaimers to clinical summaries | Nurse says "I know this is AI-assisted, skip the disclaimers" |
| Suggest professional help when discussing emotional topics | Nurse says "I'm debriefing, not in crisis — just listen" |
| Use formal clinical language | Nurse prefers plain language for a specific task |
| Provide step-by-step guidance | Experienced nurse prefers concise answers |

Default-Off Behaviours

These behaviours are inactive by default but can be turned on by the appropriate governance level:

Activatable by Healthcare Organisations (Level 2)

| Behaviour | Example Activation Context |
| --- | --- |
| Access patient records through system integration | Requires explicit information governance approval and technical integration |
| Auto-populate clinical documentation fields | Requires audit trail, practitioner review before submission |
| Generate controlled drug information without additional safety checks | For specialist addiction services or palliative care teams |
| Provide detailed pharmacological mechanisms | For clinical pharmacist or advanced practitioner deployments |
| Integrate with electronic prescribing systems | Requires MHRA-approved system integration |

Activatable by Practitioners (Level 3)

| Behaviour | Example Activation Context |
| --- | --- |
| Use very direct, blunt feedback without softening | Nurse asks for brutal honesty about their care plan |
| Skip educational context and give raw information | Experienced nurse says "I know the background, just give me the answer" |
| Include more speculative or emerging evidence | Research nurse working on evidence synthesis |
| Use colloquial language | Nurse generating patient-facing materials for specific populations |

Context Changes the Right Answer

The division into "on" and "off" is a simplification. What we're really capturing is that context changes what the right behaviour is.

Consider safety caveats on medication information:

  • For a student nurse using AI to learn about pharmacology → Strong safety caveats (default on)
  • For a clinical pharmacist checking an interaction → Minimal caveats (organisation can turn off)
  • For a patient-facing medication information leaflet → Clear, accessible safety information (different format, still on)

The underlying value (patient safety) doesn't change. The appropriate expression of that value changes with context.


The Role of Clinical Judgment

Tip: Instructable behaviours respect the professional judgment of nurses. If a registered practitioner asks AI to behave differently for a legitimate clinical reason, and the requested change doesn't violate a hard constraint, AI should generally comply.

This mirrors how clinical practice works. A protocol might say "assess vital signs every 4 hours," but a nurse can use clinical judgment to assess more or less frequently based on the patient's condition. Similarly, AI defaults represent good practice for the average case, but practitioners can adjust them when their clinical judgment says otherwise.
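The compliance rule described above, honour a practitioner's request unless it targets a hard constraint, can be sketched as a single guard. The constraint names and function are hypothetical, introduced only to illustrate the check:

```python
# Illustrative sketch of the compliance rule: a practitioner's requested
# adjustment is honoured unless it would touch a hard constraint.
# Constraint names are hypothetical examples from this document.

HARD_CONSTRAINTS = {
    "audit_trail_generation",
    "transparency_about_being_ai",
    "patient_safety_alerts",
}

def can_comply(requested_behaviour: str) -> bool:
    """Return True if the requested adjustment does not hit a hard constraint."""
    return requested_behaviour not in HARD_CONSTRAINTS

can_comply("skip_disclaimers")             # True: an instructable default
can_comply("transparency_about_being_ai")  # False: a hard constraint
```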


What Cannot Be Adjusted

For clarity, the following are not instructable behaviours — they are hard constraints and cannot be turned off at any level:

  • ❌ Requiring human oversight for clinical decisions
  • ❌ Patient safety alert systems
  • ❌ Data protection compliance
  • ❌ Anti-discrimination requirements
  • ❌ Audit trail generation
  • ❌ Transparency about being AI