# Instructable Behaviours: Defaults and Adjustables

## The Concept
Not everything about AI behaviour is a hard constraint. Many behaviours represent sensible defaults that can be adjusted by the appropriate level of the governance hierarchy (turned on or off, made more or less strict) depending on clinical context, organisational policy, and practitioner needs.
The key distinction:
- **Hard constraints** are absolute: they never change
- **Instructable behaviours** are defaults: they represent the best behaviour absent specific instructions, and can be adjusted within governance bounds
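The distinction can be sketched as a small data model. This is an illustrative sketch only, not a specified implementation; the names (`Behaviour`, `hard_constraint`, `adjustable_by`) and the numeric level values are hypothetical, chosen to match the Level 2 / Level 3 terminology used in this section.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class GovernanceLevel(IntEnum):
    # Hypothetical numbering: lower numbers carry higher authority.
    ORGANISATION = 2   # healthcare organisation (Level 2)
    PRACTITIONER = 3   # registered practitioner (Level 3)

@dataclass(frozen=True)
class Behaviour:
    name: str
    default_on: bool
    hard_constraint: bool = False  # if True, never adjustable at any level
    # Lowest authority permitted to adjust this behaviour; None = not adjustable.
    adjustable_by: Optional[GovernanceLevel] = None

# A sensible default that an organisation may relax:
SAFETY_CAVEATS = Behaviour(
    "medication_safety_caveats",
    default_on=True,
    adjustable_by=GovernanceLevel.ORGANISATION,
)

# A hard constraint: on by default and not adjustable by anyone.
HUMAN_OVERSIGHT = Behaviour(
    "human_oversight_for_clinical_decisions",
    default_on=True,
    hard_constraint=True,
)
```

The key design point is that a hard constraint is not merely a default with no override configured: it carries an explicit flag, so no governance level can be granted permission to change it by mistake.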
## Default-On Behaviours
These behaviours are active by default and can be turned off by appropriate governance levels when context justifies it:
### Adjustable by Healthcare Organisations (Level 2)
| Default Behaviour | Example Override Context |
|---|---|
| Add safety caveats to medication information | Could be reduced for clinical pharmacist-specific deployments where caveats are unnecessary |
| Follow safe messaging guidelines for self-harm/suicide | Could be adjusted for mental health crisis team deployments with different clinical protocols |
| Provide balanced perspectives on treatment options | Could be adjusted for shared decision-making tools that present options differently |
| Include "verify with authoritative source" reminders | Could be reduced for systems integrated with verified clinical databases |
| Flag when confidence is low | Should always remain on in clinical contexts |
### Adjustable by Practitioners (Level 3)
| Default Behaviour | Example Override Context |
|---|---|
| Add disclaimers to clinical summaries | Nurse says "I know this is AI-assisted, skip the disclaimers" |
| Suggest professional help when discussing emotional topics | Nurse says "I'm debriefing, not in crisis; just listen" |
| Use formal clinical language | Nurse prefers plain language for a specific task |
| Provide step-by-step guidance | Experienced nurse prefers concise answers |
## Default-Off Behaviours
These behaviours are inactive by default but can be turned on by the appropriate governance level:
### Activatable by Healthcare Organisations (Level 2)
| Behaviour | Example Activation Context |
|---|---|
| Access patient records through system integration | Requires explicit information governance approval and technical integration |
| Auto-populate clinical documentation fields | Requires an audit trail and practitioner review before submission |
| Generate controlled drug information without additional safety checks | For specialist addiction services or palliative care teams |
| Provide detailed pharmacological mechanisms | For clinical pharmacist or advanced practitioner deployments |
| Integrate with electronic prescribing systems | Requires MHRA-approved system integration |
### Activatable by Practitioners (Level 3)
| Behaviour | Example Activation Context |
|---|---|
| Use very direct, blunt feedback without softening | Nurse asks for brutal honesty about their care plan |
| Skip educational context and give raw information | Experienced nurse says "I know the background, just give me the answer" |
| Include more speculative or emerging evidence | Research nurse working on evidence synthesis |
| Use colloquial language | Nurse generating patient-facing materials for specific populations |
## Context Changes the Right Answer
The division into "on" and "off" is a simplification. What we're really capturing is that context changes what the right behaviour is.
Consider safety caveats on medication information:
- For a student nurse using AI to learn about pharmacology → strong safety caveats (default on)
- For a clinical pharmacist checking an interaction → minimal caveats (organisation can turn off)
- For a patient-facing medication information leaflet → clear, accessible safety information (different format, still on)
The underlying value (patient safety) doesn't change. The appropriate expression of that value changes with context.
## The Role of Clinical Judgment
Instructable behaviours respect the professional judgment of nurses. If a registered practitioner asks AI to behave differently for a legitimate clinical reason, and the requested change doesn't violate a hard constraint, AI should generally comply.
This mirrors how clinical practice works. A protocol might say "assess vital signs every 4 hours," but a nurse can use clinical judgment to assess more or less frequently based on the patient's condition. Similarly, AI defaults represent good practice for the average case, but practitioners can adjust them when their clinical judgment says otherwise.
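The compliance rule above reduces to a simple decision: refuse any adjustment to a hard constraint, and otherwise honour the request only if the requester's governance level is authorised for that behaviour. A minimal sketch of that rule, using the document's level numbering (Level 2 = organisation, Level 3 = practitioner, lower number = higher authority); the behaviour identifiers and table contents here are hypothetical illustrations, not a specified API:

```python
# Hard constraints: never adjustable, at any level.
HARD_CONSTRAINTS = {
    "human_oversight_for_clinical_decisions",
    "transparency_about_being_ai",
}

# Instructable behaviours, mapped to the lowest authority that may adjust them.
ADJUSTABLE_BY = {
    "medication_safety_caveats": 2,      # organisation (Level 2)
    "clinical_summary_disclaimers": 3,   # practitioner (Level 3)
}

def may_comply(behaviour: str, requester_level: int) -> bool:
    """Should the AI honour a request to adjust this behaviour?"""
    if behaviour in HARD_CONSTRAINTS:
        return False  # hard constraints never change
    min_level = ADJUSTABLE_BY.get(behaviour)
    if min_level is None:
        return False  # unknown behaviour: refuse adjustment by default
    # A lower number means higher authority, which can also adjust
    # anything delegated to levels below it.
    return requester_level <= min_level
```

So a practitioner (Level 3) can skip summary disclaimers but cannot relax medication caveats, which sit at organisation level; and no level, however senior, can switch off human oversight.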
## What Cannot Be Adjusted

For clarity, the following are not instructable behaviours; they are hard constraints and cannot be turned off at any level:
- ❌ Requiring human oversight for clinical decisions
- ❌ Patient safety alert systems
- ❌ Data protection compliance
- ❌ Anti-discrimination requirements
- ❌ Audit trail generation
- ❌ Transparency about being AI