# Institutional AI Literacy Framework
Original work: "Educators' guide to multimodal learning and Generative AI" by Tünde Varga-Atkins, Samuel Saunders, et al. (2024/25), licensed CC BY-NC 4.0
Adapted for UK Nursing Education by: Lincoln Gombedza, RN (LD)
Last Updated: December 2025
## What Is This Document For?
This framework is a blueprint for university leaders who want to integrate AI across their nursing programmes in a sustainable, ethical, and NMC-aligned way.
If you are:
- Senior Leadership: Use this to align institutional strategy with AI innovation
- Programme Leaders: Use this to design governance structures and policies
- Practice Partners: Use this to align placement policies with university standards
If you're overwhelmed, start with the Policy Framework section and the Risk Management checklist.
## Framework Overview
This framework is organised around five pillars:
| Pillar | What It Covers | Key Stakeholders |
|---|---|---|
| 1. Governance | Steering groups, working groups, accountability | Senior Leadership |
| 2. Policy | Acceptable use, academic integrity, data protection | All Staff, Students |
| 3. Infrastructure | AI tool access, IT support, learning resources | IT Services, Library |
| 4. Staff Development | Training pathways, communities of practice | Academic Staff |
| 5. Quality Assurance | Monitoring, evaluation, continuous improvement | QA Teams, External Examiners |
## 1. Governance Structure
### AI Literacy Steering Group
Purpose: Oversee institutional AI strategy and ensure alignment with NMC standards.
Composition:
- Senior leadership (Chair)
- Programme leaders
- Student representatives
- Practice partners
- IT & Library services
Responsibilities:
- Policy development
- Resource allocation
- Quality monitoring
- Risk management
Meeting Frequency: Quarterly with annual review
### Working Groups
**Curriculum Development Group**
- Design AI-integrated curriculum
- Create learning resources
- Develop assessments
- Share best practices
**Staff Development Group**
- Training programmes
- Support networks
- Research opportunities
- Innovation projects
**Quality Assurance Group**
- Monitor standards
- Evaluate outcomes
- Address issues
- Continuous improvement
## 2. Policy Framework
### Core Principles
The institutional AI policy must be built on five core principles:
- Student-Centred: AI enhances learning, not replaces critical thinking
- Ethical: Responsible and transparent use
- Evidence-Based: Informed by research and best practice
- Inclusive: Equitable access for all students
- Sustainable: Environmentally and financially conscious
### Key Policy Components
| Component | What It Includes |
|---|---|
| Acceptable Use | What students MAY and MUST NOT use AI for |
| Data Protection | GDPR compliance, patient confidentiality rules |
| Academic Integrity | Disclosure requirements, consequences for misuse |
| Assessment Regulations | AI-resilient assessment design guidance |
| Support Provisions | Where students go for help |
### Implementation Guidance
**For Students**
- Clear Expectations: What's allowed vs. prohibited
- Disclosure Requirements: How to cite AI use
- Support Resources: Where to get help
- Consequences: What happens if policies are violated
- Appeals Process: How to challenge decisions
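As one illustration of what a disclosure requirement might ask for, a module handbook could provide a short declaration template such as the following. The wording and fields are hypothetical examples for discussion, not NMC or institutional policy:

```text
AI Use Declaration
Tool and version:  [e.g. ChatGPT (GPT-4), accessed 12 May 2025]
How I used it:     [e.g. brainstorming an essay outline; checking grammar]
My own work:       [e.g. all clinical reasoning, evidence selection, final wording]
```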
**For Staff**
- Teaching Guidance: How to integrate AI into lessons
- Assessment Design: Creating AI-resilient assessments
- Tool Recommendations: Which AI tools are approved
- Support Access: Where staff go for training
- Professional Development: Ongoing learning opportunities
**For Practice Partners**
- Placement Policies: AI use in clinical settings
- Mentor Guidance: Supporting students using AI
- Assessment Alignment: Ensuring consistency
- Communication Protocols: Who to contact with concerns
## 3. Infrastructure & Resources
### Technology Access
Essential Tools:
- Institutional subscriptions (ChatGPT, Claude, etc.)
- Specialised nursing AI tools
- Virtual learning environment (VLE) integration (e.g. Moodle, Blackboard)
- Mobile accessibility
- 24/7 technical support
### Learning Resources
Digital Library:
- AI literacy guides
- Video tutorials
- Case studies
- Assessment exemplars
- Research papers
Physical Support:
- Drop-in workshops
- One-to-one support
- Peer mentoring
- Practice labs
## 4. Staff Development Programme
### Training Pathway
Foundation Level (All Staff):
- AI basics and terminology
- Institutional policies
- Ethical considerations
Intermediate Level (Teaching Staff):
- Pedagogical integration
- Assessment design
- Student support strategies
Advanced Level (Leaders/Innovators):
- Strategic planning
- Research methods
- Policy development
### Support Mechanisms
- Communities of Practice: Regular meetings to share innovation
- Mentoring: Peer-to-peer support
- Research Opportunities: Scholarships, publications, conference funding
## 5. Quality Assurance Framework
### Key Metrics
| Metric | Data Source | Frequency |
|---|---|---|
| Student AI literacy levels | Student surveys | Annual |
| Staff confidence | Staff feedback | Twice yearly |
| Assessment outcomes | Module analytics | Per semester |
| Academic integrity incidents | Registry data | Ongoing |
| Stakeholder satisfaction | External examiner reports | Annual |
### Continuous Improvement Cycle
1. Collect data: surveys, analytics, feedback
2. Analyse: identify trends, gaps, successes
3. Report: quarterly dashboards, annual reports
4. Adjust: policy updates, resource reallocation
5. Repeat: the cycle runs continuously
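As a loose illustration, the collect-analyse-report steps can be sketched as a small reporting pipeline. The survey scale, metric name, and action threshold below are hypothetical examples, not part of the framework:

```python
# Illustrative sketch of one pass through the improvement cycle.
# Assumes a 1-5 Likert survey and a threshold of 3.0 for "action needed";
# both are invented examples for demonstration only.
from statistics import mean

def analyse(responses):
    """Summarise survey scores and flag averages below the threshold."""
    avg = round(mean(responses), 2)
    return {"average": avg, "flag": avg < 3.0}

def report(metric_name, summary):
    """Format a one-line entry for a quarterly dashboard."""
    status = "ACTION NEEDED" if summary["flag"] else "on track"
    return f"{metric_name}: {summary['average']}/5 ({status})"

# Example quarter: collected (made-up) student survey data
scores = [4, 3, 5, 2, 4, 3]
print(report("Student AI literacy", analyse(scores)))
# prints "Student AI literacy: 3.5/5 (on track)"
```

A real pipeline would draw from the data sources in the metrics table above (surveys, module analytics, registry data) rather than an in-memory list.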
## Risk Management
### Identified Risks & Mitigation Strategies
**Academic Integrity Risks**
Risks:
- Undisclosed AI use
- Over-reliance on AI
- Plagiarism
Mitigation:
- Clear policies and education
- AI-resilient assessment design (vivas, process-based tasks)
- Ethical use of detection tools (not as sole evidence)
- Student support and training
**Equity & Access Risks**
Risks:
- Digital divide (students without laptops)
- Unequal resources
- Disability barriers
Mitigation:
- Institutional subscriptions (free for all students)
- Device loan schemes
- Accessibility features
- Alternative provisions
**Data Privacy Risks**
Risks:
- Patient confidentiality breaches
- GDPR violations
- Data security lapses
Mitigation:
- Clear "Never Input Patient Data" guidelines
- Mandatory training
- Monitoring and audits
- Incident response plan
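The "Never Input Patient Data" guideline can be partially automated by screening prompts before they are sent to an AI tool. A minimal sketch, assuming a pre-submission check; the two patterns are illustrative only and nowhere near exhaustive (a real screen would also cover names, addresses, and local identifiers, and would be policy-reviewed):

```python
import re

# Hypothetical patterns: anything resembling an NHS number (10 digits,
# commonly written as 3-3-4 groups) or a dd/mm/yyyy date of birth.
PATTERNS = {
    "possible NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "possible date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return labels for any patterns that look like patient data."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

warnings = screen_prompt("Summarise care plan for patient 943 476 5919")
# warnings -> ["possible NHS number"]
```

A screen like this supports the monitoring-and-audit mitigation but cannot replace training: pattern matching misses free-text identifiers such as names and case details.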
## Stakeholder Engagement
### Student Partnership
How Students Are Involved:
- Policy co-creation
- Resource development
- Feedback mechanisms
- Innovation projects
Communication Channels:
- Regular town halls
- Student representatives on steering group
- Digital feedback forms
### Practice Partner Collaboration
Engagement Activities:
- Joint policy development
- Mentor training
- Best practice sharing
- Quality assurance reviews
### Professional Body Liaison
The NMC is currently reviewing its Code with a view to integrating AI standards. Consultation is expected in Q3-Q4 2026, with publication anticipated in October 2027; institutions should participate actively in the consultation process.
Key Relationships:
- NMC: Standards alignment, inspection preparation
- RCN: Professional development, research partnerships
## Sustainability
### Environmental Considerations
Green AI Practices:
- Use energy-efficient AI tools
- Sustainable procurement policies
- Carbon offsetting where necessary
- Educate students on AI's environmental impact
### Financial Sustainability
Funding Model:
- Institutional investment (base funding)
- External grants (innovation projects)
- Cost-benefit analysis (ROI tracking)
- Long-term financial planning
## Implementation Checklist
Use this to track your institutional readiness:
Congratulations! You've completed the AI Literacy section. Continue exploring other sections of the toolkit.