
Institutional AI Literacy Framework

Attribution

Original work: "Educators' guide to multimodal learning and Generative AI" by Tünde Varga-Atkins, Samuel Saunders, et al. (2024/25), CC BY-NC 4.0
Adapted for UK Nursing Education by: Lincoln Gombedza, RN (LD)
Last Updated: December 2025

🎯 What is This Document For?

This framework is a blueprint for university leaders who want to integrate AI across their nursing programmes in a sustainable, ethical, and NMC-aligned way.

If you are:

  • πŸ›οΈ Senior Leadership: Use this to align institutional strategy with AI innovation
  • πŸ‘¨β€πŸ« Programme Leaders: Use this to design governance structures and policies
  • 🀝 Practice Partners: Use this to align placement policies with university standards
Quick Start

If you're overwhelmed, start with the Policy Framework section and the Risk Management checklist.


📊 Framework Overview

This framework is organised around five pillars:

| Pillar | What It Covers | Key Stakeholders |
| --- | --- | --- |
| 1. Governance | Steering groups, working groups, accountability | Senior Leadership |
| 2. Policy | Acceptable use, academic integrity, data protection | All Staff, Students |
| 3. Infrastructure | AI tool access, IT support, learning resources | IT Services, Library |
| 4. Staff Development | Training pathways, communities of practice | Academic Staff |
| 5. Quality Assurance | Monitoring, evaluation, continuous improvement | QA Teams, External Examiners |

1️⃣ Governance Structure

AI Literacy Steering Group

Purpose: Oversee institutional AI strategy and ensure alignment with NMC standards.

Composition:

  • Senior leadership (Chair)
  • Programme leaders
  • Student representatives
  • Practice partners
  • IT & Library services

Responsibilities:

  • 📜 Policy development
  • 💰 Resource allocation
  • 📈 Quality monitoring
  • ⚠️ Risk management

Meeting Frequency: Quarterly with annual review

Working Groups

πŸ› οΈ Curriculum Development Group
  • Design AI-integrated curriculum
  • Create learning resources
  • Develop assessments
  • Share best practices
πŸ‘₯ Staff Development Group
  • Training programmes
  • Support networks
  • Research opportunities
  • Innovation projects
βœ… Quality Assurance Group
  • Monitor standards
  • Evaluate outcomes
  • Address issues
  • Continuous improvement

2️⃣ Policy Framework

Core Principles

The institutional AI policy must be built on these five core principles:

  1. Student-Centred: AI enhances learning rather than replacing critical thinking
  2. Ethical: Responsible and transparent use
  3. Evidence-Based: Informed by research and best practice
  4. Inclusive: Equitable access for all students
  5. Sustainable: Environmentally and financially conscious

Key Policy Components

| Component | What It Includes |
| --- | --- |
| Acceptable Use | What students MAY and MUST NOT use AI for |
| Data Protection | GDPR compliance, patient confidentiality rules |
| Academic Integrity | Disclosure requirements, consequences for misuse |
| Assessment Regulations | AI-resilient assessment design guidance |
| Support Provisions | Where students go for help |

Implementation Guidance

📚 For Students
  • Clear Expectations: What's allowed vs. prohibited
  • Disclosure Requirements: How to cite AI use
  • Support Resources: Where to get help
  • Consequences: What happens if policies are violated
  • Appeals Process: How to challenge decisions
🎓 For Staff
  • Teaching Guidance: How to integrate AI into lessons
  • Assessment Design: Creating AI-resilient assessments
  • Tool Recommendations: Which AI tools are approved
  • Support Access: Where staff go for training
  • Professional Development: Ongoing learning opportunities
πŸ₯ For Practice Partners
  • Placement Policies: AI use in clinical settings
  • Mentor Guidance: Supporting students using AI
  • Assessment Alignment: Ensuring consistency
  • Communication Protocols: Who to contact with concerns

3️⃣ Infrastructure & Resources

Technology Access

Essential Tools:

  • Institutional subscriptions (ChatGPT, Claude, etc.)
  • Specialized nursing AI tools
  • VLE integration (Moodle/Blackboard)
  • Mobile accessibility
  • 24/7 technical support

Learning Resources

Digital Library:

  • 📖 AI literacy guides
  • 🎥 Video tutorials
  • 📊 Case studies
  • 📝 Assessment exemplars
  • 🔬 Research papers

Physical Support:

  • πŸ› οΈ Drop-in workshops
  • πŸ‘€ One-to-one support
  • πŸ‘« Peer mentoring
  • πŸ’» Practice labs

4️⃣ Staff Development Programme

Training Pathway

Foundation Level (All Staff):

  • AI basics and terminology
  • Institutional policies
  • Ethical considerations

Intermediate Level (Teaching Staff):

  • Pedagogical integration
  • Assessment design
  • Student support strategies

Advanced Level (Leaders/Innovators):

  • Strategic planning
  • Research methods
  • Policy development

Support Mechanisms

  • Communities of Practice: Regular meetings to share innovation
  • Mentoring: Peer-to-peer support
  • Research Opportunities: Scholarships, publications, conference funding

5️⃣ Quality Assurance Framework

Key Metrics

| Metric | Data Source | Frequency |
| --- | --- | --- |
| Student AI literacy levels | Student surveys | Annual |
| Staff confidence | Staff feedback | Twice yearly |
| Assessment outcomes | Module analytics | Per semester |
| Academic integrity incidents | Registry data | Ongoing |
| Stakeholder satisfaction | External examiner reports | Annual |

Continuous Improvement Cycle

  1. Collect Data → Surveys, analytics, feedback
  2. Analyse → Identify trends, gaps, successes
  3. Report → Quarterly dashboards, annual reports
  4. Adjust → Policy updates, resource reallocation
  5. Repeat → Ongoing cycle
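The Collect, Analyse, and Report steps lend themselves to lightweight tooling. As a purely illustrative sketch (the framework does not prescribe any implementation, and the function and field names here are hypothetical), a QA team might aggregate Likert-scale survey responses and flag metrics that miss an agreed target for the Adjust step:

```python
from statistics import mean

def summarise_metric(metric_name, scores, target):
    """Aggregate raw survey scores (1-5 Likert scale) into one dashboard entry.

    Flags the metric for the 'Adjust' step of the improvement cycle when
    the average falls below the agreed target. Hypothetical example only.
    """
    avg = round(mean(scores), 2)
    return {
        "metric": metric_name,
        "average": avg,
        "target": target,
        "action_needed": avg < target,  # triggers policy/resource review
    }

# Example with invented survey data: average 3.6 misses the 4.0 target,
# so the metric is flagged for review.
entry = summarise_metric("Student AI literacy", [4, 3, 5, 4, 2], target=4.0)
print(entry)
```

In practice the raw scores would come from the survey platform's export rather than a hard-coded list, and the flagged entries would feed the quarterly dashboards described above.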

⚠️ Risk Management

Identified Risks & Mitigation Strategies

🚨 Academic Integrity Risks

Risks:

  • Undisclosed AI use
  • Over-reliance on AI
  • Plagiarism

Mitigation:

  • ✅ Clear policies and education
  • ✅ AI-resilient assessment design (vivas, process-based tasks)
  • ✅ Ethical use of detection tools (not as sole evidence)
  • ✅ Student support and training
⚖️ Equity & Access Risks

Risks:

  • Digital divide (students without laptops)
  • Unequal resources
  • Disability barriers

Mitigation:

  • ✅ Institutional subscriptions (free for all students)
  • ✅ Device loan schemes
  • ✅ Accessibility features
  • ✅ Alternative provisions
🔒 Data Privacy Risks

Risks:

  • Patient confidentiality breaches
  • GDPR violations
  • Data security

Mitigation:

  • ✅ Clear "Never Input Patient Data" guidelines
  • ✅ Mandatory training
  • ✅ Monitoring and audits
  • ✅ Incident response plan
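The "Never Input Patient Data" rule can be reinforced with a simple screen before any text reaches an external AI tool. The sketch below is illustrative only: the patterns are hypothetical, it does not validate NHS number check digits, and a real safeguard would require proper de-identification tooling, information governance sign-off, and clinical review rather than a regex.

```python
import re

# Illustrative identifier patterns only -- not an exhaustive or
# clinically validated list.
PATTERNS = {
    "NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "Date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def flag_patient_data(text):
    """Return labels of any patient-identifier patterns found in text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

# A prompt containing identifiers is flagged before submission...
print(flag_patient_data("Patient 943 476 5919, DOB 01/02/1990"))
# ...while a de-identified study prompt passes cleanly.
print(flag_patient_data("Summarise pressure ulcer grading for revision"))
```

A screen like this supports the monitoring-and-audit mitigation above, but it complements rather than replaces mandatory training: students still need to understand why patient data must never be entered.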

🤝 Stakeholder Engagement

Student Partnership

How Students Are Involved:

  • πŸ“ Policy co-creation
  • πŸ› οΈ Resource development
  • πŸ’¬ Feedback mechanisms
  • πŸš€ Innovation projects

Communication Channels:

  • Regular town halls
  • Student representatives on steering group
  • Digital feedback forms

Practice Partner Collaboration

Engagement Activities:

  • Joint policy development
  • Mentor training
  • Best practice sharing
  • Quality assurance reviews

Professional Body Liaison

NMC AI Standards Timeline

The NMC is currently reviewing its Code to integrate AI standards. Consultation expected Q3-Q4 2026, with publication in October 2027. Institutions should actively participate in the consultation process.

Key Relationships:

  • πŸ›οΈ NMC: Standards alignment, inspection preparation
  • 🩺 RCN: Professional development, research partnerships

🌱 Sustainability

Environmental Considerations

Green AI Practices:

  • Use energy-efficient AI tools
  • Sustainable procurement policies
  • Carbon offsetting where necessary
  • Educate students on AI's environmental impact

Financial Sustainability

Funding Model:

  • Institutional investment (base funding)
  • External grants (innovation projects)
  • Cost-benefit analysis (ROI tracking)
  • Long-term financial planning

✅ Implementation Checklist

Use this to track your institutional readiness against the five pillars:

  ☐ Governance: AI Literacy Steering Group established, with working groups meeting quarterly
  ☐ Policy: acceptable use, data protection, and academic integrity policies published for staff, students, and practice partners
  ☐ Infrastructure: approved AI tools, VLE integration, and learning resources accessible to all students
  ☐ Staff Development: Foundation, Intermediate, and Advanced training pathways launched
  ☐ Quality Assurance: key metrics defined and a continuous improvement cycle in place

Congratulations! You've completed the AI Literacy section. Continue exploring other sections of the toolkit.