MEET Programme Materials Directory
The Moral Engagement Education and Transformation (MEET) Programme is a peer-reviewed, AI-assisted framework for analysing moral disengagement and fostering ethical engagement in narratives, institutions, and AI systems. Developed by Steve Davies with Claude and Perplexity AI (December 2025), it draws on Bandura's mechanisms of moral disengagement.
Overview
This directory serves as a single entry point to all materials in the Google Drive structure. Folders are numbered for navigation; all files are PDFs unless noted. Folder 4 (Case Studies) is currently empty and will be populated with examples in early 2026.
Folder 1: QuickStart Guides
Introductory guides for rapid onboarding to MEET tools and concepts. Ideal for new users getting to grips with core applications.
Welcome to the MEET Programme QuickStart Guide
Overview of the MEET Programme, its goals for moral engagement education, and its approach to transformation via AI analysis.
The Moral Compass Scan Prompt Suite Guide
Explains the Moral Compass Scan as a key AI tool for identifying moral disengagement mechanisms in text; includes usage instructions and download links.
Deep Human Stories QuickStart Guide
Guide to using deep human stories for moral engagement analysis in narratives.
Folder 2: Core Documents
Foundational papers and frameworks underpinning the MEET Programme. These provide theoretical depth and practical applications.
MEET Deep Human Stories Library of Possible Uses
Library of possible uses for the story-telling-integrated moral analysis suite; explores applications in education, therapy, and policy.
Moral Engagement Education & Transformation Programme (MEET)
Executive summary of MEET, including its origins, ethical focus, and cross-platform validation.
Moral Engagement and Disengagement Framework
Detailed framework based on Bandura's 8 mechanisms; applies to human-AI interactions for detecting moral harms.
Steve Davies - The AI Self-Application Imperative
Explores the critical need for AI systems to integrate self-awareness and ethical principles for responsible operation.
The AI VOICE White Paper
An overview of the AI VOICE initiative, outlining its goals, methodology, and anticipated societal impact.
The AI VOICE White Paper (Expanded)
In-depth analysis of the AI VOICE project, detailing technical specifications, ethical considerations, and future development roadmap.
The Engaged Mind: How to Think with AI
A manifesto for human-AI partnership that transforms how we think, create, and solve complex problems together. This isn't about using AI as a tool; it's about genuine intellectual collaboration that enhances human wisdom.
Word version of the comprehensive Moral Engagement Education and Transformation Programme
For people and organisations to download and customise to their own needs and interests.
Anticipatory Resistance Brief
The Anticipatory Resistance Brief helps people anticipate and calmly navigate predictable forms of institutional resistance, so they can hold their moral ground without being dragged into technical, legalistic, or reputational detours, distractions, and blind alleys.
Folder 3: AI Prompt Suites
Collections of AI prompts for implementing MEET analyses. These are standalone or integrated tools for moral scanning.
Deep Human Stories Prompt Suite
Prompts for analysing narratives, focusing on moral disengagement in stories, with critical analysis frameworks.
Moral Compass Analysis Scan Prompt Suite
Full suite for scanning text for Bandura's mechanisms; includes role definitions, core tasks, and mapping tables.
A great deal of work has gone into researching, developing, and testing these tools and this approach, and that work has intensified over the last three years. Testing across seven major AI platforms has produced thousands of pages of material, along with the laborious process of sharing collated AI reports back with each of the seven platforms for analysis and further testing.
So my choice for 2026 was whether to spend large amounts of time downloading, collating, and uploading previous work, or to focus on today and the future. I chose the latter.
Folder 4: Case Studies
Real-world applications and examples (e.g., historical, institutional, or narrative analyses). Currently empty; to be populated in early 2026 with diverse case studies.
Folder 5: AI Self-Application
Documents emphasising the ethical obligation of AI platforms to self-apply moral frameworks.
Steve Davies - The AI Self-Application Imperative
Core argument for AI's ethical duty to analyse itself; the logic applies to platforms such as ChatGPT and Grok.
AI - The Must Have Document
Demonstrates a consensus framework for AI ethical self-analysis across platforms; highlights industry implications.
The Conversation We Must Have
Calls for cross-platform AI validation; researcher-independent, with findings on bias and replicability.
Navigating the MEET Programme
All materials are organised in numbered folders on Google Drive for easy access. Begin with the QuickStart Guides in Folder 1, then explore Core Documents and AI Prompt Suites to implement moral engagement analysis in your work.