
At the 2025 Hay Festival, a panel of thinkers, technologists, and commentators gathered to explore one of the most urgent questions of our time: what kind of future are we building with artificial intelligence? Titled “AI Moral Maze”, the discussion navigated the deep currents of promise, peril, and philosophical tension that define our current relationship with AI.
The conversation ranged widely—from the tangible benefits of AI in healthcare and education to the more troubling implications of misinformation, surveillance, and the erosion of human creativity and accountability. Drawing on insights from myth, mathematics, politics, and lived experience, the panel attempted not to arrive at a final verdict but to illuminate the complex landscape we now inhabit.
What follows is a structured summary of the major themes, insights, and arguments raised in the debate—a mosaic of optimism, warning, and reflection that captures the moral and societal crossroads at which we stand.
Witnesses:
Dr Kaitlyn Regehr, author of Smartphone Nation: Why We’re All Addicted to Our Screens and What You and Your Family Can Do About It
Marcus du Sautoy, author, mathematician and Simonyi Professor for the Public Understanding of Science at the University of Oxford.
Dorian Lynskey
Sir Nigel Shadbolt, long-term AI researcher, Professor of Computer Science at the University of Oxford and government advisor.
Panellists:
Anne McElvoy
James Orr
Mona Siddiqui
Matthew Taylor

1. PROMISE & POTENTIAL OF AI
- AI offers great promise: enhanced efficiency, breakthroughs in medicine, and personalised services.
- AI can enhance creativity: jazz musicians and other creatives use AI feedback for new patterns.
- AI as collaborator, not competitor—helps creatives break repetitive habits.
- AI used in mental health is proving effective for neurodiverse users.
- Examples like chess: AI didn’t kill it but expanded human strategy and engagement.
- AI must be reliable to gain user trust—market forces will push for this.
- AI can hold politicians accountable by verifying claims—democratic potential.
- AI is already writing introductions and performing human tasks in seconds.
- Matthew Taylor: AI enhances patient-doctor interaction in healthcare through ambient AI.
- Marcus du Sautoy: AI can go beyond pastiche—AlphaGo’s novel move as proof.
2. HUMAN IDENTITY & MEANING
- Shortcuts offered by AI may erode the value of human struggle and learning.
- Youth are losing confidence in human communication, turning to AI to write messages.
- Call for valuing humanness alongside machines and teaching communication skills.
- Creativity is rooted in human frailty, vulnerability, and desire—AI lacks these.
- Mona Siddiqui: Cautiously optimistic; warns we’re not paying enough attention to the human in AI.
3. DANGERS & MISUSE OF AI
- Concerns exist about AI replacing objective truth with algorithmic versions of reality.
- Dorian Lynskey: AI floods the world with confident misinformation—not always malicious, just prolific.
- AI has no mechanism for quality control—people don’t always fact-check sources.
- False citations and hallucinated book titles show how unreliable AI can be.
- Disinformation thrives in the current digital environment due to profit-driven algorithms.
- AI often prioritises hate/disinformation due to algorithmic design (Kaitlyn Regehr).
- AI already used in warfare—ethical concerns about human retreat from responsibility.
- AI’s misuse could be mitigated through smarter, decentralised regulation—no need for central gatekeeping.
4. POWER, ETHICS & REGULATION
- Mass job displacement is feared as machines outperform humans.
- Power and wealth may concentrate in the hands of a few AI developers or corporations.
- AI should be regulated as media companies are; social media has been classified too loosely.
- Free speech cited by tech companies to avoid responsibility for harmful content.
- Prof. Nigel Shadbolt: Existential AI fears are less pressing than human misuse.
- AI has no legal/moral agency; humans deploying it must remain accountable.
- Chain of accountability needs clarity—who trained, who deployed, who oversaw?
- Wider democratic participation is needed in shaping AI regulation, not just tech elites.
- The UK has not signed certain AI treaties—AI sovereignty is an emerging issue.
5. EDUCATION & YOUTH
- Universities witnessing AI-written emails and assignments from students—impacting assessment.
- Return to handwritten exams suggested to counter AI misuse in education.
- Digital literacy must be broadly taught—AI is a tool, but not error-free.
6. BOUNDARIES & LIMITATIONS
- There are areas where AI use should be limited—e.g., legal sentencing, end-of-life care.
- No effective communication pathways between governments and tech companies about self-harm/suicide risks.
- AI has no legal/moral agency—humans must retain responsibility.
7. CULTURAL & MYTHOLOGICAL FRAMING
- The Talos myth was invoked as a warning against the dangers of unchecked invention.
- Final panel remarks: AI is a powerful tool reflecting human capacity and flaws; the true risk lies not in AI itself but in the humans who build and control it.
