Artificial intelligence is reshaping how children learn, connect and experience the world — bringing exciting opportunities, but also complex new safeguarding risks. Deepfakes, AI-generated abuse material, chatbot grooming, misinformation and data misuse are emerging challenges that every school must now understand and prepare for.
This timely one-day conference will bring together leading experts to explore how schools can respond confidently and ethically to the impact of AI. Delegates will gain practical guidance on the latest statutory frameworks — including the Online Safety Act 2023 and KCSIE 2025 — and hear from specialists on responding to deepfakes, child-on-child abuse involving AI, and safeguarding in an AI-driven world.
Through keynote presentations and discussion, you’ll learn how to embed AI awareness into school culture, strengthen leadership and governance, and equip staff and pupils to navigate digital risks safely.
Join us to stay ahead of the curve — and ensure your safeguarding strategy, policies and practice are ready for the challenges of the AI era.
By the end of the conference, you will be able to:
Understand how artificial intelligence is reshaping children’s online experiences, behaviours, and vulnerabilities.
Recognise emerging safeguarding risks linked to AI, including deepfakes, chatbot grooming, data misuse, and child-on-child abuse.
Identify the implications of the Online Safety Act 2023, Data Protection Act 2018, and KCSIE 2025 for AI use in schools.
Develop practical strategies to detect, respond to, and record AI-related safeguarding incidents.
Strengthen governance, policy, and leadership approaches to ensure safe, ethical use of AI across teaching, learning, and administration.
Promote digital literacy and resilience among pupils, staff, and parents to reduce vulnerability to AI-enabled harm.
Embed a proactive, whole-school culture of safeguarding by design — integrating AI awareness into PSHE, RSE, and staff training programmes.