Are we preparing children for AI or protecting them from it?
In my previous article in this series, I described how shocked I was that many Year 6 children were scared about Artificial Intelligence and their futures.
It raises an uncomfortable question for us as educators: what are we doing about this?
Artificial intelligence is arriving in children’s lives at remarkable speed. It is already shaping how we search for information, write, design images and interact with technology.
Yet the education system moves at a very different pace, and many children encounter AI for the first time not in school, but through the digital world around them.
The real question is whether current approaches are preparing children for AI - or primarily shielding them from it.
Policy may be moving - but classrooms move even slower
The UK government has taken a relatively forward-looking position on artificial intelligence – with national strategies and wider AI policy papers all describing a reshaped society and future workforce.
Recent government guidance has also begun addressing generative AI in schools, encouraging teachers to explore the technology responsibly while managing risks around safety, data and accuracy.
But recognising a challenge and preparing children for it are not the same thing, and translating policy into everyday classroom practice takes time.
Teachers need training. Schools need clear and specific guidance. Resources need developing.
And the recent closure of Regional Computing Hubs - before a clear replacement system has fully emerged - risks creating uncertainty about how teachers, especially those in primary education, will be supported.
Education systems will always, understandably, move carefully.
The difficulty is that AI development does not.
Children are already living in an AI world
Whether schools address it or not, children are already encountering artificial intelligence every single day.
Recommendation algorithms shape what they see on platforms like YouTube, TikTok and Netflix. AI-generated images and videos are becoming common online, and even simple Google searches now often return AI-generated responses.
According to a 2024 Ofcom report, around 72% of children aged 8–15 use generative AI tools in some form, even if only occasionally.
2024 … In the fast-moving world of AI, that already feels dated, but one thing is clear: children are encountering AI faster than education systems are adapting to it.
And access is also not equal. While some children are experimenting with AI tools at home, others encounter them only indirectly through the algorithms shaping their lives. If schools step back from the conversation entirely, the gap between those who understand these systems and those who simply experience their effects may widen.
In other words, the digital divide - which became so apparent during Covid - may increasingly become an AI literacy divide.
A familiar pattern
Society often takes time to adapt to new technologies.
The debates around smartphones and social media provide one example. Concerns about safety, age limits and appropriate use took years to work through, while young people were already growing up with these technologies in their lives.
Artificial intelligence may follow a similar pattern - not because it is the same type of technology - but because rapid innovation often moves faster than policy, education systems and public understanding.
The risk now is not that children use AI too early.
The risk is that they encounter it without fully understanding how it works - or when its outputs should be questioned.
The safety dilemma
In recent discussions, I have found that much of the current conversation around AI in primary schools focuses on risk.
That is entirely understandable. Schools must consider safeguarding, data privacy, misinformation, and appropriate use whenever introducing any new technology. Most generative AI platforms also have minimum age restrictions, often set at 13 or above.
These are legitimate concerns and they reinforce a natural instinct within education to approach any technology cautiously.
But this creates a paradox.
Primary age children may be too young to use these systems independently — but they are not too young to be influenced by them.
Surely this creates an even greater responsibility for schools to help children understand how AI works? Otherwise, children may encounter powerful digital systems without the knowledge needed to question them.
Protection alone - a ‘just say no’ approach - may not be enough.
Is the curriculum appropriate?
The current Computing Curriculum, introduced back in 2014, does in fact already contain many of the foundations needed to understand AI.
Children are required to learn about:
• algorithms
• data
• programming
• computational thinking
These ideas underpin modern artificial intelligence and provide strong foundations.
Sadly, in practice, computing does not always receive the time or attention it deserves.
In many primary schools, curriculum time is dominated by core subjects such as English and mathematics - particularly in Year 6, where preparation for SATs takes priority.
Computing is sometimes squeezed into limited time slots, and teacher confidence in technical topics can vary widely.
Yet teaching the principles of computing remains essential, because the world children are entering is increasingly shaped by technologies that apply those principles at enormous scale - often invisibly.
That raises another important question:
Is teaching the foundations of computing alone enough to prepare children for an AI-driven world?
Or should artificial intelligence itself begin to appear more explicitly in the conversation?
AI across the curriculum
Artificial intelligence does not have to be confined to computing lessons.
Used carefully, it can also support learning in other subjects.
In English, for example, AI image tools can help pupils visualise settings or characters and inspire descriptive writing.
A recent workshop took me back to my classroom teacher days, when we used a ‘vocab bank’ on the board to aid image prompts and iterations. The marked difference, compared to my memories, was the excitement the children had in creating it!
Children quickly realise that vague descriptions produce vague results, while precise language produces clearer, and more exciting, imagery.
Rather than replacing writing (which some children have told me they believe AI will do), the technology reinforces the importance of expressing ideas clearly and precisely.
Used thoughtfully, AI can become both a tool for learning and a subject to understand.
Protection or preparation?
Education systems must always prioritise safeguarding and responsibility.
Children do not need unrestricted access to artificial intelligence tools. But they do need understanding. And they need this understanding quickly!
They need opportunities to ask questions such as:
• What is artificial intelligence?
• How does it learn from data?
• Can answers be biased?
• When should we trust it — and when should we question it?
These are not just computer science questions.
They are questions about how the modern digital world works. Increasingly, questions about how our world works.
Because the generation now sitting in primary classrooms will not simply use AI – they will live alongside it. And the difference between how comfortably they sit side-by-side – with confidence or fear; as the architects of the future, or merely its subjects – will come down to a simple question …
Did we help them understand the technology shaping their world?
In the final part of this series, I will explore the role parents can play in helping children navigate an AI-driven world and how we can support them to approach that future with curiosity, and perhaps even excitement, rather than fear.