Do we even need to? After all, children already ask Google Assistant about the weather and turn to ChatGPT for help with homework, so they can clearly use “smart” technologies. But do they understand that these technologies can also make mistakes? Without a critical mindset, they may take every word from an AI as the truth. And that’s a direct path to treating machines as authorities or even friends. So the key question isn’t whether we should teach kids about AI, but when and how to do it safely.
Studies from recent years show that children can surprisingly easily grasp the basic principles of how artificial intelligence works. In 2023, a team from The Education University of Hong Kong conducted an experiment with a group of five-year-olds. For six weeks, the children trained a simple program: they taught it to tell fish from trash, drew objects for it to recognize, and sorted pictures into the right categories. Through these exercises, they discovered that AI learns from examples and patterns—but it can also make mistakes, especially when it encounters something new.
The result? The five-year-olds not only understood the basic mechanisms behind algorithms but also realized that AI, just like them, has to learn. The experiment also highlights the difference between how adults and children approach AI. We adults often meet it with suspicion, worrying about what it might take away from us. For children, it’s simply another game: something to test, explore, and play with fearlessly. They aren’t anxious about losing their jobs, and their minds aren’t yet confined by rigid thinking patterns. They learn fast because their brains are still developing and naturally curious. Talking with AI feels as natural to them as playing pretend with dolls or teddy bears. They aren’t embarrassed to ask questions, even ones adults might consider “silly.” And it’s precisely that openness that helps children grasp the fundamentals of artificial intelligence faster than many adults ever could.