#40. BEYOND THE ZOMBIE APOCALYPSE: THREE MORE AI EXPERIMENTS



Last November, I wrote about what happened when I put a small group of Grade 6 boys in a room with an AI voice simulation and told them zombies were coming. They barricaded the door. They asked about supplies - food, water, and AK-47s (naturally). They knew it was a simulation, a game. They laughed a lot. And, as ESL students, they forgot they were speaking English. They proved that a machine could bypass the "translation freeze" that keeps Hong Kong students locked in Cantonese during stress. That post — How AI Voice Mode Simulations Can Turn English Lessons into Real-Time Language Challenges — covered the simulation activities in detail.

But that was only part of the story.

The Adventures in AI group at PBPS has run for the full 2025-26 school year. Forty Grade 4-6 students. Weekly sessions with small groups of 4-6 students. And while the AI voice-mode simulations such as Zombie Apocalypse and Tutor Susan were the headline-grabbers, the quieter experiments taught me more about what happens when you hand young learners an unfamiliar tool and see what they do with it.

Here is what happened in the other half of the year.


EXPERIMENT 1: THE GEMINI STORYBOOK PROJECT





What I planned: Students would use Google Gemini's Storybook Experiment to generate illustrated stories. I expected a fun, creative activity — a bit of English practice wrapped in novelty, with a focus on using voice and text to prompt and iterate.

What happened: I first gave the task to a group of four boys from a non-elite Grade 5 class, who had told me outright that they did not enjoy reading and would rather play games. I was not sure how they would react to being told they could use AI to create a book they actually wanted to read.

In short, they collaborated with an enthusiasm I had not anticipated. They brainstormed ideas in English, argued over plot points, negotiated prompts, reviewed the AI's output, and iterated until the illustrations matched what they had in their heads. They used every productive and receptive skill they had — speaking, listening, reading, writing — and they did it without prompting from me. I had given them a tool and a loose brief. They supplied the drive.

I did not expect students who professed a dislike for reading to spend forty-five minutes debating such book-creation details as dinosaurs invading supermarkets and what kind of jet a group of superheroes would plausibly travel to Hawaii on. But they did. And they made their fully illustrated audiobook with the help of AI.


EXPERIMENT 2: THE AI ETHICS TRIBUNAL




What I planned: A structured debate on whether AI should, could, and will replace artists, doctors, or teachers. I expected tentative opinions, some polite disagreement, maybe a few blank stares from the participating Grade 5 and 6 students. Since every one of these activities was an experiment in itself, I had no further specific goals for this set of sessions.

What happened: The students arrived with strident, fully formed views. They already knew AI was out there. They were already curious about it. And they already had opinions on what it should and should not be allowed to do. One student argued (in simple language) that AI art was theft because the machine had not suffered for its creativity. Another said AI doctors would be fine as long as there was still a human to blame when something went wrong.

This surprised me. I had assumed I would be introducing them to the concept of AI ethics. Instead, I discovered they were already living inside the question. This experiment further persuaded me that educators need to keep pace with what is happening in wider society, and to make sure that what young learners do and how they learn in the classroom mirrors what they see when they step outside of school.

The AI Ethics Tribunal also proved something important: putting AI in front of young learners is not the anathema many educators and parents believe it to be. Yes, there are good reasons to keep primary students away from unrestricted AI access. But done carefully, with a teacher leading, framing, and catching them when they go too far, it is a worthwhile pursuit. The students are already thinking about this stuff. The only question is whether we can help them think better.


EXPERIMENT 3: IMAGE AND VIDEO GENERATION



What I planned: An introduction to how AI creates visuals. Students would type prompts, see results, laugh at the weird outputs. Light relief. Silly fun. As these same students will no doubt need to make their own presentations for a variety of school projects in the future, this seemed like a gentle, useful introduction to AI's power to help them visualize information.

What happened: As expected, it was light relief and fun for the first ten minutes. The non-elite P4 and P5 students giggled, typed nonsense, and cheered when the AI produced a three-headed cat or a dragon wearing sneakers.

Then the penny dropped.

They realized they could see their results, tweak their prompts, and watch the image change. They started iterating. Not toward "meaningful" art — they were still generating dragon-pandas in flight — but toward images that actually matched what they had pictured in their minds. The shift was subtle but real. They went from throwing random words at a machine to thinking deliberately about how to control it.

The experiment showed me that young learners can have a genuine revelation about AI: with a bit of thought, they can make the platform serve their imagination rather than replace it.


THE LESSON I DIDN'T PLAN TO TEACH



I started this group without a set curriculum. I had a list of tools and a vague idea that students should play with them. I told the students on Day One: "I am learning about this with you."

That admission changed everything. Not every session ran smoothly. There were times when bugs had to be fixed in the middle of an activity. There were times when technical issues meant we had to start something again from scratch. The students watched us fail, adjust, and retry in real time. That was the lesson they remembered. Not the AI per se, but the process: experiment, iterate, and learn from the results.


WHY THIS MATTERS FOR EDUCATORS



If you are reading this and thinking "I don't know enough about AI to run a club like this" — good. You are the ideal candidate.

You do not need to master a tool before you put it in front of students. You need to be willing to say "I don't know, let's find out" and mean it. The students will figure out the interface faster than you will. Your job is to frame the experiment, ask the hard questions, and catch them when they go too far.

I started the Adventures in AI group as a gamble. It could have fallen flat. It could have been 40 kids staring at screens for an hour every week. Instead, it became the most honest teaching I've done in fourteen years as a NET, because I didn't pretend I had all the answers and instead modelled what it actually looks like to learn.


WHAT'S NEXT?




The Adventures in AI site archives every video, every storybook, every debate from this year. Students can browse it over the summer. New recruits can see what they're signing up for in September.

The club will run again next year. I still don't have a curriculum. I still don't know what will work. But I know about 40 students who will figure it out with me.

