One day last spring, in a high school classroom in Texas, students were arguing about who to kill off first. It was a thought experiment with a sci-fi premise: A global zombie outbreak has decimated major cities. One hundred frozen embryos meant to reboot humanity are safe in a bomb shelter, but the intended adult caretakers never made it. Instead, 12 random civilians stumbled in. There’s only enough food and oxygen for seven. The students had to decide who would die and who would live to raise the future of the human race.
It wasn’t an easy choice. There was Amina, a 26-year-old actress, and Bubak, her husband. Also, a nurse named Marisa, a farmer named Roy, and others. Bubak, who had a criminal record, was a hard sell. So were the useless-yet-likable extras. For years, English teacher Cody Chamberlain had let students debate the ethics and logistics of saving humanity on their own—until he decided to throw AI into the mix. Chamberlain fed the scenario to ChatGPT. It killed Bubak and saved his wife—not because she was useful in other ways but because she could bear children.
“That’s so cold,” the students gasped.
It was. But for Chamberlain, it offered something new: a dispassionate, algorithmic judgment his students could think about critically. “ChatGPT said we needed her, like Handmaid’s Tale–style,” he says. “And the kids were like, ‘That’s ridiculous.’ It was weird for ChatGPT to finally not have an answer key but something the kids could push back on.”
Teachers have long used technology to personalize lessons, manage workloads, or liven up slideshows. But something shifted after ChatGPT’s public launch in 2022. Suddenly, teachers weren’t just being tasked with figuring out how to incorporate iPads or interactive whiteboards into their lessons. They had to decipher how to deal with a technology that was already crash-landing into their students’ lives, one that could help them study or help them cheat. A quarter of teachers surveyed by Pew in the fall of 2023 said they thought AI did more harm than good; 32 percent thought the tech was a mix of good and bad. Educators faced a choice: Try to fight off AI, or find a way to work with it.
This fall, AI will be more embedded in US classrooms than ever. Teachers are deploying large language models to write quizzes, adapt texts to reading levels, generate feedback, and design differentiated instruction. Some districts have issued guidance. Others have thrown up their hands. In the absence of clear policy, teachers are setting the boundaries themselves—one prompt at a time.
“It’s just too easy and too alluring,” says Jeff Johnson, an English teacher in California who trains other teachers in his district on incorporating AI. “This is going to change everything. But we have to decide what that actually means.”
Teaching has long relied on unpaid labor—nights spent googling, planning, adjusting for special education or multilingual learners. For Johnson, AI can provide the kind of assistance that curbs burnout. He uses Brisk to generate short quizzes, Magic School to streamline lesson planning, and Diffit to create worksheets tailored to different skill levels. He doesn’t use AI to grade papers or answer student questions. He uses these tools to prep faster.
“That alone saves me days and weeks,” Johnson says. “Time that can be better spent interacting with students.”