
What Past Education Tech Failures Can Teach Us About the Future of AI in Schools


This article was originally published on The Conversation.

American technologists have been telling educators to rapidly adopt their new inventions for over a century. In 1922, Thomas Edison declared that film strips would soon replace all school textbooks, on the grounds that text was only 2% efficient while film was 100% efficient. Those bogus statistics are a good reminder that people can be brilliant technologists while also being inept education reformers.

I think of Edison whenever I hear technologists insisting that educators have to adopt artificial intelligence as rapidly as possible to get ahead of the transformation that’s about to wash over schools and society.

At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for its students. The first districts to encourage students to bring mobile phones to class did not prepare young people for the future any better than schools that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet stand apart in economic growth, educational attainment or citizen well-being.

New education technologies are only as powerful as the communities that guide their use. Opening a new browser tab is easy; creating the conditions for good learning is hard.

Before a novel invention can reliably improve learning, educators need years to develop new practices and norms, students need to adopt new routines, and families need to identify new ways to support their children. But as AI spreads through schools, both historical analysis and new research conducted with K-12 teachers and students offer some guidance on navigating uncertainties and minimizing harm.

We’ve been wrong and overconfident before

I started teaching high school history students to search the web in 2003. At the time, experts in library and information science developed a pedagogy for web evaluation that encouraged students to closely read websites looking for markers of credibility: citations, proper formatting, and an “about” page. We gave students checklists like the CRAAP test – currency, relevance, authority, accuracy and purpose – to guide their evaluation. We taught students to avoid Wikipedia and to trust websites with .org or .edu domains over .com domains. It all seemed reasonable and evidence-informed at the time.

The first peer-reviewed article demonstrating effective methods for teaching students how to search the web was published in 2019. It showed that novices who used these commonly taught techniques performed miserably in tests evaluating their ability to sort truth from fiction on the web. It also showed that experts in online information evaluation used a completely different approach: quickly leaving a page to see how other sources characterize it. That method, now called lateral reading, resulted in faster, more accurate searching. The work was a gut punch for an old teacher like me. We’d spent nearly two decades teaching millions of students demonstrably ineffective ways of searching.

Today, there is a cottage industry of consultants, keynoters and “thought leaders” traveling the country purporting to train educators on how to use AI in schools. National and international organizations publish AI literacy frameworks claiming to know what skills students need for their future. Technologists invent apps that encourage teachers and students to use generative AI as tutors, as lesson planners, as writing editors, or as conversation partners. These approaches have about as much evidential support today as the CRAAP test did when it was invented.
