In an age when more and more young children are hooked on digital devices, YouTube is bombarding them with AI slop.
After investigating over 1,000 YouTube shorts recommended to young children by the video platform, The New York Times found that the algorithm is heavily pushing AI-generated content that explicitly targets “toddlers” and “preschoolers.”
On top of being nonsensical, the videos are often presented under the guise of being educational. Two common themes are teaching kids about the alphabet and animals — subjects, conveniently, that provide threadbare structures for easily produced, low-effort slop.
Calling the videos educational is a stretch as well. One video highlighted by the NYT shows a gooey liquid being squeezed into a glass of water, before turning into different animals representing each letter of the alphabet — only the animals are bizarre chimeras with mermaid tails. In another set to an off-key rendition of “Old MacDonald Had a Farm,” a massive egg rolls out of a barn door before hatching an impossibly proportioned horse. And in another alphabet short, a quail transforms into an aerial drone, and a rhino into a dump truck that bears the megafauna’s head.
At best, these videos are redundant regurgitations of mindless “Cocomelon”-style content. At worst, experts fear, they could be actively harming children’s cognitive development.
“To me, the meaninglessness of these videos is a huge problem because they’re just attention capture,” Jenny Radesky, a developmental behavioral pediatrician and associate professor of pediatrics at the University of Michigan Medical School, told the NYT. “And then the worst case is that it’s so fantastical and full of attention capture that it is going to be cognitively overloading to the child.”
The hyper-realistic visuals used in many AI videos, Radesky speculated, could inhibit a young child’s ability to distinguish fantasy from reality.
It’s not a niche issue. YouTube’s algorithm seems astoundingly eager to recommend AI slop. In its tests, the NYT began by watching popular children’s channels, then scrolled through Shorts. More than 40 percent of the videos that followed in a fifteen-minute session appeared to contain AI visuals. That’s striking: instead of recommending more traditional children’s content, the algorithm, seemingly by default, gravitated toward AI.