In what sounds like a scene from a comedy movie, a squad of Marines successfully outsmarted an advanced artificial intelligence system by employing tactics that would make any child playing hide-and-seek proud. The remarkable demonstration revealed both the impressive capabilities and surprising limitations of modern AI technology.
The experiment took place as part of DARPA’s Squad X program, which aimed to develop advanced surveillance systems capable of identifying human threats in complex urban environments. The AI had undergone extensive training, spending six days learning to recognize Marines as they moved through various scenarios.
“What DARPA was working on was developing the ability to identify people in complex urban environments,” explained Paul Scharre, author of the book ‘Four Battlegrounds: Power in the Age of Artificial Intelligence,’ which documents this fascinating encounter. “And sense people approaching the squad.”
The system seemed sophisticated enough. Phil Root, deputy director of the Defense Sciences Office at DARPA, noted the complexity of human detection: “A tank looks like a tank, even when it’s moving. A human when walking looks different than a human standing. A human with a weapon looks different.”
After days of training, the moment of truth arrived. Root set up the ultimate challenge: “If any Marines could get all the way in and touch this robot without being detected, they would win. I wanted to see, game on, what would happen.”
The results were both humbling and hilarious for the AI developers. “Eight Marines — not a single one got detected,” Root recounted.
The Marines’ tactics were as creative as they were effective. Two determined servicemembers somersaulted across 300 meters of terrain to approach the sensor undetected. Perhaps most memorably, another pair simply threw a cardboard box over themselves and walked right up to the system.
“You could hear them giggling the whole time,” Root remembered of the box-wearing Marines, capturing the almost playful nature of their successful deception.
Another Marine demonstrated remarkable ingenuity by stripping a fir tree and holding it in front of himself as camouflage while approaching the sensor. The improvised disguise worked perfectly.
The experiment exposed what experts call “distributional shift” – a phenomenon where AI systems trained on specific datasets struggle when confronted with scenarios outside their training parameters. The AI had learned to identify people walking normally, but it had never encountered somersaulting humans or individuals disguised as moving trees.
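Distributional shift is easy to reproduce in miniature. The sketch below is a hypothetical toy, not DARPA’s system: the features, numbers, and class labels are invented purely for illustration. It trains a simple classifier to separate “walking person” from “background,” then feeds it inputs that fall outside the training distribution, much as the somersaulting and box-wearing Marines did.

```python
# Toy illustration of distributional shift (hypothetical features and data,
# not DARPA's actual detector): a model trained only on upright walking
# gaits can fail on movement patterns it has never seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features: [silhouette height in meters, movement speed in m/s]
walking_people = rng.normal(loc=[1.7, 1.4], scale=[0.1, 0.2], size=(200, 2))
background     = rng.normal(loc=[0.5, 0.1], scale=[0.2, 0.1], size=(200, 2))

X_train = np.vstack([walking_people, background])
y_train = np.array([1] * 200 + [0] * 200)  # 1 = person, 0 = no person

clf = LogisticRegression().fit(X_train, y_train)

# Out-of-distribution inputs: a low, tumbling silhouette and a slow-moving
# box-shaped object. Both are people, but neither resembles the training data.
somersaulting_marine = [[0.8, 0.9]]
cardboard_box        = [[0.6, 0.2]]

# Both inputs sit far from the "walking person" cluster, so the model
# tends to label them 0 ("no person") despite a person being present.
print(clf.predict(somersaulting_marine))
print(clf.predict(cardboard_box))
```

The point of the toy is not the specific numbers but the failure mode: the classifier has no notion of “person” beyond the statistical signature of upright walking it was trained on, so anything outside that signature defaults to the background class.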