U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box

Soldier hiding in a box

United States Marines outsmarted artificially intelligent (AI) security cameras by hiding in a cardboard box and disguising themselves as trees.

Former Pentagon policy analyst Paul Scharre has recalled the story in his upcoming book Four Battlegrounds: Power in the Age of Artificial Intelligence.

In the book, which will be released on February 28, Scharre recounts how the U.S. Army was testing AI monitoring systems and decided to use the Marines to help build the algorithms that the security cameras would use.

The team then put the AI system to the test to see whether the squad of Marines could find new ways to avoid detection and evade the cameras.

The security cameras, developed under the Defense Advanced Research Projects Agency’s (DARPA) Squad X program, first had to be trained on data gathered over six days as a squad of Marines walked around in front of them.

‘Not a Single One Got Detected’

After six days spent training the algorithm, the Marines decided to put the AI security cameras to the test.

“If any Marines could get all the way in and touch this robot without being detected, they would win. I wanted to see, game on, what would happen,” DARPA deputy director Phil Root tells Scharre in the book.

Within a single day, the Marines had worked out the best way to sneak around an AI monitoring system and avoid detection by the cameras.

Root says: “Eight Marines — not a single one got detected.”

According to Scharre’s book, a pair of Marines “somersaulted for 300 meters” to approach the sensor and “never got detected” by the camera.

Meanwhile, two Marines successfully evaded the camera by hiding in a cardboard box, a strategy possibly inspired by the video game series Metal Gear Solid.

“You could hear them giggling the whole time,” Root says.

Another smiling Marine “field-stripped a fir tree and walked like a fir tree” toward the AI security camera, and he succeeded too.

Root tells Scharre that the Marines were able to fool the cameras so easily because “the AI system had been trained to detect humans walking, not humans somersaulting, hiding in a cardboard box, or disguised as a tree.”

“So these simple tricks, which a human would have easily seen through, were sufficient to break the algorithm.”

The Limitations of AI Algorithms

The story offers an important lesson about the limitations of AI algorithms even as the technology continues to take over the digital landscape.

AI is only as smart as the data it is trained on. Scharre writes that while an AI can outperform human beings at a specific task, people have a propensity to “mistake performance for competence.”

As Task & Purpose notes: “In other words, AI can be very good at what it knows how to do. But the AI doesn’t know what it doesn’t know, and it also doesn’t know that it should know what it doesn’t know.”

“An algorithm is brittle, and the takeaway from this is that there will always be these edge cases… people will always have the ability to evolve,” Scharre tells Task & Purpose.

Scharre says that, unlike AI, “humans tend to have a much richer understanding of the world.”
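To make that brittleness concrete, the toy sketch below shows how a detector trained only on examples of upright, walking people can confidently miss a person hiding in a box. It is not DARPA’s actual system; the features, numbers, and labels are invented purely for illustration.

```python
# Toy illustration (not DARPA's system): a detector trained only on
# upright, walking people confidently misses a person hiding in a box.
# Invented features for the sketch: [silhouette height (m), width (m), motion score].
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data covers only two situations: people walking, and empty scenes.
walking_people = rng.normal([1.7, 0.5, 0.9], 0.1, size=(200, 3))   # label 1: person
empty_scenes   = rng.normal([0.4, 0.6, 0.05], 0.1, size=(200, 3))  # label 0: no person
X = np.vstack([walking_people, empty_scenes])
y = np.array([1] * 200 + [0] * 200)

model = LogisticRegression().fit(X, y)

# At test time, a Marine crouches inside a cardboard box: short, wide, barely moving.
# Nothing like this appeared in training, so the sample looks like an empty scene.
marine_in_box = np.array([[0.6, 0.6, 0.1]])
p_person = model.predict_proba(marine_in_box)[0, 1]
print(f"P(person) for the Marine in the box: {p_person:.2f}")  # near zero: not detected
```

The point of the sketch is the same one Scharre makes: the model only ever learned what a walking person looks like, so anything outside that training distribution is quietly, and confidently, scored as “no person.”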

PetaPixel previously reported on a clothing line that confuses AI cameras and stops them from recognizing the wearer.


Image credits: Header photo via Metal Gear Solid, Nexus Mods.
