Flickering Lights Encoded Into Videos Could Help Fight Against Fake Footage


A team of computer scientists has developed a novel watermarking method that could make it easier to detect AI-generated videos.

The Cornell University team’s new watermarking technology is called “noise-coded illumination,” and as Engadget reports, it adds a flicker that cameras can detect but video viewers won’t notice.

The research paper published in ACM Transactions on Graphics will be presented by lead author Peter Michael at SIGGRAPH in Vancouver, Canada, on August 10. Michael and the rest of the team explain that their system works by encoding “very subtle, noise-like modulations into the illumination of a scene.”

Artificial light sources in a scene, such as those in a press room, can themselves be coded to be seen a specific way by cameras. In effect, it is like watermarking a light’s output.

While photographers are used to the concept of a visible watermark, like an artist’s name overlaid on an image, watermarks do not have to be visible to people at all. For example, Google’s SynthID is a digital watermark that software can see but people cannot. While SynthID operates at the pixel level across an entire AI-generated image, invisible watermarks can also be applied to real images.

“We propose a new type of watermarking, which we call noise-coded illumination (NCI), that instead watermarks the illumination in a scene,” the researchers explain. “Our approach works by modulating the intensity of each light source by a subtle pseudo-random pattern drawn from a distribution that resembles existing noise.”
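To make that idea concrete, here is a minimal, hypothetical Python sketch of what modulating a light source with a noise-like pseudo-random code might look like. The seed, frame rate, modulation depth, and noise distribution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch only (not the authors' implementation): a light's
# brightness is modulated by a small, zero-mean pseudo-random signal that
# statistically resembles ordinary image noise.

rng = np.random.default_rng(seed=42)    # the seed effectively acts as the secret key

num_frames = 600                        # e.g., 10 seconds of video at 60 fps
base_intensity = 1.0                    # nominal light output (normalized)
modulation_depth = 0.01                 # ~1% flicker, assumed imperceptible to viewers

# Pseudo-random, noise-like code, one value per frame
code = rng.normal(loc=0.0, scale=1.0, size=num_frames)
code /= np.max(np.abs(code))            # normalize to the range [-1, 1]

# Light intensity over time: base level plus the subtle coded modulation
intensity = base_intensity * (1.0 + modulation_depth * code)
```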

To a viewer, videos captured under NCI would look entirely normal, although a camera could pick up the flickering, much like cameras can be affected by LED flicker now. However, tucked away inside the natural noise is a specific code image created by each coded light source in the scene.

If an “adversary” edited or otherwise tweaked that video, for example to change what appears to happen in the scene, they would “unwittingly change the code images contained therein,” which would make it easy to detect that the footage has been manipulated. This would also work to detect more traditional video editing techniques, like misleading cuts during a recorded interview.

“Knowing the codes used by each light source lets us recover and examine these code images, which we can use to identify and visualize manipulation,” the researchers explain.
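As a rough illustration of the verification side, the sketch below correlates a video’s observed per-frame brightness with a known code in sliding windows; segments that were replaced or re-rendered would no longer carry the coded flicker and would show near-zero correlation. This is a simplified temporal stand-in for the paper’s recovery of spatial code images; the function name and window size are hypothetical.

```python
import numpy as np

def code_correlation(observed_brightness: np.ndarray,
                     known_code: np.ndarray,
                     window: int = 60) -> np.ndarray:
    """Hypothetical check: correlate observed per-frame brightness with the
    known pseudo-random code in sliding windows. Tampered segments should
    show little or no correlation with the code."""
    obs = observed_brightness - observed_brightness.mean()
    code = known_code - known_code.mean()
    scores = []
    for start in range(0, len(obs) - window + 1, window):
        o = obs[start:start + window]
        c = code[start:start + window]
        denom = np.linalg.norm(o) * np.linalg.norm(c)
        scores.append(float(o @ c) / denom if denom > 0 else 0.0)
    return np.array(scores)

# Windows with correlation near zero suggest the coded flicker was destroyed,
# for example by splicing in footage never lit by the coded light source.
```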

While the use cases of such a system are admittedly narrow in scope — it requires coded light sources, for example — it could apply to many critical situations, like when a politician or authority figure is speaking in a press room. When people manipulate what prominent people do or say in an official setting, it can have dramatic societal consequences.

“Our work introduces NCI as a novel forensic strategy that helps protect content in a particular physical space,” the researchers conclude. “Our approach creates an information asymmetry by using randomized illumination codes that resemble noise. It also makes manipulations easier to detect by reducing the manifold of plausible videos. Our approach is inexpensive, simple to implement, and unnoticeable to most observers.”


Image credits: Header photo licensed via Depositphotos. The referenced research paper, ‘Noise-Coded Illumination for Forensic and Photometric Video Analysis,’ was published in ACM Transactions on Graphics.
