
Hiding secret codes in light protects against fake videos


Fact-checkers may have a new tool in the fight against misinformation.

A team of Cornell researchers has developed a way to “watermark” light in videos, which they can use to detect whether a video is fake or has been manipulated.

The idea is to hide information in nearly invisible fluctuations of lighting at important events and locations, from interviews and press conferences to entire buildings, such as the United Nations Headquarters. These fluctuations are designed to go unnoticed by humans, but they are recorded as a hidden watermark in any video captured under the special lighting, which could be programmed into computer screens, photography lamps and built-in lighting. Each watermarked light source has a secret code that can be used to check for the corresponding watermark in the video and reveal any malicious editing.
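The article does not spell out the coding scheme, but the general recipe it describes, a key-derived, noise-like brightness signal hidden in the light, can be sketched in a few lines. Everything below (the function name, the roughly 1% amplitude, the "UN_press_room_01" key) is illustrative, not taken from the study.

```python
import hashlib
import numpy as np

def brightness_code(secret_key: str, num_samples: int, amplitude: float = 0.01) -> np.ndarray:
    """Derive a noise-like sequence of tiny brightness offsets from a secret key.

    The offsets (about +/-1% of nominal brightness here) are meant to be
    imperceptible to viewers but recoverable from footage by anyone holding the key.
    """
    # Seed a PRNG deterministically from the key so a verifier can regenerate the code.
    seed = int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    # Zero-mean pseudorandom modulation, one value per update of the light.
    return amplitude * rng.standard_normal(num_samples)

# Example: one minute of code at 30 updates per second for a hypothetical venue key.
offsets = brightness_code("UN_press_room_01", num_samples=60 * 30)
```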

Peter Michael, a graduate student in the field of computer science who led the work, will present the study, “Noise-Coded Illumination for Forensic and Photometric Video,” on Aug. 10 at SIGGRAPH 2025 in Vancouver, British Columbia.

Editing video footage in a misleading way is nothing new. But with generative AI and social media, it is faster and easier to spread misinformation than ever before.

Davis and Michael at work in Davis’ Gates Hall lab.

“Video used to be treated as a source of truth, but that’s no longer an assumption we can make,” said Abe Davis, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, who first conceived of the idea. “Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it’s only getting harder to tell what’s real.”

To address these concerns, researchers had previously designed techniques to watermark digital video files directly, with tiny changes to specific pixels that can be used to identify unmanipulated footage or tell if a video was created by AI. However, these approaches depend on the video creator using a specific camera or AI model – a level of compliance that may be unrealistic to expect from potential bad actors.

By embedding the code in the lighting, the new method ensures that any real video of the subject contains the secret watermark, regardless of who captured it. The team showed that programmable light sources, like computer screens and certain types of room lighting, can be coded with a small piece of software, while older lights, like many off-the-shelf lamps, can be coded by attaching a small computer chip about the size of a postage stamp. The program on the chip varies the brightness of the light according to the secret code.
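On the hardware side, the coding step amounts to nudging the light's output to follow that signal, whether from a screen's software or from a small attached chip. A minimal sketch of such a driving loop, assuming a generic `set_brightness` control that stands in for whatever interface a given lamp or display actually exposes:

```python
import time

def drive_coded_light(set_brightness, offsets, nominal=0.8, update_rate=30.0):
    """Modulate a dimmable light so its output follows the secret code.

    `set_brightness` is a placeholder for the light's real control interface
    (a display backlight API, a lamp's PWM duty-cycle setter on the chip, etc.).
    """
    period = 1.0 / update_rate
    for offset in offsets:
        # Apply a small, code-driven deviation around the nominal level.
        set_brightness(nominal * (1.0 + offset))
        time.sleep(period)
```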

So, what secret information is hidden in these watermarks, and how does it reveal when video is fake? “Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos,” Davis said. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations.”
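Davis's description suggests the verifier regenerates the code from the secret key and checks how consistently each part of the footage responds to it. As a rough sketch only (a plain matched filter, not the authors' published algorithm), one could correlate every pixel's intensity history with the known code: regions genuinely lit by the coded source respond coherently, while spliced or AI-generated regions do not.

```python
import numpy as np

def recover_code_image(frames: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Correlate each pixel's intensity history with the known light code.

    frames: (T, H, W) grayscale video, time-aligned with the light's code.
    code:   (T,) zero-mean modulation sequence regenerated from the secret key.

    Returns an (H, W) map: strong values where the coded light genuinely reached
    the scene, weak or incoherent values where footage was replaced or generated.
    """
    code = (code - code.mean()) / (np.linalg.norm(code) + 1e-12)
    # Project each pixel's (mean-removed) time series onto the code signal.
    return np.tensordot(code, frames - frames.mean(axis=0), axes=([0], [0]))
```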
