A group of Cornell computer scientists has unveiled what they believe could be a new tool in the fight against AI‑generated video, deepfakes and doctored clips.
The watermarking technique, called “noise‑coded illumination,” hides verification data in light itself to help investigators spot doctored videos. The approach, devised by Peter Michael, Zekun Hao, Serge Belongie and assistant professor Abe Davis, was published in the June 27 issue of ACM Transactions on Graphics and will be presented by Michael at SIGGRAPH on August 10.
The system makes the light sources in a scene flicker in a barely perceptible, pseudo-random pattern. Viewers cannot see it, but cameras record it, and each lamp or screen that flickers carries its own unique code.
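To make the idea concrete, here is a rough sketch of how such a per-lamp code might be generated and applied. This is not the Cornell team's implementation; the function names, the roughly 1 percent modulation depth, and the NumPy-based approach are illustrative assumptions only.

```python
# Illustrative sketch, not the published method: derive a pseudo-random
# +/-1 flicker code from a per-lamp seed and apply it as a tiny brightness
# modulation that viewers would not notice but a camera would still record.
import numpy as np

def flicker_code(seed: int, n_frames: int) -> np.ndarray:
    """Unique pseudo-random +/-1 sequence for one light source."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=n_frames)

def modulate_brightness(base_level: float, code: np.ndarray,
                        depth: float = 0.01) -> np.ndarray:
    """Per-frame brightness: the lamp's normal level plus a ~1% coded flicker."""
    return base_level * (1.0 + depth * code)

# Example: a studio lamp at 80% output, coded over 300 frames (about 10 s at 30 fps).
lamp_code = flicker_code(seed=7, n_frames=300)
lamp_output = modulate_brightness(0.8, lamp_code)
```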
As an example, imagine a press conference filmed in the White House briefing room, where the studio lights have been programmed to flicker with unique codes. If a viral clip from that press conference later circulates with what appears to be an inflammatory statement, investigators could run it through a decoder and, by checking whether the recorded light codes line up, determine whether the footage was doctored.
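The verification step could work along the following lines. Again, this is a hypothetical sketch rather than the researchers' decoder; the frame-brightness extraction, the correlation-based score, and all function names here are assumptions made for illustration.

```python
# Illustrative verification sketch, not the published decoder: recover the
# average brightness of each frame in a clip and correlate it against the
# known flicker code. Genuine footage should correlate strongly; spliced or
# AI-generated frames should not.
import numpy as np

def frame_brightness(frames: np.ndarray) -> np.ndarray:
    """Mean pixel intensity per frame, for frames shaped [n_frames, height, width]."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def code_match_score(brightness: np.ndarray, code: np.ndarray) -> float:
    """Normalized correlation between the recovered flicker and the known code."""
    b = brightness - brightness.mean()
    denom = np.linalg.norm(b) * np.linalg.norm(code) + 1e-9
    return float(np.dot(b, code) / denom)

# A score near 1 over a window of frames suggests the light code is intact;
# a window where the score drops toward 0 flags possible tampering.
```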