
Light-based watermarking could bolster defenses against deepfake videos

Encoding hidden messages into programmable light sources could give fact-checkers a new tool for combating deepfakes, say Cornell computer scientists.



Researchers at Cornell University have devised a new method for detecting deepfakes in video. The approach, called "noise-coded illumination," embeds secret, reliable watermarks in the lighting of a scene, giving forensic analysts a way to verify whether footage is authentic.

Abe Davis, an assistant professor of computer science at Cornell, explains: "Video used to be treated as a source of truth, but that's no longer an assumption we can make." With the rise of generative AI, convincing deepfakes have become faster and easier to produce than ever. "Now you can pretty much create video of whatever you want," Davis says. "That can be fun, but also problematic, because it's only getting harder to tell what's real."

Noise-coded illumination works by embedding a unique secret code in each participating light source, such as a computer screen, lamp, or other fixture, via subtle pseudo-random brightness fluctuations that resemble natural noise. These fluctuations are invisible to the naked eye but are captured by cameras. Decoded, they yield low-fidelity, time-stamped "code videos" of the scene under multiple slightly different lighting states. Any editing or AI manipulation disrupts this watermark, revealing tampering.
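To make the encoding concrete, here is a minimal sketch in Python. It is only an illustration, not the researchers' implementation: the seed, the code length, and the roughly one-percent modulation depth are assumptions chosen for the example.

    import numpy as np

    rng = np.random.default_rng(seed=42)  # the seed acts as the shared secret

    def make_code(num_frames: int) -> np.ndarray:
        """One pseudo-random +/-1 value per video frame; statistically
        it resembles the ordinary flicker ("noise") of a real lamp."""
        return rng.choice([-1.0, 1.0], size=num_frames)

    def modulate_light(base_brightness: float, code: np.ndarray,
                       depth: float = 0.01) -> np.ndarray:
        """Per-frame brightness: a ~1% modulation is below what the eye
        notices but well within what a camera records."""
        return base_brightness * (1.0 + depth * code)

    code = make_code(num_frames=300)        # e.g. 10 seconds at 30 fps
    brightness = modulate_light(1.0, code)  # drive signal for the lamp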

The approach exploits an information asymmetry: without access to the secret codes, forgers cannot replicate the embedded light signals, making it far harder to create a deepfake without detection. The technique is robust across varied conditions, including different lighting environments, video compression, and camera movement. It can be deployed in software for computer displays or via small chips added to standard lamps, enabling real-world use in settings like press events or sensitive video feeds.

The system works on people with different skin tones and has been tested in some outdoor settings. If an adversary cuts out footage from an interview or political speech, a forensic analyst with the code can identify the missing sections. If someone tries to generate fake video with AI, the resulting code videos just look like random variations.
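Continuing the sketch above, a forensic check could slide a window over the recorded scene brightness and correlate it against the secret code; windows where the correlation collapses mark cut, spliced, or AI-generated segments. The window size and threshold below are arbitrary assumptions.

    def verify(recorded: np.ndarray, code: np.ndarray,
               window: int = 30, threshold: float = 0.5) -> list:
        """Flag frame windows where the hidden light code is absent."""
        suspicious = []
        for start in range(0, len(code) - window + 1, window):
            seg = recorded[start:start + window]
            seg = seg - seg.mean()  # remove the constant brightness level
            ref = code[start:start + window]
            # normalized correlation between observed flicker and the code
            corr = np.dot(seg, ref) / (np.linalg.norm(seg) * np.linalg.norm(ref) + 1e-9)
            if corr < threshold:
                suspicious.append((start, start + window))
        return suspicious

    # Genuine footage carries the code; a spliced-in fake does not.
    genuine = modulate_light(1.0, code) + rng.normal(0.0, 0.002, size=code.size)
    tampered = genuine.copy()
    tampered[120:180] = 1.0 + rng.normal(0.0, 0.002, size=60)  # fake segment
    print(verify(tampered, code))  # typically flags [(120, 150), (150, 180)]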

Davis and his team have successfully used up to three separate codes for different lights in the same scene, which multiplies the forger's burden: every light's code would have to be faked consistently in the same video. Even so, Davis cautions, "This is an important ongoing problem. It's not going to go away, and in fact, it's only going to get harder."
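In the sketch's terms, supporting several lights amounts to giving each source its own independent code; the per-lamp seeds here are again hypothetical:

    # One independent secret code per light source in the scene.
    lamp_seeds = (101, 202, 303)  # hypothetical per-lamp secrets
    lamp_codes = [np.random.default_rng(s).choice([-1.0, 1.0], size=300)
                  for s in lamp_seeds]
    lamp_drive = [modulate_light(1.0, c) for c in lamp_codes]
    # A forger now has to counterfeit every lamp's code consistently;
    # failing any single verify() pass exposes the manipulation.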

The technique gives fact-checkers and forensic analysts a promising tool for authenticating video, helping to combat misinformation, identity theft, and unauthorized manipulation at a time when deepfakes undermine trust in video. By exposing the inconsistencies that manipulation introduces, noise-coded illumination could be a significant step toward restoring confidence in video content.


