Cornell Researchers Develop Invisible Light-Based Watermark to Detect Deepfakes
3 Articles
The concept, called noise-coded illumination, was presented August 10 at SIGGRAPH 2025 in Vancouver, British Columbia, by Peter Michael, a Cornell computer science graduate student who led the project. The approach was first envisioned by Abe Davis, an assistant professor.
Light watermarks: A newly developed light code for lamps can mark videos with a kind of invisible watermark – and expose manipulations and forgeries. If a video is later falsified or generated entirely by artificial intelligence, it either lacks the invisible brightness fluctuations or their timing does not match. Such light watermarks could even reveal where a video was recorded, the team explains. Ever since videos have existed,…
Seeing isn't believing – flickering lights could reveal deepfaked videos
Thanks to advances in generative AI, seeing video of an event is no longer proof that it actually happened as shown. There could be new hope on the horizon, however, in the form of an authentication system that watermarks videos using fluctuations in the on-location lighting.
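To make the mechanism the snippets describe concrete, here is a minimal sketch of the general idea: a lamp modulates its brightness with a pseudorandom code, and a verifier checks whether a clip's per-frame brightness fluctuations correlate with that code at the right timing. The function names, the modulation depth, and the moving-average detrending below are illustrative assumptions, not the researchers' actual noise-coded illumination implementation.

```python
# Illustrative sketch only -- not Cornell's actual noise-coded illumination code.
import numpy as np

rng = np.random.default_rng(seed=42)


def make_code(n_frames: int) -> np.ndarray:
    """Pseudorandom +/-1 code shared between the coded lamp and the verifier."""
    return rng.choice([-1.0, 1.0], size=n_frames)


def embed(scene_light: np.ndarray, code: np.ndarray, depth: float = 0.01) -> np.ndarray:
    """Modulate the lamp's output with a tiny, imperceptible code signal."""
    return scene_light * (1.0 + depth * code)


def detrend(signal: np.ndarray, win: int = 15) -> np.ndarray:
    """Remove slow scene-brightness drift so only fast fluctuations remain."""
    trend = np.convolve(signal, np.ones(win) / win, mode="same")
    return signal - trend


def verify(frame_brightness: np.ndarray, code: np.ndarray, threshold: float = 0.5) -> bool:
    """Check whether the clip's brightness fluctuations carry the code on time.

    Genuine footage shot under the coded lamp correlates strongly with the
    code; AI-generated or re-edited footage should not, because the
    fluctuations are missing or their timing no longer lines up.
    """
    f = detrend(frame_brightness)
    score = float(np.dot(f, code) / (np.linalg.norm(f) * np.linalg.norm(code) + 1e-12))
    return score > threshold


n = 600                                 # e.g. 20 seconds of 30 fps video
code = make_code(n)
scene = 100.0 + 5.0 * np.sin(np.linspace(0.0, 8.0 * np.pi, n))  # drifting room light

genuine = embed(scene, code)            # recorded under the coded lamp
fake = scene + rng.normal(0.0, 0.5, n)  # footage that never saw the code

print("genuine passes:", verify(genuine, code))  # True
print("fake passes:   ", verify(fake, code))     # False
```

A real system would of course have to survive video compression, camera response curves, and varying frame rates; this toy correlation check only shows why footage lacking the coded fluctuations, or carrying them at the wrong times, would fail verification.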
Coverage Details
Bias Distribution
- 100% of the sources are Center