Researchers in several studies have shown that digital watermarks cannot make AI-generated images 100% secure. (Source: watermarkley.com.)

University Research Finds Digital Watermarks Ineffective in the War Against Deepfakes

The threat of deepfakes in the media is a real and dangerous issue. With the U.S. presidential election coming up, we can expect a flood of AI-generated deepfakes aimed at manipulating our votes. Remember that some of our foreign adversaries did exactly that, quite thoroughly, in the 2016 election. Much of the information about this problem comes from an article published by wired.com.

With AI-generated text and images, the threat of deepfakes takes on a new level of deception. And while tech giants like Google are trying to develop ways to watermark and detect AI-generated content, the battle so far has been less than successful.

Watermarks

Watermarking has been one of the latest strategies for identifying AI-generated images and text. Just as physical watermarks are embedded in paper money and stamps to prove authenticity, digital watermarks are meant to trace the origins of images and text online, helping people spot deepfaked videos and bot-authored books.
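
To make the idea concrete, here is a minimal sketch of what "invisible" watermarking means at the pixel level. This is a classic textbook toy (hiding bits in the least significant bits of pixels), not SynthID or any production scheme, and the tag pattern and function names are illustrative assumptions:

```python
# Toy "invisible" watermark: hide a fixed bit pattern in the least
# significant bits (LSBs) of an image's pixels. Real schemes such as
# Google's SynthID are far more sophisticated; this only illustrates
# the basic embed/detect idea.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed(image: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first 8 pixel values."""
    marked = image.copy()
    flat = marked.reshape(-1)  # view into the copy
    flat[: len(WATERMARK)] = (flat[: len(WATERMARK)] & 0xFE) | WATERMARK
    return marked

def detect(image: np.ndarray) -> bool:
    """Check whether the first 8 pixel values carry the watermark bits."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: len(WATERMARK)] & 1, WATERMARK))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed(img)
print(detect(img), detect(marked))  # almost certainly False, then True
```

Because only the lowest bit of a handful of pixels changes, the marked image is visually indistinguishable from the original, which is exactly what makes such watermarks "invisible."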

On the surface, watermarking sounds great. However, several recent papers have pointed out what a seemingly impossible feat it is proving to be.

“This summer, OpenAI, Alphabet, Meta, Amazon, and several other major AI players pledged to develop watermarking technology to combat misinformation. In late August, Google’s DeepMind released a beta version of its new watermarking tool, SynthID. The hope is that these tools will flag AI content as it’s being generated, in the same way that physical watermarking authenticates dollars as they’re being printed.”

The Papers' Results

This University of Maryland study is not the only work pointing to watermarking's major shortcomings.

“It is well established that watermarking can be vulnerable to attack,” says Hany Farid, a professor at the UC Berkeley School of Information.

This August, researchers at the University of California, Santa Barbara, and Carnegie Mellon coauthored another paper outlining similar findings, after conducting experimental attacks of their own.

“All invisible watermarks are vulnerable,” it reads.

This newest study goes even further. While some researchers have held out hope that visible (“high perturbation”) watermarks might be developed to withstand attacks, Soheil Feizi and his colleagues say that even this more promising type can be manipulated.
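
To see why researchers keep breaking invisible watermarks, consider how little it takes to destroy the toy scheme sketched earlier. The attacks in the published papers are far more sophisticated, but even imperceptible random noise scrambles LSB-level data. This hypothetical sketch reuses `detect` and `marked` from the example above:

```python
# A crude "attack": add tiny +/-1 noise that a viewer cannot see.
# Each watermark bit survives only if its pixel's noise happens to be 0,
# so the 8-bit tag from the earlier sketch is almost certainly destroyed.
import numpy as np

def noise_attack(image: np.ndarray, strength: int = 1) -> np.ndarray:
    """Add imperceptible noise in [-strength, strength] to every pixel."""
    noise = np.random.randint(-strength, strength + 1, image.shape)
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

attacked = noise_attack(marked)  # 'marked' from the earlier sketch
print(detect(attacked))          # almost certainly False
```

Ordinary image handling, such as resizing or lossy JPEG re-encoding, perturbs pixels in much the same way, which is why robust watermarks must survive far more than this and why the papers' stronger, deliberately crafted attacks are so damaging.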

“The flaws in watermarking haven’t dissuaded tech giants from offering it up as a solution, but people working within the AI detection space are wary. ‘Watermarking at first sounds like a noble and promising solution, but its real-world applications fail from the onset when they can be easily faked, removed, or ignored,’ says Ben Colman, the CEO of AI-detection startup Reality Defender.”

Harm Reduction

Detecting AI-made fake images will require new solutions, possibly created with AI itself. Humans, however, will have to work out the legal angles.

University of Maryland computer science professor Soheil Feizi is largely skeptical of the idea that watermarking is a good use of resources for companies like Google.

“Perhaps we should get used to the fact that we are not going to be able to reliably flag AI-generated images,” he says. Still, his paper strikes a cautiously hopeful note: “Based on our results, designing a robust watermark is a challenging but not necessarily impossible task.”

read more at wired.com