MIT Researchers Develop PhotoGuard to Protect Images from AI Manipulation
According to a story on MIT Technology Review (technologyreview.com), a tool developed by the university’s researchers, called PhotoGuard, “works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated.”
Researchers presented the tool at the International Conference on Machine Learning last week. Hadi Salman, a PhD researcher at MIT who contributed to the work, said the program is meant to keep others from maliciously altering people’s photos. He calls PhotoGuard:
“‘…an attempt to solve the problem of our images being manipulated maliciously by these models.’ The tool could, for example, help prevent women’s selfies from being made into nonconsensual deepfake pornography.”
Running an image through PhotoGuard blocks AI apps such as Stable Diffusion from manipulating the photo. Until now, AI companies have relied mainly on watermarking, which flags manipulated images rather than preventing the manipulation. PhotoGuard foils manipulation in two ways. The first is an encoder attack, which adds imperceptible signals to the image so that the AI model interprets it as something else. The second is a diffusion attack, which disrupts the way the model generates images by encoding the photo with secret signals that alter how the model processes it.
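To illustrate the general idea behind an encoder attack, the sketch below optimizes a small, bounded perturbation that pushes an image’s latent representation toward a meaningless target, so an editing model “sees” something other than the real photo. This is a minimal illustration only, not the PhotoGuard code: the encoder here is a hypothetical stand-in module, and the loss, step size, and epsilon are assumed values, not the researchers’ settings.

```python
# Minimal sketch of an encoder-attack-style perturbation (assumptions noted above).
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for a latent image encoder (not the real model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 4, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def encoder_attack(encoder, image, eps=8 / 255, step=1 / 255, iters=40):
    """Add a perturbation bounded by eps (L-infinity) that drives the latent
    representation toward an 'empty' target, leaving the pixels nearly unchanged."""
    image = image.clone().detach()
    target = torch.zeros_like(encoder(image))          # assumed target latent
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = encoder((image + delta).clamp(0, 1))
        loss = nn.functional.mse_loss(latent, target)  # distance to target latent
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()          # signed gradient step
            delta.clamp_(-eps, eps)                    # keep the change imperceptible
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    enc = ToyEncoder().eval()
    photo = torch.rand(1, 3, 64, 64)                   # placeholder image tensor
    protected = encoder_attack(enc, photo)
    print("max pixel change:", (protected - photo).abs().max().item())
```

The key design point the sketch tries to convey is that the protection lives in the image itself: the pixel changes stay below a visibility threshold, but the encoder’s view of the image is shifted, which is what disrupts downstream editing models.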
Preventing the manipulation of an image, instead of just identifying it, is a much more effective way to tackle the problem, according to Henry Ajder, an expert on generative AI and deepfakes.
read more at technologyreview.com