Legislators around the world are issuing an urgent call for deepfake protection to safeguard women’s dignity and privacy as companies like Google develop new methods. (Source: Image by RR)

Surge of Deepfake Porn Highlights the Need for an Array of Safeguards, Including Watermarks and Shields

The rise of nonconsensual deepfake pornography, particularly targeting women, has become a widespread concern as generative AI technology advances, according to a story on technologyreview.com. In a recent incident involving pop star Taylor Swift, explicit deepfake images went viral, highlighting the urgent need for effective countermeasures. While several strategies have emerged to combat this issue, each approach comes with its own set of advantages and limitations.

Watermarks: One proposed solution is the use of watermarks, invisible signals embedded in images that help computers identify AI-generated content. Google’s SynthID, for example, uses neural networks to subtly modify image pixels, embedding a watermark that remains detectable even after the image is edited or captured in a screenshot. This could improve content moderation on social media platforms, making fake content, including nonconsensual deepfakes, quicker to spot. However, watermarks are still experimental and not widely deployed: determined attackers might find ways to tamper with them, and companies do not apply the technology to all images, limiting its effectiveness.
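SynthID’s internals are not public, so the snippet below is only a toy stand-in for the general idea of an invisible watermark: write a known bit pattern into pixel values that people cannot see, then look for it later. The embed_watermark and detect_watermark names and the least-significant-bit scheme are illustrative assumptions, not Google’s method; unlike a learned watermark such as SynthID’s, this naive approach would not survive screenshots, compression, or editing.

```python
# Toy illustration of an invisible watermark (NOT SynthID, whose design is proprietary):
# hide a fixed bit pattern in the least-significant bits of a few blue-channel pixels.
import numpy as np

SIGNATURE = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical watermark bits

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Return a copy of `image` (HxWx3, uint8) with SIGNATURE written into the
    least-significant bits of the first blue-channel pixels."""
    marked = image.copy()
    n = SIGNATURE.size
    marked[0, :n, 2] = (marked[0, :n, 2] & 0xFE) | SIGNATURE
    return marked

def detect_watermark(image: np.ndarray) -> bool:
    """Check whether the SIGNATURE bits are present in the expected positions."""
    n = SIGNATURE.size
    return bool(np.array_equal(image[0, :n, 2] & 1, SIGNATURE))

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(detect_watermark(embed_watermark(img)))  # True
    print(detect_watermark(img))                   # False with overwhelming probability
```

The fragility of a scheme like this is exactly why systems such as SynthID instead train neural networks to spread the signal across the whole image in ways intended to survive common edits.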

Protective Shields: Tools such as PhotoGuard, Fawkes, and Nightshade give individuals a defense against AI image abuse. They subtly alter an image’s pixels so that AI-generated manipulations of it come out looking unrealistic, making unauthorized use of the image easier to spot. PhotoGuard, for instance, distorts images in ways invisible to the human eye, causing deepfake alterations to stand out. These tools show promise, but they are built against today’s AI models and may not withstand future ones. They also cannot be applied retroactively to images already online, and they are difficult to use for public figures who do not control how their images circulate.
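PhotoGuard, Fawkes, and Nightshade each work differently, but the common idea is an adversarial perturbation: change the pixels by an amount too small for people to notice, chosen so that an image model’s internal representation of the photo is pushed far from where it started. The sketch below is a generic, simplified version of that idea, not the actual code of any of these tools; the stand-in encoder, step size, and pixel budget are all illustrative assumptions.

```python
# Generic "protective perturbation" sketch (not the actual PhotoGuard/Fawkes/Nightshade code):
# nudge pixels within a small budget so an encoder's view of the image drifts far from the original.
import torch
import torch.nn as nn

# Stand-in encoder; the real tools target encoders inside actual generative models.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

def protect(image: torch.Tensor, eps: float = 8 / 255,
            steps: int = 20, lr: float = 2 / 255) -> torch.Tensor:
    """Return `image` (1x3xHxW, values in [0, 1]) plus an imperceptible perturbation
    that maximizes how far its embedding moves from the clean image's embedding."""
    with torch.no_grad():
        clean_embedding = encoder(image)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = -torch.norm(encoder(image + delta) - clean_embedding)  # maximize drift
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # signed-gradient step, PGD-style
            delta.clamp_(-eps, eps)          # keep the change below the visibility budget
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

img = torch.rand(1, 3, 64, 64)               # dummy photo
protected = protect(img)
print(float((protected - img).abs().max()))  # stays within the eps budget
```

In practice the perturbation has to be computed against the specific models an abuser might use, which is why shields like these work against current AI systems but may not hold up against future ones.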

Regulation: Stricter regulation has been advocated as a crucial step in combating nonconsensual deepfake pornography. In the United States, recent bipartisan bills would make sharing fake nude images a federal crime, and states such as California and Virginia have already banned the creation and distribution of such content. Efforts elsewhere vary: the UK’s Online Safety Act outlaws sharing deepfake porn but not creating it, while the European Union has introduced bills requiring disclosure of AI-generated content and faster removal of harmful material. China has taken a comprehensive approach with its deepfake law, which demands consent, authentication, and labeling for AI-generated content.

Challenges and Limitations: While regulation provides a legal framework and potential recourse for victims, enforcement can be challenging. Identifying perpetrators and building cases against them may be difficult, especially when they operate from different jurisdictions. Additionally, as deepfake technology evolves, staying ahead of its advancements becomes increasingly complex. Despite these challenges, stricter regulations can send a clear message that creating nonconsensual deepfakes is unacceptable, potentially changing societal attitudes toward this form of sexual abuse.

Addressing nonconsensual deepfake pornography requires a combination of technical solutions and legal measures. Watermarks and protective shields offer immediate protection, but their effectiveness depends on widespread adoption and continuous adaptation to evolving technology. Regulation provides a broader framework for accountability but faces enforcement challenges. Ultimately, tackling this issue demands ongoing efforts and international cooperation to protect individuals from the harms of nonconsensual deepfakes and uphold their rights to privacy and consent.

Read more at technologyreview.com