As well as generating image, audio, and video content with these AI tools, extremists are also experimenting with using the platforms more creatively, to produce blueprints for 3D-printed weapons or generate malicious code designed to steal the personal information of potential recruitment targets. (Source: Image by RR)

Far-Right Platforms Deploy AI Chatbots to Deny the Holocaust, Spread Hate with Videos

Extremists across the United States are increasingly using AI tools to spread hate speech, recruit members, and radicalize online supporters at an unprecedented speed and scale, according to a report from the Middle East Media Research Institute (MEMRI). AI-generated content has become a staple in their operations, with extremists developing their own AI models to create blueprints for 3D-printed weapons and recipes for making bombs. According to a story on wired.com, researchers at MEMRI's Domestic Terrorism Threat Monitor detail how domestic actors, including neo-Nazis and white supremacists, are using AI to enhance their hateful propaganda and operational capabilities.

Initially hesitant about the technology, extremists have now embraced AI, particularly for generating video content. The release of advanced video generation tools, like OpenAI’s Sora, has allowed them to produce sophisticated video propaganda, including fake videos of public figures making offensive statements. This has significantly boosted their ability to disseminate hate speech and extremist ideologies, contributing to a troubling trend as the US election approaches.

AI tools are also being used to create and manage fake accounts and to generate text, images, and videos at scale. Extremists are employing these technologies to produce blueprints for 3D-printed weapons and malicious code for stealing personal information. They exploit loopholes in content filters, framing requests in ways that bypass restrictions, which has allowed them to create harmful materials without detection by existing moderation systems.

The development of extremist-specific AI models, stripped of content moderation safeguards, poses a significant threat. These AI engines, built by tech-savvy extremists, can generate malicious content unchecked. For example, the far-right platform Gab has rolled out chatbots trained to deny the Holocaust and promote extremist ideologies. This surge in AI-driven hate content underscores the urgent need for robust countermeasures to prevent further radicalization and violence.

read more at wired.com