Russia’s Safe City projects are rapidly expanding, with plans to deploy more surveillance systems nationwide and centralize video streams, alongside digitizing public services to facilitate the creation of extensive databases for tracking individuals. (Source: Image by RR)

Moscow’s AI Surveillance: The Thin Line Between Safe and Suppressed

Moscow’s ambitious smart city project, which promised to reduce crime through extensive surveillance including an AI system named Sfera, has taken a dark turn following Russia’s invasion of Ukraine, according to a recent report. Residents like Sergey Vyborov, previously detained for protesting the invasion, find themselves under constant watch by one of the world’s most efficient surveillance systems, a chilling realization of privacy advocates’ warnings that such technology could be used for oppression. This system, part of the broader “Safe City” initiative, which boasted of improving public safety with its 217,000 surveillance cameras, has increasingly targeted protestors, journalists, and political rivals rather than its stated targets: criminals and terrorists.

The inception of Safe City and its foundational technology began innocuously with NTechLab’s development of face recognition software, initially hailed for its innovative capabilities, such as the FindFace app. However, as the technology evolved, it became integral to Moscow’s sprawling surveillance apparatus, assisting in everything from managing World Cup security to compiling databases of political dissidents. This transformation reflects a broader global trend where the integration of AI in public safety systems blurs the lines between security and surveillance, raising ethical and privacy concerns that were once theoretical but are now all too real for those under Moscow’s watch.

The escalation of Safe City’s surveillance capabilities has coincided with Russia’s increasing authoritarianism, particularly evident during the COVID-19 pandemic, when the system expanded under the guise of public health. This period also saw the project extend its reach, centralizing vast amounts of data from across Russia and intensifying the state’s control over its citizens. As the report notes, this digital repression has sparked significant backlash, including lawsuits and international criticism, yet the system’s opaque nature and the government’s tightening grip on digital privacy continue to stifle dissent.

Amidst growing scrutiny, the architects of Moscow’s surveillance state, including companies like NTechLab, face a moral reckoning. Their technology, originally celebrated for its potential to enhance public safety, has become a tool of oppression, forcing some creators to confront the unintended consequences of their innovations. This introspection is part of a broader exodus of IT talent from Russia, as professionals grapple with their roles in enabling an increasingly repressive regime.

The future of Safe City and similar projects across Russia appears set on expansion, with plans to further centralize and digitize surveillance data, mirroring dystopian predictions about the power of smart city technologies to erode privacy. As Russia continues to refine its surveillance capabilities, possibly drawing inspiration from models like China’s, the dilemma of technology’s dual use, both as a tool for societal benefit and as a weapon against personal freedom, becomes increasingly pronounced. This ongoing development poses not just a national challenge but a global question about the balance between innovation, security, and human rights.