![](https://seeflection.com/wp-content/uploads/2025/02/deepseek-data.png)
Wiz Research uncovered a major security breach in DeepSeek’s publicly accessible ClickHouse database that exposed over a million sensitive records, including chat histories, API keys, and backend details, underscoring the urgent need for stronger AI infrastructure security measures. (Source: Image by RR)
Security Experts Warn That AI Startups Must Prioritize Data Protection to Avoid Breaches
Wiz Research uncovered a critical security vulnerability in DeepSeek’s publicly accessible ClickHouse database, exposing over a million lines of sensitive log data, including chat history, API keys, backend details, and operational metadata. The security flaw, which provided full database access without authentication, posed significant risks to DeepSeek’s internal security and end-user data, allowing potential privilege escalation and unauthorized access to proprietary information. Upon discovery, Wiz Research promptly notified DeepSeek, which swiftly secured the database to prevent further exposure.
The breach, as detailed by wiz.io, was identified through external reconnaissance that revealed open, unauthenticated ports (8123 and 9000, ClickHouse’s default HTTP and native TCP interfaces) on multiple DeepSeek subdomains, leading to the exposed ClickHouse database. Using basic SQL queries, Wiz Research accessed a log_stream table containing highly sensitive data, including timestamps, API endpoint references, chatbot metadata, and plaintext logs. The exposure meant that an attacker could have retrieved plaintext passwords, exfiltrated sensitive local files, and compromised DeepSeek’s entire system infrastructure. Though Wiz Research adhered to ethical security protocols and did not execute malicious queries, the scale of the exposure highlights a severe AI infrastructure security gap.
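The reconnaissance described above relied on ClickHouse answering queries over its HTTP interface (port 8123) without credentials. A minimal audit sketch of that check, for use only against hosts you own, might look like the following; the host name and helper names here are illustrative assumptions, not details from the Wiz report.

```python
# Hypothetical audit sketch: tests whether a ClickHouse instance answers
# queries over its HTTP interface (default port 8123) without credentials.
# The host names and function names below are illustrative, not DeepSeek's.
import urllib.parse
import urllib.request

CLICKHOUSE_HTTP_PORT = 8123  # ClickHouse's default HTTP interface port


def build_probe_url(host: str, query: str = "SHOW TABLES") -> str:
    """Build the URL that submits `query` to a ClickHouse HTTP endpoint."""
    return (
        f"http://{host}:{CLICKHOUSE_HTTP_PORT}/"
        f"?query={urllib.parse.quote(query)}"
    )


def answers_without_auth(host: str, timeout: float = 5.0) -> bool:
    """Return True if the server executes the probe query with no credentials."""
    try:
        with urllib.request.urlopen(build_probe_url(host), timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # Closed port, authentication required, or host unreachable.
        return False


# Example usage against infrastructure you control:
# if answers_without_auth("clickhouse.example.internal"):
#     print("WARNING: database reachable without authentication")
```

A `200` response to an unauthenticated `SHOW TABLES` is exactly the condition that let Wiz enumerate tables such as log_stream; any such finding on your own infrastructure means the instance needs authentication or network restrictions immediately.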
This breach raises alarms about the security practices of AI startups, as rapid adoption of AI services often outpaces security measures. While much of AI security discourse focuses on future threats like model exploitation, basic misconfigurations—such as unprotected databases—present immediate risks to both companies and their users. The DeepSeek exposure underscores the importance of securing AI infrastructure, as failing to do so can result in data leaks, system vulnerabilities, and potential cyberattacks.
As AI companies become critical infrastructure providers, security frameworks must evolve to match the scale of their influence. The DeepSeek incident serves as a wake-up call for the industry to enforce cloud-level security standards and ensure that AI services protect sensitive data as rigorously as public cloud and enterprise platforms. Moving forward, collaboration between AI engineers and cybersecurity teams is essential to prevent future breaches, protect user data, and maintain trust in AI-driven applications.
read more at wiz.io