OpenAI disclosed that a Mixpanel security breach exposed limited API user metadata—such as names and emails—but no sensitive data or credentials, prompting the company to sever ties with the vendor, warn users about phishing risks, and tighten security across its partner ecosystem.

Mixpanel Breach Highlights Ongoing Risks of External Analytics Dependencies

OpenAI disclosed a security incident tied to Mixpanel, a third-party analytics vendor that previously supported web analytics for the API platform interface. The breach occurred entirely within Mixpanel’s systems, and OpenAI emphasized that its own infrastructure, along with ChatGPT and other non-API products, was not compromised. Mixpanel notified OpenAI shortly after discovering unauthorized access on November 9, 2025, and provided the affected dataset to OpenAI on November 25.

As noted in an article at openai.com, the exposed data consisted solely of limited analytics metadata associated with some API users: names, email addresses, approximate city-level location data, browser and operating system information, referring websites, and internal organization or user IDs. No chat data, prompts, API requests, API usage logs, passwords, API keys, payment details, government IDs, authentication tokens, or other sensitive credentials were affected.

In response, OpenAI immediately removed Mixpanel from all production services, reviewed the compromised dataset, and began directly notifying impacted organizations and users. The company is working closely with Mixpanel and other partners to fully assess the scope of the incident and continues to monitor for signs of malicious activity. OpenAI also pledged heightened vendor oversight following the incident, initiating expanded security reviews and tightening security requirements for all third-party partners.

OpenAI warned that the limited data exposed—especially names, email addresses, and API-related identifiers—could be used in phishing or social-engineering attempts. Users are advised to scrutinize unexpected messages, verify that OpenAI communications originate from official domains, and enable multi-factor authentication to reduce risk. OpenAI reiterated its commitment to transparency, promising to update customers should new, materially relevant information emerge.
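As a minimal illustration of the domain check described above, the Python sketch below tests whether a link's hostname belongs to a trusted domain before the link is treated as genuine. The ALLOWED_DOMAINS set here is an assumed example list for illustration only, not an authoritative inventory of the domains OpenAI uses; consult OpenAI's official guidance for that.

```python
# Illustrative sketch: reject links whose hostname is not an allowed domain
# or a subdomain of one. ALLOWED_DOMAINS is an assumption for this example.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"openai.com", "chatgpt.com"}  # assumed examples

def is_official_link(url: str) -> bool:
    """Return True if the URL's host is an allowed domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

# A legitimate subdomain passes; a look-alike phishing domain does not.
print(is_official_link("https://platform.openai.com/docs"))              # True
print(is_official_link("https://openai.com.account-verify.example/login"))  # False
```

Note that a check like this only catches look-alike domains in links; it is no substitute for scrutinizing unexpected messages and enabling multi-factor authentication, as OpenAI advises.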

read more at openai.com