Microsoft and OpenAI are investigating whether a group linked to Chinese AI startup DeepSeek gained unauthorized access to OpenAI’s technology.
Sources familiar with the matter revealed that Microsoft’s security team detected individuals, potentially connected to DeepSeek, extracting a large volume of data using OpenAI’s API.
While developers can legally access OpenAI’s proprietary AI models through paid licenses, concerns have arisen over whether the data retrieval in this case was within permitted use.
The inquiry underscores growing concern about AI security and the potential misuse of advanced models. With AI development intensely competitive, any unauthorized access to OpenAI’s systems could carry serious consequences for intellectual property and data protection.
The involvement of a group linked to DeepSeek, a Chinese AI firm, adds another layer of complexity to the situation, given the increasing global scrutiny on AI-related data sharing between countries.
Microsoft and OpenAI have not yet disclosed the full extent of the data extraction or whether it was an intentional breach or an overuse of permitted API access. However, the incident highlights the challenges tech companies face in securing their AI models while offering access to third-party developers.
OpenAI’s API allows businesses to integrate its models into their applications, but excessive or improper use can raise red flags, prompting security measures.
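For context, a typical integration looks something like the following. This is a minimal Python sketch using OpenAI’s official client library; the model name, prompt, and the comment about usage signals are illustrative assumptions, not details drawn from the investigation.

```python
# Minimal sketch of a routine OpenAI API integration (illustrative values only).
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
)

print(response.choices[0].message.content)

# Each request reports its token usage; aggregated across an account, this is
# one kind of signal a provider could watch for unusually heavy extraction.
print(response.usage.total_tokens)
```

Ordinary integrations like this are exactly what the API is licensed for; the question in this case is whether the volume and pattern of requests went beyond what the terms of use allow.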
As AI models grow more capable, protecting them from unauthorized access has become a priority for the companies that build them. With competition in AI intensifying, incidents like this may lead to stricter controls on API access and closer monitoring of usage patterns.
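What closer monitoring might look like in practice: the hypothetical sketch below flags accounts whose hourly token volume spikes far above their historical baseline. The thresholds, field names, and logic are assumptions for illustration, not a description of OpenAI’s or Microsoft’s actual safeguards.

```python
# Hypothetical usage-anomaly check; thresholds and data structures are illustrative.
from dataclasses import dataclass

@dataclass
class UsageWindow:
    account_id: str
    tokens_this_hour: int
    baseline_tokens_per_hour: float  # rolling average from prior weeks

def flag_anomalous_usage(window: UsageWindow, spike_factor: float = 20.0) -> bool:
    """Return True when an account's hourly token volume far exceeds its baseline."""
    if window.baseline_tokens_per_hour <= 0:
        # No history yet: fall back to an arbitrary cold-start ceiling.
        return window.tokens_this_hour > 1_000_000
    return window.tokens_this_hour > spike_factor * window.baseline_tokens_per_hour

# Example: an account that normally uses ~50k tokens/hour suddenly pulls 5M.
print(flag_anomalous_usage(UsageWindow("acct_123", 5_000_000, 50_000)))  # True
```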
Microsoft and OpenAI are expected to continue their investigation to determine whether any violations occurred and what actions may be necessary to prevent future incidents.