OpenAI, a close Microsoft partner and investee, may allege that DeepSeek stole its intellectual property and violated its terms of service. But Microsoft still wants DeepSeek’s shiny new models on its cloud platform.
Today, Microsoft announced that R1, DeepSeek’s reasoning model, is available on Azure AI Foundry, Microsoft’s platform that brings together a range of AI services for enterprises under a single banner. In a blog post, Microsoft said the version of R1 on Azure AI Foundry “has undergone rigorous security testing,” including “automated model behavior evaluation and comprehensive security checks to mitigate potential risks.”
In the near future, according to Microsoft, customers will be able to run “distilled” versions of R1 locally on Copilot+ PCs, Microsoft’s brand for computers that meet certain AI-readiness requirements.
“As we continue to expand the catalog of models in the Azure AI Foundry, we’re excited to see developers and enterprises use […] R1 to solve real-world problems and deliver transformative experiences,” Microsoft continued in its announcement.
The addition of R1 to Microsoft’s cloud services is curious, given that Microsoft has reportedly initiated an investigation into DeepSeek’s potential abuse of its services and those of OpenAI. According to security researchers at Microsoft, DeepSeek may have exfiltrated a large amount of data through the OpenAI API in the fall of 2024. Microsoft, which is also OpenAI’s largest investor, notified OpenAI of the suspicious activity, Bloomberg reports.
But R1 is the talk of the town, and Microsoft may have been persuaded to bring it onto its cloud platform while it still holds that allure.
It’s unclear whether Microsoft has made any changes to the model to improve its accuracy or counteract its censorship. In a test conducted by NewsGuard, an organization that rates the reliability of information, R1 gave inaccurate answers to news-related questions, or declined to answer them, 83% of the time. A separate test found that R1 refused to answer 85% of prompts related to China, possibly a consequence of the government censorship applied to AI models developed in that country.