Italy sends first data verification request to DeepSeek

The jury is still out on whether Chinese artificial intelligence startup DeepSeek is a game-changer or part of an elaborate plan by its hedge fund parent company to sell shares in Nvidia and other tech companies. Whatever the case (or maybe both?), DeepSeek and its large language model have made a big splash. And now they are attracting the attention of data protection authorities.

This appears to be the first serious move by one of those authorities since DeepSeek shot to prominence in recent days: Euroconsumers, a coalition of European consumer groups, has filed a complaint with the Italian Data Protection Authority over how DeepSeek handles personal data under the GDPR, the EU’s data protection framework.

The Italian DPA confirmed today that it has since sent DeepSeek a request for information. “The data of millions of people in Italy is at risk,” the agency said.

Two key details about DeepSeek have not escaped notice: the service is built and operated in China, and, according to its privacy policy, the information and data it collects and stores are also hosted there.

In the policy, DeepSeek also briefly notes that when it transfers data to China from the country where the service is used, it does so “in accordance with the requirements of applicable data protection laws.”

But Euroconsumers – an organization that won a case against Grok last year over how it used data to train its AI – and the Italian DPA want more details.

Addressing Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, the Italian DPA said it wants to know what personal data is collected, from which sources and for what purposes – including what information is used to train the AI system – and what the legal basis for the processing is. It also wants more information about the servers located in China.

In its request for information, the authority also asks, “if personal data is collected through web scraping,” how users who are “registered and not registered with the service have been informed or notified about the processing of their data.”

As MLex notes, Euroconsumers also points out that there are no details on how DeepSeek protects minors or restricts their access to its services, from age verification to how it processes their data.

(DeepSeek’s age policy states that the service is not intended for users under the age of 18, although it does not provide a way to enforce this rule. For users between the ages of 14 and 18, DeepSeek suggests that they read the privacy policy with an adult.)

Euroconsumers and the Italian supervisory authority are the first to make a move against DeepSeek. They likely won’t be the last, although other action may not come as swiftly.

Earlier today, DeepSeek was a main topic at a European Commission press conference. Thomas Regnier, the Commission’s spokesperson for tech sovereignty, was asked whether there were concerns at the European level about DeepSeek related to security, privacy, and censorship. For now, the main message was that it is too early to talk about any investigation.

“The services offered in Europe will respect our rules,” Regnier said in response to a question about data privacy, adding that the AI Act applies to all AI services offered in the region.

He declined to say whether, in the EU’s view, DeepSeek complies with those rules. He was then asked whether the app’s censorship of topics that are politically sensitive in China violates European free-speech rules and warrants an investigation. “It’s very early stages, I’m not talking about an investigation yet,” Regnier replied. “Our structure is strong enough to deal with potential problems if they arise.”

Questions TechCrunch sent to the U.K.’s ICO about DeepSeek drew a similar response: DeepSeek will essentially face the same scrutiny as any other GenAI developer, but so far no further action has been taken.

“Developers and implementers of generative AI must ensure that individuals have meaningful, concise, and easily accessible information about the use of their personal data, as well as clear and effective processes to allow people to exercise their information rights,” an ICO spokesperson said. “We will continue to work with stakeholders to promote effective transparency measures without shying away from taking action when our regulatory expectations are ignored.”

Meanwhile, could new regulatory fronts open up in areas such as copyright and intellectual property?

Many have marveled at how DeepSeek’s very existence seems to challenge assumptions about the real costs of training and operating an LLM or generative AI service: its cheaper infrastructure and cost base undermine the idea that building foundation AI models and running generative AI applications must cost a fortune in chips, data center usage, and power consumption.

But some have recently begun to question all of this. Microsoft and OpenAI claim there is evidence that DeepSeek’s models were partly trained on “distillations” of their own models – that is, trained to reproduce the outputs of those larger models. If this turns out to be true, it would be a strange irony, given the numerous legal and other dramas that have unfolded around how some LLM developers have allegedly treated intellectual property and copyright.

We have reached out to DeepSeek regarding the Italian DPA’s request for information and will update this post as more information becomes available.
