The AI industry has recently been shaken by the launch of DeepSeek’s advanced AI chatbot. The company has built a model whose performance is comparable to that of leading models such as OpenAI’s ChatGPT, at a significantly lower cost.
Following the introduction of DeepSeek, concerns have been raised worldwide about the way the AI chatbot collects and stores personal data. Countries such as Italy and Australia have voiced concerns about DeepSeek’s data collection practices, and South Korea’s National Intelligence Service (NIS) has stated that DeepSeek collects an excessive amount of personal data, including keystroke patterns that can be used to identify individuals. This type of data collection raises serious privacy concerns: beyond identification, it enables an individual’s behavior to be tracked over time.
Many characteristics of the interaction between a user and a system (human-computer interaction, HCI) are unique to an individual and can therefore be used for behavioral biometric authentication. Keystroke patterns, also referred to as keystroke dynamics, are one such characteristic.
Behavioral biometric authentication is a method of identifying a person based on how they behave. It falls under biometric authentication but differs from methods such as fingerprint or facial recognition. Fingerprints and facial recognition analyze fixed physical characteristics and are considered static biometrics. Behavioral biometrics, such as keyboard dynamics, focus on how a person performs actions, such as the speed and rhythm of keystrokes.
An algorithm can analyze how long keys are held down, which keys are used for capitalization, how often the backspace key is pressed, and what text is ultimately typed. By training a model on these timing and usage patterns, a system can perform biometric authentication.
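The kind of analysis described above can be sketched in a few lines. The snippet below is a minimal illustration, not DeepSeek’s actual telemetry: it assumes the system logs `(key, press_time, release_time)` tuples and derives three common keystroke-dynamics features, with the event format and feature set chosen purely for demonstration.

```python
# Minimal sketch of keystroke-dynamics feature extraction.
# Assumes illustrative (key, press_time, release_time) events in milliseconds;
# this is NOT any real chatbot's logging format.
from statistics import mean

def extract_features(events):
    """Compute simple timing features from a sequence of key events."""
    # Dwell time: how long each key is held down.
    dwells = [release - press for _, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    # Backspace rate: how often the user corrects mistakes.
    backspace_rate = sum(1 for k, _, _ in events if k == "Backspace") / len(events)
    return {
        "mean_dwell": mean(dwells),
        "mean_flight": mean(flights),
        "backspace_rate": backspace_rate,
    }

sample = [("h", 0, 80), ("i", 150, 220), ("Backspace", 300, 360), ("!", 450, 540)]
print(extract_features(sample))
```

Even this tiny feature vector is characteristic of an individual typist; a real system would collect many more features (per-key and per-digraph timings) and feed them to a trained classifier.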
Because keystroke patterns are unique to an individual, they can be used as a form of invisible identification without the user explicitly giving consent. When combined with other collected data, such as an IP address or device information, an AI system like DeepSeek can build a detailed user profile and may even be able to link anonymous users to their real identities.
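The re-identification risk described above can be illustrated with a toy matcher: given stored feature vectors for known users, an “anonymous” typing sample can be linked to the nearest stored profile. The profiles, feature values, and distance threshold below are entirely invented for illustration; real systems use far richer features and statistical models rather than a plain Euclidean distance.

```python
# Toy illustration of keystroke-based re-identification: match an
# "anonymous" timing profile against stored ones by Euclidean distance.
# The user profiles and feature values below are made up.
import math

profiles = {
    "alice": (75.0, 110.0, 0.02),  # (mean dwell ms, mean flight ms, backspace rate)
    "bob":   (120.0, 60.0, 0.10),
}

def identify(sample, profiles, threshold=25.0):
    """Return the closest profile name, or None if no profile is near enough."""
    best_name, best_dist = None, float("inf")
    for name, vec in profiles.items():
        dist = math.dist(sample, vec)  # Euclidean distance between feature vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(identify((78.0, 105.0, 0.03), profiles))  # close to alice's stored profile
```

The point of the sketch is that no name, email address, or cookie is needed: the timing pattern itself acts as the identifier, which is why combining it with an IP address or device information is so privacy-sensitive.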
This raises important ethical and legal questions about privacy and consent. From an ethical perspective, this constitutes an invasion of users’ privacy, as they may not be aware that their keystroke patterns are being collected and analyzed. From a legal perspective, this may conflict with privacy laws such as the GDPR.
Concerns about the collection of keystroke patterns strike at the heart of the differences between the General Data Protection Regulation (GDPR) in Europe and privacy legislation in China.
The GDPR imposes strict requirements on the processing of personal data, such as obtaining explicit user consent, minimizing data storage, and ensuring transparency about how data is used. In contrast, Chinese authorities have broader powers to access personal data, and companies are legally required to cooperate with the government in sharing data.
This discrepancy leads to tension when technologies such as DeepSeek are deployed internationally. The way DeepSeek collects and uses data, combined with the obligations of Chinese companies to share data with the government, is fundamentally at odds with the GDPR, which was specifically designed to protect individuals’ privacy against unauthorized access and misuse.
Several European countries have already taken measures against the use of DeepSeek. For example, Italy has blocked the application due to concerns about the processing of personal data, and in the Netherlands the government has prohibited civil servants from using DeepSeek in the course of their work.
These developments point to growing concerns about DeepSeek’s compliance with the GDPR. The European Data Protection Board (EDPB) has indicated that national supervisory authorities may take further action, such as imposing fines or restricting the use of DeepSeek within the EU.
Given the current geopolitical tensions and legislative conflicts, smooth access for DeepSeek to the European market appears highly unlikely. The core of the problem lies in the conflicting legislation between China and Europe. Chinese companies such as DeepSeek are subject to Chinese laws, including the Cybersecurity Law and the Data Security Law. These laws require companies to share data with the Chinese government upon request, even if the data originates from foreign users. This means that, from a legal standpoint, DeepSeek cannot refuse to transfer data to Chinese authorities.
In Europe, by contrast, the GDPR applies, which sets strict rules for the processing and transfer of personal data. One of the core principles of the GDPR is that personal data may not be shared outside the EU without a clear legal basis. This means that DeepSeek’s legal obligations in China directly conflict with the GDPR in Europe.
In practice, it is almost impossible to obtain hard guarantees that DeepSeek will not be forced to share data with the Chinese government. The fundamental issue is that Chinese law takes precedence over corporate assurances. If the Chinese government decides to request data, DeepSeek cannot refuse, even if the company promises to comply with the GDPR in Europe.
This makes it extremely risky for European regulators to allow DeepSeek to operate freely on the market. And this is not only a problem for DeepSeek, but for all Chinese technology companies operating globally. This has already led countries such as the United States to restrict or ban companies like Huawei, TikTok, and now also DeepSeek.
European regulators are entitled to take action against companies that do not comply with the GDPR. Such measures can range from fines to a complete ban on their activities within the EU and are intended to safeguard the privacy and data protection of European citizens.
However, strict privacy regulations also make it more difficult for certain AI companies, such as DeepSeek, to operate in the EU. As a result, major players such as OpenAI (ChatGPT) and Anthropic (Claude) remain relatively dominant, and strong competition is lacking. Less competition can slow innovation and make Europe more dependent on U.S. tech companies, which currently dominate the market.
The GDPR protects privacy but can simultaneously hinder innovation and competition. This tension between privacy protection and economic and innovation interests is complex. Possible solutions lie in finding a balance that both safeguards users’ privacy and allows room for technological progress and competition. This requires a joint effort by policymakers, companies, and technological innovators to create an equilibrium that serves the interests of all stakeholders and contributes to a sustainable future for the AI industry. For the time being, however, privacy protection appears to outweigh Europe’s economic and innovation interests.
Would you like to read more about AI systems?
Check out our latest research: “Can You Trust AI for Legal Work?”