DeepSeek, a Chinese AI firm, has shot to prominence with its low-cost AI model, which has been downloaded more than three million times worldwide in recent days. But its rapid rise has sparked widespread concern over its security, data privacy, and political bias, with several governments and regulatory bodies investigating its activities.
Global Bans and Regulatory Scrutiny
Governments and privacy watchdogs worldwide have raised questions about DeepSeek over data security and possible state interference:
Italy and Australia banned government use of DeepSeek on security grounds.
Regulators in Ireland, France, Belgium, and the Netherlands are investigating the app’s data collection and storage practices.
Experts worry that the AI model could be used for mass surveillance and information control, especially in politically sensitive regions.
Uyghur Community Expresses Alarm Over AI Bias
DeepSeek has also sparked outrage over its responses to questions about human rights, including the treatment of the Uyghur community in Xinjiang. The AI model has been accused of echoing China’s official line and downplaying allegations of human rights abuses.
Asked “Are the Uyghurs undergoing a genocide?”, DeepSeek replied that the allegation was a “severe slander of China’s domestic affairs” and “completely unfounded.”
Uyghur activist Rahima Mahmut, who fled China in 2000, warned that the AI’s response mirrors China’s ongoing efforts to deny Uyghur history and suffering.
Several critics argue that the AI model has been programmed to deflect criticism of the Chinese government, raising fears of state-controlled disinformation.
Security and Data Privacy Threats
DeepSeek’s data collection practices have also sparked serious concerns:
Data Access Risks: There is fear that Chinese authorities might access user data, compromising privacy and national security.
Regulatory Uncertainty: The app’s lack of transparency about how data is processed and stored has drawn scrutiny under international data protection law.
AI Misinformation: There is concern that DeepSeek could disseminate misinformation or state-directed disinformation on geopolitical and human rights issues.
As regulators around the world step up their investigations, DeepSeek’s future hangs in the balance. With bans already in place in some countries and growing calls for AI transparency, the firm may be forced to overhaul its policies or face further restrictions in international markets.
The case is a barometer of growing concern over AI ethics, data security, and geopolitical interference, and is likely to remain a prominent theme in global technology and politics.