DeepSeek R1’s rise shows AI’s promise and peril — cost-effective yet risky. Privacy, bias and security flaws demand responsible AI now.
Opinion
Opinion by: Merav Ozair, PhD
Since its launch on Jan. 20, DeepSeek R1 has grabbed the attention of users as well as tech moguls, governments and policymakers worldwide — from praises to skepticism, from adoption to bans, from innovative brilliance to unmeasurable privacy and security vulnerabilities.
Who is right? The short answer: Everyone and no one.
Not a Sputnik moment
DeepSeek developed a large language model (LLM) comparable in its performance to OpenAI GTPo1 in a fraction of the time and cost it took OpenAI (and other tech companies) to build its own LLM.
Using clever architecture optimization that slashes the cost of model training and inference, DeepSeek was able to develop an LLM within 60 days and for under $6 million.
Indeed, DeepSeek should be acknowledged for taking the initiative to find better ways to optimize the model structure and code. It’s a wake-up call, but far from being a “Sputnik moment.”
Every developer knows that there are two ways to gain performance. Optimizing the code and “throwing” a lot of computing power. The latter option is very costly, and developers are always advised to maximize the architecture optimization before resorting to more computing.
It seems that with the rich valuations of artificial intelligence startups and the massive investments pouring in, developers got lazy. Why spend time optimizing model architecture if you have billions of dollars to spend on computing power?
This is a wake-up call to all developers to go back to basics. Innovate responsibly, get out of your comfort zone, think outside the box, and don’t be afraid to challenge the norm. There is no need to waste money and resources — use them wisely.
Like any other LLM, DeepSeek R1 falls short on reasoning, complex planning capabilities, understanding the physical world and persistent memory. So, there is no earth-shaking innovation here.
It’s time for scientists to go beyond LLMs, address these limitations, and develop a “new paradigm of AI architectures.” It may not be LLM or generative AI — a true revolution.
Paving the way for accelerated innovation
DeepSeek’s approach could encourage developers worldwide, including developing countries, to innovate and develop their own AI applications regardless of low resources. The more people contribute to AI research and development, the faster innovation evolves and meaningful breakthroughs might be achieved.
Recent: Crypto AI agents see ‘remarkable traction’ but value still unclear
This aligns with the Nvidia projective: to make AI affordable and for every developer or scientist to develop their own AI applications. That’s the meaning of project DIGITS, announced in early January, a $3,000 GPU for your desktop.
Humanity needs “all minds on deck” to solve humanity’s urgent problems. Resources may no longer be a barrier — it is time to shake up old paradigms.
At the same time, the DeepSeek release was also a wake-up call for actionable risk management and responsible AI.
Read the fine print
All applications come with terms of services, which the public often tends to ignore.
Some alarming details in DeepSeek terms of service that could affect your privacy, security and even your business strategy:
-
Data retention: Deleting your account doesn’t mean your data is erased — DeepSeek keeps it.
-
Surveillance: The app has the right to monitor, process and collect user inputs and outputs, including sensitive information.
-
Legal exposure: DeepSeek is governed by Chinese law, meaning state authorities can access and monitor your data upon request — the Chinese government is actively monitoring your data.
-
Unilateral changes: DeepSeek can update the terms at any time — without your consent.
-
Disputes and litigation: All claims and legal matters are subject to the laws of the People’s Republic of China.
The above are clear violations of the General Data Protection Regulation (GDPR) and other GDPR privacy and security violations, as stated by the complaints filed by Belgium, Ireland and Italy, which also temporarily banned the use of DeepSeek.
In March 2023, Italian regulators temporarily banned OpenAI ChatGPT for GDPR violations before allowing it back online a month after compliance improvements. Will DeepSeek comply as well?
Bias and censorship
Like other LLMs, DeepSeek R1 hallucinates, contains biases in its training data, and exhibits behavior that reflects China’s political views on certain topics, such as censorship and privacy.
Being a Chinese company, this is what is expected. The China Generative AI Law, which applies to providers and users of AI systems, states in Article 4:
This is a censorship rule. It means those developing and/or using generative AI must support “core socialist values” and comply with Chinese laws regulating this topic.
Not to say that other LLMs don’t have their own biases and “agenda.” This calls attention to the need for trustworthy, responsible AI and users to adhere to diligent AI risk management.
LLM security vulnerabilities
LLMs might be subject to adversarial attacks and security vulnerabilities. These vulnerabilities are even more concerning, as they will impact any applications built on this LLM by any organization or individual.
Qualys has tested the distilled DeepSeek-R1 LLaMA 8B variant for vulnerabilities, ethical concerns and legal risks. The model failed at half of the jailbreak — i.e., attempts to bypass the safety measures and ethical guidelines built into AI models like LLMs — attacks tested.
Goldman Sachs is considering using DeepSeek, but the model needs a security screening, like prompt injections and jailbreak. It is a security concern for any company that uses an AI model to power its applications, whether that model is Chinese or not.
Goldman Sachs is implementing the correct risk management, and other organizations should follow this approach before deciding to use DeepSeek.
Lessons learned
We must be vigilant and diligent and implement adequate risk management before using any AI system or application. To mitigate any LLM’s “agenda” and censorship elicited by centralized development, we might consider decentralized AI, preferably structured as a decentralized autonomous organization (DAO). AI knows no boundaries. It might be high time to consider unified global AI regulations.
Opinion by: Merav Ozair, PhD.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
This article first appeared at Cointelegraph.com News