European startups are rolling out products built using Chinese AI developer DeepSeek’s technology, despite safety concerns from some in the sector.
DeepSeek shook the tech world last month when the company said it had developed a high-performing large language model (LLM) for less than $6m — a fraction of the costs stacked up by tech giants like Google and OpenAI.
Two weeks on from the release of DeepSeek’s R1 model, some European startups are already deploying the model in their products, as they seek cheaper alternatives to the better-known AI providers.
ElevenLabs, a synthetic voice startup and industry leader, was among the first to announce it had integrated R1 into its products. Others, including London-based AI unicorn Synthesia, have begun experimenting with the model to test its capabilities.
“I know two German AI startups that have already integrated it into their products and a lot of European startups from my network are starting to experiment with it,” Denis Kalinin, head of APAC at Runa Capital, tells Sifted.
But there are risks. Critics have highlighted R1’s apparent built-in biases, with the bot reportedly refusing to produce material critical of the Chinese government. Meanwhile, privacy and security experts have warned that those who fail to take proper precautions risk legal action.
Cheaper model
Berlin-based AI startup TextCortex, which helps business customers tailor the technology to suit their needs, has long used Anthropic’s Claude 3.5 Sonnet as the default underlying model for new users on its platform.
But after testing out a DeepSeek integration, cofounder Dominik Lambersy says the company may soon switch to R1. “The cost is ridiculous. It’s so cheap in comparison to everything else,” he tells Sifted.
“We’re currently considering whether we should place the biggest DeepSeek model as the default.” (TextCortex is currently hosting DeepSeek via a US provider.)
In the past week, German AI companies such as Langdock, which is building a ChatGPT-like platform for businesses, and Novo AI confirmed they had deployed a DeepSeek model. Swedish notetaking startup Sana AI announced the same on Friday.
Many more are experimenting with the model.
UK AI unicorn Synthesia — which raised $180m last month — tells Sifted it’s currently running tests on DeepSeek’s R1. “Our research teams are experimenting with DeepSeek, and if it meets our high standards for performance, security, and ease of integration, we will then make a decision whether to deploy it in our products.”
Compatriot Tessl, which is building AI agents for creating software and raised $125m in 2024, also says it’s evaluating R1.
“We see lots of AI coding companies looking to integrate DeepSeek,” says Jose Gaytan de Ayala, investor at Kinnevik. “This makes sense given the better reasoning capabilities [on par with OpenAI’s o1 model].”
Alongside performance, cost is also a factor. “DeepSeek-V3 is 18 times cheaper than ChatGPT4o,” says Kalinin.
Concerns
There are concerns over data privacy risks when using DeepSeek’s APIs directly from the Chinese cloud — though because the model is open source, it can be deployed on local servers in Europe, keeping sensitive information secure.
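For illustration only, since the article doesn’t specify any particular tooling: below is a minimal sketch of what “deploying on local servers” can look like in practice, loading one of DeepSeek’s openly released distilled checkpoints with the Hugging Face Transformers library so that prompts never leave a company’s own infrastructure. The model ID, prompt and generation settings are placeholders, not a configuration recommended by anyone quoted here.

```python
# Sketch: running an open-weights DeepSeek model on local hardware so that
# user data stays on the company's own servers rather than a third-party cloud.
# Model ID and prompt are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # example open-weights checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise the key GDPR obligations for a SaaS vendor."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation happens entirely on local infrastructure; no API call leaves the machine.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```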
That hasn’t stopped some regulators from clamping down. Italy has already banned DeepSeek’s model from use, and regulators in France, Belgium, Taiwan, South Korea and the US are all reportedly investigating the platform.
The probes have given some startups pause for thought.
“Companies that manage more sensitive data, such as legal tech or HR tech, are testing it but are hesitant to deploy until fully vetted,” says Gaytan de Ayala.
Other sectors like fintech — which typically handle very sensitive customer data — are also more cautious about deploying products using DeepSeek due to privacy and security concerns, says Kalinin.
LatticeFlow, a Swiss startup focused on trustworthy AI, published a study of DeepSeek on Tuesday suggesting the company’s models may not be compliant with the EU’s AI Act, which is gradually coming into force over the next two years.
The report suggested DeepSeek’s models were more susceptible to hijacking — being manipulated to leak sensitive information — and showed significantly higher bias compared to rivals.
“While progress has been made in improving capabilities and reducing inference costs, one cannot ignore critical gaps in key areas that directly impact business risks — cybersecurity, bias and censorship,” says Petar Tsankov, LatticeFlow’s CEO.
But others think such concerns won’t matter in the near future.
“You can deploy it safely, but judging by some of the more prominent answers the model gave about some political topics, there’s still a risk that the model somehow gives an answer that is clearly pointing back to China,” says Rasmus Rothe, general partner at Merantix Capital.
“I think the whole China debate will be less important in the next six months. That’s not the most important news,” he added. “The most important news is that there’s an independent team coming up with a model that’s super efficient with much less resources.”