Recently, two of the most important artificial intelligence (AI) companies in the world, Google and OpenAI, issued a stark warning. According to both companies, some competitors may be attempting to copy their advanced AI systems by extracting how they work internally. Among the names mentioned are DeepSeek, alongside other Chinese providers and universities.
According to these American companies, some actors may be using access to existing artificial intelligence systems to secretly replicate their reasoning abilities. Let’s take a closer look at the issue.
What’s happening?
Big technology companies have invested billions of dollars in training advanced language models, known as large language models (LLMs). Among them are Gemini (developed by Google) and ChatGPT (created by OpenAI).
These models can answer questions, write texts, solve problems, and hold complex conversations. They are extremely valuable products because behind them lie years of research, huge amounts of data, enormous computing power, and engineering effort.
According to Google and OpenAI, some competitors may be using a method called “model distillation” to copy their capabilities. Google reported detecting a campaign that used more than 100,000 prompts in an attempt to replicate Gemini’s reasoning abilities in multiple non-English languages across many different tasks.
Model distillation
Distillation means asking a large number of carefully designed questions to an artificial intelligence model, collecting its responses, and then using those responses to train another model. When done internally, or with the model owner’s permission, this is a standard and legitimate technique.
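As a rough illustration of the loop described above, the harvesting step boils down to collecting question–answer pairs from a “teacher” model to train a smaller “student.” This is a minimal sketch; `teacher_answer` is a hypothetical stand-in for a real model API call, not any provider’s actual interface.

```python
def teacher_answer(prompt: str) -> str:
    # Placeholder: a real pipeline would call the large teacher model's API here.
    return f"answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Collect (prompt, response) pairs to use as supervised
    training examples for a smaller 'student' model."""
    return [(p, teacher_answer(p)) for p in prompts]

prompts = ["What is distillation?", "Summarize this text."]
dataset = build_distillation_dataset(prompts)
# Each (prompt, response) pair becomes one training example for the student.
```

At scale, the same loop run over 100,000 carefully chosen prompts is exactly the kind of campaign Google says it detected.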
The problem arises when an external company uses access to a public artificial intelligence system to extract its knowledge and reasoning patterns without authorization. According to John Hultquist, chief analyst at Google’s Threat Intelligence Group, an artificial intelligence model represents extremely valuable intellectual property. If someone can figure out how it reasons by repeatedly querying it, they may be able to recreate similar technology without paying the full cost of development.
Importance of this AI issue
The current debate in the artificial intelligence field is about economic competition and national security concerns.
In a memo submitted to a U.S. House committee focused on China, OpenAI stated that DeepSeek and other Chinese LLM providers have engaged in activities consistent with “adversarial distillation.” According to OpenAI, accounts linked to DeepSeek employees allegedly attempted to bypass access restrictions by using third-party routers to hide their source.
OpenAI also reported observing code developed to access U.S. AI models programmatically in order to collect outputs for distillation. The company mentioned some occasional activity linked to Russia as well. In addition, OpenAI warned that illicit model distillation poses a risk to what it describes as “American-led, democratic AI.” The company believes that protecting artificial intelligence systems cannot be the responsibility of one lab alone.
For this reason, OpenAI has called for an “ecosystem security” approach, possibly involving cooperation between artificial intelligence companies and the U.S. government, including closing API router loopholes and restricting adversary access to U.S. computing and cloud infrastructure.
Why is this so hard to stop?
Artificial intelligence systems like Gemini and ChatGPT are designed to be publicly accessible. Their usefulness depends on allowing users to ask questions and receive answers. However, if someone automates thousands or even hundreds of thousands of queries, they can collect large amounts of output data. Distinguishing between normal usage and systematic attempts to copy a model can be extremely challenging.
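One simple heuristic a provider might apply is flagging accounts whose query volume in a given window far exceeds normal interactive use. The sketch below is purely illustrative: the threshold, log format, and function name are assumptions, not any provider’s real detection policy, and real systems would combine many signals beyond raw volume.

```python
from collections import Counter

def flag_suspicious_accounts(query_log, threshold=1000):
    """query_log: list of account IDs, one entry per query in the time window.
    Returns the accounts whose query count exceeds the threshold."""
    counts = Counter(query_log)
    return {account for account, n in counts.items() if n > threshold}

# A normal user vs. an automated harvesting client in the same window.
log = ["alice"] * 40 + ["scraper-bot"] * 5000
print(flag_suspicious_accounts(log))  # {'scraper-bot'}
```

The catch, as noted above, is that sophisticated extraction can be spread across many accounts and routed through third parties, which makes per-account volume alone a weak signal.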
As more businesses, including financial institutions, develop their own artificial intelligence models and provide access to them, the risk of distillation attacks could spread beyond major technology firms.
So…
This shows that the future of artificial intelligence will not depend solely on technical breakthroughs, but also on how well companies and governments manage security, intellectual property, and international competition.
