White House advisor David Sacks says there is ‘substantial evidence’ of DeepSeek using OpenAI technology
OpenAI has claimed that it has evidence that Chinese competitor DeepSeek used the American company’s AI model to train its rival chatbot, according to Bloomberg News.
The release of DeepSeek’s open-source R1 model has roiled global financial markets after the Chinese company appeared to achieve results comparable to those of rivals that spent far more money and computing power.
The claims prompted investors to question the underpinnings of the US stock market boom, which has been predicated on the idea that AI “hyperscalers” will need huge amounts of computing power to train AI models. Chipmaker Nvidia recorded the biggest one-day fall in market value in stock market history on Monday, before recovering some of its losses on Tuesday.
Global share prices steadied on Wednesday. Japan’s Topix index rose by 0.7%, while Australia’s ASX rose by 2.9%. The FTSE 100 was roughly flat at the opening bell.
AI companies and investors have been scrambling to understand the implications of DeepSeek’s rapid rise. OpenAI and its major backer, Microsoft, have been investigating whether DeepSeek obtained data in an unauthorised manner, after observing some individuals exporting large amounts of data from OpenAI’s products, Bloomberg reported.
The Financial Times reported that OpenAI, led by Sam Altman, said it had seen some evidence of “distillation”, which it suspects DeepSeek of carrying out. That would violate OpenAI’s terms of service.
OpenAI has itself faced heavy criticism for its own approach to others’ intellectual property. It is facing early hearings in a case led by the New York Times, in which media companies claim OpenAI used their data without permission.
Nevertheless, the claims could open up a new front in the technological struggles between the US and China.
Venture capitalist David Sacks was appointed by US president Donald Trump as AI and cryptocurrency “tsar”. He said on Tuesday night that there was evidence of “distillation”, a process in which one AI model repeatedly queries another and trains itself on the responses.
Sacks told Fox News:
There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI models, and I don’t think OpenAI is very happy about this.
I think one of the things you’re going to see over the next few months is our leading AI companies taking steps to try to prevent distillation.
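In this context, distillation usually means harvesting large numbers of prompt-and-response pairs from a stronger “teacher” model and fine-tuning a smaller “student” model to imitate them. The sketch below shows only the general shape of such a pipeline; the query_teacher function, prompts and file names are hypothetical placeholders, not a description of DeepSeek’s or OpenAI’s actual systems.

```python
import json

# Hypothetical stand-in for a call to a hosted "teacher" model's chat API.
# In a real pipeline this would be a network request to a commercial endpoint;
# here it returns a canned string so the sketch runs offline.
def query_teacher(prompt: str) -> str:
    return f"Teacher answer to: {prompt}"

# Illustrative prompts the "student" developer wants covered.
prompts = [
    "Explain photosynthesis in one sentence.",
    "Write a haiku about the ocean.",
    "What is the capital of France?",
]

# Step 1: harvest teacher responses at scale.
distillation_pairs = [{"prompt": p, "response": query_teacher(p)} for p in prompts]

# Step 2: save the prompt/response pairs as a supervised fine-tuning dataset.
with open("distillation_dataset.jsonl", "w") as f:
    for pair in distillation_pairs:
        f.write(json.dumps(pair) + "\n")

# Step 3 (not shown): fine-tune a smaller "student" model on this dataset,
# so it learns to imitate the teacher's outputs without access to the
# teacher's weights or original training data.
print(f"Collected {len(distillation_pairs)} teacher responses for fine-tuning.")
```

Because the harvesting step looks like ordinary, if unusually heavy, use of a chatbot or API, providers typically try to catch it by monitoring for bulk automated extraction of outputs, which is the kind of activity Microsoft and OpenAI are reported to have flagged.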
Source: The Guardian