AI and Automation
Wednesday, February 5, 2025
To people who see the performance of DeepSeek and think: 'China is surpassing the U.S. in AI.' You are reading this wrong. The correct reading is: 'Open source models are surpassing proprietary ones'
Yann LeCun
Chief AI Scientist, Meta
If you’ve been caught up in the wave of DeepSeek mania, there’s one aspect of this powerful new AI model you may not have had a chance to reflect on: its open-source nature.
Straight to the source: "Proprietary" AI models, where the IP is a closely guarded secret (think OpenAI/ChatGPT), have long dominated the landscape. But DeepSeek's success has highlighted the potential for achieving high performance and efficiency when development is thrown open to a wider community.
Open all hours: The open-source approach has fueled DeepSeek's rapid rise by making cutting-edge AI widely accessible, fostering global collaboration, reducing costs, enhancing transparency, and intensifying competition with proprietary models.
A different perspective: Meta's AI model, Llama, is another example of open-source development, so it’s not surprising the company’s Chief AI Scientist, Yann LeCun, is enthusiastic about another open-source AI winning critical acclaim. "To people who see the performance of DeepSeek and think: 'China is surpassing the U.S. in AI.' You are reading this wrong. The correct reading is: 'Open source models are surpassing proprietary ones'," LeCun said in a post on Threads.
The downside: While open-source AI development offers numerous advantages, its free-flowing nature also comes with potential drawbacks, including security risks and quality-control issues. Research by Cisco found critical security flaws in DeepSeek's R1 model.
"DeepSeek R1 was purportedly trained with a fraction of the budgets that other frontier model providers spend on developing their models. However, it comes at a different cost: safety and security," Cisco researchers Paul Kassianik and Amin Karbasi wrote.