In just a week, DeepSeek’s latest reasoning model erased a trillion dollars in market value, sparked new security concerns, and upended conventional wisdom about AI development, forcing policymakers and tech leaders to confront the implications of an affordable, high-performance model built by a geopolitical competitor.
Chinese startup DeepSeek unveiled DeepSeek-R1, an advanced LLM that matches or exceeds OpenAI’s top-tier o1 model on benchmarks spanning mathematics, coding, and logical reasoning. DeepSeek claims to have developed R1 in under nine months for roughly $5 million—a stark contrast to the hundreds of millions typically invested by its competitors. Its open-source nature could reduce per-query costs by up to 90 percent compared to proprietary APIs. DeepSeek’s remarkable success has raised concerns in the US national security community that America’s top AI products may struggle to compete with cheaper Chinese alternatives.

The Cost Debate: Reality vs. Exaggeration
DeepSeek’s claim of a $5 million development cost has been met with skepticism. SemiAnalysis estimates that DeepSeek’s total hardware spending exceeds $500 million. Anthropic CEO Dario Amodei has also questioned the claim, arguing that R1 is comparable to US models that are 7–10 months older. Given that AI costs are falling by approximately 4× per year, models like R1 may not be outliers but rather part of a predictable trend in the field.
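A rough back-of-the-envelope calculation makes the trend argument concrete. The short sketch below is illustrative only; it simply takes the 4×-per-year cost-decline figure at face value and asks how much cheaper a comparable model trained 7–10 months later should be:

```python
# Back-of-the-envelope check: if AI training costs fall roughly 4x per year,
# how much cheaper should a comparable model trained 7-10 months later be?

ANNUAL_COST_DECLINE = 4.0  # the ~4x/year figure cited above

def expected_cost_reduction(months_newer: float) -> float:
    """Multiplicative training-cost reduction implied by a 4x/year decline."""
    return ANNUAL_COST_DECLINE ** (months_newer / 12.0)

for months in (7, 10):
    print(f"{months} months newer: ~{expected_cost_reduction(months):.1f}x cheaper")

# Prints roughly:
# 7 months newer: ~2.2x cheaper
# 10 months newer: ~3.2x cheaper
```

On that trend alone, a 7–10-month lag implies a two- to threefold cost advantage before any algorithmic or engineering gains are counted.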


The Jevons Paradox Enters the Chat
US companies will build on DeepSeek’s approach to enhance their own models. In fact, just one day after R1’s launch, Google updated its Gemini 2.0 Flash model, outperforming R1 and reclaiming the top spot on the Chatbot Arena leaderboard. An accompanying paper from Google DeepMind revealed that the model employed the same reinforcement learning techniques that contributed to R1’s success.
This rapid cycle of improved capabilities at lower cost fuels the Jevons paradox, the economic principle that greater efficiency often leads to higher, rather than lower, resource consumption. As AI costs continue to plummet and capabilities improve, AI adoption will accelerate across industries.
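To make that dynamic concrete, here is a toy sketch of the Jevons paradox applied to AI inference. The demand elasticity and query counts are purely hypothetical assumptions, not figures from any source; the point is only that when demand is elastic enough, a 90 percent drop in per-query cost increases total spending:

```python
# Toy illustration of the Jevons paradox for AI inference.
# Assumption (hypothetical): demand follows a constant-elasticity curve,
# so query volume scales as (price ratio) ** -elasticity.

def total_spend(price: float, base_price: float, base_queries: float,
                elasticity: float) -> float:
    """Total spending = price * demand under constant-elasticity demand."""
    queries = base_queries * (price / base_price) ** -elasticity
    return price * queries

BASE_PRICE = 1.0          # normalized per-query cost before the efficiency gain
BASE_QUERIES = 1_000_000  # illustrative baseline query volume
ELASTICITY = 1.5          # assumed: usage grows faster than price falls

before = total_spend(BASE_PRICE, BASE_PRICE, BASE_QUERIES, ELASTICITY)
after = total_spend(BASE_PRICE * 0.1, BASE_PRICE, BASE_QUERIES, ELASTICITY)

print(f"Spend before: {before:,.0f}  |  after a 90% price cut: {after:,.0f}")
# With elasticity > 1, queries become ~32x more numerous and total spend
# rises roughly 3.2x despite each query costing 90% less.
```

Whether real-world demand is that elastic is an empirical question, but this is the mechanism behind the paradox: cheaper inference tends to mean more inference, not less compute.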
The Importance and Limits of US Export Controls
DeepSeek’s emergence has reinvigorated debate over US export controls on advanced chips. Critics argue that the model highlights a failure of these controls. However, experts like Meta’s Chief AI Scientist Yann LeCun note that a significant portion of AI infrastructure investment now focuses on inference (handling billions of AI requests) rather than just on training. Miles Brundage adds that stringent, modernized GPU export controls remain crucial, as models like o1 and R1 demand substantial on-demand computing power.
Yet, DeepSeek also exposes a drawback of these controls: they have incentivized China to optimize efficiency on older chips. As Klon Kitchen warns, “Beijing isn’t waiting for the US to loosen restrictions; it’s aggressively pursuing ways to extract every ounce of performance from the hardware it has.”
Security Risks: The TikTok of AI?
Security researchers have already identified multiple vulnerabilities in DeepSeek’s models, including easily exploitable jailbreaking methods and the exposure of sensitive API keys through publicly accessible databases. Researchers have also found that DeepSeek’s censorship operates at both the application and training levels.
If policymakers were alarmed enough by security concerns to spend billions on the “rip and replace” of Huawei telecommunications equipment and to move against TikTok, then an AI model deeply embedded in business infrastructure warrants even greater scrutiny. DeepSeek processing sensitive enterprise data and routing it through China-controlled servers poses numerous security risks. These concerns have already led the US Navy, Congress, and Texas to ban DeepSeek, with more entities likely to follow.
Open-Source AI: A National Security Imperative
While US policymakers debate the risks of open-weight AI models, China aggressively advances its own. Most nations will only access advanced AI through open models. The US must ensure that democratized AI aligns with democratic values, not authoritarian ones.
Mark Zuckerberg emphasized this last July: “Open-source AI is the world’s best shot at harnessing this technology for broad economic opportunity and security.” US-led initiatives like Meta’s Llama 3, Google’s Gemma family, IBM’s Granite 3.1, and NVIDIA’s NVLM are not just lowering adoption barriers and accelerating innovation—they’re crucial for embedding liberal democratic values into AI systems around the world.
America Must Run Faster
Bruce Mehlman rightly urges policymakers to accelerate innovation in R&D, AI infrastructure, and energy, but without top talent, we cannot win. Weeks ago, I proposed a “talent dominance” strategy, including high-skilled immigration reform—but that alone won’t suffice. Last week’s NAEP results were alarming: just 39 percent of fourth-graders and 28 percent of eighth-graders are proficient in math. If we want to put America first, we must put our students first.


DeepSeek’s rise is a stark reminder that AI leadership is a race, and America must sprint, not stroll. The alternative? Falling behind in the most consequential technological revolution of our time.