My fellow pro-growth/progress/abundance Up Wingers,
At minimum, the Trump White House’s new AI Action Plan deserves credit for being honest about what it is: not a blueprint for technocratic governance or navigating a world of superintelligence, but rather a stab at geostrategic competitiveness policy designed to ensure American dominance in what could be the defining technology of our time. This is no ambitious regulatory manifesto.
The president, earlier today:
From this day forward, it’ll be a policy of the United States to do whatever it takes to lead the world in artificial intelligence. America is the country that started the AI race, and as president of the United States, I’m here today to declare that America is going to win it.
Along those lines, the 28-page document rejects the notion that AI can be effectively managed through comprehensive regulatory frameworks. And I think that’s correct. You don’t regulate an emerging techno-industrial revolution with compliance checklists nicked from the social-media wars. Better to continually push forward the technological frontier with infrastructure and talent, dealing with real-world harms as they emerge. Proactionary rather than precautionary.
As such, the plan’s three main pillars — accelerate innovation, build infrastructure, and lead via international diplomacy — will no doubt irk those who fear the potential downside of AI, both economic and existential, or wish for a policy that mainly treats AI as a risky new gadget being marketed to American consumers. While Brussels drafts massive rulebooks and US states pursue their own regulatory hobbyhorses, the White House is thinking in terms of national competitive advantage.
Again, focus on those three verbs: “accelerate,” “build,” and “lead.” Here’s a flavor of how the plan supports those action words:
▶ The plan ramps up federal R&D for foundational AI models, expands access to secure testbeds for real-world deployment, and supports startup innovation through various small business programs. It backs open science and open-source models while promoting geographically distributed access to AI resources through “regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools.”
▶ On infrastructure, the plan promotes faster permitting for data centers, chip fabs, and energy projects by expanding NEPA exclusions and streamlining Clean Air and Clean Water Act rules. It calls for a national push to stabilize and modernize the power grid, prioritizing next-generation energy sources like nuclear fission, fusion, and enhanced geothermal. Strategic investments target not only data centers and compute access — including high-security systems for military use — but also the physical buildout of AI-related energy and telecom infrastructure.
▶ The international pillar reflects geopolitical realism. Instead of getting caught up in global debates about AI ethics, it focuses on building strong partnerships with allies, sharing American-made AI technology, and encouraging other countries to follow U.S. standards. The obvious goal is to push back against Chinese influence in international institutions and make sure America’s rules and values shape the future of AI. It’s a modern version of Cold War-era tech diplomacy — only this time, with models and silicon.
Critics will note what’s missing: robust consumer protections and algorithmic bias safeguards. But these concerns might be better addressed through existing law than AI-specific regulations that could hobble American companies. At least that seems to be the way the White House is seeing things.
More notably, as mentioned above, the action plan is conspicuously silent on the prospect of artificial general intelligence or superintelligence. Its focus on national competitiveness and near-term innovation — both good things! — may suffice for today’s models, but I worry it leaves the country vulnerable to the risks of machines that could one day outthink their creators and take actions contrary to their wishes.
What are the chances of that? Likely not zero, right? We should talk about it! While the plan nods at model evaluation and national security, it does not substantively engage with the core governance challenges of AGI or superintelligence.
What might that look like? The recent “Superintelligence Strategy” paper warns that advanced AI systems could upend global stability, triggering AI arms races, strategic sabotage, or catastrophic failure. Its authors call for novel deterrence doctrines, alongside tighter compute controls, data center oversight, and rigorous alignment testing. RAND researchers strike a similar chord, urging the development of monitoring regimes for frontier models before they perhaps one day spiral out of control.
What would be the Trump version of such plans? A truly strategic policy would look not only to the next quarter, but to the edge of the map. I hope some folks in the White House are thinking about such issues, but this report, for all its strengths, leaves me wondering.