
America’s AI Action Plan: What to Watch

AEIdeas

July 28, 2025

The Trump administration’s AI Action Plan outlines bold steps to accelerate innovation and boost US leadership in AI. My recent post highlights several of the plan’s much-needed proposals to cut red tape, streamline permitting, and spur private-sector growth.

Yet for all its ambition, the plan overlooks several high-stakes gaps—areas where evolving risks may require more proactive federal involvement.

Interpretability and Model Behavior: One of the most unsettling realities about today’s advanced AI systems is that we don’t fully understand how they work—not even those who design them. Take this opening paragraph from an Anthropic blog post:

We mostly treat AI models as a black box: something goes in and a response comes out, and it’s not clear why the model gave that particular response instead of another. This makes it hard to trust that these models are safe: if we don’t know how they work, how do we know they won’t give harmful, biased, untruthful, or otherwise dangerous responses? How can we trust that they’ll be safe and reliable?

Unlike traditional software, which is explicitly programmed by humans, large language models (LLMs) learn by identifying patterns in massive datasets. This creates systems that are powerful but opaque, with internal reasoning that’s difficult to interpret or predict. That opacity isn’t just a technical quirk—it’s a real risk. If we don’t understand how these models generate their outputs, we can’t anticipate when they might act unpredictably, deceive users, or cause harm in high-stakes environments. Some of this is theoretical, but a growing body of research—not from AI doomers, but from researchers supporting responsible progress—has documented troubling behavior from LLMs, including deception and scheming.

Researchers from top AI labs including Google, OpenAI, and Anthropic released a paper warning that we may be losing the ability to understand advanced AI models. Models trained for results, rather than transparent reasoning, are slipping into dense, unintelligible shortcuts that humans can’t easily interpret. As our visibility into model internals fades, so does our ability to evaluate them or intervene effectively. 

Though the plan acknowledges this, its recommendations are modest given the potential risks. If we’re going to accelerate deployment, we must also accelerate our understanding. The plan’s nod to a partnership between the Defense Advanced Research Projects Agency, the Center for AI Standards and Innovation, and the National Science Foundation is a start, but it lacks the urgency and scale this issue demands. Governing opaque, increasingly autonomous systems requires a dedicated national effort that prioritizes interpretability as a core pillar of AI safety and security.

State-Federal Tensions: One provision suggests that federal agencies may restrict AI funding to states with “burdensome” regulations. While framed as anti-red-tape, it echoes Race to the Top-style conditional funding and could become a flashpoint depending on how “burdensome” is defined. Conservatives have long pushed back against federal efforts that tie funding in a coercive way to additional policy compliance, from Common Core education standards to Obamacare’s Medicaid expansion. In each case, the concern wasn’t simply about the policies, but about Washington using financial leverage to override state autonomy. This AI provision, if not carefully scoped, could trigger similar backlash. 

Copyright: Despite its growing legal and commercial implications, the plan doesn’t mention copyright or content provenance—an odd omission given the litigation reshaping the AI landscape and the training of next-generation frontier models. Some of the most pressing issues include whether the use of copyrighted materials to train AI models qualifies as fair use, whether AI-generated content can infringe on existing works, and who owns the output of generative AI systems. Courts are wrestling with questions that may determine the future of AI development, such as whether scraped content from artists, authors, and news organizations can be used without permission. The executive branch could take proactive steps, from issuing guidance on fair use boundaries for training data to setting standards for content provenance and attribution to launching a public comment process to develop a balanced licensing framework.

Implementation Challenges: While the plan’s ambition is evident, its real-world impact hinges on execution, and here, the details are thin. Roughly one-third of the recommended actions lack a designated lead agency. No implementation timelines are offered, nor is it clear whether any new funding or resources will be made available. A pressing concern is whether agencies have enough technically skilled staff to execute the plan’s complex and consequential provisions.

The plan is a meaningful step toward accelerating American leadership in AI. But leadership won’t be measured just by how quickly we move; it will be measured by whether we move wisely, building systems the public can trust and institutions prepared to govern them. With sustained focus and deliberation, we have an opportunity not just to shape AI’s trajectory, but to do so in a way that earns public confidence and reflects our highest values.