With the second Trump administration settling into its second year, 2026 promises to bring continued evolution in technology policy. Our scholars are examining the developments likely to shape the year ahead across antitrust, AI infrastructure, broadband deployment, and emerging challenges in digital platforms.
The Trump administration will continue reorienting antitrust enforcement away from its use as an ideological tool to bring companies under greater government control and back toward its roots of protecting consumers. This will face resistance within the Republican Party, as some members hold grudges against Big Tech while others fail to distinguish between businesses that are large because they offer great products and those that are illegally monopolistic. Both perspectives are flawed, but Trump’s instincts as a businessman and his desire to see the country lead clearly in AI ought to help keep politically motivated tech regulation at bay.
The Trump administration will continue promoting policies that facilitate AI growth, including easing restrictions on nuclear power for data centers. However, some states will push back. Under the banner of protecting residential electricity consumers, certain states will impose barriers on data center access to electricity. While such policies may gain voter traction in 2026, they ultimately harm the very consumers they purport to protect. State discrimination against specific electricity consumers conflicts with the foundational principles of utility regulation. Our public utility framework was established to obligate companies to serve all customers in their territory under nondiscriminatory terms, with rates reasonable and sufficient to sustain necessary investment. States that pursue discriminatory policies will undermine their own regulatory legitimacy.
The Trump administration’s effort to streamline the Broadband Equity, Access, and Deployment (BEAD) program is poised to bear fruit. After the removal of the extra-statutory requirements that the Biden administration imposed, 37 state plans have received approval and are breaking ground in 2026, with some states on track to achieve universal broadband coverage before year’s end. Yet the National Telecommunications and Information Administration’s success may spark a secondary controversy: how to allocate the billions of dollars saved by cutting Biden-era regulatory requirements.
At the same time, Trump’s team is adding its own extra-statutory requirements, such as conditioning BEAD funding on states not regulating AI. While market-based AI policy is the right approach, using broadband subsidies as leverage makes federal oversight arbitrary and will result in less broadband deployment and more wasted taxpayer dollars.
The year 2026 will likely bring significant momentum toward universal service reform. The congressional Universal Service Fund Working Group has collected public comments and appears poised to advance proposals aimed at providing long-needed stability to the program. The timing is urgent: The Universal Service Fund contribution factor for the first quarter of 2026 has reached 37.6 percent, underscoring the current funding mechanism’s unsustainability. While many advocates will push to expand the contribution base to include broadband providers and large technology companies, a more durable solution would involve fundamental reform—funding the program through appropriations and restoring meaningful congressional oversight. Ideally, contribution reform would be paired with a comprehensive program review to ensure that this 20th-century framework is updated to meet the demands of the modern communications landscape.
The consolidated social media addiction cases in Southern and Northern California will finally garner significant mainstream news media attention as summary judgment motions and bellwether trials occur. However, this attention will be overshadowed by lawsuits blaming the companies behind various chatbots for causing harm to—and in some instances, the deaths of—users. Cases such as Garcia v. Character Technologies (involving the Character.AI chatbot) raise important questions about causation of harm, protection of minors, and chatbot users’ First Amendment rights to receive speech. These cases will prompt states to adopt more laws regulating chatbots, such as California Senate Bill 243, building on a wave of legislation in 2025. Chatbot companies are already responding with their own initiatives to safeguard minors.
State laws restricting minors’ access to social media platforms through age-verification and parental-consent requirements won’t go away in 2026. NetChoice and the Computer & Communications Industry Association will continue to fight them on First Amendment grounds and battle for the right of parents—not government—to determine the platforms and speech suitable for their children. States will try to use the US Supreme Court’s 2025 opinion in Free Speech Coalition v. Paxton to argue that their laws should be subject to only intermediate (not strict) scrutiny despite that case’s extremely narrow holding.
One prediction that is—sadly—guaranteed to come true in 2026? An attorney somewhere in the US will cite erroneous output from a generative artificial intelligence tool, and we’ll once again need to remind lawyers of this danger.
In 2026, the Tech Policy team will continue publishing informed analysis on these and other pressing policy developments.