
Advancing AI Literacy in the US Workplace

By Bronwyn Howell

February 20, 2026

If there is one truism of the so-called information revolution, it is that the rollout and uptake of each new technological iteration in day-to-day business practice occur faster than the last. It took over half a century for computers to be widely used, three decades for personal computers to arrive on every desk, 20 years for internet applications to become ubiquitous, and 10 years for smartphone apps to dominate. But in just over three years since the public release of ChatGPT, businesses and employees are being urged to incorporate generative pretrained transformers (GPTs)—AI tools—into their everyday work practice.

To be sure, their incorporation is inevitable given the tremendous productivity benefits on offer. (McKinsey estimates a 0.1–0.6 percentage point addition to annual labor productivity growth through 2040.) There is little doubt that GPTs have revolutionized computer coding and improved the throughput and quality of many knowledge workers’ outputs—for example, those of agents in contact centers. Medical practitioners’ efficiency is increasing with the use of ambient listening tools that take notes during consultations, and care quality is improving with customized summaries and treatment plans subsequently provided to patients.

However, as with all new technologies, caution is necessary. The same tools can reduce efficiency and quality if used inappropriately, meaning without sufficient human interpretation or outside the bounds of their capabilities. For example, a University of Sydney academic found that a Deloitte report to an Australian government department contained phantom citations and fabricated footnotes due to unchecked GPT use in its preparation. The episode cost Deloitte up to AUD 300,000 in forgone fees and considerable public embarrassment. Likewise, a ChatGPT-prepared brief found to contain entirely fictitious case law, including invented citations and opinions, wasted the court’s and opposing counsel’s time and earned the responsible lawyer sanctions.

Importantly, just as with all other information technology advances, benefits come not from substituting the technology for existing tasks but from enhancing already skilled practitioners’ ability to carry them out. GPTs can generate computer code quickly, but coders still have to understand the tasks the code will serve in order to craft appropriate prompts and then select the best code from multiple generated examples. Similarly, AI tools may capture consultation records, but medical practitioners must still edit them and check the generated treatment plans to ensure they are appropriate for the specific case.

AI application risks come, therefore, not from the GPTs themselves but from inappropriate use. Such risks are best addressed by education, not regulation. Workers need new skills in AI literacy to make the most of GPT-based applications and avoid their pitfalls.

Propitiously, last week the US Department of Labor released its Artificial Intelligence Literacy Framework. The framework defines AI literacy as “a foundational set of competencies that enable individuals to use and evaluate AI technologies responsibly, with a primary focus on generative AI, which is increasingly central to the modern workplace.” Its aim is “to support a wide range of users working to strengthen AI literacy across the American workforce.”

The framework is voluntary. It defines five foundational content areas: how AI works, common workplace applications, effective prompting, evaluating AI outputs, and responsible and secure use. Seven delivery principles stress adaptability across sectors, experiential learning, and pairing AI use with “human skills” like judgment, creativity, communication, and problem solving. The framework is targeted at states, workforce boards, community colleges, apprenticeships, and employers and is intended as national guidance for workforce and training programs.

The US framework differs from other national and international AI literacy strategies in its focus on workers rather than on the whole population. This is pragmatic given the extensive use (and misuse) of AI already occurring in the US workplace. If the promised productivity gains are to be realized as soon as possible, in line with wider economic policy objectives, then workers already at the coalface need the relevant skills now. There simply isn’t time to fashion a comprehensive AI-in-education strategy focused first on teachers and students, national curricula, and teacher-training reforms, as promulgated by UNESCO.

The US framework also differs in its tight focus on workplace contexts rather than on broader civic, societal, and ethical competencies, such as understanding AI’s social impact and acting toward a “sustainable and fair society.” It likewise eschews the European Union’s tight coupling of AI literacy to AI governance (e.g., AI Act implementation, digital rights, and algorithmic transparency) and to the civic, democratic, and rights-based dimensions of AI literacy.

The Department of Labor framework is unashamedly instrumental: It aims to help workers use AI safely to stay productive rather than to equip them as critical agents in shaping AI governance or contesting its deployment. It supports timely delivery of benefits in the real economy, as it should for a technology already in widespread use.