
New FDA Policies Could Limit the Full Value of AI in Medicine

JAMA Health Forum

February 10, 2025

The application of artificial intelligence (AI) in medicine is expanding at an astonishing pace, mirroring the rapid advances in AI technology itself. Some experts in the field predict that within the next several years, developers may realize artificial general intelligence (AGI)—a revolutionary form of AI capable of understanding, learning, and applying knowledge across varied tasks with human-like proficiency. Unlike today’s narrow AI systems, which excel at discrete tasks such as image recognition or language translation, AGI could tackle any intellectual challenge a human can, demonstrating deep comprehension across diverse disciplines. This technology could transform medical practice, empowering machines to reliably synthesize large amounts of clinical data on a patient’s condition and interpret complex medical problems.1

One challenge will be how the US Food and Drug Administration (FDA) views these tools—its recent changes to policies related to the regulation of AI have added new uncertainties. Artificial intelligence tools with advanced analytical capabilities used in clinical practice, especially tools that synthesize complex clinical information from distinct sources, may automatically be classified as medical devices, regardless of their intended use.

This may be particularly true when AI is integrated into electronic medical record (EMR) software, allowing it to generate insights that might otherwise go unnoticed by clinicians. Such classification, however, could be at odds with the original intent of laws designed to regulate digital health tools based on their clinical use rather than solely on their analytical sophistication or the sources of clinical data they rely on.

The current regulatory posture of the FDA for classifying tools with these advanced analytical capabilities as medical devices could impede or even block their integration into EMR systems. This policy could encumber one of the highest-impact applications of these tools—the ability to embed them alongside a patient’s health record, where they can synthesize multiple complex data streams and generate novel clinical insights.

In the 21st Century Cures Act of 2016, Congress addressed how FDA regulation could both accelerate and impede the development of medical software. The legislation clarified the definition of clinical decision support software (CDSS) and excluded certain software functions from the definition of a “device,” thereby exempting them from FDA oversight. The law established criteria that CDSS must meet to qualify for this safe harbor: it exempted tools that offer nonmandatory, informational recommendations to clinicians without overriding their clinical judgment.

During my tenure as FDA commissioner from 2017 to 2019, the agency issued a set of policies guiding the use of CDSS in the context of these tools’ increasingly sophisticated capabilities. The guidance stipulated that CDSS tools that explain their recommendations, or that provide information to clinicians without making explicit treatment recommendations, would fall outside FDA regulation. Although we could not have foreseen the rapid advances in AI and AGI—or the breakthroughs in natural language processing exemplified by ChatGPT (OpenAI)—these policies nonetheless addressed the evolving technology, acknowledging the potential of AI to improve clinical decision-making tools.2

In September 2022, the FDA issued new guidance,3 tightening the criteria for CDSS exemptions from premarket review. The updated guidance addressed concerns such as “automation bias” and “time criticality”—circumstances in which CDSS is used for time-sensitive decisions, or in which the FDA believed clinicians may not have the opportunity or impetus to apply their own judgment to the software’s recommendations. Under the new guidance, these circumstances alone could classify such tools as devices. Moreover, the FDA would consider software that integrates data from multiple sources (such as imaging, consultation reports, and clinical laboratory results) to be a medical device because of its ability to synthesize diverse data and formulate insights when the chain of reasoning behind the tool’s output remains murky, leaving the clinician uncertain about exactly how the final judgment was reached.4 Based on these considerations, it is possible that any AI functionality integrated into an EMR could fall outside the initial exemption and render the new tool, and indeed the entire EMR, a medical device subject to premarket review.

In the 21st Century Cures Act, Congress established clear criteria for when digital health tools would be classified and regulated by the FDA as medical devices. However, by defining the specific capabilities that would subject these software platforms to premarket review, the policy inadvertently imposed a ceiling on their functionality. Many EMR developers intentionally limited their software’s features to avoid incorporating analytical tools that would trigger costly and uncertain regulation. These compromises were manageable when the potential applications of these digital health tools were limited (such as analysis of laboratory data or medical imaging). In many cases, developers created these analytical functions as stand-alone medical devices, which could then be independently regulated by the FDA, separately purchased by health care systems or clinicians, and integrated into EMR systems as discrete modules. EMR developers cooperated with the third parties that built these stand-alone devices.

However, a high-value application of AI and AGI in medicine hinges on their seamless integration into EMRs, where they can access and synthesize diverse data. It may be difficult to develop these CDSS tools separately from the EMR and purchase them as distinct modules without limiting their inherent utility. If these tools are classified as medical devices merely because they draw on multiple data sources, or because their analytical capabilities are so comprehensive and intelligent that clinicians are likely to accept their analyses in full, then nearly any AI tool embedded in an EMR could fall under regulation. The risk is that EMR developers will attempt to sidestep regulatory uncertainty by omitting these features from their software, denying clinicians access to AI tools that could transform the productivity and safety of medical care.

One recent study5 investigated whether using ChatGPT-4 could enhance clinical reasoning compared with traditional resources. Although the chatbot exhibited superior clinical reasoning compared with the participating physicians, integrating the tool into physicians’ daily practice did not significantly improve their clinical decisions. The tool’s limited effect was owed in part to its being offered as a stand-alone portal that physicians had to access outside the EMR. The study suggests that integrating the tool into a physician’s workflow, perhaps through the EMR—and enabling it to extract information and present analysis efficiently—is key to its effectiveness.

A solution lies in returning to the intent of the 21st Century Cures Act and the policies advanced from 2017 to 2019: to regulate CDSS based on how the data analysis is presented to clinicians rather than on how clinicians would use the information to inform their judgment. If these AI tools are designed to augment the information available to clinicians and do not render autonomous diagnoses or treatment decisions, they should not be subject to premarket review. The FDA could allow EMR providers to bring these tools to market as long as they meet FDA criteria for how they are designed and validated. Then, by drawing on real-world evidence of these systems in action in the postmarket setting, the agency could verify that they genuinely enhance the quality of medical decision-making. Artificial intelligence has an inherent ability to synthesize complex information streams and deliver enhanced analyses or recommendations that might otherwise evade notice. That aptitude alone should not classify these tools as medical devices.