FDA’s New Elsa AI Tool Launches Under a Cloud of Doubt

This week, the U.S. Food and Drug Administration (FDA) officially launched Elsa, a new generative AI tool designed to help FDA employees do their jobs faster and more efficiently. The goal? To expedite the delivery of safe and effective drugs, devices, and food to the public without compromising on safety.
Elsa is essentially an AI assistant built specifically for FDA workers. It's meant to handle time-consuming tasks like reading clinical trials, summarizing adverse event reports, comparing product labels, and writing code to organize nonclinical data. Instead of scientists spending days manually reviewing documents, Elsa can assist in minutes.
How does this actually help you?
If you've ever waited months for a new medication to get approved—or if you're a patient relying on a new medical device—you've felt how long the review process can take. Elsa is designed to help speed up that process.
By helping FDA staff work faster, Elsa could lead to:
- Quicker approvals of life-saving drugs and devices
- Faster investigations into products that may be causing harm
- More efficient recalls or safety updates when something goes wrong
- Better use of resources, especially as the agency operates under tight staffing
What Elsa can do—and what it can't
Elsa was built inside a highly secure system (GovCloud), and the FDA says it doesn't train on any sensitive industry data, meaning it's not snooping through drug company submissions or learning from confidential medical studies. It also doesn't connect to the public internet, so while that makes it safer, it also limits its capabilities.
Some of the tasks Elsa can already do include:
- Summarizing reports of drug side effects
- Helping scientists compare drug or device labels
- Assisting with reviewing safety profiles of new treatments
- Generating code to build better internal databases
Sources within the agency say that although Elsa is promising, it’s still struggling with some basic tasks. According to two FDA staffers familiar with the rollout, Elsa sometimes gives incorrect or incomplete summaries, even for public information like approved product lists.
A related tool called CDRH-GPT, which was designed for the FDA’s Center for Devices and Radiological Health (the team that oversees pacemakers, insulin pumps, X-rays, and more), is reportedly even further behind. According to the same sources, it's still in beta testing and “doesn’t yet connect to the FDA’s internal systems and has issues when it comes to uploading documents or allowing users to submit questions.”
Critics say the FDA may be rushing the technology out too quickly. “I worry that they may be moving toward AI too quickly out of desperation, before it’s ready to perform,” said Arthur Caplan, head of the medical ethics division at NYU Langone Medical Center. “It still needs human supplementation. AI is really just not intelligent enough yet to really probe the applicant or challenge or interact.”
There are also concerns about trust and ethics. Richard Painter, a law professor at the University of Minnesota and former government ethics lawyer, warned, “We need to make sure that the people involved in these decisions do not have a financial interest in the artificial intelligence companies that would get the contracts. A conflict of interest can greatly compromise the integrity and the reputation of a federal agency.”
These growing pains have sparked concern among staff, some of whom feel the agency is adopting AI before it's fully ready. With recent layoffs across the Department of Health and Human Services, including support roles at the FDA, there's anxiety that AI may be seen as a replacement for human expertise rather than a supplement to it.
Despite these early stumbles, Elsa represents a significant shift in how the FDA operates—and by extension, how quickly the public might benefit from new therapies, food safety updates, or medical technologies.
If Elsa continues to improve and is deployed responsibly, it could help the FDA keep pace with modern health challenges, serve the public faster, and make critical safety decisions more efficiently. That’s a future worth working toward.