STAT News – Wednesday, March 11, 2026
By Casey Ross
AI agents are proliferating in health care faster than they can be counted.
On Tuesday, Epic Systems, the nation’s largest electronic health records vendor, touted the benefits of three agents it recently added to the population. “Art” is taking faster notes and drafting other pieces of medical documentation; “Penny” is helping hospitals collect bills and avoid coverage denials; and “Emmie” is answering patients’ questions and helping them schedule appointments.
Oracle has rolled out its own agent to help physicians in 30 specialties draft their notes and suggest next steps for patients. Meanwhile, Amazon, Google, and Microsoft are all adding AI personas into the mix — part of a flood of new tools launched this week in Las Vegas at HIMSS, one of the industry’s largest health software conferences.
The rush of new AI agents that autonomously perform a range of health care work highlights concerns that have been percolating since health AI picked up steam several years ago. While these products are often backed by glowing testimonials from clinicians, their rapid commercialization is also accompanied by questions about how to effectively embed them into doctors’ work routines, establish trust with patients, and prevent bad outcomes, such as biased decisions, inaccurate bills, and medical errors.
“A lot of the products being rolled out are not as validated as we would like them to be,” said Nicholson Price, a law professor at the University of Michigan who co-authored a recent study on factors that influence patients’ trust in AI. “It’s just bad for governance.”
One recent review found that AI agents are rapidly being deployed in areas such as documentation and decision support for diagnosis and treatment, but that most evaluations of their performance are based on simulated medical settings or basic question answering. The Trump administration has broadly sought to deregulate AI tools, prioritizing the speed of deploying the technology over the creation of guardrails.
In some clinical circumstances, advances in AI are rapidly speeding up the development of new products. At Aidoc, a company that makes AI tools to analyze medical images, the advent of larger, more sophisticated AI models has allowed the company to, within months, double the size of a portfolio it took nine years to build, said Jesse Ehrenfeld, the company’s chief medical officer.
“It’s unbelievable,” he said. “As we scale all of this and go quickly, I really think the conversation has to shift to how do we maintain a traceability layer, so that the AI is transparent” to its users. He said Aidoc can do that by showing the pieces of an image an AI used to make its determination. However, that type of information is not always flagged in other types of applications.
While evaluation and monitoring methods are still under development, technology companies in the race for AI dominance are pressing forward.
In talking up its new group of AI agents at HIMSS, Epic said 85% of its customers are using its AI offerings. In marketing materials, the company offered statistics touting the technology’s value. In the case of “Art,” its AI agent for clinicians, the company asserted that it’s helping produce patient discharge summaries 20% to 30% faster. Meanwhile, “Penny,” its billing assistant, has reduced medical coding-related denials by 20%.
Amazon said it is expanding access to its agentic AI health assistant to customers on Amazon.com and the Amazon app. Previously, access to the technology was restricted to patients of its One Medical clinics. The company’s health assistant can offer personalized advice by reviewing a patient’s conditions, medications, and health history — and it can manage prescription renewals.
Not to be outdone, Google announced at HIMSS that its AI agents are being adopted by a wide array of health care companies, including CVS Health, Highmark Health, Humana, Quest Diagnostics, and Waystar. Each organization is using Google’s technology differently, but the common thread is to help with administrative work and data wrangling.
Medical technology experts who are watching the marketing blitz unfold said an important question in the rollout of these tools is whether patients are being consulted in their development and testing.
Leo Celi, a research scientist at the Massachusetts Institute of Technology and physician at Beth Israel Deaconess Medical Center in Boston, said efforts to include patients are often performative. “We make them guests of honor. We make them sit there and shut up,” Celi said. “‘Hey, I’m doing the talking here.’ … That’s not going to work.”
Celi said he’s organizing an event in Toronto in which patients with anxiety and depression will interact with AI agents and design the best ways to evaluate them, rather than using industry-created tools to benchmark performance. Another series of events involves incorporating feedback on the usefulness of the technology from clinicians and researchers at historically Black colleges and universities.
Celi said he’s not an opponent of AI, or against the idea of using it to automate administrative work. But he said the only way to realize its potential is to involve a wider group of people in its creation, including those who have been historically marginalized.
“Their life experiences are more important than the skillset of the Silicon Valley engineers,” Celi said. “We pooh-pooh that as scientists, ‘Oh, that’s not going to help us.’ And that’s exactly why we’re not solving any problems.”
It remains to be seen whether the latest crop of AI agents will help patients get more timely and effective care, or reduce documentation burdens for doctors. Price, the University of Michigan professor, said it’s especially difficult to tell because of the lack of information provided publicly about how the tools were developed or what kinds of tests were performed prior to their launch.
In his recent study, Price and his colleagues found that patients, in considering factors that would increase their trust in AI, said the level of the AI’s performance — that is, whether it could perform at a generalist or specialist level — was the most important factor. Also influential were factors such as whether a product had been reviewed by the Food and Drug Administration, tested in an independent laboratory, or was being overseen by a doctor or another human reviewer.
So far, he said, that information is not making its way to patients, even though disseminating it might actually make the technology more attractive to purchasers.
“It’s a selling point to say, ‘Our system is certified. It is approved. We have representative data and we have demonstrated across a bunch of different circumstances that this is how well it performs,’” Price said. “That’s not only good for you as a health system, but also something your patients will care about.”
