DAILY NEWS CLIP: February 17, 2026

In a financial pinch, major health insurers are turning to AI for help

STAT News – Tuesday, February 17, 2026
By Casey Ross

Facing shrinking profit margins and higher medical costs, the nation’s largest health insurers are accelerating adoption of artificial intelligence throughout their sprawling operations, promising a wave of automation designed to cut expenses and boost productivity.

References to AI were a common part of the script during insurers’ calls with Wall Street analysts in the early weeks of 2026.

Executives at UnitedHealth Group pledged to lean heavily on the technology to cut $1 billion in costs this year, with CEO Stephen Hemsley declaring that “we are clearly embarking on a new age of technology” in health care.

Coming off a down financial year, Centene’s chief executive, Sarah London, said the managed care company is systematically building AI into cost-control programs and efforts to root out fraud and unnecessary care. “As we create the roadmap to harvest Centene’s full potential earnings power, there is no question that data, technology and artificial intelligence will be a critical lever and accelerant to this work,” London said.

Elevance, Cigna, Humana, and CVS Health, which owns Aetna, all flagged investments in AI as part of their strategies. Elevance is already using the technology for thousands of tasks, ranging from answering benefits questions to wrangling documents to reviewing requests for care.

The rapid uptake of the technology promises to speed up decision-making and streamline health care’s notoriously costly and slow-moving bureaucracy. But its involvement in pivotal medical decisions also poses thorny questions about how to monitor and oversee AI models whose conclusions will bear directly on the health and financial well-being of consumers — as well as the bottom lines of the nation’s largest health care companies.

With the Trump administration proactively dismantling safety guardrails around AI — and rapidly adopting the technology into federal agencies and the Medicare program — state authorities are under pressure to sort through an intensifying debate over how to protect consumers from the technology’s flaws. States such as Colorado and California have led the way with new laws, but insurance trade groups have warned the new regulations will only stifle innovations that promise to improve consumers’ experiences and cut costs.

“We are at a real crossroads when it comes to the regulation of AI across the health care ecosystem,” said Carmel Shachar, a Harvard Law School professor who researches the use of AI in health care. She pointed to an asymmetry in oversight between payers and providers, with the FDA and other entities reviewing AI devices used by doctors and hospitals, while no centralized authority imposes similar scrutiny of health insurers.

The resulting opacity is compounded, she said, by the secretive posture of the large insurance companies, which often guard details of their AI use as proprietary. “Rightly so, insurance is run as a business,” Shachar said. “So it’s not like they’re fully transparent about AI.”

Meanwhile, adoption has surged in the last two years amid technological advances. In a survey across 16 states, the National Association of Insurance Commissioners found that 84% of health insurers are using AI. The most commonly cited use case was utilization management, or the systematic levers insurers pull to control access to medical services.

Even with that level of uptake, some signals suggest that insurers, which normally lead technology adoption within health care, are falling behind providers who are swiftly rolling out generative AI models for clinical documentation and billing. Insurers have speculated that widespread use of those products is inflating medical coding and billing, driving up medical expenses.

“The fact that they are seeing the impact of AI on their business is creating a heightened urgency to apply it themselves,” said Jessica Lamb, a partner at consulting firm McKinsey & Co. who advises insurers and other companies on technology adoption.

In the area of medical costs, McKinsey has projected that insurers could save up to $970 million for every $10 billion in revenue. For the largest companies, that could translate to tens of billions of dollars a year. UnitedHealthcare, for instance, had revenues of $345 billion in 2025 while serving about 50 million consumers.
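As a back-of-the-envelope check on that projection (assuming, as the article implies but does not state, that the McKinsey figure scales linearly with revenue):

```python
# Rough check of the McKinsey savings projection applied to UnitedHealthcare.
# Assumption (ours, not the article's): savings scale linearly with revenue.

SAVINGS_PER_UNIT = 970e6   # up to $970 million saved...
REVENUE_UNIT = 10e9        # ...per $10 billion in revenue

savings_rate = SAVINGS_PER_UNIT / REVENUE_UNIT   # implied 9.7% of revenue

uhc_revenue_2025 = 345e9   # UnitedHealthcare's 2025 revenue, per the article
potential_savings = uhc_revenue_2025 * savings_rate

print(f"Implied savings rate: {savings_rate:.1%}")                      # 9.7%
print(f"Potential annual savings: ${potential_savings / 1e9:.1f}B")     # ~$33.5B
```

At the implied 9.7% rate, the result is roughly $33 billion a year, consistent with the article's "tens of billions" figure.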

While many insurers have rolled out chatbots to help consumers analyze plan options and get faster answers to common benefits questions, executives said they are building the technology into internal processes designed to root out fraud, process claims, and decide when patients should get coverage for certain drugs and treatments.

Executives at UnitedHealth Group said they expect to invest $1.5 billion in AI in 2026 and at least as much in 2027. The company is already using AI across its enterprise and recently announced a product it calls Optum Real, an AI designed to help adjudicate claims at the point of care, so providers and patients have more accurate information about costs and coverage when making treatment decisions.

Patrick Conway, chief executive of Optum Health, the company’s health care delivery business, said the product has the potential to “transform health care transactions” by eliminating the costly back and forth that often occurs after care is delivered, resulting in clawbacks for providers and unexpected bills for patients.

“It reduces areas of long-standing processing friction,” Conway said during a recent earnings call, “while enhancing Optum Insight’s growth and margin potential” by creating a simpler, faster experience for customers.

But experts said turning that marketing pitch into a reality will require that insurers and providers operate under a common set of rules with shared access to data used for decision-making.

That’s especially true in the case of tools applied to claims adjudication and prior authorization, where insurers and doctors often butt heads over delays and denials in care. Recent data released by officials in Massachusetts revealed that insurers denied more than 20 percent of claims filed in 2024.

Amy Killelea, a researcher at Georgetown University’s Center for Health Insurance Reform, said prior authorization is already a source of enormous angst, because the underlying criteria used to make decisions are often open to interpretation. If you add AI to that already imperfect decision-making process, she said, the potential for error and disagreement might actually increase. “Then all you’re doing is making bad decisions faster,” Killelea said. “I’m not sure you can call that efficiency.”

The trust problem

Late last year, Bartho Caponi, a physician at UW Health, a Wisconsin health system, noticed a sudden uptick in denials from insurers for patients with symptom-based complaints. People with problems such as severe abdominal pain or generalized weakness were getting rejected for hospital stays.

As the medical director for utilization management, Caponi is used to seeing, and contesting, denials. But this was unusual, he said, for the volume, consistency, and speed of the rejections. “I suspected AI,” he said, noting patients were being denied within hours of arrival, before medical records were even fully updated.

Caponi said he never had an insurer cop to using AI in relation to the denials. But his suspicion alone points to a broader breakdown in trust around the use of the technology. Because AI’s use is seldom disclosed outside of scripted events and press releases, doctors and insurers do not know when the other side is using the technology to gain a financial advantage.

In the fall of 2025, executives with several large insurers blamed rising medical costs and constrained profit margins in part on providers’ use of AI for coding, billing, and documentation tasks.

Asked about that dynamic during an earnings call, Cigna CEO David Cordani said the company was pushing back in multiple ways, including through the use of “our own AI and technology capabilities.”

This arms race is being fed by a growing sub-industry of companies selling AI products to both providers and insurers. Dozens of startups have entered the fray in recent years. The common pitch is for frictionless health care — the idea that technology can connect the parties seamlessly to produce faster and more predictable financial outcomes.

Companies such as Cohere, an AI vendor started in partnership with insurer Humana, promise faster processing times, especially in the arena of prior authorization.

“Almost 85 percent of our authorizations are auto-decisioned, so that’s a significant improvement,” said Lalithya Yerramilli, a senior vice president of payment solutions at Cohere. She said the company’s technology can ingest medical information about a patient, cross-check it with an insurance policy and other documents, and “within seconds” render a decision that would normally take insurers many hours, if not days, to deliver.

Yerramilli said the company employs a staff of more than 70 physicians to help ensure that the AI’s conclusions are clinically sound and appropriate. Those physicians also review denials for potential errors or contextual misunderstandings, to prevent bad decisions from being made. “One hundred percent of it goes for human review,” she said. “We do not do an adverse decision automatically.”
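The routing Yerramilli describes — auto-decisioning approvals while sending every prospective denial to a physician — might be sketched like this (the function and labels are illustrative; Cohere's actual system is proprietary):

```python
# Illustrative sketch of auto-approve / human-review routing for prior
# authorization. Names are hypothetical, not Cohere's actual API.

from enum import Enum

class Outcome(Enum):
    AUTO_APPROVED = "auto_approved"
    PHYSICIAN_REVIEW = "physician_review"

def route_authorization(model_recommendation: str) -> Outcome:
    """Auto-decision approvals; never auto-decision an adverse decision."""
    if model_recommendation == "approve":
        return Outcome.AUTO_APPROVED
    # Denials and uncertain cases always go to a physician.
    return Outcome.PHYSICIAN_REVIEW

print(route_authorization("approve"))  # Outcome.AUTO_APPROVED
print(route_authorization("deny"))     # Outcome.PHYSICIAN_REVIEW
```

The design choice mirrors the stated policy: the model can only accelerate approvals, while all adverse outcomes pass through human review.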

To Caponi, the pitch for faster decisions sounds plausible and potentially helpful. At UW Health, he does not deal with prior authorizations, but instead handles an adjacent dispute category in which insurers and providers often disagree on whether a patient should be covered for an inpatient hospitalization, as opposed to a less costly “observation” stay.

In that domain, Caponi said, speedier decisions supported by AI are only better if they are accurate and based on transparent rules and reasoning. “The efficiencies are great,” he said, “but the challenge is, how is [AI] making judgments? How do we know that this is all being done in good faith?”

AI as the guardian of integrity

One of the biggest areas for insurers’ AI investment is fraud detection.

Known in the insurance business as payment integrity, it is an area where AI’s ability to detect patterns in massive datasets can be uniquely beneficial to insurers.

Consulting firm Everest Group estimated the market for fraud detection at $10 billion, with AI-enabled companies promising to identify fraud before payments are made, rather than trying to mount costly recovery operations after the fact.

London, the CEO of Centene, whose biggest business line is Medicaid, promised a “more aggressive approach” to fraud in 2026, adding that the company has built AI into core business operations across its enterprise to help detect and root out improper billing.

“We currently score claims data against 75 different algorithms, designed to triangulate potential fraud,” London explained during the company’s fourth quarter earnings call. “Alerts are triggered and sent to a group of cross-functional experts for immediate review and intervention.”
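The pipeline London describes — running each claim through many independent detectors and alerting human reviewers when rules fire — might look schematically like this (every rule and threshold here is hypothetical; Centene's 75 algorithms are not public):

```python
# Illustrative sketch of rule-based claims scoring for payment integrity.
# All rules and thresholds are hypothetical, not Centene's actual algorithms.

from dataclasses import dataclass, field

@dataclass
class Claim:
    provider_id: str
    billed_hours: float
    patients_seen: int
    flags: list = field(default_factory=list)

# Each "algorithm" is a predicate over a claim; a real system would have ~75.
RULES = {
    "max_hours_every_patient": lambda c: c.billed_hours / max(c.patients_seen, 1) >= 8,
    "implausible_daily_volume": lambda c: c.billed_hours > 24,
}

def score_claim(claim: Claim) -> Claim:
    """Run a claim through every rule and record which ones fire."""
    claim.flags = [name for name, rule in RULES.items() if rule(claim)]
    return claim

def needs_review(claim: Claim, threshold: int = 1) -> bool:
    """Trigger an alert for cross-functional human review when rules fire."""
    return len(claim.flags) >= threshold

claim = score_claim(Claim("prov-001", billed_hours=40, patients_seen=5))
print(claim.flags, needs_review(claim))
```

As in the system London describes, the rules only triage: flagged claims go to human experts for review and intervention rather than being denied automatically.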

She also gave the example of a task force created by the company in mid-2025 to address higher medical costs, targeting a kind of behavioral therapy for autistic children. She said the task force analyzed data in the 29 states where it operates to identify anomalies in billing data and clinical practices.

“What we found were consistent patterns of outlier providers with volume versus outcomes-driven care patterns, where the maximum number of hours are prescribed for every patient, instead of an individualized care plan,” London said.

Meanwhile, some behavioral health providers in Florida have pushed back, arguing that Sunshine Health, a Centene subsidiary that manages Medicaid in the state, has undercut their businesses and jeopardized their ability to serve vulnerable children. The billing disputes in Florida involve broader changes in Medicaid reimbursement and are not solely tied to the use of analytics or artificial intelligence.

In a statement, a Centene spokesperson said the company complies with all state billing requirements: “When billing abnormalities occur, we work closely to educate providers on appropriate billing practices through provider town hall trainings, ongoing support outreach, and resource guides,” the statement said.

Killelea, the researcher at Georgetown University, said efforts to use technology to catch fraud earlier in the process will cause legitimate care providers, and patients, to get caught in the dragnet.

“Inevitably, when the focus moves to a hard prevention of any fraud at the front end, you’re going to harm consumers at the back end,” she said, adding that insurers should proceed cautiously and avoid erecting blanket barriers to wide swaths of care. “You have to make sure to target the actual perpetrators of fraud, and not have consumers be collateral damage.”

Insurers say AI will help them bring about that level of precision in fraud detection, prior authorization, and other areas where manual labor and siloed information systems have led to delays and errors.

At Elevance, a frequently cited statistic is that AI helped reduce prior authorization denials at one provider group by 68%, by helping to ensure the completeness and accuracy of documentation.

“We should be able to do a lot more approvals faster. That’s our intent,” said Ratnakar Lavu, the company’s chief digital information officer. “We’re at a point where I do think we have the ability to simplify health care.”
