STAT News – Wednesday, April 30, 2025
By Katie Palmer
It’s been a year since the federal government clarified that technology tools used in health care shouldn’t discriminate against patients based on protected traits such as age, sex, race, and disability. A lot has changed in that time, including a new president in the White House. But as the May 1 deadline approaches this week, health systems face significant clinical and political uncertainty over how to comply with the rule.
The lack of clarity is further delaying and potentially disrupting an already complex effort by health systems and technology vendors to avoid discrimination from artificial intelligence and other clinical tools used in making decisions about patient care.
The nondiscrimination provision of the Affordable Care Act, called Section 1557, has always prohibited discrimination on the basis of traits like race, sex, age, and disability. But last year the Biden administration issued a final rule clarifying that patient care decision support tools, including artificial intelligence algorithms, fall under that banner. As of May 1, federally funded health systems will have to show they've worked to identify tools that use protected traits — and to mitigate the risk of discrimination from their use.
But civil rights officials have been quiet about how health systems can comply. Early in the Trump administration, information about Section 1557 was removed from Health and Human Services websites, leaving health systems questioning how and whether it would be enforced by the department’s Office for Civil Rights.
“It’s really like a giant game of wait and see,” said Lou Hart, medical director of health equity at Yale New Haven Health System. “Everything could change.”
While most civil rights violations are resolved through voluntary corrective action, health systems that do not comply with the federal nondiscrimination rule could face defunding or litigation from the Department of Justice. Despite doubts about the future of its enforcement, “that remains the law and the requirements,” said Jeff Wurzburg, a health care regulatory attorney at Norton Rose Fulbright who served as an HHS attorney for five years. “And until there is further instruction from this administration, stakeholders should continue to follow those requirements.”
Even as health care systems await clarity from the federal government, clinicians in many cases still don’t know which tools they use are discriminatory, or how to fix them.
At Yale New Haven Health, a team uncovered 57 algorithms or tools that used the protected traits of race, color, or national origin. “Whether we consider it racial adjustment or racial preferencing in clinical medicine, it seems consistent with both the past administration and the current administration, in my opinion, that we should not be using race to predict clinical risk and dictate clinical care,” said Hart. So the health system took each of those 57 tools and decided whether they could eliminate it, substitute it, or mitigate its risk of discrimination.
A handful of commonly used race-based tools, like the eGFR calculator to estimate kidney function, have gone through medical society-led efforts to validate race-free versions. Those, Yale could substitute. Another small group of tools were outdated enough to strike from use entirely.
About half of the tools, though, didn’t have a straightforward fix. For those, said Hart, clinicians have to work case by case to safeguard against the risk posed by, for example, racial variables in a tool: sharing with a patient that the tool uses race, being transparent about the limitations of racial adjustment, and using that context to inform a shared decision about treatment.
For many clinical tools, the answers are far from clear. Just because a tool uses a protected trait to make its predictions doesn’t mean it will cause discrimination — indeed, it’s sometimes those very variables, especially age and sex, that make its predictions useful. And just because a tool doesn’t use those variables doesn’t mean it won’t result in discrimination when applied to patient care.
In the face of these difficult clinical questions, health systems are often at a loss for concrete guidance. Researchers have published frameworks for evaluating the inclusion of certain protected traits in clinical decision tools, but they require careful implementation from health systems. Race and other protected traits are regularly used as variables in vendor-sold AI algorithms, which aren’t always made transparent to the health systems using them.
Even when the evidence that a tool can result in discrimination is clear, as with pulse oximeters, whose well-documented skin tone bias was recognized in a recent draft guidance from the Food and Drug Administration, there are no concrete guidelines for clinicians hoping to mitigate the harm. “All of us need to spread that awareness to our colleagues,” said Nirav Bhakta, a pulmonologist at the University of California, San Francisco who has studied the role of race in pulmonary function testing.
Legal counsel for health care systems are aware of the deadline, but “nobody is really doing much,” said Diane Hoffmann, a professor of health law at the University of Maryland, Baltimore. “They know that they need to be doing something but don’t, I think, have a clear plan.”
A few points of reference are emerging. The Radiological Society of North America issued a fact sheet for radiologists, many of whom use AI algorithms to analyze and interpret medical images or prioritize patients for imaging or review. Health information technology vendors can also play a role: Some AI developers are answering the call for transparency and real-time monitoring from their health system customers. Electronic health record vendor Epic released a programmatic search for customers to review the traits used in their clinical decision support tools.
Last week, the Digital Medicine Society, in affiliation with the SCAN Foundation and the NYC Health Department’s Coalition to End Racism in Clinical Algorithms, published a toolkit to support the removal of harmful race-based algorithms from health systems. The coalition began as an effort to remove some race-based tools from nine major New York City hospitals in 2021 — a painstaking, years-long process.
“It’s really a landscape analysis of your own system that you have to do in order to understand what’s going to work,” said physician Toni Eyssallenne, deputy chief medical officer for NYC Health. A health system might need to engage its laboratories, medical department chairs, and IT departments to safely change a clinical decision support tool. “It’s hard to get a stock answer as to, you need A, you need B, you need C. Every health system is at a different place.”
In an interview last year with STAT, former OCR director Melanie Fontes Rainer spoke about the difficulty of identifying and mitigating discriminatory tools. “Just like any discrimination, it is challenging to identify it and it’s always challenging to prove it up,” she said, especially in the fast-moving world of AI. Given that complexity, the office could consider voluntary audits or compliance reviews “to start to get a better sense of … how this is interacting with patients or systems or communities.”
The turnover in presidential administration may have delayed those already slow-moving efforts. About six months ago, questions about Section 1557 compliance started rolling in from health systems to Aidoc, a company that builds AI-based triage and notification systems, said Amalia Schreier, its senior vice president of regulation and legal. Then, they quieted.
“If there hadn’t been a change of administration and the rule was clearly going into effect, I get the sense they might be more on top of it,” said Hoffmann. The Office for Civil Rights, like much of HHS, is in the midst of a major reorganization that could significantly impact its enforcement capabilities and priorities.
HHS did not respond to STAT’s request for comment about plans for enforcing the nondiscrimination rules.
Clinicians and health policy leaders who have been following the nondiscrimination rule’s application to clinical tools expect that the Trump administration may dismantle aspects of the final rule implementing Section 1557, as it did during Trump’s first term. States have already challenged the inclusion of gender identity as a protected trait in Biden’s implementation, and the current administration has sought to undo other Section 1557 rulemaking that would protect against discrimination on the basis of sexual orientation and gender identity.
“It is hard to predict how seriously health systems will take this deadline unless the HHS Office for Civil Rights or pending federal litigation set a precedent by calling for active enforcement,” said Rohan Khazanchi, an internal medicine and pediatrics resident physician in Boston and a research affiliate at Harvard University’s FXB Center for Health & Human Rights.
Regardless of how the rule is changed or enforced in coming months, health systems can choose to continue to make progress toward clinical algorithms that don’t discriminate. “We have a long way to go,” said Eyssallenne. “But we’ve started the work and we want to keep the conversation going.”