DAILY NEWS CLIP: April 4, 2025

Why AI legislation has stalled at the state level


Modern Healthcare – Friday, April 4, 2025
By Gabriel Perna 

Efforts to regulate artificial intelligence at the state level are running up against worries that new laws could dampen development and revenue opportunities for companies in the space.

Most industry stakeholders and lawmakers say comprehensive federal AI legislation will eventually be necessary, but it is unlikely this year. That was the consensus before President Donald Trump took office, and nothing in his first two months has changed that perception.

State legislatures could take up the issue themselves this year, but the first attempt fell flat. Late last month, Virginia Gov. Glenn Youngkin (R) vetoed the High-Risk Artificial Intelligence Developer and Deployer Act. The measure would have required users of AI in high-risk industries such as healthcare to implement safeguards against algorithmic discrimination, which occurs when AI displays bias against minority and at-risk populations.

The veto could signal that state-level AI legislation will stall over concerns it could hamper the growth of technology companies developing AI, said Randi Seigel, a partner at consulting firm Manatt Health, which tracks state AI legislation. In a statement accompanying his veto, Youngkin said the bill would “stifle the AI industry” within Virginia.

“This may foreshadow that governors are taking seriously the economic and innovation impacts of these broad AI laws and may be more comfortable legislating more narrow use cases where there is more alignment among stakeholders and limited burdens on developers and deployers,” said Seigel, who provides advisory services to AI companies in healthcare.

Only three states have passed comprehensive laws regulating AI’s usage in healthcare since the public release of OpenAI’s ChatGPT large language model in November 2022. That release launched the use of generative AI among end users, developers and other stakeholders in healthcare and other industries.

In September, California Gov. Gavin Newsom (D) signed several AI-related bills into law, including two focused on healthcare. Newsom signed legislation requiring healthcare providers to disclose the use of generative AI when it is used for patient communications. Another law requires insurance companies to meet certain conditions when using AI to make coverage determinations.

Colorado Gov. Jared Polis (D) signed a law in May 2024 that takes effect in February 2026, requiring developers of high-risk AI models in healthcare and other industries to notify consumers if the model will make a consequential decision concerning the consumer. Developers also must try to prevent algorithmic discrimination in their models and ensure transparency between themselves and end users, such as hospitals.

In Utah, Gov. Spencer Cox (R) signed a law in March 2024 requiring professionals, including physicians, to “prominently” disclose the use of generative AI in advance of its use. The disclosure must be provided verbally at the start of an oral exchange or conversation and through electronic messaging before a written exchange.

Arkansas, Illinois, Kentucky, New Jersey, New York, Oklahoma, Oregon, Rhode Island and Virginia have narrower laws, most of which were passed before the generative AI boom began in late 2022.

Pending bills in New York, Massachusetts and California could be next to pass, said Jared Augenstein, senior managing director at Manatt. There also are pending bills in Connecticut, Nebraska and New Mexico.

All the bills are similar to the one passed in Colorado. They require developers and users of AI in healthcare and other high-risk industries to safeguard against algorithmic discrimination.

“This concept of algorithmic discrimination is important, particularly for healthcare because it means that an algorithm accidentally favors one group over another,” said Julie Barnes, who advises tech companies on policy as founder and CEO of Maverick Health Policy. “This is a real problem for the healthcare system, if a computer code could result in some groups of patients getting better medical treatment than others.”

Texas lawmakers started with an AI bill similar to the Colorado law but Republicans revised it last month so developers and users would not be required to be as proactive in assessing AI models for discrimination.

This isn’t just a prevailing thought in Republican-led states. Gov. Ned Lamont (D) of Connecticut signaled last year that he would veto a separate, earlier AI bill because it would scare businesses away from the state. The state’s hospital association asked legislators to amend the bill to exempt healthcare. Democratic state lawmakers in Connecticut reintroduced the bill in January, but Lamont’s administration remains opposed.

Complying with state AI laws will be a challenge for health systems, insurers and digital health companies whose operations cross state lines. It will force developers and users of AI in healthcare to take the safest possible route, said Dr. Lee Schwamm, chief digital health officer at New Haven, Connecticut-based Yale New Haven health system.

“It’s going to be rough if every state has its own set of restrictions,” Schwamm said. “You’ll end up with products that meet the most stringent restrictions. It will be a race to the bottom.”

Smaller startups don’t have the resources to comply with multiple state AI laws, said René Quashie, vice president of digital health at the Consumer Technology Association.

Still, he predicted state regulation of AI was inevitable. “I don’t think the issue of state activity in this area is going away, particularly in the absence of comprehensive federal legislation on the issue,” Quashie said.

