Hartford Courant – Wednesday, December 25, 2024
By Alison Cross
Democratic leaders in the State Senate say they are resurrecting an artificial intelligence bill that stalled last session when the governor threatened a veto over fears that too much regulation would scare business away from Connecticut.
After passing the Senate along party lines, last year’s attempt to place consumer safeguards around AI died in the House without a vote.
The bill sought to hold AI developers and businesses accountable for algorithms that discriminate against protected groups, criminalize deepfake pornography and deceptive election material, require labels on AI-generated content and disclosure notices for consumers, and implement education and workforce training programs. But critics pushed back, raising concerns that the policies would stifle the ever-evolving field of AI, while others regarded the proposal as a work in progress.
Sen. James Maroney, the legislation’s architect, said the next iteration of the AI bill will borrow from its predecessor but with a “focus on light touch regulation” and more concrete definitions.
One business-friendly change: The forthcoming legislation reduces some of the liability placed on businesses that use unmodified “off-the-shelf” programs from developers who have agreed to take on legal responsibility and conduct impact assessments for their products.
Under the previous bill, businesses using “high-risk AI systems” were responsible for implementing risk management policies and programming, as well as conducting impact assessments and annual reviews to “ensure the system is not causing algorithmic discrimination.”
Only businesses with fewer than 50 employees that met other criteria were exempt from the rule. In the new draft, Maroney said, all businesses will be exempt from these responsibilities as long as they do not modify the system and the developer assumes liability.
“If you’re using a product off the shelf and not changing it, then you don’t have as many obligations,” Maroney said. “The thought was, if you’re not inserting new risk, and the developer was willing to take that (on), there’s still someone liable. There’s still someone testing the program.”
Maroney said the new bill will also include “more specificity” in its definitions, an adjustment that he hopes will assuage some skeptics.
“A lot of the concerns we’re hearing from people are issues that are not drawn into the bill, but they feel that it’s ambiguous,” Maroney said. “We’re giving more clarity to … reduce the potential for unintended consequences.”
Previous challenges
Last session, anxieties over how regulation might inadvertently hamper innovation and business loomed over the legislation.
Just days after the bill passed the Senate, the U.S. Chamber of Commerce cautioned Connecticut House Speaker Matt Ritter against advancing the legislation, warning in a letter about the impact of “increased litigation and compliance costs stemming from navigating a patchwork of state AI and tech laws” and the ways “premature action … could result in the derailment of the important and beneficial uses of this significant and evolving technology.”
In the Senate debate, the Republican caucus also expressed doubt.
“If we get it wrong in a technology field that is moving at remarkable speed … for many businesses, it’s life or death. … They may leave the state or they may do the wrong thing and lose their business,” Sen. Tony Hwang said after telling his colleagues that he was “inclined to just take a step back — to reduce the complexity.”
As the 2024 session wrapped, Gov. Ned Lamont told reporters he would oppose the AI bill, which would have made Connecticut one of the first states in the nation to adopt such regulations.
“How much do you regulate the startup industries in a place like AI, where it is just so brand new?” Lamont said. “Do you want 50 states doing their own thing or is maybe that not the right way?”
“Whatever you do, you don’t want one state to do it. You want this done on a much broader basis,” Lamont said, adding that he spoke to entrepreneurs who felt that “all other things being equal, if it’s more likely that I get sued in Connecticut than I do in Georgia, maybe I’ll start my company in Georgia.”
This year, more than 30 states “adopted resolutions or enacted legislation” related to AI, according to the National Conference of State Legislatures. The laws targeted a range of objectives, from algorithmic discrimination and government AI use, to deepfakes and computer-generated child sexual abuse material, to task forces and education grants.
Maroney, who said he has been “meeting with legislators from around the country twice a month” to collaborate and develop AI policies, said Colorado, Minnesota and California have passed legislation that mirrors proposals in Connecticut.
“We anticipate over a dozen more states will introduce similar … versions of the bill,” Maroney said.
While critics of state-level AI laws see federal regulation as the best course of action, advocates have expressed doubts about a splintered Congress’s ability to act. They argue that states cannot wait for legislation that may never materialize.
AI harms
In the absence of regulation, Maroney said policymakers are already “seeing real harms.”
Maroney highlighted one lawsuit involving iTutorGroup, an online English education service whose hiring algorithm automatically rejected female applicants older than 54 and male applicants older than 59.
iTutorGroup agreed to pay $325,000 in 2023 after the U.S. Equal Employment Opportunity Commission sued the company for employment discrimination. In all, the EEOC said iTutorGroup’s software excluded “more than 200 qualified applicants based in the United States because of their age.”
In addition to hiring, Maroney said unchecked algorithms are also producing disparate impacts in housing.
According to a case pending in U.S. District Court in Connecticut, a disabled man who had lost the ability to speak, walk and take care of himself could not move into his mother’s apartment because automated tenant-screening software allegedly rejected the man’s application after flagging a shoplifting charge that was later dropped.
The alleged denial took place in 2016, years before ChatGPT and other AI models became household names.
“Automated decision-making technology is not new, it goes back over a decade,” Maroney said. “It’s important, before we go too much further, to introduce some standards in how we’re deploying this technology.”
In a statement last week, Senate President Pro Tempore Martin Looney said “Connecticut needs to require guidelines to ensure decisions are made fairly, accurately and transparently.”
Maroney also underscored the pressing need to criminalize the spread of non-consensual AI-generated intimate images, known as deepfakes.
“We see harms happening, real harms happening disproportionately to high school girls, where people are using nudification software to generate naked images and then sharing those,” Maroney said. “We want to criminalize that behavior.”
Establishing trust
Despite growing concerns over adverse applications, Maroney emphasized AI’s enormous potential to transform our society for the better. Maroney said the 2025 proposal will also include workforce development initiatives as well as incentives for AI businesses and partners.
“AI is going to have the power to transform the way we work, live and play in so many ways, but we won’t really see that full transformation and unleash the full power until people feel safe in how it’s used,” Maroney said.
Maroney believes regulation will help foster the public’s trust in AI — something that is lacking, according to recent polls.
A 2024 Bentley University and Gallup survey found that 13% of Americans believe that AI does “more good than harm” and 77% have “not much” or no faith in “businesses to use artificial intelligence responsibly.”
In a 2023 Pew Research Center poll, just 10% of Americans said they were “more excited than concerned” about the “increased use of (AI) in daily life,” while 52% felt “more concerned than excited.”
Maroney said he is “in the 10%.”
“I’m more optimistic for the future, but I’m in the minority, and I also know that … people need to feel safe. It’s the most important thing,” Maroney said. “Until people feel safe, we’re not going to see that radical transformation and get the full upside that AI can bring to society.”