DAILY NEWS CLIP: March 11, 2025

Opinion: Make CT a leader in responsible AI innovation


Hartford Courant – Sunday, March 9, 2025
By State Sen. James Maroney

In the technology world, the mantra is often to move fast and break things. Since the launch of ChatGPT, it seems that not a day has gone by without a mention of a new AI tool being developed and used to make important decisions about our lives.

Whether screening resumes and housing applications, or determining student loan rates and eligibility, AI now reaches across all areas of our lives. This raises the question: when people's personal lives are at stake, should we be moving fast and breaking things, or should we be moving quickly and acting responsibly? Connecticut has an opportunity to join a growing number of states that have chosen to encourage innovation while still protecting residents from the unintended harms of untested AI technology.

Over the past couple of years there has been increasing concern over the rapid adoption of automated decision-making tools without clear government guidelines. While this type of technology may seem new, many of the known harms associated with algorithmic decision making go back over a decade.

One noted example is an Amazon hiring algorithm that the company started developing in 2014. By 2015, the company realized that there was gender bias in how the algorithm was rating candidates. Because the algorithm was trained to vet candidates using resumes submitted to the company over a 10-year period, and there were significantly more men than women in the tech field during that time, the algorithm effectively learned to favor male candidates.

In housing, there are numerous examples of how incorrect data has led to adverse decisions affecting people's lives. In California, a 75-year-old man named Chris Robinson applied to live in a senior living community but was denied because an algorithm determined him to be a high-risk renter. The denial, it turned out, stemmed from a littering conviction belonging to a different Chris Robinson, a 33-year-old man in Texas, a state the older Chris Robinson had never even visited.

Unfortunately, these types of adverse impacts are all too common. In 2018, a disabled Connecticut man was denied a request to live with his mother in her apartment complex due to a "disqualifying criminal record." It turned out that the disqualifying record was a retail theft charge from when he was 20 years old, a charge that had been dropped years earlier.

The harms are not limited to employment and housing. There are also examples of discrimination in lending. In a 2020 report, the Student Borrower Protection Center detailed how a financial services company's algorithm charged higher student loan rates to students who attended a Historically Black College or University (HBCU), even when all other factors were controlled for. Students who attended Hispanic-Serving Institutions were also charged higher rates.

Ultimately, the cause of these harms is the same: training data that inadvertently produces biased outcomes. In machine learning, an algorithm studies data from the world as it is and uses that data to make predictions about the future. Since we know the world isn't perfect, existing biases risk being perpetuated and amplified when they are inadvertently built into the foundational data layer of a decision-making AI tool. Further, there is always the potential to collect the wrong data in the first place.

There is a saying: "garbage in, garbage out." Bottom line: if you do not feed your algorithm appropriately crafted data sets, you will not get high-quality results.

One solution to this problem is for companies to pause and assess any potential unintended consequences before launching a new product. This simple but meaningful step helps protect people and mitigates the potential harms of untested and unproven AI while still encouraging innovation. In fact, according to Pew Research Center data from December 2023, over 80% of respondents wanted the government to regulate AI and take steps to limit these harmful effects.

For Connecticut to reap the full benefit of the AI revolution, it is critical that we shift from the "move fast and break things" mantra that is already proving harmful to the "be quick, but don't hurry" mantra popularized by UCLA basketball coach John Wooden. When we rush, we are likely to make mistakes, and with the increasingly widespread adoption of AI technologies by employers, landlords, and financial institutions, the stakes have never been higher.

As Marcus Aurelius wrote, you can commit an injustice by doing nothing. Not passing meaningful AI legislation would be an injustice. We have a responsibility to the people of Connecticut to act now and ensure that new opportunities create a future that is fair for everyone.

State Sen. James Maroney represents the 14th District, encompassing Milford, Orange, and parts of West Haven and Woodbridge.