SB 5, An Act Concerning Online Safety
TESTIMONY OF THE CONNECTICUT HOSPITAL ASSOCIATION
SUBMITTED TO THE GENERAL LAW COMMITTEE
Wednesday, March 4, 2026
The Connecticut Hospital Association (CHA) appreciates this opportunity to submit testimony concerning SB 5, An Act Concerning Online Safety. CHA supports various sections of the bill but opposes others, as written.
Connecticut hospitals make our state stronger by delivering nationally recognized, world-class care, supporting jobs and economic growth, and serving communities across Connecticut. Every day, hospitals improve access, affordability, and health equity — providing care to all patients regardless of ability to pay. At the same time, hospitals invest in their workforce and local communities, even as they navigate significant financial and federal challenges.
SB 5 contains provisions relating to online safety but also contains numerous provisions regulating artificial intelligence (AI) separate and apart from online safety. CHA appreciates that the bill recognizes that AI will need to be managed, and specifically that we are well beyond the point at which AI can be banned or avoided. AI has a ubiquitous presence in the daily lives of practically everyone and impacts all industries and government functions at some level.
We appreciate the many sections of SB 5 that reflect the recognition that AI is a tool that, when properly managed, can be used to improve systems, advance development and planning, and enhance people’s lives. Specifically, we welcome the provisions of the bill that devote resources to AI learning labs; support educational opportunities to foster students’ exposure, understanding, and expertise in AI; provide pathways to support individuals in crisis or with mental health needs that may include suicidality; protect minors from insidious uses of AI; set parameters and expectations for development and interface of AI that functions as a companion or human substitute; and provide opportunities for government leaders, businesses, and the public to learn more and work together to move forward in a world that will include AI.
Several provisions of the bill are unworkable or would unnecessarily interfere with appropriate uses of AI in the state. These problematic areas are discussed below.
Employment Related Situations (Sections 12-19, 21, 22, and 34)
We oppose Sections 12-19, 21, 22, and 34, and any other interpretation or portions of the bill that would make the use of AI in employment practices and situations virtually impossible or at least extremely high-risk for businesses.
SB 5 appears to presume that the use of AI is discriminatory and illegitimate in employment, without providing necessary explanation or parameters on how the technology could be used in a compliant manner. Forcing employers to rebut what amounts to an unwarranted presumption is not a fair way to approach the use of AI in employment settings. That approach is in stark contrast to many parts of the bill that recognize that AI is already an integral part of business and society and a technology in which we must invest resources to better understand and grow safely — not demonize.
We urge removal of Sections 12-19, 21, 22, and 34, and any other provision that would unfairly create a presumption that AI is inherently harmful or problematic when used in the context of employment. If the state regulates AI uses in the employment context, there should be a meaningful convening of stakeholders, government experts, and others to have an evidence-based discussion about how to leverage AI in business, including for business uses relating to employment decisions.
Synthetic Digital Content (Section 20)
Section 20 seeks to regulate “synthetic digital content.” We appreciate the need to ensure individuals and the public are not misled by faked or manipulated content, or by displayed information that is computer generated but made to look authentic without attribution to its artificial origins.
But the language in Section 20 is overly broad and could be used to limit legitimate healthcare uses of AI, such as the development or provision of clinical reports, radiology imaging, 3D modeling, and other similar materials used in healthcare that may be created or modified by AI.
To ensure this bill does not produce this unintended consequence, at lines 1171-1172, we suggest the following revisions:
“semantics thereof, [or] (C) is used to detect, prevent, investigate or prosecute any crime where authorized by law, or (D) is used for a legitimate medical or healthcare related purpose.”
Using Connie Data To Feed AI Models For Research About AI (Section 41)
Section 41(b)(1) of the bill seeks to have the state-wide health information exchange (Connie) provide patients’ “private health data” to researchers to use to pilot AI systems. The language reads:
“to provide private health data, after removing all personally identifying information from such health data, to researchers to pilot artificial intelligence systems”
To be clear, this says the state will reach into Connie, pull out patients’ private, protected health information, strip it down, and feed those patient data to entities (potentially state entities) to help train their AI models. That is a broad leap for the use of those protected data and potentially an unacceptable repurposing of those patient data, depending on how they are managed.
HIPAA prescribes strict methods by which protected health data may be de-identified. There is also universal understanding in the healthcare industry that, because of “big data” sources, which include commercial sources of data and social media giants, “de-identification” is meaningless unless there is also no realistic probability that the data could be re-identified and re-linked to an individual. Section 41 does not appear to set the controls necessary to implement relevant privacy mandates or to secure data against the real-world risks of re-identification.
The bill does not provide sufficient detail to determine whether this use of Connie patient data would be compliant with the myriad state and federal privacy laws. We cannot afford to make mistakes with Connie data and then expect patients to have trust in the system. Feeding Connie data into AI training models is not an obvious choice for how patient data should be used, particularly without patient consent.
We urge that Section 41 be deleted from the bill unless and until there can be a meaningful convening of healthcare data analysts, privacy experts, healthcare providers who are required to donate data to Connie, the AI research community, patients, and the government entities or contractors who would perform this work.
Thank you for your consideration of our position. For additional information, contact CHA Government Relations at (203) 294-7301.
