Connecticut Looks to Strengthen Regulations on AI Chatbots and Children’s Privacy

On February 5, 2026, Connecticut Attorney General William Tong and Senator James Maroney announced that the state’s lawmakers will soon consider new measures aimed at protecting children and teenagers from potential risks associated with artificial intelligence (“AI”) technologies. This announcement comes amid growing concerns about the increasing use of chatbots and other AI tools by young people.

The announcement coincided with the release of the Attorney General's Office's 2025 Connecticut Data Privacy Act ("CTDPA") Enforcement Report, which outlines recent enforcement actions and priorities, including those focused on minors' privacy. Notably, the Report dedicates a section to chatbots and reveals that the AG's Office is actively investigating a technology company's chatbot platform for alleged harm to minors tied to certain design features. In addition, the Report notes that Attorney General Tong recently joined a bipartisan coalition of 42 Attorneys General in urging major AI software companies to implement stronger quality controls and safeguards for chatbot products, warning that the race to innovate is putting children's health at risk. While existing laws, including the CTDPA and the Connecticut Unfair Trade Practices Act, already apply to chatbot providers, the AG's Office emphasizes that the severity of these risks demands specific legislation to better protect Connecticut residents, especially minors.

In light of these announcements and the Enforcement Report, we anticipate Connecticut lawmakers will explore additional regulation and guardrails for AI-powered chatbots, particularly those used by minors. Companies involved in developing or deploying chatbot technologies may see increased scrutiny and potential new compliance requirements as the legislative process unfolds.
