Connecticut AG Clarifies AI Compliance Obligations Under CTDPA

On February 25, 2026, the Connecticut Attorney General (“CT AG”) issued a legal memorandum regarding the application of existing Connecticut laws, such as the Connecticut Data Privacy Act (“CTDPA”), to the use of artificial intelligence (“AI”).

Addressed to state officials, agencies, and other concerned parties, the memorandum is organized into four main sections, each outlining Connecticut state laws the CT AG may rely on to regulate AI: (1) civil rights laws; (2) the CTDPA; (3) safeguards and breach notification laws; and (4) the Connecticut Unfair Trade Practices Act.

With respect to the CTDPA, the memorandum emphasizes that businesses developing or using AI systems must comply with existing CTDPA requirements, including the following:

  • clearly disclosing the use of Connecticut consumers’ personal data in AI models through their privacy notices;
  • ensuring that any use or sharing of personal data obtained from third parties was properly disclosed by the original data collector; otherwise, such use is unlawful; and
  • notifying consumers of any changes to privacy practices related to AI and providing a mechanism for consumers to withdraw consent for new uses of their data.

The memorandum also underscores that businesses have specific obligations under the CTDPA regarding the collection and use of sensitive data, such as consumer health information, biometric data, and precise geolocation data. In particular, data controllers are required to conduct data protection assessments for processing activities that present a heightened risk of harm to consumers, including any handling of sensitive data. Accordingly, AI models that process Connecticut consumers’ personal data in these high-risk contexts must undergo regular data protection assessments to ensure compliance with the CTDPA.

In addition, the memorandum highlights that businesses have heightened responsibilities under the CTDPA when collecting or processing minors’ data. Controllers offering products or services to minors must exercise reasonable care to prevent increased risk of harm to children. Notably, companies providing AI-driven products and services (such as chatbots and large language models) are specifically prohibited from employing design features intended to increase, sustain, or extend minors’ use of these offerings.

As we noted in our prior blog post, the CT AG’s recent Enforcement Report reinforces that while existing laws such as the CTDPA and the Connecticut Unfair Trade Practices Act already apply to AI technologies like chatbots, the AG’s Office believes the unique risks these technologies pose to minors warrant additional, targeted legislation to ensure stronger protections for Connecticut residents.
