Children’s Advertising Review Unit Issues Compliance Warning on AI in Child-Directed Advertising and Data Collection

The Children’s Advertising Review Unit (CARU) of BBB National Programs issued a new compliance warning addressing the use of artificial intelligence (AI) in advertisements and data collection directed at children. The warning emphasizes that CARU will “strictly enforce” its Advertising and Privacy Guidelines against advertisers, brands, influencers and manufacturers that use AI in marketing and data collection involving children. The warning specifically highlights CARU’s concern that AI heightens the risks posed by children’s susceptibility to marketing that blurs the line between what is real and what is not.

In its announcement, CARU highlighted several areas where brands using AI in marketing efforts should tread carefully to avoid misleading children, including:

  • “AI-generated deep fakes; simulated elements, including the simulation of realistic people, places, or things; or AI-powered voice cloning techniques within an ad.”
  • “Product depictions, including copy, sound, and visual presentations generated or enhanced using AI indicating product or performance characteristics.”
  • “Fantasy, via techniques such as animation and AI-generated imagery, that could unduly exploit a child’s imagination, create unattainable performance expectations, or exploit a child’s difficulty in distinguishing between the real and the fanciful.”
  • “The creation of character avatars and simulated influencers that directly engage with the child and can mislead children into believing they are engaging with a real person.”

CARU similarly instructed companies to exercise caution when using AI in data collection involving children, to ensure compliance with the Children’s Online Privacy Protection Act (COPPA), with the disclosure and destruction obligations related to the collection of a child’s personal information, and with verifiable parental consent requirements.

Litigation involving AI issues has risen steadily since 2018, with a record number of AI-related lawsuits filed just last year, according to the Hunton Andrews Kurth LLP Emerging Technologies Tracker. This year is shaping up to be another record breaker in terms of newly filed AI litigation, demonstrating the importance of ensuring that companies using AI comply with constantly evolving federal and state statutory and regulatory guidelines.
