FTC Provides Recommendations on Preventing and Mitigating Cyber Risks in Developing AI and Other Products

On December 13, 2024, the Federal Trade Commission’s Office of Technology and Division of Privacy and Identity Protection posted a set of recommendations addressing the security risks posed by the development of products such as AI, targeted advertising and surveillance pricing tools.

The overarching risk the FTC identifies in connection with product development is the potential for companies to create “valuable pools” of personal information that can be targeted and exploited by bad actors. In essence, assembling larger and richer datasets creates more cyber risk, particularly in the form of data breaches and digital threats like ransomware. The FTC’s recommendations focus on security practices in data management, software development and product design for humans, pointing to a number of recent enforcement actions as examples of security failures.

  • Security in data management: The FTC highlights the importance of enforcing retention schedules, limiting third-party data sharing and encrypting sensitive data. Notably, the FTC also recommends mandatory deletion of data that “was ill-gotten, collected or sold without user consent or knowledge,” or “unnecessarily retained,” including models and algorithms trained on such data.
  • Security in software development: The FTC notes the criticality of applying principles like “secure by design” to the development stage, including measures like building products using memory-safe programming languages, implementing rigorous testing (e.g., pre-release scanning and vulnerability testing), and securing external product access.
  • Security in product design for humans: The FTC stresses the ongoing risk of human error as a factor in security breaches, outlining mitigation measures including enforcing least privilege access control, mandating the use of phishing-resistant MFA, and designing products and services without dark patterns that influence users to share more of their personal data.

The FTC’s recommendations include various links to related FTC guidance and enforcement actions, and the agency reiterates its continued focus on digital security threats.

Read additional coverage on related FTC enforcement.
