UK Publishes AI Cyber Security Code of Practice and Implementation Guide
3 Minute Read

On January 31, 2025, the UK government published the Code of Practice for the Cyber Security of AI (the “Code”) and the Implementation Guide for the Code (the “Guide”).  The purpose of the Code is to provide cyber security requirements for the lifecycle of AI.  Compliance with the Code is voluntary.  The purpose of the Guide is to provide guidance to stakeholders on how to meet the cyber security requirements outlined in the Code, including by providing examples of compliance.  The Code and the Guide will also be submitted to the European Telecommunications Standards Institute (“ETSI”) where they will be used as the basis for a new global standard (TS 104 223) and accompanying implementation guide (TR 104 128).

The Code defines the stakeholders that form part of the AI supply chain, including developers (any business, across any sector, as well as individuals, responsible for creating or adapting an AI model and/or system), system operators (any business, across any sector, responsible for embedding or deploying an AI model and system within its infrastructure) and end-users (any employee within a business, and UK consumers, who use an AI model and/or system for any purpose, including to support their work and day-to-day activities).  The Code is organized into 13 principles, each containing provisions that are marked as required, recommended or possible.  While compliance with the Code is voluntary, a business that chooses to comply must adhere to the provisions marked as required.  The principles are:

  • Principle 1: Raise awareness of AI security threats and risks.
  • Principle 2: Design your AI system for security as well as functionality and performance.
  • Principle 3: Evaluate the threats and manage the risks to your AI system.
  • Principle 4: Enable human responsibility for AI systems.
  • Principle 5: Identify, track and protect your assets.
  • Principle 6: Secure your infrastructure.
  • Principle 7: Secure your supply chain.
  • Principle 8: Document your data, models and prompts.
  • Principle 9: Conduct appropriate testing and evaluation.
  • Principle 10: Communication and processes associated with End-users and Affected Entities.
  • Principle 11: Maintain regular security updates, patches and mitigations.
  • Principle 12: Monitor your system’s behavior.
  • Principle 13: Ensure proper data and model disposal.

The Guide breaks down each principle by its provisions, detailing associated risks/threats with each provision and providing example measures/controls that could be implemented to comply with each provision. 

Read the press release, the Code, and the Guide.
