UK Releases Code of Practice for Securing AI
The UK government has developed a voluntary Code of Practice aimed at addressing AI cybersecurity risks.
The Code of Practice applies to developers, system operators, and organizations that create, deploy, or manage AI systems. According to the announcement, it “equips organizations with the tools they need to thrive in the age of AI. From securing AI systems against hacking and sabotage, to ensuring they are developed and deployed in a secure way, the Code will help developers build secure, innovative AI products.”
Specifically, the Code sets out 13 cybersecurity principles spanning the software development lifecycle: secure design, secure development, secure deployment, secure maintenance, and secure end of life. The principles are:
- Raise awareness of AI security threats and risks.
- Design your AI system for security as well as functionality and performance.
- Evaluate the threats and manage the risks to your AI system.
- Enable human responsibility for AI systems.
- Identify, track and protect your assets.
- Secure your infrastructure.
- Secure your supply chain.
- Document your data, models and prompts.
- Conduct appropriate testing and evaluation.
- Establish communication and processes associated with end-users and affected entities.
- Maintain regular security updates, patches and mitigations.
- Monitor your system’s behavior.
- Ensure proper data and model disposal.
See the announcement for details.
