The UK has recently released the world’s first global guidelines for securing AI systems against cyberattacks. Developed by the UK’s National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), the guidelines aim to ensure the safe and secure development of AI technology and have already been endorsed by 17 other countries, with all G7 members among the signatories.
The guidelines provide recommendations for developers and organizations using AI to incorporate cybersecurity at every stage of the process. This “secure by design” approach emphasizes the importance of integrating security measures from the initial design phase through development, deployment, and ongoing operations.
The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. They offer specific security behaviors and best practices for each phase.
The launch event in London brought together over 100 industry, government, and international partners. Notable speakers at the event included representatives from Microsoft, the Alan Turing Institute, and cyber agencies from the US, Canada, Germany, and the UK.
Lindy Cameron, CEO of NCSC, highlighted the need for proactive security in the face of AI’s rapid development. She emphasized that security should not be an afterthought but a core requirement throughout the entire process.
These guidelines build upon the UK’s existing leadership in AI safety. Just last month, the UK hosted the first international summit on AI safety at Bletchley Park.
US Secretary of Homeland Security Alejandro Mayorkas emphasized the importance of cybersecurity in building safe, secure, and trustworthy AI systems. He stated that the jointly issued guidelines provide a common-sense path to integrating cybersecurity into all aspects of AI development and operation.
The 18 endorsing countries span Europe, the Asia-Pacific, Africa, and the Americas. The full list of international signatories includes Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, Republic of Korea, Singapore, the United Kingdom, and the United States of America.
Michelle Donelan, UK Science and Technology Secretary, sees these guidelines as cementing the UK’s role as an international standard-bearer for the safe use of AI. She believes that this global effort will unite nations and companies in promoting AI security.
The guidelines are now available on the NCSC website, along with explanatory blogs. The adoption of these guidelines by developers will be crucial in translating the vision of secure by design into real-world improvements in AI security.