The UK has published the world’s first global guidelines for securing AI systems against cyberattacks. The new guidelines aim to ensure AI technology is developed safely and securely.
The guidelines were developed by the UK’s National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA). They have already been endorsed by agencies from 17 other countries, including all G7 members.
The guidelines provide recommendations to help developers and organisations using AI incorporate cybersecurity at every stage. This “secure by design” approach advises baking security in from the initial design phase through development, deployment, and ongoing operation.
The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. For each phase, they recommend security behaviours and best practices.
The launch event in London convened over 100 industry, government, and international partners. Speakers included representatives from Microsoft, the Alan Turing Institute, and cyber agencies from the US, Canada, Germany, and the UK.
NCSC CEO Lindy Cameron stressed the need for proactive security amidst AI’s rapid pace of development. She said, “security is not a postscript to development but a core requirement throughout.”
The guidelines build on existing UK leadership in AI safety. Last month, the UK hosted the first international summit on AI safety at Bletchley Park.
US Secretary of Homeland Security Alejandro Mayorkas said: “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.
“The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a common-sense path to designing, developing, deploying, and operating AI with cybersecurity at its core.”
The 18 endorsing countries span Europe, Asia-Pacific, Africa, and the Americas. Here is the full list of international signatories:
- Australia – Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
- Canada – Canadian Centre for Cyber Security (CCCS)
- Chile – Chile’s Government CSIRT
- Czechia – Czechia’s National Cyber and Information Security Agency (NUKIB)
- Estonia – Information System Authority of Estonia (RIA) and National Cyber Security Centre of Estonia (NCSC-EE)
- France – French Cybersecurity Agency (ANSSI)
- Germany – Germany’s Federal Office for Information Security (BSI)
- Israel – Israeli National Cyber Directorate (INCD)
- Italy – Italian National Cybersecurity Agency (ACN)
- Japan – Japan’s National Center of Incident Readiness and Strategy for Cybersecurity (NISC) and Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
- New Zealand – New Zealand National Cyber Security Centre
- Nigeria – Nigeria’s National Information Technology Development Agency (NITDA)
- Norway – Norwegian National Cyber Security Centre (NCSC-NO)
- Poland – Poland’s NASK National Research Institute (NASK)
- Republic of Korea – Republic of Korea National Intelligence Service (NIS)
- Singapore – Cyber Security Agency of Singapore (CSA)
- United Kingdom – National Cyber Security Centre (NCSC)
- United States of America – Cybersecurity and Infrastructure Security Agency (CISA); National Security Agency (NSA); Federal Bureau of Investigation (FBI)
UK Science and Technology Secretary Michelle Donelan positioned the new guidelines as cementing the UK’s role as “an international standard bearer on the safe use of AI.”
“Just weeks after we brought world leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” added Donelan.
The guidelines are now published on the NCSC website alongside explanatory blogs. Developer uptake will be key to translating the secure-by-design vision into real-world improvements in AI security.
Source: AI NEWS – Ryan Daws