This week, the UK Government announced its Artificial Intelligence (AI) strategy, outlining plans to address the long-term demands of the AI ecosystem. The strategy emphasises the importance of getting the national and international governance of AI technologies right in order to encourage innovation and investment, and to protect the public and the UK's fundamental values.

The strategy aims to position the UK as a global leader in raising standards that put safety, security and trust at the heart of AI products and services. It includes measures to launch a new National AI Research and Innovation Programme and to ensure that the next generation of AI talent is recruited from a diverse pool.

Some key highlights from the strategy announcement include:

  • The Strategy notes that cyber security should be considered early in the development and deployment of AI systems to minimise security risk, with a 'secure by design' approach adopted throughout the development life cycle.
  • In collaboration with AI businesses, the Office for AI will develop a national position on governing and regulating AI to be set out in a White Paper in early 2022. The White Paper will set out the Government's position on the potential risks and harms posed by AI technologies and proposals to address them.
  • To support the development of a secure AI ecosystem, the Centre for Data Ethics and Innovation (CDEI) is publishing an AI assurance roadmap, which aims to clarify the activities needed to build a mature assurance ecosystem and to identify the roles and responsibilities of different stakeholders across those activities.
  • An AI Standards Hub will be established, on a trial basis, to coordinate UK engagement in global AI standards-setting.
  • The Government will work with The Alan Turing Institute, the UK's national institute for data science, to update guidance on AI ethics and safety in the public sector and create practical tools to make sure the technology is used ethically.

Chris Anley, chief scientist at NCC Group said: "AI is delivering huge technological benefits, not least in the field of cyber security. But AI systems present significant new security, privacy and societal risks. It's therefore crucial to build cyber security into the development life cycle, as hostile actors will inevitably use and exploit these new technologies to carry out new attacks.

"The Government's commitment to ensuring AI is rolled out safely and securely in today's AI Strategy is a significant step toward a more resilient AI ecosystem. In particular, we look forward to the forthcoming White Paper on the governance and regulation of AI and AI Assurance Roadmap, which we hope will outline the standards and regulatory interventions needed to ensure the responsible, secure and resilient use of AI across all sectors."

Ollie Whitehouse, chief technical officer at NCC Group, added: "Good data management practices and stringent security measures, including regular security testing and assurance activities, must be established across the AI ecosystem to ensure datasets are well maintained, vulnerabilities are addressed, and the latest threat landscape is understood and acted upon.

"There also needs to be a real focus on a skills programme that not only develops a world-leading technical workforce, but also upskills potential users and the public so that they can make better informed decisions about their use of AI-based technologies."
