Cyber Security

Security for the AI age

Why every organisation working with AI tech will need AI Security and Governance skills, across all job roles.

Artificial Intelligence (AI) technology, while transformative, presents new risks. Every organisation using or building AI tech will need AI Security and Governance skills.

By underestimating the risk, and the need for security & governance, you leave your organisation vulnerable.

From HR to legal counsel, technical architects, product and marketing teams, data scientists and software engineering, most roles will intersect with AI at various touchpoints.

By using the QA AI Security & Governance pathway as a guide, you will be able to apply appropriate AI security and governance skills, to help improve your AI system’s:

  • privacy
  • safety
  • efficacy
  • resilience
  • fairness
  • reliability.

AI security skills

QA’s Cyber Security Practice Director, Richard Beck, highlights that “AI Security skills are needed urgently, as an extension of existing cyber and information security practices.”

New UK government-commissioned research identified dozens of AI security vulnerabilities, mostly credited to adversarial machine learning systems used in AI security attacks.

Richard’s advice for businesses is this: “Consider the shift in your 'attack surface' that results from building or consuming AI tech. This will change how you profile security risk, and the likelihood of an AI security exposure.”

“I believe every organisation building or consuming AI tech will need AI security and AI governance skilled professionals”

Richard Beck

Start by categorising and understanding your AI use cases, adapting threat modelling processes, and visualising risks and proposed guardrails, using tools like LangChain. Richard emphasises that “additional defences, controls and mitigations applied to your AI use cases are needed now” – so don’t delay. 

Remember that enhancing your cyber posture with AI can be a double-edged sword. Microsoft warned that AI algorithms used to spot malware may be vulnerable to data pollution attacks, in which malicious software is injected into the training data, leading the AI to incorrectly classify it as safe.

AI is transforming the way security teams work

AI isn't just dangerous. It's also transforming how security teams tackle cyber threats, accelerating their processes. By analysing vast amounts of data and spotting complex patterns, AI automates the early stages of incident investigation. This provides a thorough initial understanding, speeding up response times.

New UK legislation (PSTI Act) has been introduced, with teeth for ‘smart’ product manufactures now required to ensure security is designed into their products. Currently, AI security follows a voluntary code of practice.

Why not extend the Secure by Design / Default principles to embed Safety by Design for a responsible AI Software Bill of Materials (AI-SBOM) or Machine Learning Bill of Materials (ML-BOM)?

This involves transparency in the AI code that powers the model, and in the data used to train it, enabling better risk identification and proactive vulnerability management.

If a weakness exists only because of AI, it's usually classified as an AI vulnerability. However, if a software issue in the AI lifecycle exists without AI, do we classify this as a software issue?

The OWASP foundation’s machine learning top-ten project, is designed to raise awareness of AI software security issues like this. Such vulnerabilities need a holistic approach from design, development, deployment, and maintenance throughout the AI lifecycle.

The UK’s AI Safety Institute (AISI) researchers as reported by the Guardian, recently tested five ”unnamed large language models (LLMs) and found that they could easily bypass the models' safeguards:

“All tested LLMs remain highly vulnerable to basic jailbreaks, and some will provide harmful outputs even without dedicated attempts to circumvent their safeguards.”

The systems AISI’S researchers tested were “highly vulnerable” to jailbreaks; text prompts provoking responses that a model is designed to avoid.

On the back of this worrying research, a new agreement was announced on the 21st May, 2024 at the AI Seoul Summit. 10 EU countries will collaborate on an international network to accelerate the advancement of safe, trustworthy, and responsible AI.

In the US, the Department of Homeland Security issued new guidance on using AI within critical infrastructure. It includes workforce development, to enable employees to identify any issues in workflows, and to develop secure by design practices.

A new era of technology, a new type of threat

AI is allowing adversaries to launch smarter, faster, and larger attacks.

The US NIST study identifies types of cyberattacks that manipulate the behaviour of AI systems.

A recent Darktrace survey showed that 89% of IT security professionals think AI-enhanced cyber threats could seriously affect their organisations within the next two years. Worryingly, 60% feel unprepared to defend against these attacks. This highlights the need for increased security measures.

From an offensive security perspective, red teams, or your paid ‘pen testers’ will need to seek out vulnerabilities in the machine learning architecture, using Mitre ATLAS as a reference, before the LLM is exploited. Each model will have unique vulnerabilities and environments, so vulnerability assessment efforts will differ.

Having the skills to force a hallucination, misrepresent, manipulate, poison, or generate inappropriate data, leak sensitive information, jailbreak / inject and or hide a prompt response, provides the insights to learn and develop skills to mitigate and defend against these types of attacks.

Recent research shows that models can be tweaked during training to seem fine, but once in production, they follow hidden, risky, and unsafe instructions.

According to Richard, “your organisation should build its AI security skill base now. Protecting AI is crucial to avoid confidentiality breaches, organisational harm, reputation loss, and privacy breaches. This means evaluating potential threats, identifying attacks, and defending against them.”

Ready to learn more? Start building AI security skills today.

Stay tuned for the next instalment of our two-part series on AI and Security, series. Next time with Richard, we’ll focus on the need for AI governance, and how to get it right.