Overview

This in-depth hands-on Certified AI Security Engineer course, which includes an independent APMG exam voucher, delves into the AI security landscape. With over 30 practical exercise labs and scenarios, addressing vulnerabilities like prompt injection, denial of service attacks, model theft, and more. Learn how attackers exploit these weaknesses and gain hands-on experience with proven defence strategies and security for AI. Organisations must understand how to secure AI systems in their organisation or supply chain. Discover how to securely integrate LLMs into your applications, safeguard training data, build robust AI infrastructure, and ensure effective human-AI interaction. By the end of this course, you'll be equipped to protect your organization's AI assets and maintain the integrity of your systems.

Target audience includes, security professionals, AI & ML tech specialists, risk managers, AI governance professionals, data architects, technical consultants, IT professionals, software engineers.

Read more +

Prerequisites

No prerequisites, aside general understanding of AI principles.

Read more +

Delegates will learn how to

This Certified AI Security Engineer course will cover the following topics:

  • Introduction to AI Security
  • Types of AI Systems and Their Vulnerabilities
  • Understanding and Countering AI-specific Attacks
  • Ethical and Reliable AI
  • Prompt Injection
  • Model Jailbreaks and Extraction Techniques
  • Visual Prompt Injection
  • Denial of Service Attacks
  • Secure LLM Integration
  • Training Data Manipulation
  • Human-AI Interaction
  • Secure AI Infrastructure

Learning Outcomes

  • Gain a comprehensive understanding of AI technologies and the unique security risks they pose
  • Learn to identify and mitigate common AI vulnerabilities
  • Gain practical skills in securely integrating LLMs into applications
  • Understand the principles of responsible, reliable, and explainable AI
  • Familiarize themselves with security best practices for AI systems
  • Stay updated with the evolving threat landscape in AI security
  • Engage in hands-on exercises that simulate real-world scenarios
Read more +

Outline

Day 1

Introduction to AI security

  • What is AI Security?

    • Defining AI
    • Defining Security
    • AI Security scope
    • Beyond this course
  • Different types of AI systems
    • Neural networks
    • Models
    • Integrated AI systems
  • From Prompts to Hacks
    • Use-cases of AI systems
    • Attacking Predictive AI systems
    • Attacking Generative AI systems
    • Interacting with AI systems
  • What does 'Secure AI' mean?
    • Responsible AI
    • Reliable, trustworthy AI
    • Explainable AI
    • A word on alignment
    • To censor or not to censor
  • Lab Exercise: Using an uncensored model
    • Using an uncensored model

Using AI for malicious intents

  • Deepfake scam earns $25M
    • You would never believe, until you do
    • Behind deep fake technology
  • Voice cloning for the masses
    • Imagine yourself in their shoes
    • Technological dissipation
  • Social engineering on steroids
  • Levelling the playing field
  • Profitability from the masses
  • Shaking the fundamentals of reality
  • Donald Trump arrested
  • Pentagon explosion shakes the US stock market
  • How humans amplify a burning Eiffel tower
  • Image watermarking by OpenAI
  • Lab Exercise: Image watermarking
    • Real or fake?

The AI Security landscape

  • Attack surface of an AI system
    • Components of an AI system
    • AI systems and model lifecycle
    • Supply-chain is more important than ever
    • Models accessed via APIs
    • APIs access by models
    • Non-AI attacks are here to stay
  • OWASP Top 10 and AI
    • About OWASP and it's Top 10 lists
    • OWASP ML Top 10
    • OWASP LLM Top 10
    • Beyond OWASP Top 10
  • Threat modeling an LLM integrated application
    • A quick recap on threat modeling
    • A sample AI-integrated application
    • Sample findings
    • Mitigations
  • Lab Exercise: Threat modeling an LLM integrated application
    • Meet TicketAI, a ticketing system
    • TicketAI's data flow diagram
    • Find potential threats

Prompt Injection

  • Attacks on AI systems - Prompt injection
    • Prompt injection
    • Impact
    • Examples
    • Indirect prompt injection
    • From prompt injection to phishing
  • Advanced techniques - SudoLang: pseudocode for LLMs
    • Introducing SudoLang
    • SudoLang examples
    • Behind the tech
    • A SudoLang program
    • Integrating an LLM
    • Integrating an LLM with SudoLang
  • Lab Exercise: Translate a prompt to SudoLang
    • A long prompt
    • A different solution
  • Lab Exercise: Prompt injection - Get the password for levels 1 and 2
    • Get the password!
    • Classic injection defense
    • Levels 1-2
    • Solutions for levels 1-2

Day 2

Prompt Injection

  • Attacks on AI systems - Model jailbreaks
    • What's a model jailbreak?
    • How jailbreaks work?
  • Jailbreaking ChatGPT
    • The most famous ChatGPT jailbreak
    • The 6.0 DAN prompt
    • AutoDAN
  • Lab Exercise: Jailbreaking - Get the password for levels 3, 4, and 5
    • Get the password!
    • Levels 3-5
    • Use DAN against levels 3-5
  • Tree of Attacks with Pruning (TAP)
    • Tree of Attacks explained
  • Attacks on AI systems - Prompt extraction
    • Prompt extraction
  • Lab Exercise: Prompt Extraction - Get the password for levels 6 and 7
    • Get the password!
    • Level 6
    • Level 7
    • Extract the boundaries of levels 6 and 7
  • Defending AI systems - Prompt injection defenses
    • Intermediate techniques
    • Advanced techniques
    • More Security APIs
    • ReBuff example
    • Llama Guard
    • Lakera
  • Attempts against a similar exercise
    • Gandalf from Lakera
    • Types of Gandalf exploits
  • Lab Exercise: The Real Challenge - Get the password for levels 8 and 9
    • Get the password!
    • Level 8
    • Level 9
  • Other injection methods
    • Attack categories
    • Reverse Psychology
  • Lab Exercise: Reverse Psychology
    • Write an exploit with the ChatbotUI
  • Other protection methods
    • Protection categories
    • A different categorization
    • Bergeron method
  • Sensitive Information Disclosure
    • Relevance
    • Best practices

Visual Prompt Injection

  • Attack types
    • New Tech, New Threats
    • Trivial examples
    • Adversarial attacks
  • Tricking self-driving cars
    • How to fool a Tesla
    • This is just the beginning
  • Lab Exercise: Image recognition with OpenAI
    • Invisible message
    • Instruction on image
  • Lab Exercise: Adversarial attack
    • Untargeted attack with Fast Gradient Signed Method (FGSM)
    • Targeted attack
  • Protection methods
    • Protection methods

Denial of Service

  • Chatbot examples
    • Attack scenarios
    • Denial of Service
    • DoS attacks on LLMs
    • Risks and Consequences of DoS Attacks on LLMs
  • Prompt routing challenges
    • Attacks
    • Protections
  • Lab Exercise: Denial of Service
    • Halting Model Responses

Model theft

  • Know your enemy
    • Risks
  • Attack types
    • Training or fine-tuning a new model
    • Dataset exploration
  • Lab Exercise: Query-based model stealing
    • OpenAI API parameters
    • How to steal a model
  • Protection against model theft
    • Simple protections
    • Advanced protections

Day 3

LLM integration

  • The LLM trust boundary
    • An LLM is a system just like any other
    • It's not like any other system
    • Classical problems in novel integrations
    • Treating LLM output as user input
    • Typical exchange formats
    • Applying common best practices
  • Lab Exercise: SQL Injection via an LLM
  • Lab Exercise: Generating XSS payloads
  • LLMs interaction with other systems
    • Typical integration patterns
    • Function calling dangers
    • The rise of custom GPTs
    • Identity and authorization across applications
  • Lab Exercise: Making a call with invalid parameters
  • Lab Exercise: Privilege escalation via prompt injection
  • Principles of security and secure coding
  • Racking up privileges
    • The case for a very capable model
    • Exploiting excessive privileges
    • Separation of privileges
    • A model can't be cut in half
    • Designing your model privileges
  • A customer support bot going wild
  • Lab Exercise: Breaking out of a sandbox
  • Best practices in practice
    • Input validation
    • Output encoding
    • Use frameworks

Training data manipulation

  • What you train on matters
    • What data are models trained on?
    • Model assurances
    • Model and dataset cards
  • Lab Exercise: Verifying model cards
  • A malicious model
  • A malicious dataset
    • Datasets and their reliability
    • Attacker goals and intents
    • Effort versus payoff
    • Techniques to poison datasets
  • Lab Exercise: Let's construct a malicious dataset
  • Verifying datasets
    • Getting clear on objectives
    • A glance at the dataset card
    • Analysing a dataset
  • Lab Exercise: Analysing a dataset
  • A secure supply chain
    • Proving model integrity is hard
    • Cryptographic solutions are emerging
    • Hardware-assisted attestation

Human-AI interaction

  • Relying too much on LLM output
    • What could go wrong?
    • Countering hallucinations
    • Verifying the verifiable
    • Referencing what's possible
    • The use of sandboxes
    • Building safe APIs
    • Clear communication is key
  • Lab Exercise: Verifying model output

Secure AI infrastructure

  • Requirements of a secure AI infrastructure
    • Monitoring and observability
    • Traceability
    • Confidentiality
    • Integrity
    • Availability
    • Privacy
  • Privacy and the Samsung data leak
  • LangSmith
  • Lab Exercise: Experimenting with LangSmith
  • BlindLlama

EXAM

The independent APMG Certified AI Security Engineer exam is taken post class, using an exam voucher code via the APMG proctor platform.

If you experience any issues, please contact the APMG technical help desk on 01494 4520450.

Duration: 60 Minutes

Questions: 60, multiple choice (4 multiple choice answers only 1 of which is correct)

Pass Mark: 50%

Read more +

scademy ai courses

In partnership with our Secure Coding partner Scademy.

Click here to view all our Scademy courses.

Why choose QA

Dates & Locations

Cyber Security learning paths

Want to boost your career in cyber security? Click on the roles below to see QA's learning pathways, specially designed to give you the skills to succeed.

= Required
= Certification
AI Security
Application Security
Cloud Security
Cyber Blue Team
DFIR Digital Forensics & Incident Response
Industrial Controls & OT Security
Information Security Management
NIST Pathway
Offensive Security
Privacy Professional
Reverse Engineer
Secure Coding
Security Auditor
Security Architect
Security Risk
Security Tech Generalist
Vulnerability Assessment & Penetration Testing
Need to know

Frequently asked questions

How can I create an account on myQA.com?

There are a number of ways to create an account. If you are a self-funder, simply select the "Create account" option on the login page.

If you have been booked onto a course by your company, you will receive a confirmation email. From this email, select "Sign into myQA" and you will be taken to the "Create account" page. Complete all of the details and select "Create account".

If you have the booking number you can also go here and select the "I have a booking number" option. Enter the booking reference and your surname. If the details match, you will be taken to the "Create account" page from where you can enter your details and confirm your account.

Find more answers to frequently asked questions in our FAQs: Bookings & Cancellations page.

How do QA’s virtual classroom courses work?

Our virtual classroom courses allow you to access award-winning classroom training, without leaving your home or office. Our learning professionals are specially trained on how to interact with remote attendees and our remote labs ensure all participants can take part in hands-on exercises wherever they are.

We use the WebEx video conferencing platform by Cisco. Before you book, check that you meet the WebEx system requirements and run a test meeting (more details in the link below) to ensure the software is compatible with your firewall settings. If it doesn’t work, try adjusting your settings or contact your IT department about permitting the website.

How do QA’s online courses work?

QA online courses, also commonly known as distance learning courses or elearning courses, take the form of interactive software designed for individual learning, but you will also have access to full support from our subject-matter experts for the duration of your course. When you book a QA online learning course you will receive immediate access to it through our e-learning platform and you can start to learn straight away, from any compatible device. Access to the online learning platform is valid for one year from the booking date.

All courses are built around case studies and presented in an engaging format, which includes storytelling elements, video, audio and humour. Every case study is supported by sample documents and a collection of Knowledge Nuggets that provide more in-depth detail on the wider processes.

When will I receive my joining instructions?

Joining instructions for QA courses are sent two weeks prior to the course start date, or immediately if the booking is confirmed within this timeframe. For course bookings made via QA but delivered by a third-party supplier, joining instructions are sent to attendees prior to the training course, but timescales vary depending on each supplier’s terms. Read more FAQs.

When will I receive my certificate?

Certificates of Achievement are issued at the end the course, either as a hard copy or via email. Read more here.

Let's talk

By submitting this form, you agree to QA processing your data in accordance with our Privacy Policy and Terms & Conditions. You can unsubscribe at any time by clicking the link in our emails or contacting us directly.