Secure your digital future with Syndis

Our team of seasoned professionals is dedicated to understanding your unique challenges and providing tailored solutions that drive success.

halftone-1776266740717
Parallax Image

Topics

Module 1: The AI attack surface

  • Understanding the architecture of LLMs and AI agents: We will explore how Large Language Models process tokens and how "Agents" interact with external tools and APIs.
  • Identifying trust boundaries in AI-integrated applications: Adding AI expands an organization's attack surface. Developers must identify new trust boundaries, such as the prompt interfaces and data streams, just as they would scrutinize exposed endpoints in traditional applications.
  • Similarities and differences in vulnerabilities: There are strong similarities between traditional vulnerabilities and AI security concerns, as both often stem from the core failure to "never trust user input". However, the primary difference is the blurred lines between functionality and data in the case of AI. Unlike traditional injection flaws (like SQLi) that have a clear syntactic boundary between code and data, AI models process instructions and user data together as natural language, meaning there is no strict syntactic separation.



Module 2: Basics of prompt injection & jailbreaking

  • Mechanisms of direct prompt injection: We will look at how attackers craft malicious inputs that trick the model into ignoring its developer-provided instructions, which is the AI equivalent of failing to sanitize user input.
  • Basic techniques for bypassing safety filters (Jailbreaking): Just as relying solely on traditional input filtering is insufficient, we will cover the foundational methods attackers use to sneak malicious prompts past AI safety guardrails.

Participants will walk away with:

  • A solid grasp of the AI Attack Surface: Learn how Large Language Models process tokens and how active "Agents" interact with external tools and APIs.
  • The ability to identify new trust boundaries: Discover how to rigorously evaluate prompt interfaces and ingested data streams just as you would scrutinize exposed endpoints in traditional applications.

Key benefits for your organization include:

  • Proactive AI Infrastructure Protection: By teaching developers how attackers craft malicious inputs to bypass developer instructions—the AI equivalent of failing to sanitize user input—your company can proactively defend its applications against direct prompt injection.
  • Resilient Security Guardrails: Relying solely on traditional input filtering is insufficient for AI systems. Your team will learn the foundational "jailbreaking" methods attackers use to sneak malicious prompts past safety filters, enabling your company to build more robust defenses.
  • Securing Backend Integrations: Because AI agents connect to external tools and APIs, they introduce massive new risks to your infrastructure. This course equips your developers to identify and lock down these critical new trust boundaries before they can be exploited.

Ready to secure your AI-integrated applications?

Ensure your developers can proactively defend against direct prompt injection and secure the massive new attack surface introduced by AI agents connecting to external tools and APIs. Contact Syndis today to schedule the Basics of AI Pentesting for Developers course.

You can also reach us here:

☎️     +354 415 1337
✉️     syndis@syndis.com

Let’s Talk

Frequently Asked Questions

What is the "Basics of AI Pentesting for Developers" course about?

This is an intensive course designed by Syndis experts to bridge the gap between traditional software security and the new frontier of offensive AI. It focuses on the significantly expanded attack surface that comes from adding Artificial Intelligence to your applications.


What is the duration of the training?

The course lasts 4 hours. This includes a 45-minute lecture and 3 hours and 15 minutes of hands-on exercises using a dedicated platform developed by Syndis.


What core concepts are taught in the course?

The training is broken into two modules:

  • The AI Attack Surface: Understanding LLM architecture, how active "Agents" interact with external tools and APIs, identifying new trust boundaries (like prompt interfaces and data streams), and comparing traditional vulnerabilities to AI security concerns.
  • Basics of Prompt Injection & Jailbreaking: Mechanisms of direct prompt injection (the AI equivalent of failing to sanitize user input) and foundational techniques attackers use to bypass AI safety filters (jailbreaking)

What will participants be able to do after completing the course?

Participants will walk away with a solid grasp of the AI Attack Surface, the ability to identify new trust boundaries in prompt interfaces and ingested data streams, and an insight into the "blurred lines" of AI security where instructions and user data are processed together as natural language.

What are the key organizational benefits of sending our developers to this training?

  • Proactive AI Infrastructure Protection against direct prompt injection by teaching developers how attackers craft malicious inputs to bypass developer instructions.
  • Resilient Security Guardrails by having your team learn foundational jailbreaking methods used to sneak malicious prompts past safety filters.
  • Securing Backend Integrations by equipping developers to identify and lock down the massive new risks introduced because AI agents connect to external tools and APIs.