HOSTED BY
LED AI SECURITY & PROMPTING WORKSHOPS AT
OVERVIEW
Our AI Systems Are Vulnerable... Learn how to Secure Them!
About the Course
About Your Instructor
Expert Guest Instructors
Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public! Pliny also jailbroke an AI agent to autonomously sign into Gmail, code ransomware, compress it into a zip file, write a phishing email, attach the payload, and successfully deliver it to a target
Jason Haddix: Bug bounty hunter with over 20-years of experience in cybersecurity as the CISO of Ubisoft, Head of Trust/Security/Operations at Bugcrowd, Director of Penetration Testing at HP, and Lead Penetration Tester at Redspin.
Richard Lundeen: Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit. He leads an interdisciplinary team of red teamers, ML researchers, and developers focused on securing AI systems.
Johann Rehberger: Led the creation of a Red Team in Microsoft Azure as a Principal Security Engineering Manager and built Uber's Red Team. Johann discovered attack vectors like ASCII Smuggling and AI-powered C2 (Command and Control) attacks. He has also found Bug Bounties in OpenAI's ChatGPT, Microsoft Copilot, GitHub Copilot Chat, Anthropic Claude, and Google Bard/Gemini. Johann will be sharing unreleased research that he hasn't yet published on his blog, embracethered.com.
Joseph Thacker: Principal AI Engineer at AppOmni, leading AI research on agentic functionality and retrieval systems. A security researcher specializing in application security and AI, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd. He hacked into Google Bard at their LLM Bug Bounty event and took 1st place in the competition.
Valen Tagliabue: An AI researcher, data analyst, and prompt engineer specializing in NLP and cognitive science. His expertise includes LLM evaluation, safety, and alignment, with a strong focus on human-AI collaboration. He was part of the winning team in HackAPrompt2023, an AI safety competition backed by industry leaders like Hugging Face, Scale AI, and OpenAI.
David Williams-King: As a co-founder of Elpha Secure and a research scientist at MILA, David Williams-King seamlessly bridges the gap between AI Security and pure cybersecurity. He has researched under the Turing Prize winning Joshua Bengio on his Safe AI For Humanity team.
Akshat Parikh: Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan's Bug Bounty Hall of Fame and Top 250 in Google's Bug Bounty Hall of Fame—all by the age of 16.
Leonard Tang: Founder and CEO of Haize Labs, an AI safety and evaluation startup based in New York City. Leonard holds Math and CS Bachelor's and Master's degrees from Harvard University and left his Stanford University CS PhD program to start Haize. His team is building next-generation tools for evaluating, red-teaming, monitoring, guardrailing, and robustifying AI systems. Haize Labs' technology is already being used by OpenAI, Anthropic, AI21 Labs, and other leading companies. He was also just awarded Forbes 30 under 30!
Sandy Dunn: A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance
Donato Capitella: A researcher with over 12 years of experience in offensive security and security assurance, has gained a following for his AI security work. Alongside his years of research and blogs on WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel (@donatocapitella).
Limited-Time Offer
Limited Spots Available
Money-Back Guarantee
What you'll get out of this course
Master Advanced AI Red-Teaming Techniques
Gain hands-on experience with prompt injections, jailbreaking, and prompt hacking in the HackAPrompt playground. Learn to identify and exploit AI vulnerabilities, enhancing your offensive security skills to a professional level.
Design and Execute Real-World Red-Teaming Projects
Apply your knowledge by designing and executing a red-teaming project to exploit vulnerabilities in a live chatbot or your own AI application. This practical experience prepares you for real-world AI security challenges.
Develop and implement effective defense mechanisms against prompt injections and other adversarial attacks to secure AI/ML systems
Learn to implement robust defense strategies against prompt injections and adversarial attacks. Secure AI/ML systems by building resilient models and integrating security measures throughout the AI development lifecycle.
Analyze Real-World AI Security Breaches
Study real-world AI security breaches to evaluate risks and develop effective prevention strategies. Gain insights into common vulnerabilities and learn how to mitigate future threats.
Learn from Industry Leaders
Benefit from mentorship by Sander Schulhoff and guest lectures from top AI security experts like Akshat Parikh. Gain insider knowledge from professionals at the forefront of AI security.
Network with Like-Minded Professionals
Connect with cybersecurity professionals, AI safety specialists, developers, and executives. Expand your network, collaborate on projects, and join a community committed to securing AI technologies.
Earn an Industry-Recognized Certification
Upon completing the course and passing our AI Red Teaming Professional Certification exam, you'll become AIRTP+ Certified, which validates your expertise, enhances your professional credentials, and positions you as a leader in AI security.
Future-Proof Your Career in AI Security
Equip yourself with cutting-edge skills to stay ahead in the evolving tech landscape. Position yourself at the forefront of AI security, opening new career opportunities as AI transforms industries.