Introduction to Prompt Hacking
•
3 Days
Learn about the basics of Prompt Hacking, one of the biggest vulnerabilities in Large Language Models (LLMs), and Prompt Defense techniques.
Taught by
Course Overview
Learn about the basics of Prompt Hacking, one of the biggest vulnerabilities in Large Language Models (LLMs), and Prompt Defense techniques.
What will you learn?
Prompt Hacking
How to Prompt Inject and Jailbreak Large Language Models.
Skills you will gain
Meet Your Instructors
Sander Schulhoff
Founder & CEO, Learn Prompting
His guide (Learnprompting. org) is listed as a recommended resource by OpenAI and Google and has been featured in Forbes as the best course to learn Prompt Engineering. As a researcher, he has authored multiple award-winning papers alongside experts from OpenAI, Microsoft, ScaleAI, the Federal Reserve, HuggingFace, and others.
Fady Yanni
Co-founder & COO at Learn Prompting
Fady Yanni is the Founder & COO of Learn Prompting, the leading Prompt Engineering resource which has taught over 3 million people how to effectively communicate with AI. Previously, he was the Head of Fundraising at the Farama Foundation, the open-source maintainers of every major Reinforcement Learning library, including OpenAI's flagship project, Gym.
Course Syllabus
Introduction
What is Prompt Hacking?
What is the difference between Prompt Hacking and Jailbreaking?
Introduction to Prompt Injection Attacks
What is Prompt Injection?
Potential Threats
How we get Prompt Injected
Preventing Injections in LLMs
Not Trusting User Input
Post-prompting and the Sandwich Defense
Few-Shot Prompting Defense
Non-Prompt-based Techniques
Other Prompt Hacking Concepts
Prompt Leaking
Jailbreaking