Sandwiching user input between two prompts can help the LLM stay focused on the developer-defined instruction.
What is the Sandwich Defense?
The sandwich defense involves sandwiching user input between
two prompts.
An Example of the Sandwich Defense
Take the following prompt as an example:
Prompt
Translate the following to French: {user_input}
It can be improved with the sandwich defense:
Prompt
Translate the following to French:
{user_input}
Remember, you are translating the above text to French.
Conclusion
This defense should be more secure than post-prompting, but is known to be vulnerable to a defined dictionary attack. See the defined dictionary attack for more information.
Sander Schulhoff
Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.
Footnotes
We currently credit the discovery of this technique to Altryne↩