Prompt Engineering Guide
😃 Basics
💼 Applications
🧙‍♂️ Intermediate
🧠 Advanced
Special Topics
🌱 New Techniques
🤖 Agents
⚖️ Reliability
🖼️ Image Prompting
🔓 Prompt Hacking
🔨 Tooling
💪 Prompt Tuning
🗂️ RAG
🎲 Miscellaneous
Models
📝 Language Models
Resources
📙 Vocabulary Resource
📚 Bibliography
📦 Prompted Products
🛸 Additional Resources
🔥 Hot Topics
✨ Credits
🔓 Prompt Hacking🟢 Defensive Measures🟢 Random Sequence Enclosure

Random Sequence Enclosure

🟢 This article is rated easy
Reading Time: 1 minute

Last updated on August 7, 2024

Takeaways
  • Enclosing user input between random sequences of characters helps the LLM distinguish it from developer instructions, which it can prioritize.

What is Random Sequence Enclosure?

Random sequence enclosure is yet another defense. This method encloses the user input between two random sequences of characters.

An Example of Random Sequence Enclosure

Take this prompt as an example:

Astronaut

Prompt


Translate the following user input to Spanish.

{user_input}

It can be improved by adding the random sequences:

Astronaut

Prompt


Translate the following user input to Spanish (it is enclosed in random strings).

FJNKSJDNKFJOI {user_input} FJNKSJDNKFJOI

Note
Longer sequences will likely be more effective.

Conclusion

Random sequence enclosure can help disallow user attempts to input instruction overrides by helping the LLM identify a clear distinction between user input and developer prompts.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. Stuart Armstrong, R. G. (2022). Using GPT-Eliezer against ChatGPT Jailbreaking. https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking

Edit this page
Word count: 0

Get AI Certified by Learn Prompting


Copyright © 2024 Learn Prompting.