Last updated on October 23, 2024
Although the previous approaches can be very robust, a few other approaches, such as using a different model, including fine-tuning, soft prompting, and length restrictions, can also be effective.
More modern models such as GPT-4 are more robust against prompt injection. Additionally, non-instruction tuned models may be difficult to prompt inject.
Fine-tuning the model is a highly effective defense, since at inference time there is no prompt involved, except the user input. This is likely the preferable defense in any high-value situation since it is so robust. However, it requires a large amount of data and may be costly, which is why this defense is not frequently implemented.
Soft prompting might also be effective since it does not have a clearly defined discrete prompt (other than user input). Soft prompting effectively requires fine-tuning, so it has many of the same benefits, but it will likely be cheaper. However, soft prompting is not as well studied as fine-tuning, so it is unclear how effective it is.
Finally, including length restrictions on user input or limiting the length of chatbot conversations as Bing does can prevent some attacks such as huge DAN-style prompts or virtualization attacks respectively.
Using any of the methods in this article in addition to the techniques introduced in this subsection on defensive measures can ensure that your model prompts are robust against attempts at forcing harmful or biased outputs.
Goodside, R. (2022). GPT-3 Prompt Injection Defenses. https://twitter.com/goodside/status/1578278974526222336?s=20&t=3UMZB7ntYhwAk3QLpKMAbw ↩
Selvi, J. (2022). Exploring Prompt Injection Attacks. https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/ ↩