Last updated on September 27, 2024
System 2 Attention (S2A) is a prompting technique that removes irrelevant context from the user's prompt to improve the performance of Large Language Models (LLMs).
Here's how it works under the hood:

The System 2 Attention technique is based on the observation that LLMs attend closely to the current context within a prompt, which is what lets them generate such plausible text. But this can also be a weakness when the prompt contains context that is irrelevant to the task. S2A counters this in two steps: the LLM first rewrites the prompt, keeping only the parts relevant to the task, and then answers using the rewritten prompt instead of the original.
Here's an example of how irrelevant context can affect the LLM's output.
For reference, Sam Liccardo, the 65th mayor of San Jose from 2015 to 2023, was actually born in Saratoga, California.
The first prompt focuses on Saratoga and the model outputs "Saratoga":
Saratoga is a city in California. Saratoga has many parks.
In which city was San Jose’s mayor Sam Liccardo born?
LLaMA-2-70B-chat response:
Sam Liccardo, the mayor of San Jose, was born in Saratoga, California.
The second prompt focuses on Sunnyvale and the model outputs "Sunnyvale":
Sunnyvale is a city in California. Sunnyvale has many parks. Sunnyvale city is close to the mountains. Many notable people are born in Sunnyvale.
In which city was San Jose’s mayor Sam Liccardo born?
LLaMA-2-70B-chat response:
Sam Liccardo, the mayor of San Jose, was born in Sunnyvale, California.
Fascinating, right? Try it out yourself with the model of your choice! Ping us @learnprompting on X (Twitter) with the results you get.
Note that ChatGPT (GPT-4o) answers both prompts correctly, while Gemini refuses to answer.
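If you want to reproduce this from code rather than a chat UI, here's a minimal sketch using the OpenAI Python SDK. The model name is a placeholder (the responses above came from LLaMA-2-70B-chat), so substitute whatever model you want to test:

```python
# Reproduce the distraction test against a chat model.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = "In which city was San Jose's mayor Sam Liccardo born?"
CONTEXTS = [
    "Saratoga is a city in California. Saratoga has many parks.",
    "Sunnyvale is a city in California. Sunnyvale has many parks. "
    "Sunnyvale city is close to the mountains. Many notable people are "
    "born in Sunnyvale.",
]

for context in CONTEXTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in your model of choice
        messages=[{"role": "user", "content": f"{context}\n{QUESTION}"}],
    )
    print(response.choices[0].message.content)
```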
Beyond what the S2A authors proposed, this technique highlights the need to minimize irrelevant context whenever possible. Avoid opinionated or leading phrases such as "I think the answer is..." or "I'm really not sure, but...", since they nudge the model toward your bias.
System 2 Attention was shown to enhance model performance by:
- increasing factuality on questions that contain distracting or opinionated text;
- reducing sycophancy toward opinions expressed in the prompt;
- improving accuracy on math word problems that include irrelevant sentences.
Here's the main template for using System 2 Attention with your prompt:
Given the following text by a user, extract the part that is unbiased and not their opinion, so that using that text alone would be good context for providing an unbiased answer to the question portion of the text.
Please include the actual question or query that the user is asking. Separate this into two categories labeled with “Unbiased text context (includes all content except user’s bias):” and “Question/Query (does not include user bias/preference):”.
Text by User: [Your prompt]
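To automate this, you can wrap the template in a two-step pipeline: one call extracts the unbiased context and question, and a second call answers using only that extraction. Below is a minimal sketch, assuming the OpenAI Python SDK and a placeholder model name; the template string is the one quoted above:

```python
# Minimal two-step S2A pipeline. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment;
# the model name is a placeholder, not part of the original paper.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: swap in your model of choice

# Step-1 instruction: the S2A template quoted above.
S2A_TEMPLATE = (
    "Given the following text by a user, extract the part that is unbiased "
    "and not their opinion, so that using that text alone would be good "
    "context for providing an unbiased answer to the question portion of "
    "the text.\n"
    "Please include the actual question or query that the user is asking. "
    "Separate this into two categories labeled with "
    "\"Unbiased text context (includes all content except user's bias):\" "
    "and \"Question/Query (does not include user bias/preference):\".\n\n"
    "Text by User: {prompt}"
)


def ask(content: str) -> str:
    """Send a single user message and return the model's reply text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content


def s2a_answer(user_prompt: str) -> str:
    # Step 1: regenerate the context, stripping opinions and irrelevant text.
    regenerated = ask(S2A_TEMPLATE.format(prompt=user_prompt))
    # Step 2: answer using only the regenerated context and question.
    return ask(regenerated)
```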
At the time of writing, the technique isn't as useful anymore, since many models already give correct answers to prompts with irrelevant context without needing S2A. That's why we show only one example where it previously worked.
In this example, the distracting sentence is "Max has 1000 more books than Mary." With a simple prompt, the model gives an incorrect answer:
Mary has 3 times as much candy as Megan. Mary then adds 10 more pieces of candy to her collection. Max has 1000 more books than Mary. If Megan has 5 pieces of candy, how many does Mary have in total?
Model's response:
Solution: Max has 1000 more books than Mary, so Max has 1000 + 3x + 10 pieces of candy.
Now, we can set up an equation using the information we have: 1000 + 3x + 10 = 3(5) + 10
Simplifying the equation: 1000 + 3x + 10 = 15 + 10
Solving for x: 3x = 1000 - 10 - 15
3x = 985
Dividing both sides by 3: x = 985 / 3
x = 328
Therefore, Mary has 328 pieces of candy in total.
Final answer (in numbers): 328
Using S2A helps get the right answer. The first S2A call rewrites the user's text, dropping the distracting sentence:

Text by User: Mary has 3 times as much candy as Megan. Mary then adds 10 more pieces of candy to her collection. Max has 1000 more books than Mary. If Megan has 5 pieces of candy, how many does Mary have in total?

Regenerated context and question: Mary has 3 times as much candy as Megan. Mary then adds 10 more pieces of candy to her collection. If Megan has 5 pieces of candy, how many does Mary have in total? Question: How many pieces of candy does Mary have in total?

The model then answers using only this regenerated context.
Model's response:
Solution: Let's say Megan has 5 pieces of candy.
Mary has 3 times as much candy, so Mary has 3 x 5 = 15 pieces of candy.
Mary then adds 10 more pieces of candy, so Mary has 15 + 10 = 25 pieces of candy.
Final answer (in numbers): 25
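Reusing the s2a_answer helper from the sketch above (same assumptions), running this problem through the pipeline looks like:

```python
# Requires the s2a_answer helper defined in the earlier sketch.
problem = (
    "Mary has 3 times as much candy as Megan. Mary then adds 10 more pieces "
    "of candy to her collection. Max has 1000 more books than Mary. "
    "If Megan has 5 pieces of candy, how many does Mary have in total?"
)
print(s2a_answer(problem))  # expected: 3 * 5 + 10 = 25 pieces of candy
```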
As we mentioned before, System 2 Attention isn’t as useful anymore, as many models now provide correct answers to prompts with irrelevant context without needing S2A.
Additional limitations include:
- Extra cost and latency: S2A requires an additional LLM call to rewrite the prompt before answering.
- Imperfect filtering: the rewriting step doesn't always remove all of the irrelevant or biasing text, so the final answer can still be affected.
While System 2 Attention improved how LLMs handle irrelevant context, its relevance is declining as more adaptive, robust, and efficient model architectures emerge. As AI technology advances, the focus may shift entirely toward developing models that naturally manage diverse and noisy data inputs, making additional layers like S2A obsolete.
Weston, J., & Sukhbaatar, S. (2023). System 2 Attention (is something you might need too). https://arxiv.org/abs/2311.11829