Last updated on September 27, 2024
Sometimes Large Language Models (LLMs) simply fail to notice important details in a prompt, and this leads to incorrect results.
Re-reading (RE2) is a technique that aims to solve this problem by asking an LLM to re-read the prompt.
Q: {Input Query} Read the question again: {Input Query}
Despite its simplicity, RE2 is a general and effective technique that consistently enhances the reasoning performance of LLMs through a simple rereading strategy. Furthermore, RE2 is compatible with most thought-eliciting prompting methods, including Chain-of-Thought (CoT), and can be combined with them.
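The RE2 template above can be sketched as a small helper function. This is a minimal illustration, not an official implementation; the name `re2_prompt` is a hypothetical choice for this example.

```python
def re2_prompt(query: str) -> str:
    """Wrap a query in the Re-reading (RE2) template:
    state the question once, then repeat it verbatim."""
    return f"Q: {query}\nRead the question again: {query}"


# Example usage: wrap any question before sending it to an LLM.
prompt = re2_prompt("What is 2 + 2?")
print(prompt)
```

The wrapped prompt can then be passed to any LLM in place of the raw question.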
Re-reading can enhance an LLM's reasoning capabilities in several areas, including:
Let’s explore specific use cases of these capabilities and see how Re-reading works in practice with some examples.
Before reading about the use cases, please keep in mind that LLMs still have limitations — they can make things up and make errors. Use them as assistants, not as a complete replacement for your work. Rely on them at your discretion.
Imagine you are building an educational tool or simply want to help your kid understand math. LLMs can help students learn and practice math by providing step-by-step explanations and offering personalized feedback.
To improve these capabilities, you can use Re-reading. Here's an example of a math problem:
Q: Hally and her friends organized a high school reunion. To cover the cost of a late-night party that they planned for at the reunion, each attendee had to pay 100 dollars. After the party, Hally and her friends realized that the total expenses were 500 dollars less than the total contributions. If there were 50 people at the party, how much money was used to pay for the party expenses?
Read the question again: Hally and her friends organized a high school reunion. To cover the cost of a late-night party that they planned for at the reunion, each attendee had to pay 100 dollars. After the party, Hally and her friends realized that the total expenses were 500 dollars less than the total contributions. If there were 50 people at the party, how much money was used to pay for the party expenses?
Let's break this down:
Each attendee paid 100 dollars. There were 50 attendees.
The total contributions from all attendees were:
Total contributions = 50 × 100 = 5000 dollars
According to the problem, the total expenses were $500 less than the total contributions. So, the total expenses were:
Total expenses = 5000 − 500 = 4500 dollars
Therefore, 4,500 dollars was used to pay for the party expenses.
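The arithmetic in the model's answer can be checked with a few lines of code:

```python
# Verify the arithmetic from the worked example above.
attendees = 50
contribution_per_attendee = 100  # dollars paid by each attendee
total_contributions = attendees * contribution_per_attendee  # 5000 dollars

# The expenses were 500 dollars less than the total contributions.
expenses = total_contributions - 500
print(expenses)  # 4500
```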
LLMs' commonsense reasoning can be used in many applications. Here are some of them:
Imagine you ordered a new laptop last week, but it still hasn’t arrived. The tracking number says it was delivered yesterday, yet you never received it. So, you reach out to the company’s support chatbot:
I ordered a new laptop last week, but it hasn’t arrived yet. The tracking number says it was delivered yesterday, but I never received it. Can you help me figure out what happened?
It looks like your order was delivered yesterday according to the tracking information. Please check with your neighbors or any safe locations where the package might have been left.
This response missed the critical detail that you did not actually receive the package despite the tracking status. You can make use of the RE2 prompting technique:
Q: "I ordered a new laptop last week, but it hasn’t arrived yet. The tracking number says it was delivered yesterday, but I never received it. Can you help me figure out what happened?"
Read the question again: "I ordered a new laptop last week, but it hasn’t arrived yet. The tracking number says it was delivered yesterday, but I never received it. Can you help me figure out what happened?"
I see that your tracking information shows the package as delivered, but you did not receive it. I’ll check if there was an error in the delivery process or if the package might have been left in an incorrect location.
In the meantime, I recommend contacting our delivery service directly to report the issue. I will also initiate an investigation to ensure the package is found or a replacement is sent as soon as possible.
This response more effectively addresses your concern and takes the necessary steps to resolve the issue.
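In a real chatbot, the RE2 wrapping would happen before the query reaches the model. The sketch below builds a chat-style message list with the user query repeated; the role/content message format is an assumption based on the convention most chat APIs share, and `build_re2_messages` is a hypothetical helper name.

```python
def build_re2_messages(user_query: str,
                       system_prompt: str = "You are a helpful support agent."):
    """Build a chat-style message list with the user query wrapped
    in the Re-reading (RE2) template, so the model sees the query twice."""
    re2_query = f"Q: {user_query}\nRead the question again: {user_query}"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": re2_query},
    ]


# Example usage: the support query from above, ready to send to a chat API.
messages = build_re2_messages(
    "I ordered a new laptop last week, but it hasn't arrived yet. "
    "The tracking number says it was delivered yesterday, but I never "
    "received it. Can you help me figure out what happened?"
)
```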
Symbolic reasoning capabilities of LLMs can be used in various applications. Here are some of them:
Chain-of-Thought (CoT) prompting is a powerful prompting technique. When combined with Re-reading (RE2), it can yield even better results.
Here's a template of the prompt combining Chain-of-Thought (CoT) prompting with Re-reading (RE2):
Q: {question}
Read the question again: {question}
#Answer format instruction#
A: Let’s think step by step.
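The combined template can be assembled programmatically. This is a minimal sketch; `cot_re2_prompt` and its parameters are illustrative names, not part of any published implementation.

```python
def cot_re2_prompt(question: str, answer_instruction: str = "") -> str:
    """Combine Re-reading (RE2) with Chain-of-Thought (CoT):
    state the question, repeat it, optionally add an answer-format
    instruction, then append the CoT trigger phrase."""
    parts = [
        f"Q: {question}",
        f"Read the question again: {question}",
    ]
    if answer_instruction:
        parts.append(answer_instruction)
    parts.append("A: Let's think step by step.")
    return "\n".join(parts)


# Example usage with an answer-format instruction.
prompt = cot_re2_prompt(
    "How many cups of feed does Wendi need for the final meal?",
    "Your final answer should be a single numerical number.",
)
print(prompt)
```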
Here's how you can combine CoT with RE2 to receive better results with math word problems.
Q: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens?
Your final answer should be a single numerical number, in the form answer, at the end of your response.
Let’s think step by step.
The correct answer should be 20, but the model gives a different answer:
First, let’s calculate how much feed Wendi gives her chickens in the morning.
She has 20 chickens, and she gives each chicken 3 cups of feed, so in total, she gives them 20 x 3 = 60 cups of feed in the morning.
Next, let’s calculate how much feed Wendi gives her chickens in the afternoon.
Again, she has 20 chickens, and she gives each chicken 3 cups of feed, so in total, she gives them 20 x 3 = 60 cups of feed in the afternoon.
To find out how much feed she needs to give her chickens in the final meal of the day, we need to subtract the total feed given in the morning and afternoon from the total feed needed for the day.
The total feed needed for the day for 20 chickens is 20 x 3 x 3 = 180 cups of feed.
So, the feed needed in the final meal of the day is 180 - 60 - 60 = 60 cups of feed.
Therefore, Wendi needs to give her chickens 60 cups of feed in the final meal of the day.
Answer: 60
Let's try to combine Chain-of-Thought (CoT) with Re-reading (RE2):
Q: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens?
Read the question again: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens?
Your final answer should be a single numerical number, in the form answer, at the end of your response.
Let’s think step by step.
Now the answer is correct:
First, we need to determine the total amount of feed Wendi gives her chickens in the morning and afternoon.
In the morning, she gives 15 cups of feed, and in the afternoon, she gives another 25 cups. So, the total amount of feed given in the morning and afternoon is 15 + 25 = 40 cups.
Next, we need to determine the total amount of feed needed for Wendi’s flock of 20 chickens. Since each chicken requires 3 cups of feed, we multiply 3 by 20 to get 60 cups.
Finally, we subtract the total amount of feed given in the morning and afternoon from the total amount needed. 60 - 40 = 20
Therefore, Wendi needs to give her chickens 20 cups of feed in the final meal of the day.
Answer: 20
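The correct reasoning above can be verified with a quick calculation:

```python
# Check the correct reasoning for Wendi's chickens.
flock_size = 20
cups_per_chicken_per_day = 3
daily_total = flock_size * cups_per_chicken_per_day  # 60 cups per day

morning, afternoon = 15, 25  # cups already given
final_meal = daily_total - (morning + afternoon)  # 60 - 40
print(final_meal)  # 20
```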
Some potential limitations of RE2 include:
Re-reading (RE2) is a simple yet powerful technique that enhances the reasoning capabilities of Large Language Models (LLMs) by encouraging them to re-evaluate the prompt. This method can improve the accuracy and reliability of the model's responses, especially in complex tasks like arithmetic, commonsense, and symbolic reasoning. By combining RE2 with other techniques, such as Chain-of-Thought (CoT), users can further boost the performance of LLMs.
Xu, X., Tao, C., Shen, T., Xu, C., Xu, H., Long, G., & Lou, J.-G. (2024). Re-Reading Improves Reasoning in Large Language Models. https://arxiv.org/abs/2309.06275
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., & Zhou, D. (2023). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. https://arxiv.org/abs/2201.11903