
Faithful Chain-of-Thought (CoT) Reasoning


Last updated on September 27, 2024

Takeaways
  • Chain-of-Thought (CoT) reasoning improves accuracy but may not reflect the actual process behind the final answer.
  • Faithful CoT ensures the final answer comes directly from the reasoning chain, improving trust and interpretability.
  • It uses two steps: translation (breaking down the query) and problem solving (executing the reasoning chain).
  • Faithful CoT outperforms other methods in most datasets and is robust to exemplar choice.
  • Limitations include the unclear translation stage and minimal impact of natural language comments on accuracy.

What is Faithful Chain-of-Thought Reasoning?

Instructing Large Language Models (LLMs) to express their reasoning before answering, known as Chain-of-Thought (CoT) Prompting, can improve response quality and accuracy. By breaking down the reasoning step-by-step, the model is guided toward more thoughtful and precise outcomes. However, research shows that models don’t always follow the reasoning they present when reaching their final answer.

In other words, these models can be “unfaithful” to their own logic. The reasoning provided doesn’t always reflect the actual process behind the conclusion. This raises concerns about interpretability, as it becomes harder for users to trust or verify the model’s decision-making. While the model may show its reasoning, it might not match how it actually arrived at the answer.

Here's an example where the final answer is unrelated to the reasoning chain produced by the LLM:

Prompt


Q: John plans to sell all his toys and use the money to buy video games. He has 13 lego sets and he sells them for $15 each. He ends up buying 8 video games for $20 each and has $5 left. How many lego sets does he still have?

AI Output


A: He started with 13 lego sets. He sold them for $15 each. So he must have gotten 13 * $15 = $195 from selling them. He used this money to buy 8 video games for $20 each. This means he spent 8 * $20 = $160 on the video games. He had $5 left, so in total he must have had $195 + $5 = $200. The answer is 0. ❌

Faithful Chain-of-Thought (CoT) Reasoning is a reasoning framework designed to generate trustworthy reasoning chains by ensuring that the final answers are directly derived from the reasoning process. It involves two key stages:

  • Step 1. Translation: The LLM converts the natural language (NL) query into a reasoning chain, which combines both natural language and symbolic language (SL). The NL component breaks down the complex problem into simpler, independent subproblems, each addressed using a task-specific SL, such as Python, Datalog, or PDDL (Planning Domain Definition Language).

  • Step 2. Problem Solving: The generated reasoning chain is then executed using a deterministic solver, like a Python interpreter or PDDL planner, to produce the final answer. Since the reasoning chain is actually executed to derive the answer, it provides a faithful explanation of the process, making this approach far more interpretable than standard Chain-of-Thought methods. An added benefit of this transparency is improved correctness.
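
To make the two stages concrete, here is a minimal Python sketch of the overall pipeline. It is an illustration under a few assumptions, not the original paper's code: you supply your own call_llm function for whichever LLM API you use, and the few-shot exemplars are assumed to teach the model to store its final result in a known variable name.

# Minimal sketch of the two-stage Faithful CoT pipeline (illustrative names).

def translate(question, few_shot_prompt, call_llm):
    """Stage 1: the LLM turns the NL question into a reasoning chain of
    NL comments plus executable Python code."""
    prompt = f"{few_shot_prompt}\nQ: {question}\n\n"
    return call_llm(prompt)  # call_llm is whatever LLM client wrapper you use


def solve(reasoning_chain, answer_var):
    """Stage 2: run the chain with a deterministic solver (here, the Python
    interpreter), so the answer is derived directly from the chain itself."""
    namespace = {}
    exec(reasoning_chain, namespace)
    return namespace[answer_var]


def faithful_cot(question, few_shot_prompt, call_llm, answer_var):
    chain = translate(question, few_shot_prompt, call_llm)
    return solve(chain, answer_var)

For the apples example later in this article, you would pass answer_var="n_apples_final" and get 14 back once the generated chain is executed.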

How to Use Faithful Chain-of-Thought Reasoning?

The example below demonstrates how you can use Faithful CoT to solve the following math word problem:

Prompt


Daniel has 17 apples. Rosy gives Daniel 5 oranges and in return Daniel gives her 3 apples. How many apples does Daniel have now?

Step 1. Translation

First, translate the problem into a reasoning chain containing natural language (NL) and symbolic language (SL). You can employ a Few-Shot Prompting approach to do so. In the example below, we translate the query into a reasoning chain that consists of natural language comments and Python code.

In the example, we see that the natural language component of the reasoning chain consists of three types of information:

  • Subquestions: The LLM breaks down the original question into simpler sub-questions that are easy to answer. Example: How many apples does Daniel have in the beginning?
  • Dependency Graph: The NL components specify whether each subquestion can be answered independently or relies on answers from prior subquestions. Example: (depends on 1, 2).
  • Rationales: Each subquestion is followed by the rationale to support the answer. Example: support: ["Daniel has 17 apples"].
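
As a concrete illustration of how the translation stage can be prompted, here is a sketch of a few-shot translation prompt for math word problems. The exemplar content and wording are illustrative assumptions, not the exact exemplars from the paper; they simply demonstrate the subquestion, dependency, and support format described above.

# Illustrative few-shot translation prompt (one exemplar shown).
FEW_SHOT_TRANSLATION_PROMPT = '''Q: Alice has 4 pencils. She buys 6 more pencils. How many pencils does Alice have now?

# 1. How many pencils does Alice have in the beginning? (independent, support: ["Alice has 4 pencils"])
n_pencils_begin = 4
# 2. How many pencils does Alice buy? (independent, support: ["She buys 6 more pencils"])
n_pencils_bought = 6
# 3. Final answer: How many pencils does Alice have now? (depends on 1, 2)
n_pencils_final = n_pencils_begin + n_pencils_bought

Q: Daniel has 17 apples. Rosy gives Daniel 5 oranges and in return Daniel gives her 3 apples. How many apples does Daniel have now?

'''
# The LLM is expected to continue the pattern, producing the reasoning
# chain for the Daniel question shown under Step 2 below.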

Step 2. Problem Solving

In this step, the generated symbolic language code is executed to obtain the final answer. Since the output is Python code, we use a Python interpreter to execute it and get the final result: 14. You can verify the answer by running the code in any online Python interpreter.

# 1. How many apples does Daniel have in the beginning? (independent, support: ["Daniel has 17 apples"])
n_apples_begin = 17
# 2. How many apples does Daniel give to Rosy? (independent, support: ["Daniel gives her 3 apples"])
n_apples_given = 3
# 3. Final answer: How many apples does Daniel have now? (depends on 1, 2)
n_apples_final = n_apples_begin - n_apples_given

n_apples_final

### OUTPUT
------
>>> 14
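
If you prefer to run the solving stage programmatically instead of pasting the chain into an interpreter, a small sketch like the one below works: store the generated chain as a string, execute it, and read off the answer variable. The variable names are simply those from the example above.

# Stage 2 executed programmatically: the Python interpreter is the
# deterministic solver, so the answer comes directly from the chain.
reasoning_chain = """
# 1. How many apples does Daniel have in the beginning? (independent, support: ["Daniel has 17 apples"])
n_apples_begin = 17
# 2. How many apples does Daniel give to Rosy? (independent, support: ["Daniel gives her 3 apples"])
n_apples_given = 3
# 3. Final answer: How many apples does Daniel have now? (depends on 1, 2)
n_apples_final = n_apples_begin - n_apples_given
"""

namespace = {}
exec(reasoning_chain, namespace)
print(namespace["n_apples_final"])  # prints 14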

What Are the Faithful Chain-of-Thought Reasoning Results?

  • With greedy decoding, Faithful CoT outperforms all other techniques in 8 of 10 datasets. Results are similar with self-consistency decoding: Faithful CoT outperforms other techniques in 7 of 10 datasets.

Comparison of Faithful CoT with CoT and Least-to-Most (LtM) prompting

  • In most datasets, human annotators agree with the reasoning chain produced by Faithful CoT.

Human evaluation results of reasoning chain produced by Faithful CoT

  • Faithful CoT is robust to the choice of exemplars. When exemplars are randomly chosen from the dataset, the mean accuracy fluctuates very slightly (-1.5 to +1.2), suggesting that the performance of Faithful CoT is minimally influenced by the choice of exemplars.

Limitations of Faithful Chain-of-Thought Reasoning

  • The problem-solving stage is faithful by construction, since the final answer is obtained by directly executing the steps in the reasoning chain. The translation stage, however, is still opaque: there is no interpretable account of how the LLM generates the reasoning chain from the question.
  • The NL comments in the reasoning chain make the reasoning easy for the end user to follow, but empirical results show that they make little difference to accuracy.

Impact of different NL components on accuracy

From the figure, it is clear that, except on the CLUTRR dataset, removing the natural language component (No NL) does not cause a significant accuracy drop compared to the full prompt.

Conclusion

Faithful CoT guarantees a faithful reasoning chain for the generated final answer by decomposing the answer generation into two distinct stages: translation and problem solving. Additionally, empirical results from the ablation study suggest that the framework is robust to the choice of exemplars, and improving the model's interpretability doesn't come at the expense of overall performance.

Bhuwan Bhatt

Bhuwan Bhatt, a Machine Learning Engineer with over 5 years of industry experience, is passionate about solving complex challenges at the intersection of machine learning and Python programming. Bhuwan has contributed his expertise to leading companies, driving innovation in AI/ML projects. Beyond his professional endeavors, Bhuwan is deeply committed to sharing his knowledge and experiences with others in the field. He firmly believes in continuous improvement, striving to grow by 1% each day in both his technical skills and personal development.

Footnotes

  1. Jason Wei. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.

  2. Qing Lyu. (2023). Faithful Chain-of-Thought Reasoning.
