Logic-of-Thought (LoT) is a novel technique designed to improve the logical reasoning abilities of Large Language Models (LLMs). While LLMs are highly effective across many tasks, they struggle with complex logical reasoning, especially when using traditional methods like Chain-of-Thought (CoT).
LoT addresses these challenges by injecting formal propositional logic into prompts, guiding LLMs through more accurate reasoning processes. It adds logical information to input prompts, avoiding the information loss that often occurs when LLMs attempt symbolic reasoning.
LoT operates in three phases to augment input prompts with logical reasoning:
1. **Logic Extraction**: LoT uses LLMs to extract logical propositions and relationships from the input. It identifies key conditional or logical connections between elements of the context.
2. **Logic Extension**: Logical expressions extracted from the first phase are expanded using formal logic rules (e.g., Transitive Law, Contraposition Law). This ensures that logical deductions are complete and align with human intuition.
3. **Logic Translation**: The expanded logical expressions are translated back into natural language. This augmented information is then combined with the original input, enhancing the prompt and helping the LLM to reason more accurately.
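Chained together, these phases amount to two LLM calls around a mechanical expansion step. The sketch below shows one way to wire them up in Python; the `call_llm` helper, the prompt wording, and the `extend_expressions` placeholder are assumptions for illustration, not the authors' reference implementation.

```python
# Minimal sketch of the three-phase LoT pipeline. The helper names and prompt
# wording are illustrative assumptions, not the paper's reference code.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper: send `prompt` to your LLM API and return the text reply."""
    raise NotImplementedError("plug in your LLM client here")

def extend_expressions(extracted: str) -> str:
    """Phase 2 placeholder: expand the expressions with formal logic laws
    (see the Logic Extension sketch later in this article)."""
    return extracted

EXTRACTION_PROMPT = (
    "Identify the propositions in the context, label them A, B, C, ..., and list "
    "the causal relationships between them as arrows (e.g., A → B).\n\nContext:\n{context}"
)

TRANSLATION_PROMPT = (
    "Using the listed propositions, translate each logical expression below into a "
    "complete sentence. Only output the sentences.\n\n{expressions}"
)

def logic_of_thought(context: str, question: str) -> str:
    # Phase 1: Logic Extraction - get propositions and A → B expressions from the context.
    extracted = call_llm(EXTRACTION_PROMPT.format(context=context))
    # Phase 2: Logic Extension - expand the expressions with laws such as
    # contraposition and transitivity (no LLM call is needed for this step).
    extended = extend_expressions(extracted)
    # Phase 3: Logic Translation - turn the expanded expressions back into prose.
    translated = call_llm(TRANSLATION_PROMPT.format(expressions=extended))
    # Augment the original input with the translated logical information and answer.
    return call_llm(
        f"{context}\n\nAdditional logical information:\n{translated}\n\n{question}"
    )
```

Note the design point: LoT only adds information to the prompt; the original context and question are always kept, so nothing is lost if the extraction step misses a relationship.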
For example, if the input contains statements about a person reading a book, LoT might extract logical propositions such as "If a person reads a book, they become smarter" and ensure this information is added to the LLM's reasoning process.
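Concretely, the result of Logic Extraction can be thought of as a small symbol table plus a list of implications. The toy representation below is an assumption for illustration; the paper fixes only the prompt format, not a data structure.

```python
# Illustrative (assumed) representation of what Phase 1 might extract
# from the reading example above.
propositions = {
    "A": "a person reads a book",
    "B": "the person becomes smarter",
}
expressions = ["A → B"]  # "If a person reads a book, they become smarter."
```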
LoT complements existing methods like Chain-of-Thought (CoT), Self-Consistency (SC), and Tree-of-Thoughts (ToT) by ensuring that logical information is systematically extracted and applied, and it can be combined with each of them. Here's how it compares:
- **Chain-of-Thought (CoT)**: CoT adds intermediate reasoning steps but sometimes generates unfaithful conclusions. LoT addresses this by grounding reasoning steps in formal logic, reducing errors.
- **Neuro-symbolic approaches**: Methods like LINC or SatLM combine LLMs with symbolic reasoning tools. However, these methods can lose information when converting problems into logical expressions. LoT avoids this by directly augmenting prompts without relying on external tools.
- **Tree-of-Thoughts (ToT)**: ToT explores multiple branches of reasoning, but LoT can further enhance this process by ensuring logical coherence within those branches.
LoT is particularly useful for tasks requiring robust logical reasoning, such as solving puzzles, legal reasoning, or question-answering on standardized tests. It can be applied in scenarios where logical consistency is crucial, and it performs well even in tasks with complex reasoning layers.
Here’s an example process using LoT. The Phase 1 (Logic Extraction) prompt asks the LLM to identify propositions and the causal relationships between them in your input:
Please use uppercase English letters such as A, B, C, etc. to identify all possible propositions. Do not include negative tones such as "not" in the propositions. For example, if the sentence is "It is not bored," you should use "A: bored" to represent it.
Next, for each proposition, use the symbol ¬ to represent its negative form. For example, the negative form of proposition A can be expressed as ¬A.
Now, please carefully analyze the context and find causal relationship between propositions seriously. A causal expression is only established when the context directly supports this relationship. Use arrows (→) to indicate causal relationships, for example, "If A, then B", "B if A" and "A causes B" etc. can be represented as A → B.
Finally, output propositions and causal expressions.
[Your input]
Phase 2 (Logic Extension) then expands the extracted expressions by applying formal logic laws such as the Transitive Law and the Contraposition Law; this step can be carried out programmatically rather than with an LLM prompt (a minimal sketch of it appears after the prompts below). Its output is inserted into the Phase 3 (Logic Translation) prompt:
[Output from Phase 2]
Please use the provided propositions to translate each expression into a complete sentence.
¬A represents the negation of proposition A, the arrow (→) represents the causal relationship, and A → B represents if A, then B.
Only output the sentences in a paragraph!
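As noted above, the Logic Extension phase can be carried out without an LLM by applying the laws directly to the extracted expressions. Here is a minimal sketch that applies the Contraposition Law (A → B yields ¬B → ¬A) and the Transitive Law (A → B and B → C yield A → C); the `"X → Y"` string format and the function names are assumptions, not the paper's implementation.

```python
# Minimal Logic Extension sketch. Expressions are assumed to be strings of the
# form "X → Y", where a leading "¬" marks a negated proposition.

def negate(p: str) -> str:
    """Negate a proposition symbol: A -> ¬A, ¬A -> A."""
    return p[1:] if p.startswith("¬") else "¬" + p

def extend_expressions(expressions: list[str]) -> list[str]:
    """Expand implications with the Contraposition and Transitive laws."""
    implications = {tuple(expr.split(" → ")) for expr in expressions}
    changed = True
    while changed:  # keep applying the laws until nothing new can be derived
        new = set()
        # Contraposition Law: A → B  =>  ¬B → ¬A
        for a, b in implications:
            new.add((negate(b), negate(a)))
        # Transitive Law: A → B and B → C  =>  A → C
        for a, b in implications:
            for c, d in implications:
                if b == c and a != d:
                    new.add((a, d))
        changed = not new <= implications
        implications |= new
    return [f"{a} → {b}" for a, b in sorted(implications)]

print(extend_expressions(["A → B", "B → C"]))
# ['A → B', 'A → C', 'B → C', '¬B → ¬A', '¬C → ¬A', '¬C → ¬B']
```

In the full pipeline, this function would stand in for the `extend_expressions` placeholder from the earlier sketch, and its result (the `[Output from Phase 2]` placeholder above) is what the Logic Translation prompt turns back into natural language.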
LoT has been tested across several logical reasoning datasets, showing improvements over baseline methods like CoT and ToT on most of them. The following table reports each baseline's accuracy (%) and the absolute gain when LoT is added to that method:
| Dataset | CoT | LoT + CoT | SC(5) | LoT + SC(5) | ToT | LoT + ToT |
|---|---|---|---|---|---|---|
| ReClor | 52.17 | +4.35 | 56.52 | +2.18 | 58.70 | +2.17 |
| LogiQA | 34.00 | +2.50 | 36.60 | +1.40 | 34.50 | +5.00 |
| RuleTaker | 60.70 | +0.90 | 59.00 | +1.00 | 65.50 | +0.00 |
| ProofWriter | 58.80 | +2.70 | 57.50 | +2.50 | 61.50 | +6.00 |
| FOLIO | 78.00 | +0.00 | 76.00 | +2.60 | 80.00 | +0.00 |
Logic-of-Thought (LoT) is a powerful approach for injecting formal propositional logic into LLM prompts, enhancing the models' ability to handle complex reasoning tasks. By systematically extracting, extending, and translating logical information into natural language, LoT augments existing methods like CoT and ToT, improving accuracy and reducing errors in logical reasoning tasks. This technique is particularly valuable for applications requiring precise logical deductions, such as legal reasoning or standardized test question-answering.
Liu, T., Xu, W., Huang, W., Wang, X., Wang, J., Yang, H., & Li, J. (2024). Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models. https://arxiv.org/abs/2409.17539