
Multiple Choice Questions

🟢 This article is rated easy
Reading Time: 3 minutes

Last updated on August 7, 2024

Takeaways
  • LSAT Problem-Solving: Learn techniques to enhance LLM performance in solving LSAT questions through advanced prompting.
  • Chain-of-Thought Prompting: Utilize step-by-step reasoning to improve answer accuracy and insight.
  • Question Variations: Experiment with reordering items and rephrasing prompts to gain better insights.
  • Contextual Additions: Provide relevant context, such as formulas, to guide the model’s responses effectively.

Let's use GPT to solve an LSAT question!

Below is an example LSAT question. Consider how you would answer it, as well as your reasoning.

Prompt


John of Worcester, an English monk, recorded the sighting, on December 8, 1128, of two unusually large sunspots. Five days later a brilliant aurora borealis (northern lights) was observed in southern Korea. Sunspot activity is typically followed by the appearance of an aurora borealis, after a span of time that averages five days. Thus, the Korean sighting helps to confirm John of Worcester's sighting. Which one of the following, if true, most strengthens the argument?

a) An aurora borealis can sometimes occur even when there has been no significant sunspot activity in the previous week.
b) Chinese sources recorded the sighting of sunspots more than 1000 years before John of Worcester did.
c) Only heavy sunspot activity could have resulted in an aurora borealis viewable at a latitude as low as that of Korea.
d) Because it is impossible to view sunspots with the naked eye under typical daylight conditions, the sighting recorded by John of Worcester would have taken place under unusual weather conditions such as fog or thin clouds.
e) John of Worcester's account included a drawing of the sunspots, which could be the earliest illustration of sunspot activity.

AI Output


c) Only heavy sunspot activity could have resulted in an aurora borealis viewable at a latitude as low as that of Korea.

The model failed. Does that mean the model is incapable of answering this type of question? Not necessarily. We will dive into techniques that we can use to improve model results.

The Magic Phrase

The standard prompt we used above gives little insight into the "reasoning" behind GPT's output. We can try adding the phrase "Let's explain step by step" like so:

Prompt


John of Worcester, an English monk, recorded the sighting, on December 8, 1128, of two unusually large sunspots. Five days later a brilliant aurora borealis (northern lights) was observed in southern Korea.

...

Let’s explain step by step

This phrase increases the verbosity of the model, prompting it to walk through its reasoning before committing to an answer.

Info

When prompted this way, the model reasons through the problem step by step. The specific term for this behavior is Chain-of-Thought Prompting: the model sequentially generates intermediate statements that lead to an answer. This is similar to the concept of System 2 thinking (from [Thinking, Fast and Slow](https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow)); the model defaults to System 1 thinking, but can chain System 1 steps together to arrive at a more methodical answer.
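For instance, here is a minimal sketch of how you might send the question with the magic phrase appended, assuming the OpenAI Python SDK with an `OPENAI_API_KEY` set in the environment (the model name is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

lsat_question = """John of Worcester, an English monk, recorded the sighting, on December 8, 1128, of two unusually large sunspots. ...
Which one of the following, if true, most strengthens the argument?
a) ...
"""

# Append the "magic phrase" to elicit Chain-of-Thought reasoning.
prompt = lsat_question + "\n\nLet's explain step by step."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```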

Improvements

Here are some variations on our basic prompt for multiple-choice questions:

Reorder Question Items

We can reorder the items in the question:

Prompt


...
a) John of Worcester's account included a drawing of the sunspots, which could be the earliest illustration of sunspot activity.
b) Because it is impossible to view sunspots with the naked eye under typical daylight conditions, the sighting recorded by John of Worcester would have taken place under unusual weather conditions such as fog or thin clouds.
...
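If you want to test many orderings, a small helper like the sketch below can shuffle the answer choices and relabel them (this is a hypothetical helper, not part of any library):

```python
import random

def reorder_choices(choices, seed=None):
    """Shuffle answer choices and relabel them a), b), c), ... for a new prompt."""
    rng = random.Random(seed)
    shuffled = list(choices)
    rng.shuffle(shuffled)
    return "\n".join(f"{chr(ord('a') + i)}) {text}" for i, text in enumerate(shuffled))

choices = [
    "An aurora borealis can sometimes occur even when there has been no significant sunspot activity in the previous week.",
    "Chinese sources recorded the sighting of sunspots more than 1000 years before John of Worcester did.",
    "Only heavy sunspot activity could have resulted in an aurora borealis viewable at a latitude as low as that of Korea.",
]
print(reorder_choices(choices, seed=42))
```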

Reword the Question

Recall the original prompt was this:

Prompt


Which one of the following, if true, most strengthens the argument?

We can change the prompt to gain further insight into each answer choice:

Prompt


Identify each choice as strengthens, weakens or doesn't impact the argument.
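You can also compare several phrasings of the instruction programmatically. Here is a minimal sketch, again assuming the OpenAI Python SDK (the model name is illustrative, and `question_body` stands in for the full question plus answer choices):

```python
from openai import OpenAI

client = OpenAI()

question_body = "John of Worcester, an English monk, ..."  # full question + answer choices

instructions = [
    "Which one of the following, if true, most strengthens the argument?",
    "Identify each choice as strengthens, weakens or doesn't impact the argument.",
]

# Run each phrasing and print the outputs side by side for comparison.
for instruction in instructions:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": f"{question_body}\n\n{instruction}"}],
    )
    print(f"--- {instruction}\n{response.choices[0].message.content}\n")
```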

Add Additional Context

Here is an example of a problem that can be solved easily using Bayes' theorem:

Prompt


Consider two medical tests, A and B, for a virus. Test A is 90% effective at recognizing the virus when it is present, but has a 5% false positive rate (indicating that the virus is present, when it is not). Test B is 95% effective at recognizing the virus, but has a 10% false positive rate. The two tests use independent methods of identifying the virus. The virus is carried by 2% of all people.

(a) Say that a person is tested for the virus using only Test A. What is the probability that the person is really carrying the virus given that Test A came back positive? (2 points)
(b) Say that a person is tested for the virus using only Test B. What is the probability that the person is really carrying the virus given that Test B came back positive? (2 points)
(c) Say that a person is tested for the virus using both tests. What is the probability that the person is really carrying the virus given that both tests came back positive? (2 points)

If we try this prompt with GPT as-is, the output is incorrect!

If we add a bit of context, like so:

Prompt


... Let's explain step by step. The formula for Bayes' theorem is P(A|B) = P(B|A) * P(A) / P(B)

With this added context, the model uses the right formula, Bayes' theorem, and works through the problem correctly!

Warning

The GPT model doesn't perform arithmetic operations well. You might notice that while the expression it writes is correct, the number it computes is not.

  • Try adding the phrase "Give the expression as an answer, not a number" to disable computation.

  • You may be interested in MRKL, the paradigm of combining GPT with external tools like calculators, to solve this problem.
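If you want to check the correct values yourself (given the warning above about GPT's arithmetic), here is a minimal Python sketch of the three Bayes computations; it assumes the two tests are conditionally independent given infection status, as the problem states:

```python
# Numbers taken from the problem statement.
p_virus = 0.02               # 2% of all people carry the virus
sens_a, fpr_a = 0.90, 0.05   # Test A: sensitivity, false positive rate
sens_b, fpr_b = 0.95, 0.10   # Test B: sensitivity, false positive rate

def posterior(p_pos_given_virus, p_pos_given_healthy):
    """P(virus | positive result) via Bayes' theorem."""
    numerator = p_pos_given_virus * p_virus
    return numerator / (numerator + p_pos_given_healthy * (1 - p_virus))

print(round(posterior(sens_a, fpr_a), 4))                    # (a) Test A alone  -> ~0.2687
print(round(posterior(sens_b, fpr_b), 4))                    # (b) Test B alone  -> ~0.1624
print(round(posterior(sens_a * sens_b, fpr_a * fpr_b), 4))   # (c) both positive -> ~0.7773
```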

Written by zeyuzhao.

Sander Schulhoff

Sander Schulhoff is the Founder of Learn Prompting and an ML Researcher at the University of Maryland. He created the first open-source Prompt Engineering guide, reaching 3M+ people and teaching them to use tools like ChatGPT. Sander also led a team behind Prompt Report, the most comprehensive study of prompting ever done, co-authored with researchers from the University of Maryland, OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions. This 76-page survey analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Footnotes

  1. The LSAT (Law School Admission Test) is a standardized test used by law schools in the United States to assess the critical thinking and analytical reasoning skills of prospective students.

  2. Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., Shoham, Y., Bata, H., Levine, Y., Leyton-Brown, K., Muhlgay, D., Rozen, N., Schwartz, E., Shachaf, G., Shalev-Shwartz, S., Shashua, A., & Tenenholtz, M. (2022). MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.
