We have explored many intermediate and advanced prompting techniques so far. In this section, we’ll dive into advanced methods where LLMs interact with external tools to solve complex reasoning tasks.
These methods are built around agents: GenAI systems (usually LLMs) that can use tools and take actions.
Although this area of research is still developing, it has already sparked significant innovations in prompting techniques. These methods broaden the range of problems that prompting can address. By performing tasks such as conducting internet searches, querying an external calculator, or executing code externally, the LLM can incorporate information it wasn’t trained on into its context.
These techniques often emerge to compensate for LLM limitations in areas like mathematical calculations, reasoning, and factual accuracy. For instance, when asked a question like, "What is 19 percent of 5619?", an LLM may struggle to provide an accurate answer. Rather than generating the answer directly, the LLM could instead call on a calculator tool. A response might look like this:
CALCULATOR[(0.19) * (5619)]
In this example, instead of producing the answer itself, the LLM provides the structure, while the calculator performs the actual computation. This effectively offloads tasks that are challenging for LLMs to tools specifically designed for them. It’s clear how these techniques can be essential for developing GenAI agents.
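As a rough illustration of this pattern, here is a minimal sketch of how an application might detect the tool call in the model's output and run the computation itself. The `CALCULATOR[...]` syntax and the parsing logic are assumptions made for illustration, not a standard API:

```python
import re

def run_with_calculator(llm_output: str) -> str:
    """Look for a CALCULATOR[...] tool call in the model's output.

    If one is found, evaluate the expression locally and return the result;
    otherwise return the model's text unchanged. The CALCULATOR[...] format
    is a hypothetical convention that the prompt would have to establish.
    """
    match = re.search(r"CALCULATOR\[(.+?)\]", llm_output)
    if match is None:
        return llm_output  # no tool call; the model answered directly

    expression = match.group(1)
    # Evaluate with a restricted namespace so only plain arithmetic is possible.
    result = eval(expression, {"__builtins__": {}}, {})
    return str(result)

# Example: the model chose to call the calculator instead of guessing.
print(run_with_calculator("CALCULATOR[(0.19) * (5619)]"))  # 1067.61
```

Here the application, not the model, performs the arithmetic, which is exactly the offloading described above.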
While this is a simple example, these techniques become significantly more complex when API calls, code execution, and reasoning come into play. Some methods we’ll cover include MRKL Systems, ReAct, and PAL, though many more are emerging as this field evolves.
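To give a flavor of what a more capable setup looks like before we dive into those methods, below is a hedged sketch of a generic tool-calling loop. The tool registry, the `call_llm` placeholder, and the `FINAL:`/`OBSERVATION:` conventions are all assumptions made for illustration, not any particular framework's API:

```python
# A toy agent loop: the model proposes either a tool call or a final answer,
# and the application executes tools until a final answer appears.
# `call_llm` stands in for whatever completion API you are using.

TOOLS = {
    "CALCULATOR": lambda expr: str(eval(expr, {"__builtins__": {}}, {})),
    # "SEARCH": lambda query: ...,   # e.g. an internet search wrapper
}

def agent_loop(call_llm, question: str, max_steps: int = 5) -> str:
    transcript = question
    for _ in range(max_steps):
        output = call_llm(transcript)
        if output.startswith("FINAL:"):
            return output.removeprefix("FINAL:").strip()
        # Otherwise expect a tool call such as "CALCULATOR[(0.19) * (5619)]"
        name, _, arg = output.partition("[")
        observation = TOOLS[name](arg.rstrip("]"))
        # Feed the tool's result back so the model can keep reasoning.
        transcript += f"\n{output}\nOBSERVATION: {observation}\n"
    return "No answer within the step limit."
```

The methods covered next differ mainly in how the model decides which tool to call, how its reasoning is interleaved with tool results, and whether it writes executable code as part of its answer.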
Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., Shoham, Y., Bata, H., Levine, Y., Leyton-Brown, K., Muhlgay, D., Rozen, N., Schwartz, E., Shachaf, G., Shalev-Shwartz, S., Shashua, A., & Tenenholtz, M. (2022). MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing Reasoning and Acting in Language Models.
Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., & Neubig, G. (2022). PAL: Program-aided Language Models.