Last updated on August 7, 2024
ChatGPT has blown up in the past month, gaining a million users in just a week. Surprisingly, the underlying model, GPT-3, debuted in 2020 and has been publicly accessible for over a year!
For those who don't know, ChatGPT is a new language model from OpenAI that was fine-tuned from GPT-3 and optimized for conversation. It has a user-friendly chat interface where you can enter input and get a response from an AI assistant. Check it out at chat.openai.com.
While the early versions of GPT-3 weren't as advanced as the current GPT-3.5 series, they were still impressive. These models have been available through an API and a playground web UI that lets you tune configuration hyperparameters and test prompts. GPT-3 gained significant traction, but it never approached the virality of ChatGPT.
What makes ChatGPT so successful, compared to GPT-3, is its accessibility as a straightforward AI assistant for the average person, regardless of their knowledge of data science, language models, or AI.
In this article, I give an overview of how chatbots like ChatGPT can be implemented using a large language model like GPT-3.
This article was written in part because of a tweet by Riley Goodside, noting how ChatGPT could have been implemented.
How to make your own knock-off ChatGPT using GPT‑3 (text‑davinci‑003) — where you can customize the rules to your needs, and access the resulting chatbot over an API.
— Riley Goodside December 26, 2022
Like the other models in the GPT-3.5 series, ChatGPT was trained using reinforcement learning from human feedback (RLHF), but much of its effectiveness comes from using a good prompt.
Full Skippy chatbot prompt from the article header
Prompting is the process of instructing an AI to do something.
As you have probably seen in ChatGPT examples online, you can prompt it to do just about anything. Common use cases include summarizing text, writing content from a description, and creating things like poems and recipes, and much more.
ChatGPT is both a language model and a user interface. The prompt input by a user to the interface is actually inserted into a larger prompt that contains the entire conversation between the user and ChatGPT. This allows the underlying language model to understand the context of the conversation and respond appropriately.
Example insertion of user prompt before sending to model
The language model completes the prompt by figuring out what words come next based on probabilities it learned during pre-training.
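To make this insertion step concrete, here is a minimal sketch in Python. The template text and function name are my own for illustration, not ChatGPT's actual internal prompt:

```python
def build_prompt(conversation_history: str, user_input: str) -> str:
    """Embed the user's latest message inside the full conversation prompt."""
    template = (
        "The following is a conversation between a user and a helpful AI assistant.\n\n"
        "{history}"
        "User: {message}\n"
        "Assistant:"
    )
    return template.replace("{history}", conversation_history).replace(
        "{message}", user_input
    )

history = "User: Hi!\nAssistant: Hello! How can I help?\n"
print(build_prompt(history, "What is a prompt?"))
```

The model then completes the text after "Assistant:", and that completion is what the interface displays as the reply.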
GPT-3 is able to 'learn' from a simple instruction or from a few examples given in the prompt; the latter is called few-shot, or in-context, learning. In the chatbot prompt above, I create a fictitious chatbot named Skippy and instruct it to provide responses to users. GPT-3 picks up on the back-and-forth format of USER: {user input} and SKIPPY: {skippy response} lines. GPT-3 understands that Skippy is a chatbot and that the previous exchanges are a conversation, so when we provide the next user input, "Skippy" will respond.
Past exchanges between Skippy and the user get appended to the next prompt. Each time we give more user input and get more chatbot output, the prompt expands to incorporate this new exchange. This is how chatbots like Skippy and ChatGPT can remember previous inputs. There is a limit, however, to how much a GPT-3 chatbot can remember.
Prompts can get massive after several exchanges, especially if we are using the chatbot to generate long responses like blog posts. Prompts sent to GPT-3 are converted into tokens, which are individual words or parts of them. There is a limit of 4097 tokens (about 3000 words) for the combined prompt and generated response for GPT-3 models, including ChatGPT.
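As a rough illustration, you can estimate whether a prompt plus the desired response length fits within the limit. Note that the 4-characters-per-token rule of thumb below is only an approximation of my own; for exact counts you would run OpenAI's actual tokenizer (the tiktoken library):

```python
TOKEN_LIMIT = 4097  # combined budget for prompt + response on GPT-3 models

def estimate_tokens(text: str) -> int:
    # Rough rule of thumb: about 4 characters per token for English text.
    return len(text) // 4

def fits_in_context(prompt: str, max_response_tokens: int = 500) -> bool:
    """Check whether the prompt leaves room for the desired response length."""
    return estimate_tokens(prompt) + max_response_tokens <= TOKEN_LIMIT

long_history = "User: Hello\nChatbot: Hi there!\n" * 100
print(fits_in_context(long_history))  # True: ~775 estimated tokens + 500 < 4097
```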
Chatbot prompts that store the previous conversation can serve many different use cases. ChatGPT is meant to be an all-purpose general assistant, and in my experience it rarely asks follow-up questions.
It can be helpful to have a chatbot that actively asks questions and gets feedback from the user. Below is an example therapy chatbot prompt that will ask questions and follow-ups to help a user think about their day.
Therapy chatbot prompt
Michelle Huang used GPT-3 to have a chat with her younger self. The prompt adds some context, in this case old journal entries, paired with a chatbot-style back-and-forth format. GPT-3 can mimic a personality based on these entries.
I trained an AI chatbot on my childhood journal entries - so that I could engage in real-time dialogue with my "inner child"
some reflections below:
— Michelle Huang
November 27, 2022
Prompt from the Tweet:
The following is a conversation with Present Michelle (age [redacted]) and Young Michelle (age 14).
Young Michelle has written the following journal entries:
[diary entries here]
Present Michelle: [type your questions here]
The author does note that the diary entries can exceed the token limit. In that case, you could pick a select few entries or summarize several of them.
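One simple way to stay under the limit (a sketch of my own, not necessarily the author's approach) is to keep only the most recent entries that fit within a character budget:

```python
def trim_to_budget(entries, max_chars=8000):
    """Keep the most recent entries whose combined length fits the budget."""
    kept, total = [], 0
    for entry in reversed(entries):   # walk from newest to oldest
        if total + len(entry) > max_chars:
            break
        kept.append(entry)
        total += len(entry)
    return list(reversed(kept))       # restore chronological order

entries = ["Entry %d: " % i + "x" * 3000 for i in range(5)]
print(len(trim_to_budget(entries)))  # 2: only the two newest entries fit
```

The same idea works for chat history: drop the oldest exchanges first, or summarize them with a separate GPT-3 call and prepend the summary to the prompt.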
I will walk through coding a simple GPT-3 powered chatbot in Python. Including GPT-3 in an app you are building is incredibly easy using the OpenAI API. You will need to create an OpenAI account and get an API key; check out their API docs.
Overview of what we need to do:
1. Define a prompt template with placeholders for the conversation history and the latest user input.
2. In a loop, read user input, fill in the template, and send the completed prompt to GPT-3 through the OpenAI API.
3. Append each user input and chatbot response to the conversation history so the next prompt includes them.
Here is the prompt I will use. We can use Python to replace <conversation history> and <user input> with their actual values.
chatbot_prompt = """
As an advanced chatbot, your primary goal is to assist users to the best of your ability. This may involve answering questions, providing helpful information, or completing tasks based on user input. To effectively assist users, it is important to be detailed and thorough in your responses. Use examples and evidence to support your points and justify your recommendations or solutions.
<conversation history>
User:
<user input>
Chatbot:"""
I keep track of both the next user input and the previous conversation. New input/output between the user and the chatbot is appended to the conversation history on each loop iteration.
import openai

openai.api_key = "YOUR API KEY HERE"
model_engine = "text-davinci-003"

chatbot_prompt = """
As an advanced chatbot, your primary goal is to assist users to the best of your ability. This may involve answering questions, providing helpful information, or completing tasks based on user input. To effectively assist users, it is important to be detailed and thorough in your responses. Use examples and evidence to support your points and justify your recommendations or solutions.

<conversation history>
User: <user input>
Chatbot:"""

def get_response(conversation_history, user_input):
    # Insert the conversation history and user input into the prompt template
    prompt = chatbot_prompt.replace(
        "<conversation history>", conversation_history
    ).replace("<user input>", user_input)

    # Get the response from GPT-3
    response = openai.Completion.create(
        engine=model_engine, prompt=prompt, max_tokens=2048, n=1, stop=None, temperature=0.5)

    # Extract the response text from the response object
    response_text = response["choices"][0]["text"]
    chatbot_response = response_text.strip()

    return chatbot_response

def main():
    conversation_history = ""

    while True:
        user_input = input("> ")
        if user_input == "exit":
            break

        chatbot_response = get_response(conversation_history, user_input)
        print(f"Chatbot: {chatbot_response}")

        # Append the latest exchange so the model remembers it next turn
        conversation_history += f"User: {user_input}\nChatbot: {chatbot_response}\n"

main()
Access the full code for a simple chatbot.
Now all that's left is to build a nice front-end that users can interact with!
Written by jayo78.