Losing Context with OpenAI Completions? Do This.

The other day I was playing around with the OpenAI completions API and ran into a situation where the context of the conversation was lost. In other words, each call to the completions endpoint caused the LLM to “forget” what was previously said.

This is illustrated in the following “conversation”.

First I sent in this prompt:

What's the next number in the sequence 1,2,3?

and got this response:

4.

So far, so good.

Then I made a second call:

What comes next?

However, instead of ‘5’, it responded with this:

The next step is to take action. Depending on the situation, this could involve making a plan, setting goals, and taking steps to achieve those goals. It could also involve making changes to existing processes, or finding new solutions to existing problems.

At first I was quite confused about why it would respond like this. After some digging I discovered that the completions API doesn’t keep context on its own: when it generates a response, it starts from scratch and sees only the text you send in that request. This means you have to maintain a log of the entire conversation and send the whole thing as the prompt each time.
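The stateless behavior can be sketched in a few lines of Python. Note that `call_model` here is a toy stand-in for the real API call, not the OpenAI SDK; it only “knows” the answer when the sequence actually appears in the prompt it receives:

```python
# Each completions call is independent: the model sees only the text
# you send in that one request. `call_model` is a hypothetical stand-in
# for the real API call, hard-coded to mimic the conversation above.

def call_model(prompt):
    if "1,2,3" in prompt:
        return "4."
    # Without the earlier exchange in the prompt, the model has no idea
    # what "next" refers to, so it produces a generic answer.
    return "The next step is to take action."

first = call_model("What's the next number in the sequence 1,2,3?")
second = call_model("What comes next?")  # no history sent, so context is lost
```

Here `first` comes back as “4.”, but `second` gets the generic answer, because nothing in the second request mentions the sequence.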

To Retain Context Include All Previous Prompts and Responses in the New Prompt

I then tweaked my code to include all previous prompts and responses along with the new prompt:

Correct Second Prompt

What's the next number in the sequence 1,2,3?

4.

What comes next?

and finally, it responded correctly:

5.
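The fix can be wrapped up in a small helper that accumulates the transcript and resends all of it on every call. This is a minimal sketch, not production code: `call_model` is again a stand-in for the actual completions request (e.g. whatever function in your code hits the API), passed in as a parameter so the pattern is easy to see:

```python
# Keep a running transcript and send the WHOLE thing as the prompt
# on every call, so the model always sees the full conversation.

class Conversation:
    def __init__(self, call_model):
        self.call_model = call_model  # function: full prompt text -> completion text
        self.transcript = []          # alternating prompts and responses, in order

    def ask(self, prompt):
        # Append the new prompt, then send the entire transcript so far.
        self.transcript.append(prompt)
        full_prompt = "\n\n".join(self.transcript)
        response = self.call_model(full_prompt)
        # Record the response too, so the next call sees it as context.
        self.transcript.append(response)
        return response
```

With a stand-in model that answers “4.” to the sequence question and “5.” once it can see the earlier “4.” in the prompt, the second call now succeeds, because `ask` resends the first prompt and response along with “What comes next?”.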

In retrospect this makes sense, because these Large Language Models are really just fancy text-completion engines. In order to know what to ‘complete’, they have to see everything they are completing.
