When you set the Load Previous Session option to From Memory, the chat trigger loads the earlier messages for that session from the connected memory node. Every chat model has a cap on how much conversation it can hold, called the context limit. The context limit is measured in tokens, and for ChatGPT it also covers things like system instructions, so it's hard to put an exact figure on how much of it is left for your messages.
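To get a feel for how quickly tokens accumulate, you can count them yourself. Here's a minimal sketch using the tiktoken library; the cl100k_base encoding and the sample messages are just assumptions for illustration:

```python
import tiktoken

# cl100k_base is the encoding used by several OpenAI chat models;
# use tiktoken.encoding_for_model(...) for an exact match to your model.
enc = tiktoken.get_encoding("cl100k_base")

messages = [
    "You are a helpful assistant.",           # system instructions count too
    "Summarise the attached report for me.",  # user turn
    "Here is a short summary of the report.", # model turn
]

total = sum(len(enc.encode(m)) for m in messages)
print(f"{total} tokens used out of the model's context limit")
```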
You should be able to change your Postgres chat memory to do this. Then add some logic between the trigger and the agent to find or generate a session id key. My guess from all this is that even ChatGPT only keeps the most recent tokens of the conversation and loses earlier context over time (a 4096-token limit, for example).
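As a sketch of that find-or-generate step (field names like userId here are hypothetical, not anything n8n defines), the idea is to derive a stable session key from whatever user identifier the trigger payload carries, and fall back to a random one for anonymous chats:

```python
import hashlib
import uuid

def resolve_session_key(payload: dict) -> str:
    """Return a stable session key for a known user, or a fresh one."""
    user_id = payload.get("userId")  # hypothetical field on the trigger payload
    if user_id:
        # Same user -> same key, so the Postgres chat memory keeps
        # accumulating that user's history across workflow runs.
        return hashlib.sha256(f"chat:{user_id}".encode()).hexdigest()
    # Anonymous chat: start a brand-new session.
    return uuid.uuid4().hex

print(resolve_session_key({"userId": "alice@example.com"}))
```

In n8n the equivalent logic would typically live in a Code node, with the memory node's session key expression pointing at its output.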
I'm trying to feed chat history to the Google Gemini API using a curl request. I want to provide both the user's previous input and the model's previous response in the request. Can you try using two separate memory buffer nodes, where the one attached to the chat trigger uses the "take from previous node" mode? If you want more specific help, you'll need to post your workflow.
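For the Gemini side specifically, prior turns go into the contents array with alternating user and model roles. Here is a minimal sketch of the same request in Python, mirroring the curl payload; the model name and example messages are placeholders:

```python
import os
import requests

# Assumed model name; swap in whichever Gemini model you actually use.
URL = ("https://generativelanguage.googleapis.com/v1beta/"
       "models/gemini-1.5-flash:generateContent")

history = [
    # Prior turns: the user's previous input and the model's previous
    # response, using the roles the Gemini REST API expects.
    {"role": "user",  "parts": [{"text": "What is the capital of France?"}]},
    {"role": "model", "parts": [{"text": "The capital of France is Paris."}]},
    # The new message goes last.
    {"role": "user",  "parts": [{"text": "What is its population?"}]},
]

resp = requests.post(
    URL,
    params={"key": os.environ["GEMINI_API_KEY"]},
    json={"contents": history},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```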
Something to note here is that ConversationTokenBufferMemory keeps only the most recent part of the conversation: with ConversationTokenBufferMemory(llm=llm, max_token_limit=...) it will drop the oldest messages once the buffer exceeds the token limit. That should hopefully resolve your issue. We can see that by passing the previous conversation into a chain, the model can use it as context to answer questions.
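A minimal sketch of that, assuming the classic LangChain ConversationChain API; gpt-4o-mini is just a placeholder model and any chat LLM works:

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model

# Keep only the most recent turns, pruning the oldest messages once the
# buffer grows past max_token_limit tokens (counted with the llm's tokenizer).
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=1000)

chain = ConversationChain(llm=llm, memory=memory)
chain.invoke({"input": "Hi, my name is Sam."})
reply = chain.invoke({"input": "What's my name?"})  # answered from buffered history
print(reply["response"])
```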
This isn't available out of the box from what I can find; however, extracting the data, doing some transformation, and leveraging Power BI will potentially give you what you're asking about. If you need to continue a long-form conversation past the limit, you'll have to manage or summarise the earlier history yourself. In chat triggers, the Load Previous Session option retrieves previous chat messages for a session using the sessionId.