Hello @Speedy,
Yes, the response gets cut off when the token limit is reached.
You can change the number of tokens used for each request in the #GPT settings.
Adjust the settings and test until you are satisfied with the results.
It's important to understand the concepts of temperature and tokens, since they can change the response a lot.
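For reference, here is a minimal sketch of how those two settings usually appear in a request body, assuming an OpenAI-style chat-completion API (the model name and values below are only illustrative; #GPT may expose them under different names):

```python
# Illustrative request body: max_tokens caps the response length,
# temperature controls how varied the output is.
request = {
    "model": "gpt-3.5-turbo",  # hypothetical model choice
    "messages": [{"role": "user", "content": "Summarize this text."}],
    "max_tokens": 256,    # raise this if replies get cut off mid-sentence
    "temperature": 0.7,   # 0 = near-deterministic, higher = more varied
}
print(request["max_tokens"], request["temperature"])
```

A lower temperature with a generous `max_tokens` is a reasonable starting point when replies are being truncated.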