This is probably a well-known issue, but let me quickly describe the problem I'm hitting with an application I'm developing.
When I use Bolt.diy with most of the LLMs and submit a prompt, I get the following error message: “This endpoint’s maximum context length is 131072 tokens. However, you requested about 243776 tokens (235776 of text input, 8000 in the output)”, which prevents me from using the best LLM models for coding.
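To make the arithmetic concrete: 235776 input tokens plus the 8000 reserved for the output comes to 243776, well past the 131072-token window. I'd expect a workaround to look roughly like the sketch below, i.e. trimming the chat context before the request is sent (the function name and the chars-per-token estimate are just my assumptions, not Bolt.diy's actual code):

```ts
// Rough sketch (my own assumption, not Bolt.diy code): drop the oldest
// non-system messages until the estimated input plus the reserved output
// budget fit inside the model's context window.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const CONTEXT_WINDOW = 131072; // limit reported by the endpoint
const OUTPUT_BUDGET = 8000;    // tokens reserved for the completion

// Crude estimate: ~4 characters per token. The real tokenizer differs
// per model, so this only illustrates the budget arithmetic.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function trimToContext(messages: ChatMessage[]): ChatMessage[] {
  const budget = CONTEXT_WINDOW - OUTPUT_BUDGET;
  const trimmed = [...messages];
  let total = trimmed.reduce((sum, m) => sum + estimateTokens(m.content), 0);

  // Remove the oldest non-system message until the prompt fits.
  while (total > budget) {
    const idx = trimmed.findIndex((m) => m.role !== "system");
    if (idx === -1) break; // nothing left to drop
    total -= estimateTokens(trimmed[idx].content);
    trimmed.splice(idx, 1);
  }
  return trimmed;
}
```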
I don’t have this issue in Bolt.new, and I’m wondering why.
Is there any plan to address this in Bolt.diy?