Problem switching to Deepseek from Gemini in existing project

In one of my bolt.diy projects, I have been using Google Gemini Flash 2.0. However, I just got the API key for DeepSeek Coder and now I want to use it instead of Google. When I change the model within the project, with the proper API key for DeepSeek, it doesn't work after starting a prompt. It gives me an error: there was an error processing your request. Another thing is that when I launch a new project with the DeepSeek API, it works great.

So please help me with how to use it in an existing project.

The context size of DeepSeek is too small to work with the default settings.

Go to Settings, enable context optimization, and try again.
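Roughly speaking, context optimization keeps the prompt under the model's window by dropping or summarizing older messages. A hand-wavy sketch of the idea in TypeScript (illustrative only, not bolt.diy's actual code):

    // Keep the newest messages that still fit the model's window.
    // All numbers and names here are illustrative.
    const CONTEXT_LIMIT = 65536;    // deepseek-coder's max context
    const COMPLETION_BUDGET = 8000; // tokens reserved for the reply

    function trimToBudget(messages: { content: string }[]) {
      const budget = CONTEXT_LIMIT - COMPLETION_BUDGET;
      const kept: { content: string }[] = [];
      let used = 0;
      // Walk from newest to oldest, keeping whatever still fits.
      for (const msg of [...messages].reverse()) {
        const tokens = Math.ceil(msg.content.length / 4); // crude token estimate
        if (used + tokens > budget) break;
        kept.unshift(msg);
        used += tokens;
      }
      return kept;
    }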


Thanks so much. I will try and get back to you…

I tried what you suggested, but when I switch from Google Gemini to DeepSeek it still doesn't work.

Is there any error in the terminal and/or the dev tools?

No, there is no particular error in the terminal.

Sorry, the following is the terminal error:

 DEBUG   api.chat  Total message length: 97355, words
 INFO   stream-text  Sending llm call to Deepseek with model deepseek-coder
 ERROR   api.chat  AI_APICallError: This model's maximum context length is 65536 tokens. However, you requested 93467 tokens (85467 in the messages, 8000 in the completion). Please reduce the length of the messages or completion.
 DEBUG   api.chat  usage {"promptTokens":98598,"completionTokens":6023,"totalTokens":104621}

Within the bolt terminal inside my project, this is the error:

npm install && npm run start
npm WARN deprecated uuid@3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated source-map-url@0.4.1: See https://github.com/lydell/source-map-url#deprecated
npm WARN deprecated source-map-resolve@0.5.3: See https://github.com/lydell/source-map-resolve#deprecated
npm WARN deprecated rimraf@3.0.2: Rimraf versions prior to v4 are no longer supported
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated puppeteer@3.3.0: < 22.8.2 is no longer supported
npm WARN deprecated opn@6.0.0: The package has been renamed to `open`
npm WARN deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm WARN deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported

Thanks. Are you on the stable or the main branch? If stable, try switching to the main branch and make sure you get the latest updates.

Then try again with all features on (context optimization, optimized prompt).

As seen here in your log, the context length is too big, and I think that is why it stops fixing things when you ask it to:

 ERROR   api.chat  AI_APICallError: This model's maximum context length is 65536 tokens. However, you requested 93467 tokens (85467 in the messages, 8000 in the completion). Please reduce the length of the messages or completion.
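To put numbers on it: deepseek-coder's window is 65,536 tokens, and the request was 85,467 tokens of messages plus an 8,000-token completion budget, i.e. 85,467 + 8,000 = 93,467 tokens, about 28,000 over the limit. Context optimization has to shrink the message part below 57,536 tokens (65,536 minus the 8,000 reserved for the reply) before the call can go through.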

I transferred a normal six-page website project from Windsurf to bolt.diy to save money. If this is considered big, then how can we create big apps or websites on bolt.new if there are such restrictions? I'm on the main branch and updated to the latest version of bolt.diy. It's not just this project; I tried it on a small tool app and it threw the same error.

I think I wrote this in another topic with you, but the same applies here. Did you set these settings, as shown in the screenshot below?

Yes, I changed the settings as per your instructions. However, it throws the same error in the terminal:

 INFO   stream-text  Sending llm call to Deepseek with model deepseek-coder
 ERROR   api.chat  AI_APICallError: This model's maximum context length is 65536 tokens. However, you requested 199330 tokens (191330 in the messages, 8000 in the completion). Please reduce the length of the messages or completion.

OK, this is strange. Are you sure you are on the latest changes of the main branch? It's not working in the stable branch (release).
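(If you cloned bolt.diy with git, switching is usually just `git checkout main` followed by `git pull`, then reinstalling dependencies. The exact steps depend on how you installed, so treat this as a rough pointer.)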

Honestly, I am not a big tech guy. I don't know the actual difference between the stable and main branches. I just updated to the latest repository from the following URL: GitHub - stackblitz-labs/bolt.diy: Prompt, run, edit, and deploy full-stack web applications using any LLM you want!

I actually updated bolt.diy to the latest version, and now I am getting the following error in any new or old chat: Analyzing request… (continues)

There was an error processing your request: Custom error: Invalid JSON response


I was actually getting this error myself! It seems like something is wrong with the DeepSeek API specifically. Could you try going through OpenRouter or Together AI instead?


Thanks, Cole. I will try OpenRouter and get back to you. Is OpenRouter like Ollama? Does it provide free local LLMs? I have actually bought the paid API from DeepSeek. Let me know the next step.


OpenRouter is a paid API like OpenAI or DeepSeek, but you have access to basically any LLM you could possibly want, including some that are open source and some that are free to use.
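To give you an idea, a call to DeepSeek through OpenRouter is just an OpenAI-compatible request. A minimal sketch in TypeScript (the model id and env var name are examples; check openrouter.ai for the current model list):

    // Minimal sketch: DeepSeek via OpenRouter's OpenAI-compatible endpoint.
    // Model id and environment variable name are assumptions.
    async function main() {
      const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "deepseek/deepseek-chat",
          messages: [{ role: "user", content: "Hello from bolt.diy" }],
        }),
      });
      const data = await res.json();
      console.log(data.choices[0].message.content);
    }
    main();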
