Bolt DIY + Deepseek r1 not working

As the topic says, I can't make the DeepSeek R1 model work in the Bolt DIY project.

This is my setup and everything I have already done:

  1. Reinstalled Ollama to get the latest version (0.5.7).

  2. Pulled the deepseek-r1:14b model onto my system.

  3. Ran "ollama serve" to start the server that gives me access to the models. (It runs at 127.0.0.1:11434 / localhost:11434.)

  4. Cloned the Bolt DIY project and ran "npm install".

  5. Created the .env file and set OLLAMA_API_BASE_URL to the recommended value (http://127.0.0.1:11434); see the sketch after this list.

  6. Ran the project with "npm run dev".

6.1) Ollama is correctly set in the provider settings.

6.2) The model is correctly selected as deepseek-r1:14b.
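
For reference, the concrete commands and file contents behind steps 2, 3 and 5 look roughly like this (the .env line follows bolt.diy's .env.example; adjust if your setup differs):

    ollama pull deepseek-r1:14b        # step 2: fetch the model
    ollama serve                       # step 3: serve it on 127.0.0.1:11434

    # step 5: in the .env file at the project root
    OLLAMA_API_BASE_URL=http://127.0.0.1:11434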


Once I run my prompt, I receive these logs:

DEBUG api.chat Total message length: 2, words
INFO LLMManager Getting dynamic models for Ollama
INFO LLMManager Got 4 dynamic models for Ollama
WARN stream-text MODEL [claude-3-5-sonnet-latest] not found in provider [Ollama]. Falling back to first model. deepseek-r1:14b
INFO stream-text Sending llm call to Ollama with model deepseek-r1:14b
DEBUG Ollama Base Url used: http://127.0.0.1:11434
ERROR api.chat AI_RetryError: Failed after 3 attempts. Last error: Internal Server Error
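
(A quick way to check whether Ollama itself returns that Internal Server Error, independent of bolt, is a raw request against the same base URL. Just a sketch, using the model name from the log above; adjust the quoting if you run it from cmd or PowerShell:)

    curl http://127.0.0.1:11434/api/generate \
      -d '{"model": "deepseek-r1:14b", "prompt": "Say hi", "stream": false}'

If this call also fails or never returns, the problem is on the Ollama side rather than in bolt.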

There is an issue with Ollama and the .env file in the stable branch.

Can you use the main branch?

Edit:
I see it's already the main branch. Can you provide details of your system configuration?
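
(For anyone else hitting this, confirming which branch a clone is on and bringing it up to date is roughly this, assuming a plain git checkout of the repo:)

    git branch --show-current    # should print "main"
    git pull                     # update to the latest main
    npm install                  # re-install in case dependencies changed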

@thecodacus As far as I can see, it's loading the model and trying to answer, but it fails after 3 attempts, just like GPT-4o does when you reach the token limit.
So I don't think it's a connection problem, if that's what you were thinking.

Of course!

Intel Core i7-9700 CPU

16GB of RAM

Windows 10, Version 22H2, Build 19045.5371

Node version: v20.14.0

Ollama version: 0.5.7

I was wondering if the system memory is getting full.
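
(One way to check is to run "ollama ps" in another terminal while a prompt is being processed. It shows how much memory the loaded model takes and whether it ended up on the CPU or the GPU. Sketch below; the output values are just illustrative:)

    ollama ps
    # NAME               ID     SIZE     PROCESSOR    UNTIL
    # deepseek-r1:14b    ...    ~10 GB   100% CPU     4 minutes from now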

@Luckillash
What GPU are you using?

@Luckillash No GPU? I don't think this model will run without a GPU on specs this low.

Yeah, just raw doggin' it, lol. My fault after all.

But when I run it with "ollama run deepseek-r1:14b" in a cmd window, it runs smoothly, though.

I'll try two things: a smaller model, and a different PC with a GPU.
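
(For the smaller-model test, pulling one of the smaller distilled tags is a one-liner; double-check the exact tag names against the Ollama library:)

    ollama pull deepseek-r1:8b     # or deepseek-r1:7b / deepseek-r1:1.5b
    # then select the new tag in bolt's model dropdown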

OK, but in cmd there is no context in place. With bolt you get a context of a few thousand tokens just for the system prompt.

Anyway, did you set a default context size in the .env file? If not, try something low and see if it helps:
DEFAULT_NUM_CTX=2048, or even 1024 or 512.

Just try it out, but I wouldn't hope for much. Even if you get it to run, it will be so slow that you won't want to use it.

I see no point in using it with bolt anyway if you don't have a high-performance GPU (24GB+ VRAM for the top models). But that's just my opinion :smiley:

Thanks, I really didn't know about this kind of limitation.

I'll try the approaches you recommended and see how it goes.

I just tried it on my system and it's not writing code anyway:

Looks like it's not working well with bolt.diy at the moment. Maybe a different system prompt would help, I don't know. Strangely, deepseek-r1 from OpenRouter works.

In my case the R1 8B model works fine with bolt.diy, whereas r1-14B thinks a lot (I can hear the fans actively working to control the temperature) and after a while just types a few words and hangs.
I am trying it on a Mac with the M4 chip: 10-core CPU, 10-core GPU, 16GB unified memory, 1TB SSD storage.
Not sure if the issue is related to the GPU or something else. However, it works fine via cmd.

14B is a bit too much for 16GB of unified memory. It usually works fine with a low context, but bolt uses on average an 8k-10k token context (with context optimization enabled), which is a bit too much for 16GB with a 14B model.
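
Rough numbers to illustrate why (ballpark figures, assuming the ~4-bit quantization Ollama ships for that tag):

    model weights (14B @ ~4-bit)      ~  9 GB
    KV cache at 8k-10k context        ~  1-2 GB
    macOS + browser + dev server      ~  3-4 GB
    ---------------------------------------------
    total                             ~ 13-15 GB on a 16 GB machine,
    and macOS only lets the GPU use part of the unified memory.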