DeepSeek R1 Terminal Errors

Hi all, I was looking around to see if anyone else has run into this issue.

It seems like people have problems even getting DeepSeek to work at all. It is working well for me, apart from one issue which I will get to in a moment.

I have an RTX 4090 and 128GB of RAM.

Here is what I am doing to get it to work.

I am using LM Studio to run DeepSeek R1 Distill Qwen 14B, 32B, and Llama 70B. Make sure you max out the context length and GPU offload, and minimize the CPU thread pool size when loading the model in LM Studio. That makes it generate a lot faster.
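As a quick sanity check that the LM Studio local server is reachable before pointing bolt.diy at it, you can hit its OpenAI-compatible endpoint from a terminal. This is a sketch assuming LM Studio's default port 1234 and an example model id; match the id to whatever `/v1/models` actually returns on your machine:

```shell
# List the models the LM Studio server is currently serving
# (assumes the local server is started in LM Studio).
curl http://localhost:1234/v1/models

# Minimal chat completion against the loaded model; the model id
# below is a placeholder and should match the /v1/models output.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-r1-distill-qwen-14b",
        "messages": [{"role": "user", "content": "Say hi"}]
      }'
```

If the first command returns JSON with your loaded model, bolt.diy should be able to talk to the same base URL.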

Anyway, the issue I am running into is that it seems to have trouble installing the dependencies in the Bolt terminal.

I am just using a simple prompt, “build a todo app in react and tailwind”, to test it out.

And I get terminal errors like this:

~/project
❯ npm run dev

> todo-app@1.0.0 dev
> vite

jsh: command not found: vite

~/project
❯ 
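(For anyone hitting the same `jsh: command not found: vite` error: the usual cause is that the project's dependencies were never actually installed, so the `vite` binary is missing from `node_modules/.bin`. A likely fix, sketched for a standard Vite project, would be:

```shell
# Install everything listed in package.json, which places the vite
# binary into node_modules/.bin where npm scripts can resolve it.
npm install

# The dev script should then find vite:
npm run dev

# Alternatively, run vite through npx, which falls back to a
# temporary install if it is not available locally:
npx vite
```

This is just the generic Vite workflow, not something specific to Bolt.)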

~/project
❯ npm install react-spring
npm ERR! code ETARGET
npm ERR! notarget No matching version found for @tailwindcss/postcss7-compat@^4.0.6.
npm ERR! notarget In most cases you or one of your dependencies are requesting
npm ERR! notarget a package version that doesn't exist.
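(On the second error: ETARGET means the requested version of `@tailwindcss/postcss7-compat` was never published; that compat package only existed for the Tailwind v2 era, so `^4.0.6` cannot resolve and was likely hallucinated by the model. A plausible fix, assuming the project just needs current Tailwind rather than the PostCSS 7 compatibility shim:

```shell
# Remove the non-existent compat dependency from package.json
npm uninstall @tailwindcss/postcss7-compat

# Install the regular Tailwind toolchain as dev dependencies
npm install -D tailwindcss postcss autoprefixer
```

After that, `npm install react-spring` should no longer trip over the bad version range.)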

Since I am a new user I can’t upload the chat yet, but I don’t think it would help troubleshoot this issue.

Any help would be appreciated. Thanks!

Hi,
try enabling the advanced features if they are not already on:

With a starter template it should work better, but this is a problem a lot of LLMs have: they don't set up dependencies properly. Usually, after asking it a few times to fix them, it works better.

You can also switch between the Default and Optimized Prompt. Try out which one works better in your case.

Thanks, I did get the website output, though a few details were missing.

Disabling context optimization seems to work. I was testing the 14B model and ran into some of these issues, but I think it might be a limitation of the LLM; I will try 32B and 70B.

1 Like

From what I’ve heard, R1 isn’t trained well to output to code environments like Artifacts / Bolt. Roo Code & Cline seem to have a better grasp of leveraging the Think vs. Code workflows. I believe someone in the community is working to train an Instruct model with R1 for better outputs. :crossed_fingers:

2 Likes

PR to enhance Deepseek and show Thinking Process: feat: added support for reasoning content by thecodacus · Pull Request #1168 · stackblitz-labs/bolt.diy · GitHub

YouTube video by @thecodacus showing how to work with it

1 Like

Would you let us know what system configuration you use to be able to run these large models?

He is not running it locally; it's the DeepSeek provider directly.

I did run it locally, and I mentioned my system config and the steps I took to run the models in my original post. If you want to see all my system hardware, you can take a look here: Assembling My AI Computer