Problems running commands after prompt (node index.js)

This is an example of bolt.diy breaking after it starts to code. Why am I getting this error?

Screenshot from 2024-12-18 18-15-01

It also breaks when trying to use Vite.

I tried running `npm install` in the terminal before starting bolt.diy, and I get this error:

npm error Cannot read properties of null (reading 'matches')
npm error A complete log of this run can be found in: /home/xxx/.npm/_logs/2024-12-19T02_19_20_719Z-debug-0.log

Not sure what to look for in the debug file, but I did grab this:

1416 verbose stack     at async Install.exec (/usr/local/lib/node_modules/npm/lib/commands/install.js:150:5)
1417 error Cannot read properties of null (reading 'matches')
1418 silly unfinished npm timer reify 1734574761079
1419 silly unfinished npm timer reify:loadTrees 1734574761081
1420 silly unfinished npm timer idealTree:buildDeps 1734574764647
1421 silly unfinished npm timer idealTree:node_modules/.pnpm/@eslint-community+regexpp@4.12.1/node_modules/@eslint-community/regexpp 1734574773142

You just need to use `pnpm` instead of npm.
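For context: the log excerpt above shows a `.pnpm` path inside `node_modules`, which suggests the project's dependency tree is pnpm-managed, and running plain `npm install` against it can fail with errors like that null `matches` one. A minimal sketch of the switch, assuming you have Node.js installed and that the project's dev script is `dev` (check the repo's README and `package.json` scripts):

```shell
# Install pnpm globally if you don't already have it
npm install -g pnpm

# Start clean: remove whatever npm partially built
rm -rf node_modules

# Install dependencies and start the dev server with pnpm
pnpm install
pnpm run dev
```

If `pnpm install` still fails, deleting `node_modules` again and re-running it from a fresh terminal is usually the first thing to try.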

It's an LLM hallucination. Which model are you using?

1 Like

I was using Ollama 3.1 (8B). I wasn't aware that it could hallucinate for these types of tasks.

Yes, it only wrote 2 files and then tried to run Vite.

What is the preferable local LLM to run?

I’m assuming you mean Llama 3.1 (8B)? lol. That’s funny. Are you using Ollama to run it, or an API? Because if an API, I would choose a more capable model, like Qwen2.5 72B Instruct, Qwen2.5 Coder 32B, Llama 3.3 70B Instruct, etc.

And for local, if you are running Ollama or LM Studio, I would suggest the largest quantized model your machine will support, likely one of the Qwen2.5 Coder models (14B, etc.).
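For example, with Ollama you can pull a specific model and size by tag. These tag names are illustrative; check the Ollama model library for the exact tags and quantization variants available:

```shell
# Pull a Qwen2.5 Coder build (tag is an example; see ollama.com/library for available sizes)
ollama pull qwen2.5-coder:14b

# Chat with it locally to sanity-check before pointing bolt.diy at it
ollama run qwen2.5-coder:14b
```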

2 Likes

And you can actually get Google Gemini exp-1206 for free, and GPT-4o for free through Microsoft’s Azure (GitHub Models). Both are very good.

Yeah, I am using Ollama locally. Thanks for the pointers; I will update my local model size and check out the ones you suggested.