Not strictly hardware intensive per se. It depends on what you are asking it to do, because the actual work is still processed on your machine. What branch are you using (stable)? And have you updated the deployment recently (there have been a lot of updates)?
Can you test one of my instances (it’s up to date and uses the stable branch)?
Update: I’m actually seeing the same behavior, even after turning off all the extra providers (including local) in the settings and clearing the chat history. And all my token usage is returning NaN
Update #2: Not sure what the issue was, but I ran in incognito and disabled all the extra options I didn’t want, and it worked fine.
Still seems to be a problem. Not sure what the issue really is. I wonder if the 8K context window is being reached or the response is being cut off abruptly. Not sure, but I do have a lot of console errors going on (the usual ones about local providers not being found, even though they are “disabled” in settings).
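To make the context-window guess concrete, here’s a rough back-of-the-envelope sketch (my own, not anything from the Bolt.diy code; the ~4-characters-per-token rule of thumb and the 8192 limit are assumptions for illustration):

```ts
// Rough sketch only: an 8K window fills up fast once the whole chat history
// plus file contents get resent with every prompt.
const CONTEXT_WINDOW = 8192; // assumed limit for the sake of the example

function roughTokenCount(text: string): number {
  // Common approximation: ~4 characters per token.
  return Math.ceil(text.length / 4);
}

function wouldOverflow(history: string[], nextPrompt: string): boolean {
  const used = roughTokenCount(history.join('\n')) + roughTokenCount(nextPrompt);
  return used > CONTEXT_WINDOW;
}

console.log(wouldOverflow(['earlier chat turns...'], 'next prompt')); // false for tiny input
```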
It works fine at first but eventually if you keep prompting the AI, it craps out. Tells me some limit or something is being reached. Idk.
I will play with this some more later and try to figure out at least why it is happening.
In my case, the chat would just stop, or if it was on a step it would just keep spinning. I couldn’t tell if it was still updating files, since scrolling basically didn’t work in the preview. I was mostly using DeepSeek-V3 through OpenRouter.
I’ve been watching the console while it happens. Clear everything and it seems good for a little while… but then when it gets “stuck” it logs the line Token usage: {completionTokens: NaN, promptTokens: NaN, totalTokens: NaN}. Basically, it appears to simply close the connection abruptly.
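For what it’s worth, NaN in all three fields would be consistent with the usage payload simply never arriving before the connection closes. A hypothetical illustration (not the actual Bolt.diy code, just showing how missing fields turn into NaN):

```ts
// Hypothetical sketch: if the stream is cut off before the provider sends its
// usage object, converting the absent fields to numbers yields NaN everywhere.
interface TokenUsage {
  completionTokens: number;
  promptTokens: number;
  totalTokens: number;
}

function readUsage(raw?: { completion_tokens?: number; prompt_tokens?: number }): TokenUsage {
  const completionTokens = Number(raw?.completion_tokens); // NaN when raw is missing
  const promptTokens = Number(raw?.prompt_tokens);         // NaN when raw is missing
  return {
    completionTokens,
    promptTokens,
    totalTokens: completionTokens + promptTokens, // NaN + NaN === NaN
  };
}

console.log('Token usage:', readUsage(undefined));
// -> Token usage: { completionTokens: NaN, promptTokens: NaN, totalTokens: NaN }
```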
@aliasfox Definitely still an issue! Extremely frustrating!
I see similar errors being reported on GitHub too! Their description is better than mine: it seems to be overwriting the existing files from scratch every time!
I have tried multiple different models and seem to be experiencing the same issue.
Running on the Stable branch, with the latest updates.
I’ve given up using Bolt.diy until a fix can be found.
Yesterday I pulled the latest main branch, which included PR #1006. It seemed promising but didn’t seem to do much; the issue still persists. As of today I haven’t tested yet, but I noticed my main branch was a bit behind, so it looks like some new PRs were merged (stable is the same). I don’t imagine it will be any better, though.
I am still stuck and not using Bolt.diy because of the overwriting/stuck issue. I see there have been no new releases since my installation (which I’ve redone many times) on Cloudflare.
I have reverted to Windsurf and Loveable for now; waiting for some positive feedback.
@stereohype This is not exactly the same issue we were discussing in this thread. We were referring to an issue where Bolt.diy on Cloudflare specifically seems to time out and stop responding. It seems to be related to the context window or something, but I never quite tracked down a valid “solution”.
It looks like you are having an issue with your LLM of choice, not Bolt.diy itself. It’s failing to build your project; basically, there is no folder in your project directory that matches that pattern. The LLM generated bad code somewhere and that caused things to fail. Try a better/bigger model. This is just an inherent issue with LLMs, and it’s generally worse with smaller/cheaper ones. Anthropic’s Claude Sonnet 3.5 works the best for coding tasks, followed by Google Gemini Flash 2.0 and DeepSeek R1.
And if you look at the Aider benchmarks, DeepSeek R1 + Sonnet 3.5 is the “best” combo right now, using a hybrid “Architect” mode that passes the reasoning step to the coding model:
I already tried the best models, but with the same results. It also gets stuck when running locally on Windows 10.
Does it happen to you or can you continue making changes and finish a more complex tool?
My browser console shows this every time I launch Bolt:
[vite] connecting...
client:618 [vite] connected.
previews.ts:134 [Preview] Error setting up watchers: TypeError: watcher.addEventListener is not a function
at #init (previews.ts:113:15)
#init @ previews.ts:134
Error: ENOENT: no such file or directory, watch '/home/project/**/*'
at __node_internal_captureLargerStackTrace2 (builtins.ddb8d84d.js:101:5335)
at __node_internal_uvException2 (builtins.ddb8d84d.js:101:5863)
at FSWatcher.<computed> (builtins.ddb8d84d.js:117:2758)
at Object.watch (builtins.ddb8d84d.js:31:23952)
at Object.watch ([eval]#cjs:1:2092)
at MessagePort._0x13f9ae (blitz.d20a0a75.js:19:199182)
Do I need to make any changes after npm install, since it shows some vulnerabilities? I tried Node.js v22 and v18 LTS, and both act the same.
I’ve seen this too. Are you on the “stable” branch, and do you know the version? Maybe try pulling the latest version, which should trigger the deployment to rebuild automatically.
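If you want to dig into the watcher TypeError yourself, my guess (purely a guess, this is not the real previews.ts code) is that the object returned by the file watcher doesn’t expose the DOM-style addEventListener that the preview setup expects, so a defensive check along these lines would at least avoid the crash:

```ts
// Hypothetical guard, not the actual previews.ts implementation: handle both
// DOM-style (addEventListener) and Node-style (on) watcher objects, and skip
// watching entirely if neither is available.
type MaybeWatcher = {
  addEventListener?: (event: string, cb: () => void) => void;
  on?: (event: string, cb: () => void) => void;
};

function attachChangeListener(watcher: MaybeWatcher, onChange: () => void): void {
  if (typeof watcher.addEventListener === 'function') {
    watcher.addEventListener('change', onChange);
  } else if (typeof watcher.on === 'function') {
    watcher.on('change', onChange);
  } else {
    console.warn('[Preview] Watcher has neither addEventListener nor on(); skipping watch setup');
  }
}
```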