Rewrites all the files

Apologies if this has been asked and answered already; I tried to search and did not find it.

Using the newest Windows Ollama, it rewrites every file when I add new sections. I'm using the qwen2.5-coder 14B model and the 0.5 version of bolt.

I'm making a web app and it keeps rewriting the index and style files when making changes to the JS file. It's the default version of the model from Ollama; do I need to modify the Modelfile, or is there a setting in bolt I need to change?

I've got 16 GB of VRAM and 32 GB of DDR4.

Welcome @nicksphone0161,

I am not sure if I understand the problem correctly.

What I understand => The problem is that bolt.diy always writes every file again after each prompt, instead of just changing the needed files.

If this is the question/problem, then the answer is that this is the current state of bolt.diy. It cannot do diffs at the moment, but the feature is planned for the future.

If I understand it wrong, let me know :slight_smile:

OK, so it's a feature, not a bug, at the moment. Thanks for the reply. I would imagine hitting a token limit at some point for a local LLM.


Yes, and take a look at the General chat where the same thing was discussed today.

From the chat, here's my answer:

I don't know, but I think it's a combination of multiple things.
In my feeling it happens more often when using a specific LLM (Google in my case) for a longer time; it feels like it reaches some rate limits and then tries to reduce the output.

The second one is just hallucinating, I guess.

And maybe also context size, because if you exceed it, it loses the information about what the requirement was.
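If it is the context size with Ollama, you can raise the window with a custom Modelfile. A minimal sketch, assuming the qwen2.5-coder:14b tag from above and a 32k window (note a bigger num_ctx also uses more VRAM):

```
# Modelfile - raise the context window for the local model
FROM qwen2.5-coder:14b
PARAMETER num_ctx 32768
```

Then build it with `ollama create qwen2.5-coder-14b-32k -f Modelfile` and select that model in bolt.diy.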

What I do sometimes is tell it to summarize the current project state and list all implemented features,

so it has it in the latest history/prompt.
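For example, a prompt along these lines (just an illustration of the idea, not an exact required wording):

```
Summarize the current state of the project: list every file you have
created and every feature that is already implemented, then wait for my
next change request.
```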

Thanks again for the quick reply. I've requested an auto prompt-update feature to reduce the input tokens every so often and to keep the LLM on track.

Nick - can I ask why you're specifically using Qwen? Was it because you're running a local build...?

I wanted to do the same, but until I have a machine with sufficient hardware I'm running Bolt.diy from the popular Git / Cloudflare installation and using Google Gemini Flash 2.0. Very good, very fast, and all free. You can use DeepSeek V3 also, and $10 of tokens could build you multiple apps, as it's super cheap and very capable.

If you want the links to the build guides just let me know.


I was using qwen2.5-coder:14b as it was one of the few local LLMs that works 99% of the time with bolt on 16 GB of VRAM. If I run the 32B one, it crawls because it spills into system RAM. I was going to use Flash as it's mostly free, but I cannot get it to work on my PC, so local it is. And yes, if you have a guide for me to do this remotely for free, I welcome it.
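For reference, this is how you can check whether a model is spilling into system RAM (plain Ollama CLI; the tag is just the one I run):

```
# pull the 14B coder model
ollama pull qwen2.5-coder:14b

# after sending a prompt, check where the model is loaded;
# the PROCESSOR column shows "100% GPU" or a CPU/GPU split when it spills
ollama ps
```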

OK - sweet. Let’s get you online instead. You will thank me.

Deploying Bolt.diy with Cloudflare Pages

I'll be up for a few hours yet, so if you're going to have a go at this now, I can help. And there are some components where it's easy to make mistakes.

Trust me - this won’t hurt a bit…!!!

@nicksphone0161 what is not working with Google Gemini for you? Any errors?
This should work without problems. I haven't heard from anyone else that it doesn't.

I can't get it to work with any online LLMs. It pulls the model lists, but I get an error when I submit on all of them. I have a .env and an env.local, as it's not clear which one is right in the setup, and I also added the API keys in the software. I had the same error with local until I changed the address to the IP.
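For reference, here is roughly what I ended up with (keys redacted; the variable names are the ones in bolt.diy's .env.example, so double-check them there - the README says to rename that file to .env.local):

```
# .env.local - bolt.diy reads this file
GOOGLE_GENERATIVE_AI_API_KEY=xxxxxxxx   # Gemini key
GROQ_API_KEY=xxxxxxxx

# local Ollama: the explicit IP instead of localhost is what fixed it for me
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
```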

Maybe post a few screenshots of your setup and I can see the problem (terminal, bolt.diy, ...).

Also, you can take a look at my YouTube videos where I do step-by-step guides, and you can find the problem / what is missing yourself.

Thanks, I'll check those out and the online guide when I get home from work.

Thomas - he's using it locally, not online. I assume you missed that. Does Google Gemini have a local download like Qwen 2.5 Coder and Ollama?

I think you misinterpreted the request, or I did, but I did not understand that he doesn't want to use Gemini online, just that it's not working for him at the moment.

He just tried to go local because the cloud wasn't working for him.

@nicksphone0161 correct me if I am wrong :slight_smile:


You're both a bit confused. I was running bolt locally and trying Google and other online LLMs; none of them worked past pulling the models I could use on their service. That is why I fixed the local issue, switched to Ollama, and ran local LLMs.

Let me ask some questions because I am confused about your situation.

You're running a local copy of Bolt.diy on your machine and downloaded an LLM through Ollama...?

I've just fired up my local copy and it's running on localhost:5173.
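For reference, the standard startup looks roughly like this (assuming the pnpm setup from the repo README; check it for the exact steps):

```
# from inside the bolt.diy checkout
pnpm install    # install dependencies once
pnpm run dev    # dev server comes up on http://localhost:5173
```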

Do you get output like below when you start yours up?

I selected Google Gemini Flash 2.0, put my Google key in, gave it a simple prompt, and away it went. You see no errors in the developer console either.

[screenshot: terminal output from starting bolt.diy locally]

Tried again with the same prompt using Groq and Llama 3.3 70B and it worked with one small error. (Had to refresh the API key for some reason too - weird.)

You used the correct API key for the online LLMs, yeah?

Take a look at the following link to get an understanding of the capabilities and edit process of the leading LLMs (i.e. whether they use 'diffs' for partial file updates or rewrite the entire file) - it might help you understand which are worth spending your time and money on, remembering DeepSeek V3 is almost free (very cheap per request) and Google Gemini Flash 2.0 is free and very good.

At this stage, unless you have a really strong machine (lots of DRAM), you'd be better off running fully online on Cloudflare. Did you take a look at that?
