Apologies if this has been asked and answered already; I tried to search and did not find it.
Using the newest Windows Ollama, it rewrites every file when I add new sections. I'm using the qwen2.5-coder 14b model and the 0.5 version of bolt.
I'm making a web app, and it keeps regenerating the index and style files when I only change the JS file. It's the default version of the model from Ollama; do I need to modify the model file (Ollama's Modelfile), or is there a setting in bolt I need to change?
I am not sure if I understand the problem correctly.
What I understand => the problem is that bolt.diy always rewrites every file after each prompt, instead of changing only the needed files.
If this is the question/problem, then the answer is that this is the current state of bolt.diy. It cannot do diffs at the moment, but the feature is planned for the future.
I don't know for sure, but I think it's a combination of multiple things.
My feeling is that it happens more often when using a specific LLM (Google in my case) for a longer time; it seems to hit some rate limits and then tries to reduce the output.
The second one is just hallucination, I guess.
And maybe also context size, because if you exceed it, the model loses track of what the requirement was.
What I do sometimes is tell it to summarize the current project state and list all implemented features,
so it has that in the latest history/prompt.
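If context size is the culprit, one Ollama-side workaround is to build a variant of the model with a larger context window via a Modelfile. A minimal sketch, assuming the stock qwen2.5-coder:14b tag (the 32k value is just an example; raising num_ctx also increases VRAM usage):

```
# Modelfile - same model, larger context window than the small default
FROM qwen2.5-coder:14b
PARAMETER num_ctx 32768
```

Build it with `ollama create qwen2.5-coder-bolt -f Modelfile` and then pick qwen2.5-coder-bolt in bolt's model list (the -bolt name is just an example).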
Nick - can I ask why you're specifically using Qwen? Was it because you're running a local build…?
I wanted to do the same, but until I have a machine with sufficient hardware I'm running bolt.diy from the popular Git / Cloudflare installation and using Google Gemini Flash 2.0. Very good, very fast, and all free. You can use Deepseek V3 also; $10 of tokens could build you multiple apps, as it's super cheap and very capable.
If you want the links to the build guides, just let me know.
I was using qwen2.5-coder:14b as it was one of the few local LLMs that works 99% of the time with bolt on 16GB of VRAM. If I run the 32b one, it crawls because it spills into system RAM. I was going to use Flash as it's mostly free, but I cannot get it to work on my PC, so local it is. And yes, if you have a guide for me to do this remotely for free, I welcome it.
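For anyone following along, getting that model running locally is just the following (assuming the standard qwen2.5-coder:14b tag on the Ollama registry; its default quantization is what fits in ~16GB of VRAM):

```
# pull the 14b coder model from the Ollama library
ollama pull qwen2.5-coder:14b

# quick smoke test in the terminal before pointing bolt at it
ollama run qwen2.5-coder:14b "write a hello world page in plain JS"
```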
@nicksphone0161 what is not working with Google Gemini for you? Any errors?
This should work without problems. I haven't heard from anyone else that it doesn't.
I can't get it to work with any online LLMs. It pulls the model lists, but I get an error when I submit on all of them. I have a .env and a .env.local, as it's not clear which one is right in the setup, and I've also added the API keys in the software. I had the same error with local until I changed the address to the IP.
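In case it helps: if I remember the setup docs right, bolt.diy reads .env.local (you copy .env.example to .env.local in the repo root), not .env. A minimal sketch, assuming the variable names from bolt.diy's .env.example (verify them against your copy; the key values are hypothetical placeholders):

```
# .env.local - copied from .env.example in the bolt.diy checkout
GOOGLE_GENERATIVE_AI_API_KEY=your-gemini-key-here
OPENAI_API_KEY=your-openai-key-here

# for local Ollama, an explicit IP sometimes works where localhost fails
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
```

After editing it, restart the dev server so the new values get picked up.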
I think you misinterpreted the request, or I did, but my understanding wasn't that he doesn't want to use Gemini online, just that it's not working for him at the moment.
He just tried local because the cloud wasn't working for him.
You're both a bit confused. With Google and the other online LLMs, I was running bolt locally and none of them worked past pulling the lists of models I could use on their service. That is why I fixed the local issue, switched to Ollama, and ran local LLMs.
You used the correct API keys for the online LLMs, yeah?
Take a look at the following link to get an understanding of the capabilities and edit process of the leading LLMs (i.e. whether they use 'diffs' for partial file updates or rewrite the entire file). It might help you understand which are worth spending your time and money on, remembering Deepseek V3 is almost free (very cheap per request) and Google Gemini Flash 2.0 is free and very good.
At this stage, unless you have a really strong machine (lots of DRAM), you'd be better off running fully online on Cloudflare. Did you take a look at that?