Model unaware it can build itself?

Hello!

Problem: I'm using bolt.diy. The model is perfectly active in bolt.diy, connected through Ollama. Chat works and the responses are good. Except that the model won't create the app: it shows me the code, but doesn't create the files. It sometimes creates an empty HTML or JSON file, but that's it.

Is the model not aware that it can create the app itself?

Specs:
bolt.diy v0.0.6 (run through pnpm run dev)
Ollama models tried: DeepSeek-R1 7B, Command R7B
NVIDIA GeForce RTX 3060, 32 GB RAM

Hi,

the problem is that the models are most likely too small to work well with bolt.diy and similar tools.

As I have mentioned a lot in other topics, I don't see any reason to use Ollama/local models at all, because there are plenty of free cloud models that are far better than any model you can run on private/consumer hardware.
So even if you get it running and make it write code, the results will never be comparable with the cloud models.

There is also a reason Ollama is marked as an experimental provider within bolt.diy. I know a lot of YouTubers hype this to get clicks, and the only thing they achieve is to confuse newbie users and promise them that it works just as well; but if you take a closer look, they themselves always use the cloud models when demonstrating something :wink:

@itqandas and I talked a lot about that in private chat, and maybe he can tell you his thoughts on this :slight_smile:


Thank you! That makes a lot of sense. Those sneaky YouTubers…

I will see what I can find and try using more powerful, free models in bolt.diy.


The best choice at the moment is Google Gemini 2.0 Flash (which is what I also use and show in my videos).


Oh, I just noticed I follow you on YT already. Will check it out. Thanks!


Yes, tested and confirmed, even with an NVIDIA RTX 4090 GPU!

Yes, if you are using Ollama then you should use a 32B model at minimum, like Qwen 2.5 Coder 32B or the DeepSeek distill ones.
But for that you will need a lot of VRAM, and one consumer-grade GPU might not be enough.

And if you are using a 7B or 8B model, then it's really just for experimenting with small, simple apps (if it works at all), just to see that it works and have some fun.
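To make the VRAM point concrete, here is a rough back-of-the-envelope sketch. The numbers are assumptions, not measurements: roughly half a byte per parameter at 4-bit quantization, plus an assumed ~20% overhead for the KV cache and runtime buffers.

```python
def vram_estimate_gb(params_billions: float,
                     bytes_per_param: float = 0.5,  # assumed 4-bit quantization
                     overhead: float = 1.2) -> float:
    """Very rough VRAM needed to load a quantized model (GB)."""
    return params_billions * bytes_per_param * overhead

# A 7B model fits comfortably on a 12 GB card like the RTX 3060:
print(f"7B  ~{vram_estimate_gb(7):.1f} GB")   # ~4.2 GB
# A 32B model does not, so it spills into (much slower) system RAM:
print(f"32B ~{vram_estimate_gb(32):.1f} GB")  # ~19.2 GB
```

Actual usage varies with quantization level and context length, but the gap explains why a single consumer GPU struggles with 32B-class models.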
