The model can't generate or run files on Bolt.diy

Hi, so I recently forked Bolt from the GitHub repo.

At first it was okay; I mean, it's still okay right now when I use an LLM like OpenAI GPT-4o or GPT-4o mini, or other models that are public/paid.

But then I tried running a local LLM, Qwen2.5 Coder, at sizes from 0.5B to 14B parameters.

I'm not sure if it's because I'm using smaller parameter counts, but it can't create files or execute anything at all; it only gives me step-by-step instructions and some code to write.

Is there any solution to this?

Did you set the context size in the .env file? In .env.example, at the end of the file, there are examples like

DEFAULT_NUM_CTX=6144

depending on your hardware.
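
For what it's worth, the reason this setting matters: bolt.diy's system prompt (the part that teaches the model to create and run files) is long, so if the local model's context window is too small, those instructions get cut off and you only get plain step-by-step answers, which matches the symptom above. A minimal sketch of the relevant .env lines, assuming the model is served through a local provider like Ollama (the value is illustrative; larger contexts consume more VRAM):

# Context window, in tokens, requested from local providers.
# Too small and the file-creation instructions in the system prompt get truncated.
DEFAULT_NUM_CTX=6144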

Other than that, the fewer parameters the LLM has, the higher the chance it won't work with Bolt (the code editor); they also need to be instruct models.

See also the FAQ: Frequently Asked Questions (FAQ) - bolt.diy Docs (the lowest recommended size is 32B).
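
If you're pulling models through Ollama (an assumption; the thread doesn't say which runner is in use), you can grab an explicitly instruct-tuned tag like this; the exact tag name is an example and may differ from what the registry currently offers for your chosen size:

ollama pull qwen2.5-coder:14b-instruct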

Hi Nana

Check this for an understanding of capability per model.

Oh, thank you for that.

I have used an instruct model too.

By any chance, do you know the minimum requirements for a 32B model? I tried using one but my RAM couldn't handle it.
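
(For a rough sense of scale, a common back-of-envelope estimate, not an official requirement, is parameters × bytes per weight: a 32B model needs about 32 × 0.5 ≈ 16 GB just for the weights at 4-bit quantization, roughly 32 GB at 8-bit, and about 64 GB at FP16, plus several more GB for the KV cache, which grows with DEFAULT_NUM_CTX.)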

Thank you so much, this will help, but I have limited hardware :sweat_smile:

Nana

Have a good read down through this - I had the same issue a bit over a month ago and opted to wait until I have a much higher-spec machine before attempting this. The Cloudflare route is the way to go, though.

Around 24th Dec, Thomas mentions Col’s build and has a link to it.

New install Bolt.diy for PC - best guide available - Help me build it?

Also, if I haven't already shared it with you, look through the following…
