At first it was okay; I mean it's still okay right now when I use an LLM like OpenAI GPT-4o or GPT-4o-mini, and other models that are public/paid.
But then I tried running a local LLM, Qwen2.5-Coder, at sizes from 0.5B to 14B parameters.
I'm not sure if it's because I'm using the smaller-parameter models, but they cannot create files or execute anything at all; they only give me step-by-step instructions and some code to write myself.
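In case it helps anyone debugging the same thing: smaller local models often just describe the steps in plain text instead of emitting actual tool calls, so the app never gets anything it can execute. Here's a rough sketch to check whether the model returns structured tool calls at all. It assumes the model is served locally through Ollama's OpenAI-compatible endpoint on localhost:11434, and the `create_file` tool below is just a made-up example for the test, not part of any real app.

```python
# Quick check: does the locally served model emit tool calls,
# or does it only answer with prose?
# Assumes Ollama is running and the model was pulled,
# e.g. `ollama pull qwen2.5-coder:14b`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # any non-empty string works locally
)

# Hypothetical tool definition, only for this test.
tools = [{
    "type": "function",
    "function": {
        "name": "create_file",
        "description": "Create a file at the given path with the given contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "contents": {"type": "string"},
            },
            "required": ["path", "contents"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen2.5-coder:14b",
    messages=[{"role": "user",
               "content": "Create a file named hello.txt containing 'hi'."}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:
    print("Model emitted a tool call:", msg.tool_calls[0].function)
else:
    # Landing here matches the behaviour described above:
    # the model only explains the steps in plain text.
    print("No tool call, plain text only:\n", msg.content)
```

If the smaller checkpoints never produce a tool call here, the limitation is on the model side rather than in the app's setup.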
Have a good read down through this - I had the same issue a bit over a month ago and opted to wait until I have a much higher-spec machine before attempting this. The Cloudflare route is the way to go, though.
Around 24th Dec, Thomas mentions Col’s build and has a link to it.