Bolt.diy - Docker, Mac, Ollama, Open WebUI

Hello, I’m experimenting with Bolt.diy. I’ve got Bolt.diy running in Docker on a Mac Studio, with Ollama running deepseek-coder and responding via Open WebUI.
Yet, when I ask it to create files in the Files area, it says it doesn’t have the ability to create files.
Is my setup incorrect or not supported? Does anybody have a good guide to resolving this?
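
Not sure if this is your exact issue, but one gotcha I’ve seen with Bolt.diy in Docker on a Mac: `localhost` inside the container is the container itself, not the host, so the Ollama base URL usually has to be `http://host.docker.internal:11434`. Here’s a minimal sketch (assuming Ollama’s default port 11434) to check which address actually answers:

```python
# Quick connectivity check for Ollama, run from inside or outside the
# Bolt.diy container: python3 check_ollama.py
import json
import urllib.request

# "localhost" works on the host itself; "host.docker.internal" is how a
# Docker container on macOS reaches services running on the host.
CANDIDATES = [
    "http://localhost:11434",
    "http://host.docker.internal:11434",
]

for base in CANDIDATES:
    try:
        # /api/tags lists the models Ollama has pulled locally.
        with urllib.request.urlopen(f"{base}/api/tags", timeout=3) as resp:
            models = [m["name"] for m in json.load(resp).get("models", [])]
            print(f"OK   {base} -> models: {models}")
    except Exception as exc:
        print(f"FAIL {base} -> {exc}")
```

If only the `host.docker.internal` URL succeeds from inside the container, pointing Bolt.diy’s Ollama base URL there should at least get the model connected.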

I had the same issue with a local installation, where the only option is to cut and paste each file and recreate it all on my PC.
Yesterday I tried to install using Docker, but the problem I ran into is probably related to the locally installed instance of Bolt still running on a different port.
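
In case it helps with the port clash: here’s a quick sketch to see what’s already listening before starting the Docker instance. I’m assuming the default Vite dev port 5173 (plus 5174, which Vite falls back to when 5173 is taken); swap in whatever ports your setups use:

```python
# Check whether the likely Bolt.diy dev ports are already taken
# by another process (e.g. the old locally installed instance).
import socket

PORTS = [5173, 5174]  # Vite's default dev port, plus its first fallback

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 if something is already listening there.
        status = s.connect_ex(("127.0.0.1", port))
        print(f"port {port}: {'in use' if status == 0 else 'free'}")
```

If the port shows as in use, stop the old instance before launching the Docker one so the two don’t fight over it.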

Hey @jo-cpa,

the problem is that these local models don’t really work well together with Bolt at the moment, so the code isn’t interpreted as code correctly and therefore isn’t written into the files.

Which model exactly did you try? At the moment I don’t know of any local model that actually works very well and reliably.
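
For context on why the files don’t get created (this is my understanding of Bolt’s internals, so take it with a grain of salt): Bolt’s system prompt asks the model to wrap file output in `<boltArtifact>`/`<boltAction>` tags, and only those get parsed into the Files area. Small local models often ignore the format and answer with plain code fences instead, which Bolt can’t turn into files. A rough way to sanity-check a raw model response:

```python
# Rough sanity check: does a raw model response contain the artifact tags
# Bolt.diy parses into file actions? (Tag names assumed from Bolt's prompt.)
import re

def looks_like_bolt_output(response: str) -> bool:
    # A usable response wraps file contents in boltAction blocks inside a
    # boltArtifact; plain fenced code blocks are NOT picked up as files.
    return bool(re.search(r"<boltArtifact\b", response)) and bool(
        re.search(r'<boltAction\b[^>]*type="file"', response)
    )

good = ('<boltArtifact id="demo"><boltAction type="file" filePath="app.py">'
        'print("hi")</boltAction></boltArtifact>')
print(looks_like_bolt_output(good))                    # True
print(looks_like_bolt_output("```python\nprint(1)\n```"))  # False: just a fence
```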

I’ve just tried 7B models, and Ollama works fine with them, but Bolt.diy is too slow even with such small models. My GPU doesn’t support all the models, so most run on my i5 9th gen under Windows 11. I have 40GB RAM and a 1660 Ti GPU. It’s too sluggish to code with.

I’ll probably be installing it this week - if I’m lucky, on my Dell R550 server with a Xeon CPU and 64GB RAM - and will compare the performance. I don’t have a Gold Xeon there, but it has 8 cores with 16 threads, so it’s more capable than my i5. Hope this will be useful. I’ll try to remember to post an update here after my tests.

Also, I don’t understand why you’re using Open WebUI and Bolt together. That seems odd to me. I’d use one or the other.