Hello, sorry if this issue has been raised before, but if I “open folder” or create a project with Bolt through an initial prompt, the AI (whichever model) cannot edit the files directly: it writes the code in the chat and asks me to copy/paste it into the actual file. Not sure if this is the way it should be. Thank you for any help here.
Usually this is due to the model you have chosen. Generally only 7B+ “Instruct” models have support for Artifacts, which is required by Bolt.
Another issue: sometimes you need to clear your chat history, and it’s also useful to switch to optimized prompts. You can do both of these in the Bolt.diy settings.
@azaki10 - Azaki, how did you go? Did you make any further progress?
It takes a bit of time and effort to fully understand how this works and which providers and models will work for you.
Some models will only write code and maybe tell you what to do. Others will write the files, issue commands to start servers, and show the result running.
Also, you have to understand that Bolt.diy uses a WebContainer, which is like a virtual environment. You can’t open a folder and browse the files as such. I didn’t understand this at first and it did my head in, because it wasn’t stipulated anywhere.
It’s an amazing platform that’s in its infancy, but if you put in the effort and get curious about it all, it will pay off. Once you get your head around it you’ll benefit. Don’t give up.
Thank you all, it is great to have such a community and supportive people around
Here’s my story: I built a website using Windsurf (I have access to Sonnet 3.5), but it started hallucinating after a couple of days. It kept repeating itself, and after hours and hours of attempts to fix its mistakes, I had consumed my credit and now I have to pay again.
I am not a coder; I know a bit of Python and HTML, but that’s all. Windsurf got my website to a good point, but then this happened. I thought maybe I could turn to bolt.diy and use my local Ollama models on my Nvidia 3060 with 6 GB of VRAM. I first used Qwen 7B, but I can also run Qwen 2.5 14B Instruct; it is slow, but at least it is local, and maybe it will not hallucinate and will let me complete my website.
I installed bolt.diy and tried to use it. It worked the first time with Qwen 2.5 7B, but it cannot access the local folder (as you rightly said) to continue working with my project. I tried to recreate the project from scratch; it did something, but to modify a file it only suggests the code, and I would have to copy it, search for and understand where to insert it, and then insert it, which is really beyond my capacity.
Now I have installed the 14B Qwen, but I cannot run bolt.diy again; it keeps saying “error without explanation” or something like this. I tried very hard to make it work, to no avail. It works well with Gemini if I insert the API key, but not with the local model.
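(Not sure this is your exact problem, but a common cause of unexplained errors with local Ollama models in Bolt.diy is the context window: Bolt’s prompts are large, and Ollama’s default context is small. A workaround discussed in the Bolt.diy docs is to build a model variant with a bigger `num_ctx` via a Modelfile. The tag and values below are examples; adjust them to the model you actually pulled and to what your VRAM can hold:)

```shell
# Modelfile sketch: extend the context window of a pulled model.
# "qwen2.5:14b-instruct" and 32768 are example values - adapt them.
cat > Modelfile <<'EOF'
FROM qwen2.5:14b-instruct
PARAMETER num_ctx 32768
EOF

# Build the variant, then pick it in Bolt.diy's model dropdown.
ollama create qwen2.5-14b-bolt -f Modelfile
```

A larger `num_ctx` uses more VRAM, so on a 6 GB card you may need a smaller value or a smaller model.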
I am an artist and my project is very important to me. I cannot pay a lot, so now I am looking for the best IDE + Copilot or AI to help me. I couldn’t decide yet, and bolt.diy with Qwen is not working at all in my setup.
Thank you Aliasfox, you mean 7B Instruct only? Not 14B Instruct?
I did not know, will try
Why not then go with Gemini and finish your project with it, if that works?
I mean that only “Instruct” models with 7B parameters or higher work with Artifacts. IMO, it’s not really worth running models locally when DeepSeek-V3 is so cheap (and very good) and Gemini 2.0 Flash (also very good) is free. You can also get ChatGPT 4o for free through GitHub/Azure, but it’s not as good.
My Rating:
- S Tier: Claude 3.5 Sonnet, DeepSeek-V3, Gemini 2.0 Flash, o1
- A Tier: Llama3.3 70B, Phi-4, QwQ, QvQ, Qwen2.5 70B
- B Tier: Llama-3.1, Qwen 2.5 Coder 32B, Grok-2, Codestral
- C Tier: Mistral (all variations), 3.5 Haiku
- D Tier: Llama-3.2, GPT-4o
- E Tier: Gemma-2
- F Tier: Too many to list.
As a note, QwQ/QvQ don’t work with Bolt.diy right out of the box.
Hi Lexx, because it stops, saying the limit is reached. I am now searching for a tier-1 AI coder with a paid plan, not too expensive, but without limits, that I can use. For some reason DeepSeek is not letting me top up, no idea why, so it’s out of the question for now. My local Ollama models are extremely slow. I am not an experienced programmer, so I use AI for almost everything.
@azaki10 I would recommend using Google Gemini 2.0 Flash, as it is free, or OpenRouter, which has a lot of cheap models.
I just started testing Mistral => Codestral as well, and it seems not bad at all (also free).
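(If it helps: provider keys for Bolt.diy go into a `.env.local` file in the project root. The variable names below are a sketch from memory, so verify them against the `.env.example` shipped in the bolt.diy repo before relying on them:)

```shell
# .env.local sketch - check names against .env.example in the bolt.diy repo
GOOGLE_GENERATIVE_AI_API_KEY=your-gemini-key   # Gemini 2.0 Flash (free tier)
OPEN_ROUTER_API_KEY=your-openrouter-key        # OpenRouter (many cheap models)
OLLAMA_API_BASE_URL=http://127.0.0.1:11434     # local Ollama, if you use it
```

Restart the dev server after editing the file so the new keys are picked up.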
See also FAQ: Frequently Asked Questions (FAQ) - bolt.diy Docs