Is this fork working?

I just gave this fork another try, but it can’t create a single file successfully.

Tested with several providers: Grok, Gemini, HuggingFace, Anthropic, DeepSeek, and others. But I couldn’t create a basic HTML page.

It never finishes writing the files, and when prompted to fix or continue creating the file, it just replies that it will, but actually does nothing.

I also tested the hosted version by ruzga, and it is the same thing: it can’t create anything, no matter how basic.

So, is it actually working for anyone?


Yes, I used the latest version, and this prompt ‘make a html, css and js webpage of tic tac toe’ worked first time.


Working for me as well; just tested now to make sure no recent changes broke anything somehow.

Is there a specific error message you are seeing anywhere?


No errors or messages; it just doesn’t finish the code it started, or doesn’t even start at all.

Is this some of the time or all the time? Pretty hard to debug without any errors or messages!

Yes, I feel the same. The only way I got a working application was using Claude Sonnet 3.5 with the previous version. With the new version, I could not create anything with the Yi model, DeepSeek Coder, etc. When using the Yi model, it forgets to create or modify the files after a while, so you need to change them manually. Similar with DeepSeek Coder.
Therefore I’ve reverted back to the old version (this commit: Merge pull request #372 from wonderwhy-er/import-export-individual-chats · coleam00/bolt.new-any-llm@1cb836a · GitHub ), and the Claude Haiku model works more stably there.
I hope this also works for you.
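
If you want to pin to that same commit, here is a minimal sketch (assuming a fresh clone; the repo name and short hash come from the commit link above):

```
git clone https://github.com/coleam00/bolt.new-any-llm.git
cd bolt.new-any-llm
git checkout 1cb836a   # the pre-rework commit linked above
```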

Again, I was hoping that after the announcement of bolt.diy we would have a working version of it, but nothing changed.
I tested the same ‘build a todo app’ prompt with all the models; some write all the files, some just plan it, but they all fail to install the dependencies and run the app. Even Sonnet has difficulty.
Not to mention that the repo doesn’t work straight after cloning or building the Docker image; there are tons of environment errors that still need to be fixed. Just open the issues page on GitHub and you will see a number of similar reports.
I really hope this project does great in the long run, but right now the developers should focus on getting the core to work: fixing these issues with the environment, the API keys, and the Docker build. Features like GitHub integration and folder loading don’t have much use if we can’t do anything with them.

Just pulled the latest code from the main branch for Bolt.diy and gave it a spin.

  • LM Studio on macOS
  • Ran the MLX version of Qwen2.5-coder-32b-instruct-8bit

Ran the ‘build a todo app’ prompt you suggested.

Outcome:

  • AI model responded to prompt
  • Bolt set up the codebase and dependencies
  • Bolt used Qwen to generate code
  • Bolt then ran the code and showed me the preview

NOTE 1: Lower your expectations for open-source models’ capabilities. They’re not at Claude 3.5 or Gemini 2.0 levels of output quality yet.

NOTE 2: When you are running local models, your hardware needs to meet the model’s memory requirements so it doesn’t run out of memory (which may be why you’re facing stalled prompts and incomplete work). AI models are RAM hungry.
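
As a very rough back-of-the-envelope sizing (an assumption on my part, not an official requirement): weight memory ≈ parameter count × bytes per parameter, so for the model above:

```
32B params × 1 byte/param (8-bit quant) ≈ 32 GB for the weights alone,
plus KV cache and OS overhead on top of that
```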

On the note of the dependency issues: running macOS here, and bolt.diy setup is super easy now (for developers). It is still not a simple executable file that non-engineers can install, though.

After pulling the codebase down, I just had to do three things to get it running (consolidated into a copy-paste block after the list):

  1. Run Docker
  2. In the VS Code terminal, run npm run dockerbuild
  3. In the VS Code terminal again, run docker-compose --profile development up
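
For convenience, here are steps 2 and 3 as plain shell commands (assuming Docker is already running and you are in the repo root):

```
npm run dockerbuild                      # build the bolt.diy Docker image
docker-compose --profile development up  # start the development container
```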

Bolt.diy is up and running locally on my MacBook. No fuss, no issues.

On external AI models running via Bolt, I’ve only just tested Gemini 2.0 Flash Experimental (whilst responding to another user’s issue here). It seems to work just fine.

If you share your hardware setup and how you ran Bolt, it would definitely help the community assist you.

TL;DR: Yep, Bolt.diy is working!


Thank you @Yusuf.Ismail.Shukor for this deep dive, I really appreciate it!

@bigboss A lot of the issues are related to the LLM hallucinating, not an actual issue with bolt.diy. I know it is certainly frustrating, though, when the LLMs make the whole process feel useless!

We have been continuing to squash issues for API keys, Docker, etc. If there is anything specific you are encountering right now, please do let me know!

@ColeMedin I think it would be very good to mention in the docs some models that work well with bolt.diy and recommend them for inexperienced users.
Maybe then there would be less frustration :slight_smile:


Yeah, good suggestion! I actually just added some model recommendations to the pinned post in the troubleshooting/issues category for bolt.diy. It probably belongs in the main README somewhere too! I’ll have to think about where to put that.


I waited a while to give it another try, and after the latest release (v0.0.3) I’m still having issues. A lot fewer issues, but they are still there, especially on the first message, with the dependency installation and configuration.

Regardless of the models I used on previous versions of bolt.diy, I’ve now tested with every model I could, and even the Sonnet models failed basic tasks.

Now, with the latest version, the only issue I’m facing is when the model installs a new dependency and it fails in a particular way.

Here are some videos showing how even Sonnet fails on the first prompt and has to resolve a ‘React is not defined’ error. This is easily reproducible, and it also happens when a new dependency is added to the app.
There is also an ‘infinite’ npm install bug that happens from time to time.
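
For context on the ‘React is not defined’ error: it usually means the JSX was compiled with the classic runtime while the generated file never imports React. A hypothetical sketch of the two common fixes (the file name is illustrative, not taken from the videos):

```tsx
// App.tsx — "React is not defined" typically appears when JSX is compiled
// with the classic runtime but React was never imported.
// Fix 1: add the explicit import at the top of every JSX file:
import React from 'react';

export function App() {
  return <h1>Hello</h1>;
}

// Fix 2 (alternative): enable the automatic JSX runtime so no import is
// needed, e.g. in tsconfig.json: { "compilerOptions": { "jsx": "react-jsx" } }
```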

In all the videos I was using Sonnet 3.5.

Latest Version Errors

A few things here:

  • You shouldn’t use the main branch if you are facing problems, because it is the least tested => stay on the stable branch, which is the release (you mentioned the release above, but I can see in your videos that you are using the main branch); a checkout sketch follows below this list
  • “Sonnet 3.5 (new)” is known not to work best with bolt.diy, as discussed in some other topics here in the community => the FAQ explicitly mentions the old model
  • If you want to stay on the main branch, use the newest feature, which is base templates; you can see them on the start page. Then it will begin from a basic setup you can start from, and that will work
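
A minimal sketch of switching to it (assuming the release branch in your clone is literally named stable):

```
git fetch origin
git checkout stable    # the tested release branch
git pull origin stable
```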