I am using bolt.diy, but after I finish and want to check the progress, I always see this message:
404 - This page could not be found.
And sometimes it’s just a blank page.
Any possible solution?
Can you export and share the chat?
It would be easier to understand that way.
The button to export the chat as JSON is at the bottom right; a screenshot is not enough to understand what happens.
Error: Failed To Start Application
which prevents the inspection from working because the application isn’t running at all. Do you have a solution for that?
Take a look at all the files. Google messed up writing your files, as you can see here:
There is just “…” in, I guess, a lot of the templates/files, so there is no code to execute.
You need to tell the AI to fix this and always write the full code into the files.
He shared the chat with me.
I did a quick try; there are a bunch of issues.
So part of the issue is that Next.js uses native addons, and WebContainers do not seem to support them…
I wonder what happens in bolt.new in that regard.
Here is my chat state
And error
When the projects get bigger, it always fu***s it up … I’ve had this a lot with different syntax problems.
It could also be a problem with the current version, as we are facing some issues, as you know.
I would recommend going back to v0.0.3 and trying with that. It should give better output.
Well, the last problem is more of a WebContainers problem, not bolt/prompt/LLM itself.
Okay, got it. Thanks for your response. What I understood is that I should revert to version 0.0.3, but I just installed bolt.diy two days ago, so I don’t know what that version is. Can you provide a link to it?
Another point: these problems weren’t occurring with bolt.new. Is this because I’m using Google’s AI model? So, what model would give results similar to bolt.new?
bolt.diy versions: Releases · stackblitz-labs/bolt.diy · GitHub
The problems should be the same in bolt.new. Did you export the project from bolt.new and import it into bolt.diy, or did you completely regenerate it from your initial prompt?
The problems aren’t the same; I’m doing similar things on ‘new’ and it’s faster and almost error-free.
This is entirely built on DIY.
Actually, exporting from ‘new’ to DIY doesn’t work for me.
I think I’m on version 0.0.5 now. Is there a way to use version 0.0.3? Will I have to delete all of this and reinstall it? If there’s a video explanation, I would really appreciate it.
After I download version 0.0.3, should I use Gemini 2.0 or another model?
=> This depends a lot on the model you are running and on whether you used a starter template, which is what bolt.new does automatically (experimental in bolt.diy on the newest main, but that version is not stable at the moment and not really worth using/trying yet).
=> This should work. I did this often, e.g. with my showcased task-list-advance project (see YouTube or Tutorials & Resources).
=> Maybe you have to delete the .bolt folder before importing.
Download from releases: Release v0.0.3 · stackblitz-labs/bolt.diy · GitHub
=> extract, then run pnpm install and pnpm run dev (see the sketch after this list)
=> make sure you fully clean your browser cache/cookies
=> I would recommend trying it, just because of the large context size. But you can switch between your models at any time, as bolt sends the whole project every time anyway.
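For reference, here is a minimal terminal sketch of the download-and-run steps above. It assumes git and pnpm are already installed and that the release tag is named v0.0.3; check the releases page linked above for the exact tag and adjust if needed.

```bash
# Sketch only: fetch the v0.0.3 release of bolt.diy and start the dev server.
# Assumes git and pnpm are installed; the tag name "v0.0.3" is an assumption,
# verify it against the GitHub releases page.
git clone --branch v0.0.3 --depth 1 https://github.com/stackblitz-labs/bolt.diy.git bolt.diy-v0.0.3
cd bolt.diy-v0.0.3

pnpm install    # install dependencies
pnpm run dev    # start the dev server, then fully clear your browser cache/cookies
```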
I think that using bolt.new is more appropriate because it doesn’t have as many problems and is quite suitable for beginners, but the issue is that it consumes a lot of tokens. Is there a solution for this? I’m considering using bolt.new locally with Gemini, which I believe would be easier for me as a beginner.
I am not sure if I understand you correctly.
Bolt.new is commercial; you have to pay for it and use it online.
Bolt.diy is the official open-source project from bolt.new, but it does not have all the features of bolt.new yet (they also need a business use case and added value). Bolt.diy is driven by the community and still in early development, as you can see from the version number.
Conclusion: There is no freely available version that matches bolt.new 1:1 which you could install locally and use with another model.
Even if you could, it wouldn’t make sense or lead to the desired results, because bolt.new is fully optimized to work with Anthropic’s Claude Sonnet 3.5 and the complete environment bolt.new uses in their offering.
My problem with the current bolt.new is that it spends a lot of tokens compared to the same tasks in the past, and I think the model has changed this month, so I am just looking for solutions.
Yes, I had this feeling too. I guess they switched to the new Claude Sonnet 3.5 model instead of the “old” one.
The alternative is using bolt.diy, but you have to find out how it works best for you (which model, how to prompt, which settings: default prompt or optimized prompt (experimental)).
Alternatively, you could use “Cursor” or another AI IDE to program. Cursor also has Claude Sonnet 3.5 and it’s free.
Okay, so we’re back to bolt.diy again.
So, I should use bolt.diy with the old Claude Sonnet 3.5.
Can you provide a link to the old Claude Sonnet 3.5? Because there are multiple models with the same version.
This seems like the best way to achieve the performance of bolt.new from last month.
I think the other available solution is Lovable; I believe it can perform well.
Can you give me a link to this model on OpenRouter?