New to Bolt.diy, sharing some thoughts

Yesterday I found out about Bolt.new and Lovable. Very exciting, but kinda expensive, I would say. I tried creating an app idea I had recently with both and got some pretty nice results; I ended up liking the Lovable one more. But I ran out of tokens pretty fast.

Well then I found Bolt.diy, now we are talking.

I experimented a bit with different APIs… well, only Google’s option was ideal since it was free, but I still didn’t like the results. I then went ahead and bought some DeepSeek API tokens since they are so cheap. That’s quite a step up, I would say.

I also experimented a bit with some local Ollama models I have, but most of them would give me a blank preview, and since I have a 4070 12GB GPU, I can’t run models good enough to say it’s worth it anyway.

But then I had the idea to download the Lovable code I generated and feed it to DeepSeek. Well, it couldn’t handle its context… damn… what a turn-off.

This got me thinking… would it be possible to somehow break the project files down into parts and feed them in a way that doesn’t go past DeepSeek’s context window? Or maybe use one model for analyzing, then provide DeepSeek the relevant information it needs to generate the code and write it where needed.
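This isn’t something bolt.diy does out of the box as far as I know, but the first idea can be sketched in plain Python: estimate each file’s token count with a rough chars-per-token heuristic (an assumption, not a real tokenizer) and group files into batches that each stay under a chosen budget. The budget and ratio below are illustrative guesses, not DeepSeek’s actual limits.

```python
# Rough heuristic: ~4 characters per token. This is an assumption,
# not an exact tokenizer -- real counts vary by model and language.
CHARS_PER_TOKEN = 4
CONTEXT_BUDGET_TOKENS = 16_000  # hypothetical; leave headroom below the model's limit

def estimate_tokens(text: str) -> int:
    """Crude token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN + 1

def chunk_project(files: dict[str, str], budget: int = CONTEXT_BUDGET_TOKENS):
    """Group (path, source) pairs into batches that each fit the budget.

    A file larger than the budget is emitted in its own batch so that
    nothing is silently dropped.
    """
    batches, current, used = [], [], 0
    for path, source in files.items():
        cost = estimate_tokens(source)
        if current and used + cost > budget:
            batches.append(current)       # close the full batch
            current, used = [], 0
        current.append((path, source))
        used += cost
    if current:
        batches.append(current)
    return batches

# Example with three fake files (contents stubbed with repeated chars):
project = {"app.tsx": "x" * 8000, "api.ts": "y" * 8000, "util.ts": "z" * 4000}
print(len(chunk_project(project)))                  # all fit in one batch
print(len(chunk_project(project, budget=2500)))     # forced into three batches
```

Each batch could then be sent as a separate prompt, with a short summary of the other batches prepended, which is roughly the "one model analyzes, another generates" idea.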

Well, I would like to know if anyone has found better solutions than the ones I am currently using… In a video I heard that bolt.diy doesn’t work with ChatGPT; is this correct? What have you found to work best so far?


Welcome @Jakkaru,

thanks for sharing your thoughts.

First, here are a few helpful links for more information:

The results depend on your prompt. Compared to Lovable and bolt.new, we don’t have a system prompt tied to specific frameworks and styles that just does it for you. You have to be precise and tell it what you want; then you get better results. See the FAQ prompting hints.

It should work with Gemini 2.0 Flash, because it has a big context size of 1M+ tokens. You can also try experimental features like context optimization, which you can activate within the settings.

ChatGPT is just the web UI from OpenAI => the OpenAI integration does work, but those models don’t have the biggest context window either, so like most models they don’t work well with bigger projects at the moment.

I personally just use Gemini 2.0 Flash, and if it can’t handle things, I ask ChatGPT o1 to find solutions and provide them to bolt/Gemini.


Thomas

Probably better to have that chat here instead of CHAT. :slight_smile:

Get on that and try with Google Gemini 2.0. You can download the folder you’ve been working on and should be able to restart OK.

I make a habit of storing my prompts in a doc so that if I have to roll back or reuse them in the future, I can. That way, starting over is quite easy and efficient. If I find prompts are leaning the wrong way and getting a bad result, I modify them, roll back, and try again.
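If you want to keep that prompt doc without copy-pasting by hand, a tiny script can append each prompt with a timestamp to a markdown log. This is just a sketch of the habit described above; the filename `prompt-log.md` and the `note` field are made up for the example.

```python
from datetime import datetime, timezone
from pathlib import Path

PROMPT_LOG = Path("prompt-log.md")  # hypothetical filename

def log_prompt(prompt: str, note: str = "") -> None:
    """Append a prompt with a UTC timestamp and optional note to the log."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    entry = f"\n## {stamp}\n{note}\n\n```text\n{prompt}\n```\n"
    with PROMPT_LOG.open("a", encoding="utf-8") as f:
        f.write(entry)

log_prompt("Build a todo app with React and Tailwind", note="v1, worked well")
```

After a roll-back you can just re-run the last prompt that worked from the log.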

Gemini 2.0 is free, so experiment to your heart’s content.

Let us know how you go.
