To be honest… I’m having a nightmare… the token limits are a real pain…
I bought DeepSeek V3 API access and still have OpenAI, and I can’t effectively use either.
If I import a folder to work in that was created with Lovable, each change seems to use more tokens, which rules out both OpenAI and DeepSeek… DeepSeek is locked to 8k in most cases, and that seriously limits the output of anything worthwhile and hugely disrupts flow. Google Gemini seems like a better fit, but it actually seems pretty dumb: I’ve literally had to tell it the exact components and areas to find what I want to edit, it is rubbish at working things out, and other times it just refuses the task, citing token max.
It also sometimes improves an area on a page but often erases content that it was not asked to remove.
I don’t want to be down on the product, because I really want it to work. I’ve had success with Lovable and bolt.new, but it feels like the AI is nowhere near as smart or refined when attached to bolt.diy.
Any suggestions that could help?
It also seems to like recreating files, creating endlessly looped folder structures, and starting to edit files only to leave them half finished, or barely started with no real code.
Agree with most of it, but this is the current state of the project: bolt.diy is just a few months old and most of the advancements are just starting up.
There are already some experimental features on the main branch to tackle the context-limitation problems, but we are not yet at the point where everything just works out of the box and is super good.
Gemini works well, but yes, you have to tell it exactly what to do.
The team is working on these things, but it’s still an open source project and needs time, as all contributors do this in their free time.
If you take a look at the PRs on GitHub, you can see the progress and upcoming features. The team is also working on cleaning everything up at the moment (GitHub issues) and organizing further development.
If you are a developer and can help develop features, feel free to contribute.
I’m not at all bagging on the amazing work on bolt.diy; I’m certainly not trying to. If I’m honest, I thought it was the linked AIs that were the issue.
Is there any way around these token limits? When I tried increasing maxToken, I got an error saying it has to be between 0 and 8100 or something… even if I change the 8000 to 300,000.
It seems like with only my 8k tokens it can really only implement very small sections at once, like half a page to a page, maybe.
What do you mean by that? You can’t just change how many output tokens the model is limited to; that’s on the provider, and they set the limit. So it doesn’t matter if you change this within bolt when the provider doesn’t support it.
I also don’t understand why you can only do “half of a page”. Maybe you can share the prompts you’re trying, so we’re talking about the same thing.
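To illustrate the point about provider-side limits, here is a minimal Python sketch (not bolt.diy code; the model names and cap values are illustrative assumptions, not official numbers) of why raising a client-side maxToken setting can’t get you past what the provider enforces:

```python
# Sketch only: provider output-token caps are enforced server-side, so a
# client-side setting is only honored up to the provider's own hard limit.
# The cap values below are illustrative, not official numbers.
PROVIDER_OUTPUT_CAPS = {
    "deepseek-chat": 8192,   # roughly the "locked to 8k" mentioned above
    "gpt-4o": 16384,
}

def effective_max_tokens(model: str, requested: int) -> int:
    """Return the output-token budget the provider would actually honor."""
    cap = PROVIDER_OUTPUT_CAPS.get(model)
    if cap is None:
        raise ValueError(f"unknown model: {model}")
    if requested <= 0:
        raise ValueError("requested max_tokens must be positive")
    # Asking for more than the cap either errors out (like the
    # "between 0-8100" message) or gets clamped; we model clamping here.
    return min(requested, cap)

print(effective_max_tokens("deepseek-chat", 300_000))  # 8192, not 300,000
```

The takeaway: changing the number in bolt only moves the client-side request; the ceiling itself lives with the provider.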
I don’t know what this is. Link?
If you mean other forks of bolt.new => I just work with bolt.diy, which is what this community is about.