Hi, and thank you for this great project and all the work behind it.
I installed this fork a few days ago and I can't solve a problem.
I reinstalled React twice without any change.
I'm on a Mac M1, using a local Ollama LLM, running without Docker, using Chrome Canary, and I converted my Ollama model to extend the context with the provided Modelfile.
Code is generated well, but whatever I try to generate, I keep getting errors at execution in the terminal and no page preview. The errors differ depending on what I ask it to code, and there is no way to get any preview. What am I doing wrong? Please, if anyone could help; I've been struggling for days now.
Resources:
- Initial pnpm start (the usual commands are sketched after this list)
- Initial server start
- Bolt with the converted Llama 3.2 LLM
- Bolt
- Error 2, from the "to-do app" example:
  - ttps://ibb.co/k2dfcgw
  - ttps://ibb.co/SRJ92hk
  - ttps://ibb.co/zV4ydDC
  - ttps://ibb.co/bWxMxmj
- Error from the "Cookie material" example in the provided prompts:
  - ttps://ibb.co/bF7S7t5
- Error from my own prompt for a simple landing page:
  - ttps://ibb.co/jMS3jf4
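(For context, the install and server start were done with the fork's usual pnpm commands, roughly as below. This is a sketch assuming a standard pnpm-based setup, not the exact output shown in the screenshots.)

```
# Sketch of the initial install and dev-server start, run from the repo root.
pnpm install    # install dependencies
pnpm run dev    # start the dev server; it should print a local URL to open
```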
I tried with the larger Llama 3.1 model, and with a bigger LLM through the Groq API, without better results.
Any idea? I confess I don't know much about React, npm, etc.
Thank you in advance.
Some more info added:
I tried to replicate exactly the same example given by Cole Medin in his YouTube video called "How to use Bolt.new for FREE with local LLMs (and NO Rate Limits)". So:
- pulling qwen2.5-coder 7b
- extending the context as explained (the commands I mean are sketched after this list)
- using the exact same prompt to create a chat in Next.js
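(Roughly these commands, as a sketch: the name I give the converted model and the num_ctx value are assumptions, not necessarily what the video uses.)

```
# Sketch: pull the model, then rebuild it with a larger context window.
ollama pull qwen2.5-coder:7b

cat > Modelfile <<'EOF'
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
EOF

ollama create qwen2.5-coder-extra-ctx -f Modelfile
```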
No code window even opens; it acts as if the response were generated by a normal LLM chat interface.
Prompt
Result
But strangely, applying the "To-Do with Tailwind" prompt gives some improvement compared to my first post with Llama 3.2: code is generated and there are no errors in the terminal, but nothing shows in the preview window (at least there is an HTTP link this time).
-ttps://ibb.co/fDqbmBP
-ttps://ibb.co/t8Dv49W
(Sorry, I have to use this trick to be able to publish more than 2 hyperlinks.)
Can you put the images directly in the post instead of the short links?
You need to use bigger models… I have tested with Llama 3.2 on the smaller side and it works, but I don't get good results with other smaller models like the 7B ones.
I agree with @thecodacus (thank you for replying btw!), smaller LLMs just aren’t able to handle the larger Bolt prompting at this point. There are a lot of opportunities though to make it work better for smaller LLMs by creating prompts just for them, using agents behind the scenes for chain of thought, etc.
Hi, no, unfortunately new users aren't allowed to put them in directly, and only 2 links are allowed.
Hi and thank you.
Well, reliability didn't seem to be proportional to LLM size. I had some OK results with small LLMs and no results at all with bigger ones.
But I will keep trying. With 16 GB of memory I will be limited anyway.
Thank you, and sorry for the delay in answering.
OK, but basically, did you get any positive results, including the page preview, even with simple examples with any local LLM? Because as I showed, code is nicely generated in my case, but the preview constantly shows errors in the terminal whatever local LLM I'm using.
Yeah I’ve had some success even with 7b parameter models before, including the preview showing.
What are the more recent errors you have seen in the terminal? Are you saying it is generating good code but giving bad commands maybe?