Error: bolt.diy not producing results (standard error message)

Hello.

I managed to install the program locally (using pnpm) and can access it via localhost:5173. I also managed to connect the providers (OpenAI - I have a paid account - and my local Ollama, which correctly lists the two Llama models I have installed).

However, when I try to interact with the program, nothing happens after I enter my prompt - the “…” loading animation loops forever, and I get no messages from the system or console indicating any errors.

Please help.

Welcome @urban.RBDR,

that sounds a bit strange. It should at least show an error somewhere.

Can you post a screenshot?

Also => did you verify that your OpenAI API key works, e.g. with curl:
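Something along these lines (replace YOUR_API_KEY; the model is just an example):

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "max_tokens": 10,
    "messages": [{"role": "user", "content": "Say hi"}]
  }'

If the key works you get a JSON completion back; if not, the error field in the response usually tells you exactly what is wrong.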

Hello.

Here is a screenshot, not sure if it will do any good. The process is stuck at this point:

When I click on the bolt.diy logo, there is a quick flash of an error (…something something fetch something…) - I have just seen it, but it disappears too quickly to take a screenshot.

I’ll gladly give you any further info, thank you so much for the help.

I have “http://127.0.0.1:11434/” under the Ollama link settings. I checked the endpoint with curl and, as far as I understand, it works - I get a response.
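For reference, the check was roughly this (reconstructed; these are the default Ollama endpoints):

curl http://127.0.0.1:11434/          # returns "Ollama is running"
curl http://127.0.0.1:11434/api/tags  # lists the installed models as JSON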

My Llama is running, and if I run the model in cmd I can talk to it. Is there something else that I am missing or not understanding correctly?

Thanks,

The model you are using is, I think, not capable enough and does not work with bolt. Please take a look at the FAQ:


https://stackblitz-labs.github.io/bolt.diy/FAQ/

As often mentioned and discussed in other topics => I am not a great fan of local models unless you have a very powerful PC that can run bigger models, which can then produce useful code within an acceptable time.

I would recommend using OpenRouter.ai with very cheap models, or using free models like those on Google atm.

Also see the General Chat. @wonderwhy.er just did a test with a 14B Qwen Coder, which worked.

Understood, but the exact same thing happens when I use the OpenAI API - on the OpenAI platform, under the dashboard, I can see that the API key I am using was accessed by bolt.diy (no other program could have accessed the freshly made key).
But the result is the same - the endless “…” animation.

Do I still need to create a server and test the API? I think that dashboard indicator should signal that the connection was made, after all…

The dashboard just shows that you tried to request something, but the request could still have ended in an error - maybe because you are not on a paid plan and don’t have permission to access this model, or whatever.

For testing with curl you don’t need to create a server. Just run it in Git Bash, or use Invoke-RestMethod instead of curl in PowerShell.

Powershell Example:

# Replace YOUR_API_KEY with your actual OpenAI API key.
Invoke-RestMethod -Uri "https://api.openai.com/v1/chat/completions" `
  -Headers @{
    "Content-Type" = "application/json";
    "Authorization" = "Bearer YOUR_API_KEY"
  } `
  -Method Post `
  -Body '{
    "model": "gpt-4o",
    "max_tokens": 10,
    "temperature": 0,
    "messages": [
      {
        "role": "user",
        "content": "What is your knowledge cut off?"
      }
    ]
  }' | ForEach-Object { $_.choices[0].message.content }  # print only the model reply

Response/Output:

I have tested the API and learned something new - funds for API usage are not included in the payment plan for ChatGPT (noob mistake, but I learned). Switching to Anthropic anyhow.

Still cannot get over the fact that Ollama is outright not working for me in bolt. I understand that it is sub-optimal, but I need to run my project via a local LLM (school project). I have tested the Llama via curl and am getting a result, and the port is correctly linked as far as the troubleshooting on this page tells me.

Thanks for helping a noob out. I am out of ideas about what to do or which model to choose.

Maybe let’s start with => what hardware specs do you have?

Here are my specs:

CPU:
Info: 8-core model: AMD Ryzen 7 7700X bits: 64 type: MT MCP cache: L2: 8 MiB
Speed (MHz): avg: 1394 min/max: 545/5573

Graphics:
Device-1: NVIDIA AD104 [GeForce RTX 4070 SUPER] driver: nvidia v: 550.120
Device-2: AMD Raphael driver: amdgpu v: kernel

Drives:
Local Storage: total: 931.51 GiB used: 72.53 GiB (7.8%)

Memory: total: 32 GiB note: est. available: 30.47 GiB used: 6.07 GiB (19.9%)

A 14B model should not melt my PC, but it is the absolute upper limit, as I understand.
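My rough math for that limit (assuming a 4-bit quantized model at about 0.6 GB per billion parameters, plus ~2 GB for context):

echo "14 * 0.6 + 2" | bc   # ≈ 10.4 GB, against the 12 GB VRAM of the RTX 4070 SUPER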

Give it a try:

Downloaded it a second ago.
I can converse with it via bash and via OpenWebUI (running through Docker).
It was listed immediately in bolt.diy - however, same result as with the other models - no response.

I think you have some CORS problems or something like that because of the Docker setup.

Can you try to run it all natively on your system, without Docker?

Or just use “LM Studio” and start it from there. You can configure it as a provider in the settings as well.
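If the settings UI does not stick, the base URLs can also be set in bolt.diy’s .env file. I am going from memory here, so verify the variable names against your .env.example:

# assumed variable names from bolt.diy's .env.example - verify locally
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
LMSTUDIO_API_BASE_URL=http://localhost:1234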

Excuse me for the confusion - only OpenWebUI is running through Docker.

Ollama is installed locally; bolt.diy was also installed locally (with pnpm).

I will now look into LM Studio, but as far as I understand it is the same type of service as OpenWebUI, no?

Ah ok, then Ollama should be fine.

But as we don’t know what the problem is, give LM Studio a try.
It’s a combination of Ollama and OpenWebUI. 2-in-1 :smiley:


You can start a server there and load your models, which will then be displayed in bolt.
You can also enable CORS there - and as I say this: maybe you are missing some parameters for Ollama that allow access from outside the terminal.
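If it is the Ollama side, the usual suspects are these environment variables (a sketch, assuming you start Ollama manually; if it runs as a systemd service, set them via systemctl edit ollama.service instead):

export OLLAMA_HOST=0.0.0.0:11434   # listen on all interfaces, not only the local terminal session
export OLLAMA_ORIGINS="*"          # allow cross-origin requests, e.g. from bolt.diy in the browser
ollama serve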

I will do this now and report back with results.

Many thanks!! Quickest support I have seen in years!

I managed to install LM Studio.
Not sure why it is not picking up my models when I point it to the folder, but when I download the model through it, it can access it - I think I do not understand where and why my Llamas are downloaded…

Anyhow, LM Studio successfully boots qwen2.5-coder:14b. I have set up localhost:1234 and linked it to bolt.diy. The model is recognised, but the system returns no results - same as before.

I can see the requests being made to Ollama (via bash) and to LM Studio (via the integrated command line).
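For what it’s worth, a direct request like this (model id taken from the log below) should show whether the server itself answers chat requests:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder-14b-instruct",
    "max_tokens": 10,
    "messages": [{"role": "user", "content": "Say hi"}]
  }'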

The LM Studio console looks like this:

2024-12-29 00:01:08 [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
2024-12-29 00:01:08 [INFO]
2024-12-29 00:01:08 [INFO] [LM STUDIO SERVER] Supported endpoints:
2024-12-29 00:01:08 [INFO] [LM STUDIO SERVER] → GET http://localhost:1234/v1/models
2024-12-29 00:01:08 [INFO] [LM STUDIO SERVER] → POST http://localhost:1234/v1/chat/completions
2024-12-29 00:01:08 [INFO] [LM STUDIO SERVER] → POST http://localhost:1234/v1/completions
2024-12-29 00:01:08 [INFO] [LM STUDIO SERVER] → POST http://localhost:1234/v1/embeddings
2024-12-29 00:01:08 [INFO]
2024-12-29 00:01:08 [INFO] [LM STUDIO SERVER] Logs are saved into /home/urban-ambrozic/.cache/lm-studio/server-logs
2024-12-29 00:01:08 [INFO] Server started.
2024-12-29 00:01:08 [INFO] Just-in-time model loading active.
2024-12-29 00:01:24 [INFO] Received GET request to /v1/models with body: {}
2024-12-29 00:01:24 [INFO]
Returning {
  "data": [
    {
      "id": "qwen2.5-coder-14b-instruct",
      "object": "model",
      "owned_by": "organization_owner"
    },
    {
      "id": "text-embedding-nomic-embed-text-v1.5",
      "object": "model",
      "owned_by": "organization_owner"
    }
  ],
  "object": "list"
}
2024-12-29 00:01:29 [INFO] Received GET request to /v1/models with body: {}
2024-12-29 00:01:29 [INFO]
Returning {
  "data": [
    {
      "id": "qwen2.5-coder-14b-instruct",
      "object": "model",
      "owned_by": "organization_owner"
    },
    {
      "id": "text-embedding-nomic-embed-text-v1.5",
      "object": "model",
      "owned_by": "organization_owner"
    }
  ],
  "object": "list"
}

Does this help at all?

Strange -
is CORS enabled in LM Studio, as seen in my screenshot?

Also, maybe just take some screenshots and paste them (bolt + dev tools open, LM Studio, the terminal where bolt is running, …).
I don’t see where the problem is yet.

Adding the screens now.
Please tell me if you need any other information through screenshots.




The dev console (bolt + F12) is missing :wink:
In the provided screenshots I don’t see anything strange. Looks normal to me.

I did a console export to txt when loading the page, and took a screenshot of the console when trying a prompt:

I don’t think I can attach a txt file here - tell me if you need any info from the page load (there are a couple of “Hydration” errors).

Hope this helps us further.