No response in chat window and no code written

I will be happy if this problem gets solved. I'm using bolt.diy with Ollama installed locally and I followed all the steps, but when I write a prompt there is no response in the chat window and no code is generated either. I'm using a MacBook Air M1 and I installed all the requirements.

Anthropic and OpenAI require account credits and a payment method. Do you have these set up? Also, are you using the stable branch?

Yes, I have the necessary setup in place:

  1. Anthropic: While I know it requires account credits and a payment method, I am not using Anthropic. Instead, I’m running Ollama locally for my needs.

  2. OpenAI: I have the paid version of OpenAI, with the API key and payment method correctly configured.

  3. Branch: I’m using the stable branch of the bolt.diy repository.

  4. Execution Attempts:

• I tried running the project both with Docker and without Docker using pnpm run dev.

• However, I’m encountering the same issue in both cases.

  5. Error Observed:

I noticed this specific error in my terminal:

emix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
    at internalConnectMultiple (node:net:1122:18)
    at afterConnectMultiple (node:net:1689:7)
    at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
  code: 'ECONNREFUSED',
  [errors]: [ [Error], [Error] ]
}

It seems like there’s a network connection issue (ECONNREFUSED) while trying to connect to some service during the setup. Could this be related to the environment configuration or something missing in the Vite setup? I would appreciate any guidance on resolving this. Let me know if you need more details!

I mentioned Anthropic because that's what's in the screenshot. You need to choose the Ollama option in the dropdown (you probably know this; I imagine the screenshot was more of a placeholder?).

And the ECONNREFUSED is telling you that there was no response from the URL or it was explicitly blocked.

Did you confirm that Ollama was running and that it works from the command line?

Also, make sure your Bolt.diy debug settings show connected.

I don’t think the issue is with Bolt.diy per se.
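A quick way to check both of those from the terminal, assuming Ollama is on its default port 11434:

ollama list                            # should print the models you have pulled
curl http://localhost:11434/api/tags   # should return JSON listing those models, not a refused connection

If either command fails or reports a refused connection, the problem is on the Ollama side rather than in Bolt.diy.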

Thanks for the reply! I appreciate the clarification and the guidance. Here’s my debug information and some observations:

Debug Information:

{
  "System": {
    "os": "macOS",
    "browser": "Firefox 133.0",
    "screen": "1680x1050",
    "language": "en-US",
    "timezone": "Europe/Berlin",
    "memory": "Not available",
    "cores": 8,
    "deviceType": "Desktop",
    "colorDepth": "30-bit",
    "pixelRatio": 2,
    "online": true,
    "cookiesEnabled": true,
    "doNotTrack": true
  },
  "Providers": [
    {
      "name": "LMStudio",
      "enabled": true,
      "isLocal": true,
      "running": false,
      "lastChecked": "2024-12-31T03:01:34.149Z",
      "responseTime": 73.69999999999709,
      "url": "http://127.0.0.1:1234"
    },
    {
      "name": "Ollama",
      "enabled": true,
      "isLocal": true,
      "running": true,
      "lastChecked": "2024-12-31T03:01:34.153Z",
      "responseTime": 76.34000000001106,
      "url": "http://localhost:11434"
    },
    {
      "name": "OpenAILike",
      "enabled": true,
      "isLocal": true,
      "running": false,
      "lastChecked": "2024-12-31T03:01:34.149Z",
      "responseTime": 72.18000000000757,
      "url": "http://localhost:4000"
    }
  ],
  "Version": {
    "hash": "4844db8",
    "branch": "main"
  },
  "Timestamp": "2024-12-31T03:01:46.604Z"
}

Observations:

  1. Ollama Provider:

• It’s running and responding on http://localhost:11434 with a response time of ~76 ms.

• I have selected Ollama in the dropdown as you mentioned.

  2. LMStudio and OpenAILike Providers:

• Both are not running, as indicated by running: false.

• Their response times and status indicate they are not actively serving requests.

  3. Connection Issue:

• It seems like the ECONNREFUSED error might not be related to Bolt.diy itself, but rather to a network connectivity issue with one of the services, or to an improper fallback when LMStudio or OpenAILike is enabled but not running (see the quick check below).
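A minimal way to see which of the configured endpoints is actually refusing connections, using the URLs from the debug output above (the /v1/models path is the standard OpenAI-compatible route that LMStudio and OpenAILike-style servers usually expose; adjust if yours differs):

curl http://127.0.0.1:1234/v1/models   # LMStudio   - "Connection refused" is expected while it is not running
curl http://localhost:11434/api/tags   # Ollama     - should return JSON with the local models
curl http://localhost:4000/v1/models   # OpenAILike - "Connection refused" is expected while it is not running

If only the two stopped providers refuse connections, disabling them in the Bolt.diy provider settings is a simple way to rule them out while testing.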


@smadihitham you are still on the “main” branch. Switch to the stable branch and try again.

git checkout stable
pnpm run dev
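If the checkout succeeds, it is worth confirming it took effect before starting the dev server:

git branch --show-current   # should print "stable" after the switch
pnpm install                # only needed if dependencies differ between branches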

I'm now using Pinokio. How do I run git checkout stable there?

I don't know exactly how that works, or whether you can change it easily on your own.

As far as I can see, the provider (Pinokio) uses the wrong branch in their install.js.

I opened an issue on their repo, so maybe they will fix it:

I’ve tried this and many other approaches, but I’ve had no success so far. I’ve installed Bolt via Docker, Pinokio, and pnpm, yet the chatbox still isn’t functioning, and no code is being displayed. Any further suggestions or insights would be greatly appreciated!




If you are on Windows, you can follow the way I am doing it in my video:

Otherwise, there are some more tutorials from Dustin:

Just saw you are using Ollama. Maybe try the Google provider instead. I don't think it's a Bolt issue; it's just the communication between Bolt and Ollama, which can be tricky, as you can see from how many topics about Ollama problems are open here.

I can only say the same as always: if your hardware is not capable of running fairly large models (around 32B), you should not use a local LLM. There is no point in my view, and you just get bad results, even if you manage to get it to write some code into files.

I think the problem is resolved now. I generated a key from GitHub and used it with the OpenAI-Like provider, and everything seems to be working fine now. Thank you for your input!
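For reference, a minimal sketch of what that configuration can look like in Bolt.diy's .env.local. The variable names here are assumed from .env.example and the base URL is the GitHub Models endpoint at the time of writing, so double-check both against your own checkout:

# .env.local - variable names assumed from bolt.diy's .env.example; verify before use
OPENAI_LIKE_API_BASE_URL=https://models.inference.ai.azure.com   # assumed GitHub Models endpoint
OPENAI_LIKE_API_KEY=<your GitHub token>                          # the key generated from GitHub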


Very cool. I have to know, is github/gpt-4o being dynamically set there? And pulled directly from the API endpoint? Or did you set it somewhere?
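One way to check would be to ask the endpoint for its model list directly. This sketch assumes the standard OpenAI-compatible /models route and the same variable names as the sketch above, which may not match how the OpenAI-Like provider actually discovers models:

curl -H "Authorization: Bearer $OPENAI_LIKE_API_KEY" \
     "$OPENAI_LIKE_API_BASE_URL/models"   # lists model IDs if the backend supports this route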