Error: There was an error processing your request: An error occurred

First of all, thank you OTTODEV/OTTOMATOR team for your hard work :muscle: :v: :love_you_gesture:
My issue:
I wanted to use bolt.diy locally, so I followed the instructions in @Cole Medin's YouTube video "How to Use Bolt.new for FREE with Local LLMs (And NO Rate Limits)" step by step, and Ollama is running on port 11434.
The steps I did:
Note: I have Node and Git on my machine:

git -v
// git version 2.43.0
node -v
// v18.20.4  // this is the default; I have also downloaded v22

1 - Clone the repo:

git clone https://github.com/stackblitz-labs/bolt.diy.git
cd bolt.diy

2 - Download Ollama:

curl -fsSL https://ollama.com/install.sh | sh

3 - Pull the LLM and create the model from the Modelfile (written in step 4; see the recap after it):

ollama pull qwen2.5-coder
ollama create -f Modelfile qwen2.5-coder:7b

4 - Create the Modelfile using VS Code in the repo:

 # bolt.diy/Modelfile
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
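
(To clarify the order, since steps 3 and 4 go together: the Modelfile is written first and then passed to ollama create. A minimal recap of those two steps; the custom tag name below is only an example, not necessarily the one from the video:)

# recap: pull the base model, write the Modelfile above, then build from it
ollama pull qwen2.5-coder:7b
ollama create qwen2.5-coder-bolt:7b -f Modelfile   # any new tag works; this name is just an example
ollama list   # the new tag should now appear in the list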

5 - Ensure the LLM is running:

ollama list 
// output is: 
//  NAME                    ID              SIZE      MODIFIED       
// qwen2.5-coder:latest    2bxxxx37    4.7 GB    11 seconds ago
// qwen2.5-large:7b        c00xxxxa0    4.7 GB    8 days ago
$ ollama pull qwen2.5-coder:latest
pulling manifest 
pulling 60exxxxxx07... 100% ▕████████████████▏ 4.7 GB
pulling 66bxxxxxb5b... 100% ▕████████████████▏   68 B
pulling e94xxxxx327... 100% ▕████████████████▏ 1.6 KB
pulling 83xxxxxxa68... 100% ▕████████████████▏  11 KB
pulling d9xxxxxx869... 100% ▕████████████████▏  487 B
verifying sha256 digest 
writing manifest 
success 

http://localhost:11434   # open in browser
# output is: Ollama is running
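
(The same check from the terminal, in case that is more useful — these should be the standard Ollama endpoints:)

curl http://localhost:11434/api/version   # returns the Ollama version as JSON
curl http://localhost:11434/api/tags      # lists the local models; qwen2.5-coder should be among them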

Then I followed these steps from the bolt.diy documentation to install it:

Run Without Docker

  1. Install dependencies using Terminal (or CMD in Windows with admin permissions):
pnpm install

If you get an error saying "command not found: pnpm" or similar, it means pnpm isn't installed. You can install it with:

sudo npm install -g pnpm
  2. Start the application with the command:
pnpm run dev
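
(For completeness, both servers answer a plain request from a second terminal — these are just the default ports on my machine:)

curl -I http://localhost:5173    # bolt.diy dev server (Vite)
curl http://localhost:11434      # Ollama, replies "Ollama is running"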

But I still get the error message:

There was an error processing your request: An error occurred.

The debug info:

Also note: if I use an API key for other models, it works perfectly with no errors :v:
But when I want to use the local model that I downloaded [qwen2.5-coder:7b], it returns the error shown in the image:

There was an error processing your request: An error occurred.

If anyone could help, I would be so grateful. Thank you for the great work you do to help the world :blush:

My hardware:
Ubuntu 24.04 | RAM: 32 GB DDR4 | CPU: i7-11800H | GPU: Nvidia T1200 (4 GB)


Hi @FarisAlsolmi,
first of all, thank you for your awesome, detailed post.

But two things are still missing :stuck_out_tongue:
=> Which branch are you on? (main or stable; you can also get this from the debug tab in settings)
=> Did you test Ollama without bolt, and does it respond well? I would assume your hardware is too weak to run locally hosted models that work well with bolt.
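
For example, a quick test directly in the terminal (model name taken from your ollama list output; any prompt works):

ollama run qwen2.5-coder:latest "Write a short Python function that reverses a string"
# if this answers in a reasonable time, Ollama itself is fine and the problem is on the bolt.diy side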


What is your baseUrl for Ollama set to? Because I notice the console log says "Fetch loading failed: GET http://localhost:5174", which seems wrong because that is the port of bolt.diy and not Ollama.

It looks like you have everything set correctly, so idk… it just seems wrong.
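
If you want to double-check it outside the UI: the base URL can also be set in .env.local in the repo root (variable name as in .env.example, adjust if yours differs), then restart pnpm run dev:

# bolt.diy/.env.local
OLLAMA_API_BASE_URL=http://127.0.0.1:11434   # some setups need 127.0.0.1 instead of localhost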


First, thank you @leex279 @aliasfox for your valuable feedback and for reaching out to help:

The version is stable:

Version

eb6d435(v0.0.3) - stable

The log content:

[ERROR]
12/22/2024, 5:46:05 PM
API connection failed
{
  "endpoint": "/api/chat",
  "retryCount": 3,
  "lastAttempt": "2024-12-22T14:46:05.403Z",
  "error": {
    "message": "Connection timeout",
    "stack": "Error: Connection timeout\n    at http://localhost:5173/app/components/settings/event-logs/EventLogsTab.tsx:63:48\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:24987:11)\n    at flushPassiveEffectsImpl (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:26368:11)\n    at flushPassiveEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:26325:22)\n    at commitRootImpl (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:26294:13)\n    at commitRoot (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=cc72600e:26155:13)"
  }
}
[WARNING]
12/22/2024, 5:46:05 PM
Resource usage threshold approaching
{
  "memoryUsage": "75%",
  "cpuLoad": "60%"
}
[DEBUG]
12/22/2024, 5:46:05 PM
System configuration loaded
{
  "runtime": "Next.js",
  "features": [
    "AI Chat",
    "Event Logging"
  ]
}
[INFO]
12/22/2024, 5:46:05 PM
Application initialized
{
  "environment": "development"
}

The debug info:

{
  "System": {
    "os": "Linux",
    "browser": "Chrome 131.0.0.0",
    "screen": "1920x1080",
    "language": "en-US",
    "timezone": "Asia/Riyadh",
    "memory": "4 GB (Used: 63.97 MB)",
    "cores": 16,
    "deviceType": "Desktop",
    "colorDepth": "24-bit",
    "pixelRatio": 1,
    "online": true,
    "cookiesEnabled": true,
    "doNotTrack": false
  },
  "Providers": [
    {
      "name": "Ollama",
      "enabled": true,
      "isLocal": true,
      "running": true,
      "lastChecked": "2024-12-22T14:47:14.277Z",
      "responseTime": 35.2350000012666,
      "url": "http://localhost:11434"
    },
    {
      "name": "OpenAILike",
      "enabled": true,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2024-12-22T14:47:14.242Z",
      "url": null
    },
    {
      "name": "LMStudio",
      "enabled": true,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2024-12-22T14:47:14.242Z",
      "url": null
    }
  ],
  "Version": {
    "hash": "eb6d435",
    "branch": "stable"
  },
  "Timestamp": "2024-12-22T14:47:32.367Z"
}

Yes, I made many changes, but now I have set it back to http://localhost:11434 as you can see in the debug info, and it still isn't working :sweat_smile:

How much VRAM is available…?

Ollama might be running, but an LLM like Qwen2.5 needs to be loaded into VRAM, and while you may be able to load the smaller 0.5B or 3B variants, in my experience they don't have the capability to complete even simple coding requests. The 32B is apparently right in the sweet spot, but you need about 22 GB of free VRAM to run it.

Can you show your system resources, please?
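
Something like this would show it (standard commands, nothing bolt.diy-specific):

nvidia-smi --query-gpu=memory.total,memory.used,memory.free --format=csv
ollama ps   # shows the loaded model and how much of it runs on GPU vs CPU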


7B+ "Instruct" models are the smallest models that generally interact with artifacts (the bolt.diy system, files, and terminal), with the exception of QwQ-LCoT-3B-Instruct-GGUF, which only uses about 1.5 GB of RAM and seems to work with bolt.diy.
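
If you want to try it, Ollama can run GGUF models straight from Hugging Face; the account and quant tag below are placeholders for wherever that repo is hosted, not a verified path:

ollama run hf.co/<account>/QwQ-LCoT-3B-Instruct-GGUF:Q4_K_M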


I have the same problem with my bolt.diy running on my local Windows PC.

@azamat.kayirov If you expect help, you need to provide much more information. Please create a separate topic in [bolt.diy] Issues and Troubleshooting.