Any prompt I send gives me: “There was an error processing your request: An error occurred”

I haven’t been able to get bolt.diy to work. I am trying with both OpenAI and DeepSeek. I’ve tried setting the API key in the chat window, and in the .env file. I’ve disabled all the local models in settings.

So far, nothing has worked. Please help me!

Here are my logs:

[DEBUG] 2024-12-27T23:39:18.624Z - System configuration loaded
Details: {
  "runtime": "Next.js",
  "features": [
    "AI Chat",
    "Event Logging"
  ]
}

[WARNING] 2024-12-27T23:39:18.624Z - Resource usage threshold approaching
Details: {
  "memoryUsage": "75%",
  "cpuLoad": "60%"
}

[ERROR] 2024-12-27T23:39:18.624Z - API connection failed
Details: {
  "endpoint": "/api/chat",
  "retryCount": 3,
  "lastAttempt": "2024-12-27T23:39:18.624Z",
  "error": {
    "message": "Connection timeout",
    "stack": "Error: Connection timeout\n    at http://localhost:5173/app/components/settings/event-logs/EventLogsTab.tsx:63:48\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24987:11)\n    at flushPassiveEffectsImpl (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:26368:11)\n    at flushPassiveEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:26325:22)\n    at commitRootImpl (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:26294:13)\n    at commitRoot (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:26155:13)"
  }
}

[INFO] 2024-12-27T23:39:18.623Z - Application initialized
Details: {
  "environment": "development"
}

[ERROR] 2024-12-27T23:29:12.960Z - Failed to get Ollama models
Details: {
  "error": {
    "message": "Failed to fetch",
    "stack": "TypeError: Failed to fetch\n    at Object.getOllamaModels [as getDynamicModels] (http://localhost:5173/app/utils/constants.ts:380:28)\n    at http://localhost:5173/app/utils/constants.ts:468:22\n    at Array.map (<anonymous>)\n    at initializeModelList (http://localhost:5173/app/utils/constants.ts:468:9)\n    at http://localhost:5173/app/components/chat/BaseChat.tsx:107:5\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24987:11)"
  }
}

[ERROR] 2024-12-27T23:29:12.960Z - Failed to get LMStudio models
Details: {
  "error": {
    "message": "Failed to fetch",
    "stack": "TypeError: Failed to fetch\n    at Object.getLMStudioModels [as getDynamicModels] (http://localhost:5173/app/utils/constants.ts:438:28)\n    at http://localhost:5173/app/utils/constants.ts:468:22\n    at Array.map (<anonymous>)\n    at initializeModelList (http://localhost:5173/app/utils/constants.ts:468:9)\n    at http://localhost:5173/app/components/chat/BaseChat.tsx:107:5\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24987:11)"
  }
}

[ERROR] 2024-12-27T23:29:12.889Z - Failed to get Ollama models
Details: {
  "error": {
    "message": "Failed to fetch",
    "stack": "TypeError: Failed to fetch\n    at Object.getOllamaModels [as getDynamicModels] (http://localhost:5173/app/utils/constants.ts:380:28)\n    at http://localhost:5173/app/utils/constants.ts:468:22\n    at Array.map (<anonymous>)\n    at initializeModelList (http://localhost:5173/app/utils/constants.ts:468:9)\n    at http://localhost:5173/app/components/chat/BaseChat.tsx:107:5\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24987:11)"
  }
}

[ERROR] 2024-12-27T23:29:12.889Z - Failed to get LMStudio models
Details: {
  "error": {
    "message": "Failed to fetch",
    "stack": "TypeError: Failed to fetch\n    at Object.getLMStudioModels [as getDynamicModels] (http://localhost:5173/app/utils/constants.ts:438:28)\n    at http://localhost:5173/app/utils/constants.ts:468:22\n    at Array.map (<anonymous>)\n    at initializeModelList (http://localhost:5173/app/utils/constants.ts:468:9)\n    at http://localhost:5173/app/components/chat/BaseChat.tsx:107:5\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24987:11)"
  }
}

[INFO] 2024-12-27T23:29:12.886Z - Application initialized
Details: {
  "theme": "dark",
  "platform": "MacIntel",
  "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
  "timestamp": "2024-12-27T23:29:12.886Z"
}

[INFO] 2024-12-23T00:37:51.986Z - Application initialized
Details: {
  "theme": "dark",
  "platform": "MacIntel",
  "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
  "timestamp": "2024-12-23T00:37:51.986Z"
}

[ERROR] 2024-12-23T00:27:20.441Z - Failed to get LMStudio models
Details: {
  "error": {
    "message": "Failed to fetch",
    "stack": "TypeError: Failed to fetch\n    at Object.getLMStudioModels [as getDynamicModels] (http://localhost:5173/app/utils/constants.ts:438:28)\n    at http://localhost:5173/app/utils/constants.ts:468:22\n    at Array.map (<anonymous>)\n    at initializeModelList (http://localhost:5173/app/utils/constants.ts:468:9)\n    at http://localhost:5173/app/components/chat/BaseChat.tsx:107:5\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24987:11)"
  }
}

[ERROR] 2024-12-23T00:27:20.440Z - Failed to get Ollama models
Details: {
  "error": {
    "message": "Failed to fetch",
    "stack": "TypeError: Failed to fetch\n    at Object.getOllamaModels [as getDynamicModels] (http://localhost:5173/app/utils/constants.ts:380:28)\n    at http://localhost:5173/app/utils/constants.ts:468:22\n    at Array.map (<anonymous>)\n    at initializeModelList (http://localhost:5173/app/utils/constants.ts:468:9)\n    at http://localhost:5173/app/components/chat/BaseChat.tsx:107:5\n    at commitHookEffectListMount (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:23793:34)\n    at commitPassiveMountOnFiber (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25034:19)\n    at commitPassiveMountEffects_complete (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:25007:17)\n    at commitPassiveMountEffects_begin (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24997:15)\n    at commitPassiveMountEffects (http://localhost:5173/node_modules/.vite/deps/chunk-RMBP4JJR.js?v=a2728b97:24987:11)"
  }
}

[INFO] 2024-12-23T00:27:20.421Z - Application initialized
Details: {
  "theme": "dark",
  "platform": "MacIntel",
  "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36",
  "timestamp": "2024-12-23T00:27:20.421Z"
}

[screenshot]


{
  "System": {
    "os": "macOS",
    "browser": "Chrome 131.0.0.0",
    "screen": "1512x982",
    "language": "en-US",
    "timezone": "America/Mazatlan",
    "memory": "4 GB (Used: 136.63 MB)",
    "cores": 11,
    "deviceType": "Desktop",
    "colorDepth": "30-bit",
    "pixelRatio": 2,
    "online": true,
    "cookiesEnabled": true,
    "doNotTrack": false
  },
  "Providers": [
    {
      "name": "Ollama",
      "enabled": true,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2024-12-23T06:55:39.099Z",
      "url": null
    },
    {
      "name": "OpenAILike",
      "enabled": true,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2024-12-23T06:55:39.099Z",
      "url": null
    },
    {
      "name": "LMStudio",
      "enabled": true,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2024-12-23T06:55:39.099Z",
      "url": null
    }
  ],
  "Version": {
    "hash": "eb6d435",
    "branch": "stable"
  },
  "Timestamp": "2024-12-23T06:55:48.958Z"
}

Thanks for posting 🙂

Can you please also add a screenshot of the terminal output where you started bolt with “pnpm run dev”, as well as a screenshot of the DevTools console (F12)?

Also please check if OpenAI works without bolt, just with a curl request:

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "max_tokens": 8192,
    "temperature": 0,
    "messages": [
      {
        "role": "user",
        "content": "Your prompt here"
      }
    ]
}'

Put in your API key.
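If you want to sanity-check DeepSeek the same way, its API is OpenAI-compatible, so a similar request should work (endpoint and model name taken from DeepSeek’s docs; double-check them there):

curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_DEEPSEEK_API_KEY" \
  -d '{
    "model": "deepseek-chat",
    "messages": [
      {"role": "user", "content": "What is 2+2?"}
    ]
  }'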

Here they are:

[screenshots]

That was me editing the .env to add the API keys there.

Thanks, I don’t see anything bad in here so far. Maybe just this one:
[screenshot]

@aliasfox @thecodacus do you know if this has a negative impact?

@yendi please test out the curl (I updated it to include a prompt as well)

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer MYKEY" \
  -d '{
    "model": "gpt-4",
    "max_tokens": 8192,
    "temperature": 0,
    "messages": [
      {
        "role": "user",
        "content": “What is 2+2”
      }
    ]
}'

The curl output is this:

{
    "error": {
        "message": "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com.)",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}

Are you sure you copied it correctly in the terminal? One common cause of that error is the quotes: if the straight quotes in the JSON body get converted to curly “smart quotes” somewhere along the way, the payload is no longer valid JSON.

Here it is again with the original curl from OpenAI itself:

curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
        "model": "gpt-4o",
        "store": true,
        "messages": [
            {"role": "user", "content": "write a haiku about ai"}
        ]
    }'

Here it is also as a one-liner, in case the multi-line version causes problems for you:

curl https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer YOUR_OPENAI_API_KEY" -d '{"model": "gpt-4o", "max_tokens": 8192, "temperature": 0, "messages": [{"role": "user", "content": "how are you today?"}]}'

https://platform.openai.com/docs/overview?lang=curl

You were right, now this is the response:

{
  "id": "chatcmpl-AjF4oBSdtQ1z8CTGKKpkzuowaV9fD",
  "object": "chat.completion",
  "created": 1735346078,
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Silent circuits hum,  \nWisdom flows from code's deep glow—  \nDigital sunrise.",
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 19,
    "total_tokens": 32,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": "fp_d28bcae782"
}

Looking at the logs, you’re running out of system memory: you only have 4 GB of RAM available, and the system likely uses nearly all of it, so you don’t have enough overhead to run the bolt.diy development server (Remix + Vite).

"memory": "4 GB (Used: 136.63 MB)",
[WARNING] 2024-12-27T23:39:18.624Z - Resource usage threshold approaching
Details: {
  "memoryUsage": "75%",
  "cpuLoad": "60%"
}

So I’d suggest trying my deployment steps for publishing to Cloudflare Pages, which will run an optimized/compiled version in the cloud and not use your system resources.
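The flow is roughly this (a sketch: the build script and the output path are assumptions based on a typical Remix + Vite setup, so follow the guide for the exact steps):

# Build the production bundle
pnpm run build

# Deploy the client build to Cloudflare Pages via Wrangler (project name is just an example)
npx wrangler pages deploy ./build/client --project-name bolt-diy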

[screenshot]
I have 18 GB… why is it not seeing this?


Hmm, curious. Are you running this in a container?

Edit: It appears that this may be an issue with Mac when running Docker; see “How can I increase this limit of 4gb”. Might have to dig into this deeper, but it might be easier to try a different method.

“On Mac, Docker is not native. It is a lighter virtual machine, a hypervisor. It has overhead in execution, memory, and switching, somewhere in between native and a virtual machine.”

This appears to be a common issue on Mac with Docker and the way it runs through a VM. If you are using Docker Desktop, you can raise the VM’s memory limit under Settings > Resources > Memory.

Interesting, I’ll dig deeper into that as well. Meanwhile, I guess I’ll work tomorrow on publishing in Cloudflare Pages, I’ve seen your instructions and they seem very thorough, thank you! I’ll let you know how it goes, and if I find anything else that can make it work locally.


You should just be able to run it with:
git clone https://github.com/stackblitz-labs/bolt.diy
npm install -g pnpm
pnpm install
pnpm run dev

And either copy .env.example to .env.local and add your keys there, or add them through the UI.
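For reference, the two relevant entries look something like this (variable names as I remember them from .env.example; double-check yours, values redacted):

# .env.local — read by the dev server on startup (names assumed from .env.example)
OPENAI_API_KEY=sk-...
DEEPSEEK_API_KEY=sk-...

Restart the dev server after editing so the new values get picked up.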

This is what I did to install it locally. Pretty straightforward.

So you didn’t use Docker?

I’ve been reading more about it, and no, I definitely didn’t use it. I used the easy install method that you mentioned in the previous comment. Today I restarted my computer, made sure to close every app, and monitored memory usage. Then I restarted the server.

No other warnings or errors were displayed, but I get the same result every time I submit anything to the chat.

Can you please try adding the API key in the UI and test?

I quickly tested on my instance with the API key in the .env file and it did not work. If I put it in the UI, it works.

Yes, I have tested it both ways, just now even, and still the same result.

Hm, really strange.

I don’t see where the problem is, so we can just use trial & error:

=> Try another provider, like Google with “Gemini 2.0 Flash”. It’s free at the moment. You can just choose the Google provider and click “Get API Key”. That way we can find out whether it’s an OpenAI problem or a more general problem on your end.
=> Try another browser, if not already done
=> Try the “main” branch instead of stable with “git checkout main” (see the snippet below)
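For the last one, the full sequence is just plain git/pnpm: switch branches, pull the latest, reinstall, and restart the dev server:

# Switch to the main branch and refresh dependencies before restarting
git checkout main
git pull
pnpm install
pnpm run dev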

Thank you for your input! I’ve tried DeepSeek (I also purchased some credits there), same issue. However, I just tried Google, as you suggested, and it actually gave me a response, not the error. 🤔

Do you think it has anything to do with low credit thresholds? That is the only thing I can think of. Is there a minimum amount of credits that needs to be available for bolt to do something? I have around 2 USD left in my OpenAI account, and the same in DeepSeek. I bought the smallest amount possible there for testing purposes.