Project Load Issue: New Chat Fails with Uploaded Files but Works Without Them

**Describe the bug**

I’ve been using Chrome Canary as my primary browser. I uploaded the files from a project I started with Bolt to create a new chat, hoping to avoid any issues with AI hallucinations. However, when I load the project files to start the new chat, it doesn’t work. Interestingly, if I start a chat without loading any files, everything works perfectly. I’m not sure what’s causing the problem.

To reproduce the issue

  1. Start a chat and create a project.
  2. Download the project to your desktop.
  3. Start a new chat by loading the downloaded project.
  4. Ask the LLM a question (in my case, with OpenAI and DeepSeek).
  5. It will not process the question, and I get an error.

*I do want to note this is not the case with Gemini 2.0 Flash.*
Also, my API calls do work if it is a new chat without uploaded files. I can’t embed a photo here, or I would provide a screenshot.

Expected Behavior

For the LLM to process questions as if it were a new chat.

Models Used

  1. gpt-4o
  2. deepseek-coder

```
 INFO   stream-text  Sending llm call to Deepseek with model deepseek-coder
 DEBUG   api.chat  usage {"promptTokens":null,"completionTokens":null,"totalTokens":null}
 DEBUG   api.chat  usage {"promptTokens":null,"completionTokens":null,"totalTokens":null}
 DEBUG   api.chat  usage {"promptTokens":null,"completionTokens":null,"totalTokens":null}
 INFO   stream-text  Sending llm call to OpenAI with model gpt-4o
 INFO   stream-text  Sending llm call to OpenAI with model gpt-4
 INFO   stream-text  Sending llm call to OpenAI with model gpt-4o
 INFO   stream-text  Sending llm call to OpenAI with model gpt-4o
 DEBUG   api.chat  usage {"promptTokens":null,"completionTokens":null,"totalTokens":null}
 INFO   stream-text  Sending llm call to OpenAI with model gpt-4o
```

Here is the debug information:

```json
{
  "System": {
    "os": "Windows",
    "browser": "Chrome 133.0.0.0",
    "screen": "2560x1440",
    "language": "en-US",
    "timezone": "America/New_York",
    "memory": "4 GB (Used: 69.45 MB)",
    "cores": 24,
    "deviceType": "Desktop",
    "colorDepth": "24-bit",
    "pixelRatio": 1,
    "online": true,
    "cookiesEnabled": true,
    "doNotTrack": false
  },
  "Providers": [
    {
      "name": "LMStudio",
      "enabled": false,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2025-01-05T16:52:57.280Z",
      "url": null
    },
    {
      "name": "Ollama",
      "enabled": false,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2025-01-05T16:52:57.280Z",
      "url": null
    },
    {
      "name": "OpenAILike",
      "enabled": false,
      "isLocal": true,
      "running": false,
      "error": "No URL configured",
      "lastChecked": "2025-01-05T16:52:57.280Z",
      "url": null
    }
  ],
  "Version": {
    "hash": "be7a754",
    "branch": "stable"
  },
  "Timestamp": "2025-01-05T16:52:59.567Z"
}
```


Welcome @themostunorganized99,

> I do want to note this is not the case with Gemini 2.0 Flash.

=> You mostly answered it yourself. The other models probably have too small a context window and cannot handle this much input. I had this too; only the Gemini models were able to handle my bigger project.
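
If you want a rough sanity check of whether a project overflows a model's context window, here is a minimal sketch (my own illustration, not bolt.diy code) using the common ~4 characters-per-token heuristic. The folder path is hypothetical, and the context limits, especially the deepseek-coder one, should be verified against the provider docs:

```ts
// Rough sketch: estimate how many tokens a project folder would add to the
// prompt, using the crude ~4 characters-per-token heuristic (not a real
// tokenizer). Context limits below are approximate; verify with each provider.
import { readFileSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";

const CONTEXT_LIMITS: Record<string, number> = {
  "gpt-4o": 128_000,
  "deepseek-coder": 16_000, // older deepseek-coder API limit; check the docs
  "gemini-2.0-flash": 1_000_000,
};

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // heuristic, not exact
}

function projectTokenEstimate(dir: string): number {
  let total = 0;
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      if (entry === "node_modules" || entry === ".git") continue; // skip bulky dirs
      total += projectTokenEstimate(path);
    } else {
      total += estimateTokens(readFileSync(path, "utf8"));
    }
  }
  return total;
}

const tokens = projectTokenEstimate("./my-bolt-project"); // hypothetical path
for (const [model, limit] of Object.entries(CONTEXT_LIMITS)) {
  console.log(`${model}: project ≈ ${tokens} tokens, limit ${limit} -> ${tokens < limit ? "fits" : "overflows"}`);
}
```

If the estimate lands far above a model's limit, only a long-context model like Gemini is likely to get through the imported files.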


@leex279

How would you recommend continuing a previous project whose original chat you may have deleted?

I always synced it to the host and pushed it to git.

Then I just imported it as a folder and not with the chat, so it’s a clean session from the project files.

Maybe I’m not understanding, then. That is what I am doing: creating a clean session from the project files, and those specific LLMs won’t work. Is there any idea whether there will ever be a fix for this in the future? I only ask because I feel like I got far in my project and I won’t be able to complete it with the model I started with. Thank you for being responsive, by the way!

Yeah, but it’s not working with the models you are using at the moment.
With Gemini it should work.

There are also some problems with the newer versions. You could try to go back to v0.0.3 and try again; maybe it works better.

I think in the future it’s getting way better, because more and more providers offer prompt caching, which we can then also implement in bolt, as well as diff editing.