Deploying Bolt.diy with Cloudflare Pages (the easy way!)

I think a lot of this comes down to the system prompts, but I’m not 100% sure because it seems hit and miss. I’ve had small models do well with the prompts and large ones do poorly.

I have a “reasoning” system prompt I’ve been playing with. Might give it a test and comparison later. I think it makes a huge difference.

My APIs are not working; they do not appear in the debug information. They have credits and are active. The ones that do appear below are providers I do not use and have never used, and I do not know why they appear.

Can anyone help?

And you have set all your API keys in your .env.local file?
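For reference, here is a minimal sketch of what that file can look like. Only OPEN_ROUTER_API_KEY is confirmed in this thread; the other variable names are my best guess, so check the repo’s .env.example for the exact names and only set the providers you actually use:

```
# .env.local — sketch only; verify variable names against .env.example
OPEN_ROUTER_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxx        # OpenRouter
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxx            # Anthropic (assumed name)
GOOGLE_GENERATIVE_AI_API_KEY=xxxxxxxxxxxxxxxx        # Google Gemini (assumed name)
```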

1 Like

This is working with OpenRouter, thanks. However, it does not authenticate with GitHub when I try to connect a repo.

When I load a project and open its chat again,

  1. It takes a very long time for Bolt to load each file in the project. Is this because of the remote deployment to Cloudflare? Any info?
  2. Where are my prior chats actually saved? Is it in a local browser cookie? It remembers all of my project files and everything. Any info?
1 Like

I was having all of these problems; you saved my life, God bless you :slight_smile:
I have a problem with it, though.
I bought an API key from DeepSeek’s main website,
but when I link it to my code, sometimes it doesn’t write correct code, sometimes the website becomes very slow, and once my code gets big it stops working.

1 Like

@aliasfox
What a rockstar! Thank you very much for all your time and effort; you have helped so many. Always remember how valued you are. Thank you, thank you. Micheal

1 Like

I am a beginner and I followed the tutorial, but I don’t know what .env.local is.

Besides, when it does work, it doesn’t create the code files; it just puts the code in the Bolt chat instead.

Is there a tutorial so I can fix this .env.local?

I quickly ran into “CPU time limit” problems with the free Worker deployment; I am not sure Bolt is usable on the free plan. I upgraded to the $5/month Cloudflare plan, which increases the CPU time limit from 10 ms (very short) to 30 seconds. So far, no more CPU time limit errors.
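If you are on the Workers deployment path, I believe the paid plan’s larger CPU budget can also be pinned in the Wrangler config; this is only a sketch from memory, and Pages Functions handle limits a bit differently, so verify against the current Cloudflare docs:

```toml
# wrangler.toml (partial) — [limits] is a paid-plan Workers setting; sketch only
[limits]
cpu_ms = 30000  # per-request CPU budget in milliseconds
```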

2 Likes

One detail: the project I’m continuing was previously created in Lovable, and now I’m trying to finish it in Bolt.diy. Do you think this might be interfering with something else, besides the API?

Great guide, thank you! :slight_smile: A couple of questions:

1 - how do we update the code?
2 - how can we prevent others from accessing our page?

1 Like

You can manually click the “Sync repo” button (you would need to remove the two files mentioned in the docs for now), you can use a GitHub Action to automate this (I’m working on the instructions), and you can use Cloudflare Zero Trust to secure it behind a login. Zero Trust gives you granular access control, and the free tier allows up to 50 users.

I plan on making full documentation on this at some point.
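Until those instructions land, here is roughly what such an action could look like. Treat it as a sketch: the build command, output directory, and project name are assumptions to adapt, and the Cloudflare token and account ID would live in repo secrets.

```yaml
# .github/workflows/deploy.yml — rough sketch, adapt to your fork
name: Deploy to Cloudflare Pages
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
        with:
          version: 9                          # assumed pnpm version
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: pnpm install && pnpm run build   # build command assumed
      - uses: cloudflare/pages-action@v1
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          projectName: bolt-diy               # assumed Pages project name
          directory: build/client             # assumed build output directory
```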

2 Likes

I have bolt.diy deployed via Cloudflare and it sort of works, but I am getting this problem that pretty much stops all progress:
[Screenshot: Bolt claims it can’t read project files (Cloudflare Worker deployment)]

What’s going on?

I honestly have no idea, but with the way you deploy Bolt.diy (assuming latest) there should really be no difference in how the AI interacts with it, since that’s all mostly done through an API call. What model are you using? And did you confirm ./_go/1.md exists and that you can open it?

A test seemed to work for me (it returned the correct text) using Google, even though it has an X on the step (not sure what that’s about, but it seems to do that sometimes):

P.S. Interestingly, Llama 3.3 70B Instruct failed hard at this, while Google Gemini exp-1206 did it without issues. Very curious…

I am using Claude Sonnet 3.5 via OpenRouter.

Yes, the file I sought out exists. I had the AI test out a ‘cat’ command and the file’s text was displayed in the terminal, but Bolt couldn’t read the terminal.

Basically, the AI isn’t able to properly use the WebContainer to view the project files…

I loaded bolt.diy via VS Code dev server and the same thing is happening!

e.g.
[Screenshot: the problem continues in the local VS Code deployment]

The bug with Bolt not reading the project files occurs after I start a chat session by importing from a GitHub repo or loading up a previous chat.

So I have continued testing. I uploaded a small project folder to start a chat instead of importing from GitHub, and had the same problem.

I then tested my local VS Code deployment by simply creating a brand-new app with Bolt, changing the contents of the App.jsx file, saving, and asking Bolt to read the contents of the file:

Summary:
So bolt.diy, in both the Cloudflare and VS Code deployments, using an OpenRouter LLM, is not reading the project context. It refers to bolt_file_modifications as the only place it can receive project context.

I can’t be the only one using bolt.diy who has noticed that it doesn’t properly read the project files. I figure the problem could be the OpenRouter provider? Unfortunately, I am having trouble getting a login with Anthropic themselves, or I would do further testing.

Seeking answers!

1 Like

I am an amateur in reading code, etc.

I noticed that the file project-context.ts in the bolt.diy root is a Cloudflare Worker related file. It is possible the deletion of the project’s wrangler.toml (a step in aliasfox’s Cloudflare Worker deployment for Bolt) is causing Bolt to be unable to read its project files. I tested this by cloning a fresh bolt.diy repo from the main branch and running Bolt locally via VS Code. It turns out the problem with Bolt being unable to read project files continues with the fresh build:

The only thing different about my bolt.diy setup versus everyone else’s is that I am using OpenRouter. I have the API key in .env. My whole .env looks like this:

OPEN_ROUTER_API_KEY=sk-or-v1-9aosdopjoivjwe[fake API key]foijweofijiowef9

I hope we can figure out why Bolt isn’t able to read the project files. Again, it seems the only difference between my install and others’ is that I am using OpenRouter. Unfortunately, I can’t get any other keys to test with at the moment.

No, that file is for setup and shouldn’t cause an issue. It really doesn’t do anything, and the steps replicate many of the ones done in the wrangler.toml, with the exception of setting or overwriting the environment variables. So if you don’t want to pre-set the API keys (so that each user has to set their own), then arguably you could skip several steps and just use the wrangler.toml file (Cloudflare automatically detects it, which is why the guide removes it).

That could be a use case for many users, so I’ll address it when I re-write the docs in an official wiki or something.

However, it is important to note that Cloudflare Pages runs the static build code, so no Remix dev server is running, and therefore there could be a difference between the two versions. The “compiled” version runs faster and with optimizations, while the development version uses Vite + Remix to handle things like updating files on the fly. It’s just more resource-intensive and therefore only used for near real-time development. Arguably, your average user does not need this and would benefit from running the compiled version (which is why we should have “releases” at some point).
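For anyone wanting to compare the two locally, the commands would look roughly like this (script and path names assumed from a typical Remix + Wrangler setup; check the repo’s package.json for the real ones):

```bash
# development mode: Vite + Remix dev server, hot reload, heavier on resources
pnpm run dev

# "compiled" mode: static build, served the way Cloudflare Pages serves it
pnpm run build
npx wrangler pages dev ./build/client   # output path assumed
```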

So my question to you would be: is this behavior only on the Cloudflare version, or both? My testing showed both, but with inconsistency between models. Because if it’s both, then it is either Bolt.diy itself and/or the model being used, and not build vs. dev.

Hi,

To answer your question: the error is happening in both my Cloudflare Worker bolt.diy and my local bolt.diy via VS Code. My local bolt.diy is a simple clone of the main branch of the official bolt.diy repo, with the only modification being two lines in the .env file where I put my API keys.

To update you further, I just used my clean local bolt.diy with a direct Anthropic key instead of OpenRouter, and Bolt has the same problem of being unable to read the project files:

Is it possible a bunch of people are working on their Bolt projects while the app is broken and not loading project files (except for the recently modified list Bolt keeps talking about in my previous reply)?

I think your average user’s use case just works from memory, but this would need to be confirmed as well. I’m unfamiliar with how this might work, to be honest. So many people probably just haven’t noticed.

This opens up more questions than answers. Is the “memory” per session? And is it specific to each model? Because that would mean you might experience odd behavior if you switch between them, and maybe even stranger behavior if the session continues from where you left off while it doesn’t know you changed context. Idk.

And the AI should have access to the WebContainer content and files, so maybe it’s just “lying”, because LLMs generally don’t work like this.

Also, are you using the standard or optimized system prompts?

I am using the standard prompts.

The way I understand bolt.new/diy is that the Bolt system reads all of the available project files (note: project files can be set to be ignored) plus the special Bolt prompts to fill up Claude’s 200k context window.

So with each prompt to Bolt, Bolt is programmed to send that whole bundle of project files and its special instructions (up to the 200k context length for Claude, which is what bolt.new uses).

We all expect the project files to be fed through Bolt to the AI. That apparently isn’t happening with bolt.diy, and it’s pretty important for user projects, because otherwise the AI won’t be aware of the project’s critical systems and internal APIs.
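To illustrate the expectation (a toy TypeScript sketch of the idea only, not bolt.diy’s actual implementation): before each prompt, something like this should concatenate the project files into the message until the model’s context budget is spent.

```ts
// Toy sketch: pack project files into the prompt up to a character budget.
// Not bolt.diy's real code; the file tags and budget number are illustrative.
interface ProjectFile {
  path: string;
  content: string;
}

function buildProjectContext(files: ProjectFile[], maxChars: number): string {
  let context = "";
  for (const file of files) {
    const chunk = `<file path="${file.path}">\n${file.content}\n</file>\n`;
    if (context.length + chunk.length > maxChars) break; // budget exhausted
    context += chunk;
  }
  return context;
}

// Very roughly, a 200k-token window corresponds to several hundred thousand characters.
const context = buildProjectContext(
  [{ path: "src/App.jsx", content: "export default function App() { /* ... */ }" }],
  600_000
);
console.log(context.length);
```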

Right now there is no way to make more sophisticated apps (widgets that hook into an app’s audio and UI engine, etc.) so long as bolt.diy isn’t feeding its project context to the AI.

One simple thing that would help with what I am talking about: bolt.diy should be able to pull, or have the client tell it, how large its connected model’s context window is.

To expand on this with setting up an email access policy and access login code using Cloudflare Zero Trust:

Go to Settings > General > "Access Policy (Enabled)" > Manage

Set up your Zero Trust plan if you haven’t already (you can use Zero Trust Free; it’s free :slight_smile: )

Next, add and configure your application:

Under Policies, you can edit the existing default policy:

To keep it simple, you can just use Emails for testing.

Go back to Policies and Test:

So if you use the email access policy type, you should see a page like the one below for access:

Play around with other authentication types that fit your needs.
Read more about Cloudflare Zero Trust: Add web applications · Cloudflare Zero Trust docs
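If you would rather keep this as code than dashboard clicks, the Cloudflare Terraform provider also has Access resources that can express the same email policy. A rough sketch (resource and attribute names vary between provider versions, so double-check the docs for the version you use):

```hcl
# Sketch only — verify names against your Cloudflare Terraform provider version.
resource "cloudflare_access_application" "bolt_diy" {
  account_id       = var.cloudflare_account_id
  name             = "bolt.diy"
  domain           = "bolt.example.com"   # your Pages custom domain
  session_duration = "24h"
}

resource "cloudflare_access_policy" "allow_emails" {
  account_id     = var.cloudflare_account_id
  application_id = cloudflare_access_application.bolt_diy.id
  name           = "Allowed users"
  decision       = "allow"
  precedence     = 1

  include {
    email = ["you@example.com"]
  }
}
```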

And @aliasfox, thanks for the guide!! It worked for me; I did not have any issues.

1 Like