New install Bolt.diy for PC - best guide available - Help me build it?

Hi all.

After abandoning Bolt.diy on my MacBook Pro M1 because it didn’t have enough memory, I’ve re-purposed my gaming PC as a Bolt.diy stand-alone machine. I bought 64GB of DDR4 memory to replace the sad 16GB of PC4 it had before. To run Qwen-32B you need at least 21GB, not counting overheads. I thought, let’s just get it done.

That said, I had help previously and realised there is no single guide that can get someone through this, as little issues can pop up at different steps of getting things up and running. So I thought I’d take everything I’ve learned and build a more complete guide covering the intricacies that may come into play at each step, with tips and tricks for sorting them as you go. You’ll see what I mean hopefully - there will be a high-level view with a few key steps, then a deep dive into issues that can be encountered along the way.

Oh, and I still have some issues needing resolving too - I’m sure I’ll clear those up quickly though.

FYI - support gave me the following guide which doesn’t have enough detail to get up and running…

https://stackblitz-labs.github.io/bolt.diy/

The following principles are for a desktop build…
= = = = = = = = = = = = = = = = = = =
Installing GIT
= = = = = = = = = = = = = = = = = = =

https://git-scm.com/downloads

Use the Windows installer and make sure the command-line option is ticked for ease of use.

Assuming a standard install on the C: drive, Git ends up in

C:\Program Files\Git

After it’s installed, from PowerShell or the command line you can enter the following to see if it’s working properly and accessible from multiple locations…

PS C:\Windows\system32> git --version

It should return something like…

git version 2.47.1.windows.1

= = = = = = = = = = = = = = = = = = =
Install Node (for software package management)
= = = = = = = = = = = = = = = = = = =

Trust me, get used to this as it’s a good idea. Even if you’re not a developer, just live a little…

Go to the project folder you want to work from going forward. I chose an alternate drive to save space on the system drive.

d:/coding/projects/

Use the command line to make your life a little easier than the old Windows GUI process - like I said - live a little…

Open Windows PowerShell and navigate to your folder…
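A rough sketch of those PowerShell steps, assuming Node.js was installed from https://nodejs.org/ and you want d:\coding\projects as the working folder:

node --version            # should print something like v20.x.x
npm --version             # npm comes bundled with Node
mkdir d:\coding\projects  # create the folder if it doesn't exist yet
cd d:\coding\projects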

= = = = = = = = = = = = = = = = = = =
Cloning Bolt to your machine
= = = = = = = = = = = = = = = = = = =

Clone the stable branch by entering the following…

git clone -b stable https://github.com/stackblitz-labs/bolt.diy.git

Then go there…
cd d:/coding/projects/bolt.diy

Rename the following file, changing it from
.env.example to .env.local
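If you’d rather do that rename from PowerShell, something like this should work (just a sketch, run from the cloned folder):

cd d:\coding\projects\bolt.diy
Copy-Item .env.example .env.local    # keeps the original example file and gives you a local copy to edit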

NOTE (currently working on this): edit the bottom of the file and update the ‘context length’. I’ve tried 6144 because, using the 32B version of Qwen2.5, I think this should be enough memory.

Example Context Values for qwen2.5-coder:32b

DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM

DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM

DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM

DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM

DEFAULT_NUM_CTX=
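For what it’s worth, the bottom of my .env.local currently looks something like this - 6144 is only my guess for a 24GB card, and double-check the variable names against your own copy of .env.example:

OLLAMA_API_BASE_URL=http://127.0.0.1:11434
DEFAULT_NUM_CTX=6144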

= = = = = = = = = = = = = = = = = = =
Installing PNPM
= = = = = = = = = = = = = = = = = = =
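If pnpm itself isn’t on the machine yet, one common way to get it is through npm (which came with the Node install above) - a sketch, then run the dependency install from the bolt.diy folder:

npm install -g pnpm     # installs the pnpm package manager globally
pnpm --version          # quick sanity check
cd d:\coding\projects\bolt.diy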

pnpm install

After installation you can run the development server, but you must be in your Bolt.diy working directory because the command looks for the package.json file. If you look inside it you’ll see a line called ‘dev’ - this line holds the details of how the server is to run. More on that later…

pnpm run dev

PS D:\coding\projects\bolt.diy> pnpm run dev

bolt@0.0.3 dev D:\coding\projects\bolt.diy
node pre-start.cjs && remix vite:dev

★═══════════════════════════════════════★
B O L T . D I Y
⚡ Welcome ⚡
★═══════════════════════════════════════★

📍 Current Commit Version: eb6d4353565be31c6e20bfca2c5aea29e4f45b6d
★═══════════════════════════════════════★
warn Data fetching is changing to a single fetch in React Router v7
┃ You can use the v3_singleFetch future flag to opt-in early.
┃ → Future Flags (v2.13.1) | Remix

➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h + enter to show help
WARN Constants Failed to get Ollama models: fetch failed

This is the error I’m currently getting - this is a work in progress.

Also - I entered the Ollama server location and it shows it’s not running?

Please, is there someone who can help me through this and possibly build a proper guide, as the one at the top of this post doesn’t supply enough detail to get up and running?


Hi @shawn,

Did you test your Ollama itself first? I think the guide should also cover installing and testing Ollama in the terminal to ensure it is working properly, before trying to use it within bolt.diy.

Also try to get the available models via:

http://127.0.0.1:11434/api/tags (or localhost)
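From a terminal you can do roughly the same check without the browser (assuming Ollama is on its default port):

ollama list                                        # lists the models you have pulled locally
Invoke-RestMethod http://127.0.0.1:11434/api/tags  # PowerShell; should return the same model list as JSON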

[screenshot]

Hi - I can’t find anything in the guide about Ollama and how to check it.

I have this…

I did see that there’s an expectation that Ollama is running before starting the dev server - I’ll shut everything down and restart with Ollama running first, using this?

PS D:\coding\projects\bolt.diy> ollama run qwen2.5-coder:32b
I get…

Send a message (/? for help)

After starting the dev server I get this…

But now I’m getting this also…

[screenshot]

Please help if you have a proper guide as I’ve spent 3 days chasing this now.


Hi,

As Ollama is not required for running bolt.new / bolt.diy, it’s not described in their guide, because they run it in the cloud with Claude Sonnet.

I don’t have a full guide at the moment, but I am working on one for Windows.

Your Ollama seems to run fine, and it normally also shows in the taskbar icons:
[screenshot]

Did you also configure the base URL within bolt.diy in the settings?

I don’t see this step in your screenshots.

[screenshot]

And it isn’t clear in the guides I’ve seen that this is a required step.
If I check the logs by right-clicking the Ollama icon, I get a list of files, and the last one shows the host in the details.

In a PR I raised the other day I included a link which might help you with your guide too…

You will see the mod to the package.json file from…
"dev": "node pre-start.cjs && remix vite:dev",
to
"dev": "remix vite:dev --host 0.0.0.0 --port 3000 --open",

I had entered that verbatim but I don’t think I should have been using 3000 as it confused things.
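If the aim is just to expose it on the network without moving off the default port 5173, I suspect something like this would have been the safer tweak (untested on my side, and it keeps the pre-start script):

"dev": "node pre-start.cjs && remix vite:dev --host 0.0.0.0",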

Also, rename the .env.example file to .env.local,
along with modifying the context at the bottom - but I’m not sure about this…

Example Context Values for qwen2.5-coder:32b

DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM

DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM

DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM

DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM

DEFAULT_NUM_CTX= ???

Do I have to run any commands for Ollama - like…
ollama run qwen2.5-coder:32b

I’ve entered the server details now and can see this screen has changed compared to earlier too.

But when I click on Features and switch on the Debug options, I get the following screen, which shows the server isn’t running.

I’m not sure what I’ve done wrong, but I’m happy to help others once you get me up and running. I’ve worked in IT for 45 years and still get stumped on stuff like this sometimes - just when it’s new, but then you become the expert because of the issues.


OK. This may help…

Yes, you’re missing the http:// in front of your base URL in the settings 😄
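With the default Ollama port, the full value should look something like this (and no trailing space):

http://127.0.0.1:11434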

OMG…

Still have some issues here though.

Check your dev-tools and terminal for errors. At least in the dev-tools console you should see something.

What are the dev tools and which console, please?

Press F12 within the browser tab for Bolt.

You should see something like this (go to the “Console” tab):

Yes. This was the error I was seeing earlier. I can’t work it out and I’ve been at it for 16 hours today. I can’t go to sleep till I fix this.

Thanks. Please check your base URL again to see if there is a space at the end; if so, remove it.

[screenshot]

I saw that and have no idea where it’s coming from. I thought I’d found an error with an extra space in the package.json, but that wasn’t it.

Any idea ??

I guess here:

(You need to reload the page after you change the settings.)

WOW. I wouldn’t have found that. That’s just silly. Now it looks like it partially works, but I’m getting another error.

Nice, we are one step closer 🙂

And I agree that this should not happen; the application should handle it. @thecodacus / @aliasfox, I opened a bug ticket for it: Trim spaces at the end of input text components · Issue #870 · stackblitz-labs/bolt.diy · GitHub

@shawn Do you see an error now in your terminal?

Also, I just saw that in your initial post you don’t mention your GPU. What GPU have you got? How much VRAM does it have?

I think you have to set DEFAULT_NUM_CTX in your .env.local.
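For example, for a 24GB card something along these lines - just going by the table you posted earlier:

DEFAULT_NUM_CTX=6144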


OMG. Thank you because I don’t feel like such a pain now.

Getting errors as below…

Check my last reply. I’ve added some questions; maybe you didn’t see them 🙂

Also check the Windows terminal/shell, not the dev tools.

So I tried switching to Groq, as I have an API key, and tried a simple request. It completed it, but there was no preview.
Then I switched back to Ollama and checked the console, getting a chat message error…