Retaining settings for LLMs on new startup

I do not know if there is something I am missing, but every time on startup I have to turn the LLMs back on in the settings. Did I accidentally turn off memory or something?

Also, I cannot get glhf.chat to run as the OpenAI-like LLM provider. If you do not know it, it runs a lot of different LLMs and is free.

Maybe we can add it to the models as another option for people who do not have the GPU power for larger local LLMs.
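For reference, here is roughly how I understand the OpenAI-like provider gets pointed at a custom endpoint via the env file, so maybe I am just doing this part wrong. The OPENAI_LIKE_* variable names are my assumption based on bolt.diy's .env.example, and the URL and key below are placeholders, not glhf.chat's actual values:

```powershell
# Sketch only: point bolt.diy's OpenAI-like provider at a custom endpoint.
# Variable names assumed from bolt.diy's .env.example; URL and key are placeholders.
Set-Location "S:\2025\Repos\bolt.diy"

Add-Content -Path ".env.local" -Value @(
    "OPENAI_LIKE_API_BASE_URL=https://your-openai-compatible-host/v1",
    "OPENAI_LIKE_API_KEY=your-api-key-here"
)
```

As far as I can tell, the dev server also needs a restart after editing the env file before the provider shows up.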

Hi Savage,

Can you provide some more info please, like:

  • OS
  • Browser
  • How you run bolt.diy
  • Are you on the main or stable branch

Quoting: "Also I can not get glfy.chat to run as the OpenAI like LLM. If you do not know it runs a lot of different LLMs and is free."
=> Never heard of that. Can you also provide more info (link to website)?
=> OpenRouter is a great alternative and has lots of models available (https://openrouter.ai/)

Can you confirm the branch (main/stable) and the commit hash (you can get that from the debug menu in the settings window)?
Also, please check whether you have your browser cache disabled.

Leex, the link to glhf.chat is actually https://glhf.chat

As far as the settings-retention issue goes, it seems to have resolved itself, or the way I set up a one-click batch run has resolved it.

I do not know how that would resolve it, but today it seems to have retained the settings I made yesterday.
I am on the latest Windows developer build, so who knows, maybe there was an issue with that build.

Not that it really matters now, but I use so many different repos that I like to set up one-click runs and create icons for them so they are easy to identify. Here is the PS1 code I used for this one.

```powershell
# Navigate to project directory
Set-Location "S:\2025\Repos\bolt.diy"

# Launch Chrome Canary and open localhost
Start-Process "C:\Users\$env:USERNAME\AppData\Local\Google\Chrome SxS\Application\chrome.exe" -ArgumentList "--new-window", "http://localhost:5173"

# Start the development server
npm run dev
```

[Image: bolt_diy_icon]
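For the icon part, this is roughly how a desktop shortcut with a custom icon can be created from PowerShell. The .ps1 filename and .ico path below are just placeholders for wherever you saved the launch script and icon:

```powershell
# Sketch: create a desktop shortcut that runs the launch script with a custom icon.
# The .ps1 and .ico paths below are placeholders; point them at your own files.
$shell    = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut("$env:USERPROFILE\Desktop\bolt.diy.lnk")
$shortcut.TargetPath   = "powershell.exe"
$shortcut.Arguments    = '-ExecutionPolicy Bypass -File "S:\2025\Repos\bolt.diy\launch-bolt-diy.ps1"'
$shortcut.IconLocation = "S:\2025\Repos\bolt.diy\bolt_diy_icon.ico"
$shortcut.Save()
```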


Thanks for the link. I was sure I tested it, and it told me the URL does not exist.

Anyway, this website does not really look reliable/professional to me. I don't want to test anything there. Maybe someone else will :smiley:

Yeah, I understand. My setup is not good enough to run local LLMs like the ones there, or at least not as fast. I quit downloading and trying LLMs on my system and taking up resources better used elsewhere. This was a great FREE alternative.

It is OPENAI_LIKE, so it is usually easy to drop into my coding tests.
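A quick smoke test of an OpenAI-compatible endpoint, before wiring it into bolt.diy, can look roughly like this in PowerShell. The base URL and model name are placeholders, not the provider's real values:

```powershell
# Sketch: smoke test an OpenAI-compatible /chat/completions endpoint.
# Base URL and model name are placeholders; substitute the provider's real values.
$baseUrl = "https://your-openai-compatible-host/v1"
$apiKey  = $env:OPENAI_LIKE_API_KEY

$body = @{
    model    = "placeholder-model-name"
    messages = @(@{ role = "user"; content = "Say hello in one word." })
} | ConvertTo-Json -Depth 5

$response = Invoke-RestMethod -Uri "$baseUrl/chat/completions" -Method Post `
    -Headers @{ Authorization = "Bearer $apiKey" } `
    -ContentType "application/json" -Body $body

# Print the model's reply
$response.choices[0].message.content
```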

In production, I use paid API Models. (Only Anthropic, Google, and OpenAI up to this point)
