I do not know if there is something I am missing, but every time on startup I have to turn the LLMs back on in the settings. Did I accidentally turn off memory or something?
Also, I cannot get glhf.chat to run as the OpenAI-like LLM. If you are not familiar with it, it runs a lot of different LLMs and is free.
Maybe we can add it as another model option for people who do not have the GPU power for larger local LLMs.
=> Never heard of that. Can you also provide more info (link to website)?
=> OpenRouter is a great alternative and has lots of models available (https://openrouter.ai/)
Can you confirm the branch (main/stable) and the commit hash (you can get that from the debug menu in the settings window)?
Also, please check whether you have your browser cache disabled.
As for the settings-retention issue, it seems to have resolved itself, or the way I set up a batch one-click run resolved it.
I do not know how that would fix it, but today it seems to have retained the settings I made yesterday.
I am on the latest Windows developer build, so who knows, maybe there was an issue with that build.
Not that it really matters now, but I use so many different repos that I like to set up one-click runs and create icons for them so I can easily identify each one. Here is the PS1 code I created for this one.
```powershell
# Navigate to the project directory
Set-Location "S:\2025\Repos\bolt.diy"
```
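For a true one-click run, something along these lines could follow; the pnpm commands are my assumption based on bolt.diy's usual pnpm-based dev workflow and are not part of the original snippet, so adjust them to your setup.

```powershell
# One-click launcher sketch (pnpm commands assumed -- verify against the repo's README)

# Navigate to the project directory
Set-Location "S:\2025\Repos\bolt.diy"

# Install dependencies on first run (assumes pnpm is on PATH)
if (-not (Test-Path ".\node_modules")) {
    pnpm install
}

# Start the dev server
pnpm run dev
```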
Yeah, I understand. My setup is not good enough to run local LLMs like the ones there, or at least not as fast. I quit downloading and trying LLMs on my system and taking up resources better used for other things. This was a great FREE alternative.
It is OPENAI_LIKE, so it is usually easy to drop in when I am testing my coding.
In production, I use paid API models (only Anthropic, Google, and OpenAI up to this point).
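For reference, an OpenAI-like provider is normally wired up through .env.local; a minimal sketch is below. The variable names are as I recall them from bolt.diy's .env.example, and the glhf.chat base URL is an assumption, so verify both before relying on this.

```
# .env.local -- sketch only; variable names from memory of bolt.diy's .env.example
# Base URL for the OpenAI-like provider (assumed glhf.chat endpoint -- check their docs)
OPENAI_LIKE_API_BASE_URL=https://glhf.chat/api/openai/v1
# API key issued by glhf.chat
OPENAI_LIKE_API_KEY=glhf_xxxxxxxxxxxxxxxx
```

After editing .env.local, restart the dev server so the new values get picked up.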