Free LLM Installation Tutorial?

Hello, I'm new to this. Is there documentation on how I can get Ollama and other free LLMs working in bolt.diy? I got bolt.diy running and I downloaded Ollama, but what do I do after that?

Honestly, just install a model in Ollama and make sure it's running (`ollama serve`, though by default it's usually set up as a background service). Open the URL/port it listens on (http://localhost:11434 by default) in a browser and it should say "Ollama is running".
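
For reference, a minimal terminal sketch of those steps (the model name `llama3.2` is just an example, pick whichever model suits your machine):

```bash
# pull a model into Ollama (llama3.2 is only an example model name)
ollama pull llama3.2

# start the server manually if it isn't already running as a service
ollama serve

# in another terminal, check that the API answers; it should print "Ollama is running"
curl http://localhost:11434
```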

Launch bolt.diy and the models should just show up in the provider list. It should pretty much work out of the box.

If you have issues after that, there are a bunch of video resources, etc.

Good luck.

@j.techy take a look here for a tutorial on Ollama itself.

If standalone Ollama is working on its own, you can configure it in bolt.
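
Roughly, the wiring on the bolt.diy side looks like this (just a sketch, assuming your copy of bolt.diy ships an `.env.example` with an `OLLAMA_API_BASE_URL` key; check the file for the exact variable name your build reads):

```bash
# copy the example env file and point bolt.diy at the local Ollama API
cp .env.example .env.local
echo 'OLLAMA_API_BASE_URL=http://127.0.0.1:11434' >> .env.local
```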


OK, it seems to respond now after running ollama in the terminal and selecting it in bolt, but it takes about 10 seconds per word. I'm assuming my computer is too slow to handle it at normal speed... I have a MacBook Air M1.


Yes, I think that's way too little power. I would suggest using an external provider to work with bolt.
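
If you go the external-provider route, the setup is usually just an API key in the same env file (a sketch only; the variable names below are assumptions, so check `.env.example` for the keys your build actually reads):

```bash
# example: hosted provider API keys in bolt.diy's env file
# (variable names are assumptions; confirm them in .env.example)
echo 'OPENAI_API_KEY=your-key-here' >> .env.local
echo 'GROQ_API_KEY=your-key-here' >> .env.local
```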


I have the same problem with my MacBook Air M1. I just get kernel panics and then the laptop restarts by itself. I think the hardware isn't enough to run an LLM locally.