Issue with loading Ollama models

Hey guys,

I am having issues with bolt.diy when trying to run my downloaded LLMs through Ollama, and would appreciate it if someone could point out what I need to do to fix my setup.

I started by cloning the ai-agents-masterclass repo, then modified the docker-compose.yml to include bolt.diy by adding these lines:

  bolt-diy:
    build:
      context: "D:/LocalLLM/bolt.diy"
      dockerfile: Dockerfile
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    ports:
      - 5173:5173
    volumes:
      - "D:/LocalLLM/bolt.diy:/app"
    environment:
      NVIDIA_VISIBLE_DEVICES: "all"
      NVIDIA_DRIVER_CAPABILITIES: "compute,utility"
    command: bash -c "pnpm install && pnpm run dev --host 0.0.0.0"

My Docker stack, in addition to n8n, Ollama, Flowise, etc., now also includes bolt.diy.

After the container setup, I entered the Ollama base URL in the provider settings; however, when I check the Debug tab, I see that Ollama is enabled but not running.

In addition, when I choose Ollama on my bolt page, no LLMs get loaded, even though I have already downloaded some.

What am I doing wrong and how can I solve these issues?

Thanks

Looks like you did not configure the host/IP/URL for Ollama correctly in the provider settings.
I guess this needs to point to your Docker hostname or IP.
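
For example, something like this, depending on where Ollama runs (the service name ollama is an assumption based on the masterclass compose stack):

  # if Ollama runs as a compose service named "ollama" in the same stack
  http://ollama:11434

  # if Ollama runs on the Docker host (Docker Desktop)
  http://host.docker.internal:11434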

@namaenonai Basically, it’s because your Docker host port doesn’t match.

Either modify the “ports” section of the Ollama service to 11434:11434 (host:container). Another host port such as 5173:11434 would also work, but then the Base URL has to match it; 11434 is the default, so Bolt.diy should detect that port automatically.
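
For reference, a minimal sketch of that mapping in docker-compose.yml (assuming the Ollama service in your stack is named ollama):

  ollama:
    ports:
      # publish Ollama's default API port on the Docker host so Bolt.diy can reach it
      - 11434:11434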

OR

Modify the “Base URL” in the Bolt.diy settings to match (you could also set this in .env.local).
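
For example, a minimal .env.local sketch (assuming the OLLAMA_API_BASE_URL variable name from bolt.diy’s .env.example; adjust the host if Ollama does not run on the Docker host):

  # .env.local in the bolt.diy repo root
  OLLAMA_API_BASE_URL=http://host.docker.internal:11434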

@leex279
Thank you for the reply.
Do you mean to replace localhost with host.docker.internal? I tried both URLs, but I still had the same outcome.

@aliasfox
Thank you for the suggestion, this actually worked ^^

However, now when checking the logs I can see that there are many API-related errors.
I also see that there is no tags file (probably a tags.ts?) in my bolt.diy path app/routes/.

This is what my PowerShell output looks like:

Then, after a few minutes, I think my Ollama session just times out and Ollama is no longer shown as running in bolt.

Could you please point me towards fixing these API issues?

I believe you put an extra slash (there shouldn’t be one at the end):
https://host.docker.internal:11434/ should be https://host.docker.internal:11434

I removed the extra slash you mentioned, but it still behaves the same as before with the trailing “/”.

Maybe try killing Bolt.diy and restarting it.

Also, can you share where you set this? Just for clarification, because there are several ways you can set things (Docker, the Bolt.diy settings, .env.local, a combination, etc.). And maybe also post the debug log from the Bolt.diy settings (it has to be enabled).

As one more troubleshooting step, I would personally make sure Ollama is running, is using the correct URL/port, and returns a response (from the command line). The quick test: browsing to http://host.docker.internal:11434 in your browser should return “Ollama is running.”

Also, you may want to just test http://localhost:11434 because this is still all just running on the same machine (I assume).
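
For example, a quick set of command-line checks (in PowerShell, curl.exe works; the container name ollama is an assumption from the masterclass stack, so adjust it if yours differs):

  # should print "Ollama is running"
  curl.exe http://localhost:11434

  # should return a JSON list of the models you have pulled;
  # if I remember right, this /api/tags endpoint is also what Bolt.diy queries to fill the model dropdown
  curl.exe http://localhost:11434/api/tags

  # list the downloaded models directly inside the Ollama container
  docker exec -it ollama ollama list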

Your last message pointed me to the right fix: I had an issue with my .env in bolt.diy.
After resetting the settings and running bolt again, the issues were fixed and now I can use it without any problems.
