I have installed the Docker Compose stack with the development profile.
I have set - OLLAMA_API_BASE_URL=http://ollama:11434
directly in the docker-compose.yml.
From the bolt-ai-dev container I can curl http://ollama:11434.
Sorry you are running into this @vincentk222!
So the model list in oTToDev for Ollama is just empty? Seems like you are setting up everything correctly here… do you have an Ollama instance outside of the container that you could test connecting to with http://localhost:11434? I’m curious if that works for you.
Yes, empty. I don’t understand what you mean by an instance outside the container. Ollama is running in a container as part of a Docker Compose setup with several other services. I used your development Docker Compose profile and added network configuration to connect the Ollama server to my existing Docker network. I also exposed the Ollama port.
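For illustration, a minimal sketch of what an Ollama service attached to the existing network could look like (the service name, image tag, and network name here are assumptions based on the description above, not the actual stack):

services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434" # expose the Ollama API on the host
    networks:
      - traefik_network

networks:
  traefik_network:
    external: true # reuse the network shared with bolt-ai-dev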
My Docker server is a remote server. I set OLLAMA_API_BASE_URL=http://localhost:11434, then started the Docker Compose from the VSCode terminal.
VSCode handles the port forwarding, so I can reach the server at http://localhost:5173, and there I can see the Ollama models.
However, when I access the server at http://srvllmtest.ck.test:5173/, the Ollama model list is still empty.
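One thing worth double-checking in that scenario (a guess, not a confirmed fix): a base URL of http://localhost:11434 only resolves to Ollama where that port is actually forwarded or published. A hedged sketch using the server's hostname instead, assuming srvllmtest.ck.test resolves both inside the containers and from the browser's machine, and that Ollama's port 11434 is published on that host:

services:
  bolt-ai-dev:
    environment:
      # Assumption: Ollama's API is reachable at this hostname and port
      - OLLAMA_API_BASE_URL=http://srvllmtest.ck.test:11434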
I ran the prompt: "Build a simple blog using Astro". I used a very small model, but it is really slow (several minutes to get a response). For comparison, I ran the same prompt in OpenWebUI with the same model against the same Ollama server, and it was really fast.
Is your bolt-ai-dev container behind a reverse proxy providing an HTTPS connection? Also, you will either need to set the correct CORS URL on the Ollama container or just wildcard it in the container environment, e.g. OLLAMA_ORIGINS=*
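A minimal sketch of that wildcard setting, assuming Ollama runs as a compose service named ollama:

services:
  ollama:
    image: ollama/ollama:latest
    environment:
      - OLLAMA_ORIGINS=* # allow browser requests from any origin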
If you could share your compose stack in its entirety, minus sensitive info, I might be able to help.
Here is the docker compose for bolt-ai. I have a second one for Traefik, but I do not use it as a reverse proxy for this container at the moment; I just added the network to be able to reach the Ollama container. Do you need the docker compose with Traefik?
networks:
  traefik_network:
    name: traefik_network
    driver: bridge
  default:
    driver: bridge

services:
  bolt-ai-dev:
    image: bolt-ai:development
    build:
      target: bolt-ai-development
    environment:
      - NODE_ENV=development
      - VITE_HMR_PROTOCOL=ws
      - VITE_HMR_HOST=localhost
      - VITE_HMR_PORT=5173
      - CHOKIDAR_USEPOLLING=true
      - WATCHPACK_POLLING=true
      - PORT=5173
      # - GROQ_API_KEY=${GROQ_API_KEY}
      # - HuggingFace_API_KEY=${HuggingFace_API_KEY}
      # - OPENAI_API_KEY=${OPENAI_API_KEY}
      # - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      # - OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY}
      # - GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY}
      - OLLAMA_API_BASE_URL=http://localhost:11434
      - VITE_LOG_LEVEL=${VITE_LOG_LEVEL:-debug}
      - DEFAULT_NUM_CTX=${DEFAULT_NUM_CTX:-32768}
      - RUNNING_IN_DOCKER=true
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - type: bind
        source: .
        target: /app
        consistency: cached
      - /app/node_modules
    networks:
      - traefik_network
    ports:
      - "5173:5173" # Same port, no conflict as only one runs at a time
    command: pnpm run dev --host 0.0.0.0
    profiles: ["development", "default"] # Make development the default profile
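One detail that stands out in this stack (an observation, not a confirmed fix): OLLAMA_API_BASE_URL points at localhost, which inside the bolt-ai-dev container refers to the container itself. A hedged sketch of the alternative mentioned earlier in the thread, reaching Ollama by its service name over the shared network, assuming the service is called ollama:

services:
  bolt-ai-dev:
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434 # resolve via traefik_network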
I've just put together a really rough-and-ready guide; I hope it helps. I've been avoiding doing this as there are so many different ways to go about it, but it might provide some help.