How to run an Ollama model with a locally installed OttoDev?

The only related thing I could find was this post, but it doesn't look like there's a solution there, and I'm not entirely sure it's a match for my problem - I don't know what my problem is.

I managed to get OttoDev installed (I also have Docker and Ollama installed). OttoDev was installed via a Docker container. I see Ollama as a selection in the first (left-hand) dropdown, but there are no entries in the dropdown next to it (to its right). When I created and ran the container for OttoDev I did not edit the contents of the .env.local file (I only renamed .env.example to .env.local and then created and ran the container). I'm seeing everywhere that you do not need an API key to use Ollama in OttoDev, so does that mean leave it blank? Make something up and put something in there? Where do you get what to put in there? Why isn't it working, and how do I make it work?

I got the following output when running the container. At first it only printed the part up to the server URL it is running on; everything starting with the first error was appended after I opened the server URL in my browser. It mentions something about not being able to find Ollama or something like that.

$ sudo docker run 5189b1b1b4c7

> bolt@ dev /app
> remix vite:dev "--host"

[warn] Data fetching is changing to a single fetch in React Router v7
┃ You can use the `v3_singleFetch` future flag to opt-in early.
┃ -> https://remix.run/docs/en/2.13.1/start/future-flags#v3_singleFetch
┗
  ➜  Local:   http://localhost:5173/
  ➜  Network: http://172.17.0.2:5173/
Error getting Ollama models: TypeError: fetch failed
    at node:internal/deps/undici/undici:13185:13
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.getOllamaModels [as getDynamicModels] (/app/app/utils/constants.ts:318:22)
    at async Promise.all (index 0)
    at Module.initializeModelList (/app/app/utils/constants.ts:389:9)
    at handleRequest (/app/app/entry.server.tsx:30:3)
    at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:340:12)
    at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:18)
    at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
  [cause]: Error: connect ECONNREFUSED 127.0.0.1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}
6:08:02 AM [vite] ✨ new dependencies optimized: remix-island, ai/react, framer-motion, node:path, diff, jszip, file-saver, @octokit/rest, react-resizable-panels, date-fns, istextorbinary, @radix-ui/react-dialog, @webcontainer/api, @codemirror/autocomplete, @codemirror/commands, @codemirror/language, @codemirror/search, @codemirror/state, @codemirror/view, @radix-ui/react-dropdown-menu, react-markdown, rehype-raw, remark-gfm, rehype-sanitize, unist-util-visit, @uiw/codemirror-theme-vscode, @codemirror/lang-javascript, @codemirror/lang-html, @codemirror/lang-css, @codemirror/lang-sass, @codemirror/lang-json, @codemirror/lang-markdown, @codemirror/lang-wast, @codemirror/lang-python, @codemirror/lang-cpp, shiki, @xterm/addon-fit, @xterm/addon-web-links, @xterm/xterm
6:08:02 AM [vite] ✨ optimized dependencies changed. reloading
Error: No route matches URL "/favicon.ico"
    at getInternalRouterError (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:5505:5)
    at Object.query (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:3527:19)
    at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:275:35)
    at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:24)
    at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25
Error getting Ollama models: TypeError: fetch failed
    at node:internal/deps/undici/undici:13185:13
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.getOllamaModels [as getDynamicModels] (/app/app/utils/constants.ts:318:22)
    at async Promise.all (index 0)
    at Module.initializeModelList (/app/app/utils/constants.ts:389:9)
    at handleRequest (/app/app/entry.server.tsx:30:3)
    at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:340:12)
    at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:18)
    at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
  [cause]: Error: connect ECONNREFUSED 127.0.0.1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}
No routes matched location "/favicon.ico" 
ErrorResponseImpl {
  status: 404,
  statusText: 'Not Found',
  internal: true,
  data: 'Error: No route matches URL "/favicon.ico"',
  error: Error: No route matches URL "/favicon.ico"
      at getInternalRouterError (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:5505:5)
      at Object.query (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:3527:19)
      at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:275:35)
      at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:24)
      at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25
}
No routes matched location "/favicon.ico" 
Error getting Ollama models: TypeError: fetch failed
    at node:internal/deps/undici/undici:13185:13
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.getOllamaModels [as getDynamicModels] (/app/app/utils/constants.ts:318:22)
    at async Promise.all (index 0)
    at Module.initializeModelList (/app/app/utils/constants.ts:389:9)
    at handleRequest (/app/app/entry.server.tsx:30:3)
    at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:340:12)
    at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:18)
    at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
  [cause]: Error: connect ECONNREFUSED 127.0.0.1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}
Error: No route matches URL "/favicon.ico"
    at getInternalRouterError (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:5505:5)
    at Object.query (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:3527:19)
    at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:275:35)
    at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:24)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25
Error getting Ollama models: TypeError: fetch failed
    at node:internal/deps/undici/undici:13185:13
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.getOllamaModels [as getDynamicModels] (/app/app/utils/constants.ts:318:22)
    at async Promise.all (index 0)
    at Module.initializeModelList (/app/app/utils/constants.ts:389:9)
    at handleRequest (/app/app/entry.server.tsx:30:3)
    at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:340:12)
    at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:18)
    at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
  [cause]: Error: connect ECONNREFUSED 127.0.0.1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}
No routes matched location "/favicon.ico" 
ErrorResponseImpl {
  status: 404,
  statusText: 'Not Found',
  internal: true,
  data: 'Error: No route matches URL "/favicon.ico"',
  error: Error: No route matches URL "/favicon.ico"
      at getInternalRouterError (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:5505:5)
      at Object.query (/app/node_modules/.pnpm/@remix-run+router@1.21.0/node_modules/@remix-run/router/router.ts:3527:19)
      at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:275:35)
      at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:24)
      at processTicksAndRejections (node:internal/process/task_queues:95:5)
      at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25
}
No routes matched location "/favicon.ico" 

^C

Error getting Ollama models: TypeError: fetch failed
    at node:internal/deps/undici/undici:13185:13
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.getOllamaModels [as getDynamicModels] (/app/app/utils/constants.ts:318:22)
    at async Promise.all (index 0)
    at Module.initializeModelList (/app/app/utils/constants.ts:389:9)
    at handleRequest (/app/app/entry.server.tsx:30:3)
    at handleDocumentRequest (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:340:12)
    at requestHandler (/app/node_modules/.pnpm/@remix-run+server-runtime@2.15.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:160:18)
    at /app/node_modules/.pnpm/@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
  [cause]: Error: connect ECONNREFUSED 127.0.0.1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1607:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -111,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '127.0.0.1',
    port: 11434
  }
}

I'd also like to add that when opening OttoDev in my browser I can use the "Network" address and it works, but I cannot use the "Local" address - I get a browser error and the page will not load.
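In case it matters, the way I started the container is exactly what's shown at the top of the log - just the bare image ID, with no port mapping. I'm guessing a run command that publishes the port (and points at the env file) would look something more like this, though I'm not sure that's the intended way to run it:

sudo docker run -p 5173:5173 --env-file .env.local 5189b1b1b4c7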

Any help will be greatly appreciated
Thanks

Have you configured the Ollama base URL in the .env.local file?

You need to have Ollama running and know the URL and port it listens on, which by default should be http://localhost:11434

In your env file on line 36, update it to

OLLAMA_API_BASE_URL=http://localhost:11434

Rebuild the container and it should be able to access Ollama now.
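Roughly, the rebuild would look something like this from the repo folder (the image tag here is just an example - use whatever name you built yours with):

# after editing .env.local
docker build -t ottodev .
docker run -p 5173:5173 ottodev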


I will try some things today. Thank you for your feedback. As mentioned, the Local address results in an error in Brave, but the Network one does work to launch OttoDev. I'll try plugging the Network address in there and see.

---------------------------------------- edit ----------------------------------------

Could this problem be because I need to add an entry in my /etc/hosts file? I’m on Ubuntu 24.04.

---------------------------------------- edit ----------------------------------------

So I added the Network address to the env file and rebuilt the container - nothing has changed. I still have no ability to connect with any Ollama LLM.

Here is my current .env.local file - from which I built and ran the new container.

# Rename this file to .env once you have filled in the below environment variables!

# Get your GROQ API Key here -
# https://console.groq.com/keys
# You only need this environment variable set if you want to use Groq models
GROQ_API_KEY=

# Get your HuggingFace API Key here -
# https://huggingface.co/settings/tokens
# You only need this environment variable set if you want to use HuggingFace models
HuggingFace_API_KEY=


# Get your Open AI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# You only need this environment variable set if you want to use GPT models
OPENAI_API_KEY=

# Get your Anthropic API Key in your account settings -
# https://console.anthropic.com/settings/keys
# You only need this environment variable set if you want to use Claude models
ANTHROPIC_API_KEY=

# Get your OpenRouter API Key in your account settings -
# https://openrouter.ai/settings/keys
# You only need this environment variable set if you want to use OpenRouter models
OPEN_ROUTER_API_KEY=

# Get your Google Generative AI API Key by following these instructions -
# https://console.cloud.google.com/apis/credentials
# You only need this environment variable set if you want to use Google Generative AI models
GOOGLE_GENERATIVE_AI_API_KEY=

# You only need this environment variable set if you want to use oLLAMA models
# EXAMPLE http://localhost:11434
OLLAMA_API_BASE_URL=http://172.17.0.2:5173/

# You only need this environment variable set if you want to use OpenAI Like models
OPENAI_LIKE_API_BASE_URL=

# You only need this environment variable set if you want to use Together AI models
TOGETHER_API_BASE_URL=

# You only need this environment variable set if you want to use DeepSeek models through their API
DEEPSEEK_API_KEY=

# Get your OpenAI Like API Key
OPENAI_LIKE_API_KEY=

# Get your Together API Key
TOGETHER_API_KEY=

# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
MISTRAL_API_KEY=

# Get the Cohere Api key by following these instructions -
# https://dashboard.cohere.com/api-keys
# You only need this environment variable set if you want to use Cohere models
COHERE_API_KEY=

# Get LMStudio Base URL from LM Studio Developer Console
# Make sure to enable CORS
# Example: http://localhost:1234
LMSTUDIO_API_BASE_URL=

# Get your xAI API key
# https://x.ai/api
# You only need this environment variable set if you want to use xAI models
XAI_API_KEY=

# Include this environment variable if you want more logging for debugging locally
VITE_LOG_LEVEL=debug

# Example Context Values for qwen2.5-coder:32b
# 
# DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM
# DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM
# DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM
# DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM
DEFAULT_NUM_CTX=

If you are running OttoDev in the container and Ollama on the host, you need a URL other than localhost. Cole talks about this in a recent YouTube video: https://www.youtube.com/watch?v=23s2N3ug8B8&t=4s
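One approach that often works on Linux is to point the app at the Docker bridge gateway instead of localhost, and to make sure Ollama is listening on all interfaces rather than only 127.0.0.1. A rough sketch (172.17.0.1 is the usual default bridge gateway, but it can differ on your machine, and if Ollama runs as a systemd service the variable has to be set in the service's environment instead):

# on the host: have Ollama listen on all interfaces
OLLAMA_HOST=0.0.0.0 ollama serve

# in .env.local: the host as seen from inside the container
OLLAMA_API_BASE_URL=http://172.17.0.1:11434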


I'm sorry, but I cannot find anything in that video that pertains to a solution - except, maybe, at about the 11-minute mark where he says something about not installing OttoDev via a container so that he doesn't have to install his Ollama models in the container (or something like that). I realize I'm getting in on this awesome thing EARLY, that it doesn't have a lot of documentation yet, and that the kinks are still getting worked out, but I'm just a regular user wanting to experience the OttoDev experience, and I don't have the technical expertise to figure out how to fix stuff like this. That's why I started this thread. I'd be happy to format, proofread, and submit something to be added to the documentation on how to perform these steps, if anyone would lay it out. That way it would be there for others (until changes have to be made, anyway).

My issue was that I was using Safari at first; moving to Chrome Canary sorted some things out. Might not be relevant, but I thought I would mention it!

Have you verified that Ollama is responding? Start with your machine, and if that works, test from the container. If you can do this from the terminal, then you know it's connecting and it's a configuration issue; if you are not able to connect, you need to figure that out first (something like the checks below).
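For example, something like this (the container ID is whatever docker ps shows, curl may not even be installed inside the image, and 172.17.0.1 is just a guess at the Docker bridge gateway - use whatever base URL you configured):

# from the host machine
curl http://localhost:11434/api/tags

# from inside the OttoDev container
docker exec -it <container_id> curl http://172.17.0.1:11434/api/tags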

Are you running Ollama directly on your Linux host, or did you load it in Docker as well? If the latter, you'll need to set up the networking/routing between the Docker containers to allow traffic.

Hopefully this is a good start in troubleshooting your Ollama connection issues.

I am facing the same thing with Ollama. I posted the following on a thread on the GitHub site.

I am having the same issue with the dropdown not populating for Ollama. I have Docker running inside an LXC container with Alpine Linux. I have Ollama installed on a Windows 11 computer. I can curl from the Bolt container to the Ollama server. I even tried the following curl command:

curl http://192.168.3.104:11434/api/generate -d '{ "model": "llama3.2", "prompt": "Why is the sky blue?" }'

It worked perfectly. It just won't work through the Bolt interface.
The online LLMs seem to work OK.

I did install Open WebUI as a container on the same Docker server to test connectivity to Ollama on my Win 11 computer. It worked fine.

It might be helpful to add that I was monitoring the Ollama console and I could see Bolt successfully hitting the Ollama server with /api/tags. That should retrieve the available models, but the list isn't populating correctly in Bolt, so it defaults to a model that isn't there and causes a 404 when you try to chat. So Bolt is communicating with the Ollama server; it seems that this version is just not parsing the /api/tags response correctly.
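For reference, /api/tags is the endpoint that lists the installed models, and hitting it directly returns JSON along these lines (the model name here is just the one I happen to have pulled):

curl http://192.168.3.104:11434/api/tags
# -> {"models":[{"name":"llama3.2:latest", ...}, ...]}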

I am able to run a local LLM and communicate with it via the command line.

I am running Ubuntu 24.04.

I have been using the Brave browser for a long time, but if it's necessary to use something else, I'm totally down with that.

I still haven't been able to get AI working with it. I thought I saw that the patch mentioned here has been applied, so I deleted and rebuilt the image without cache, so (I think) it should have built the latest image, which would have included the patch?
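By "rebuilt the image without cache" I mean roughly this, with whatever tag name applies (I'm not certain this is the officially recommended rebuild procedure):

git pull                               # pull the latest code, which should include the patch
docker build --no-cache -t ottodev .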

After everything, the list of available models does indeed populate and lets me select from it now, but chatting with any local model is still not actually happening. I get an error pop-up in the lower right of the page, and I took a screenshot of it (see the bottom of this post). Is there something I missed in how you're supposed to do this? Maybe I just got something out of order or missed a step?

I thought about installing it through npm (not in a container), but if the thing has access to run commands (and I'm not sure whether it does), then I think I need to hold off on that until/unless I understand it better and know how to be safe with it.

Screenshot:

Current .env (image built with production option):

# Rename this file to .env once you have filled in the below environment variables!

# Get your GROQ API Key here -
# https://console.groq.com/keys
# You only need this environment variable set if you want to use Groq models
GROQ_API_KEY=

# Get your HuggingFace API Key here -
# https://huggingface.co/settings/tokens
# You only need this environment variable set if you want to use HuggingFace models
HuggingFace_API_KEY=


# Get your Open AI API Key by following these instructions -
# https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
# You only need this environment variable set if you want to use GPT models
OPENAI_API_KEY=

# Get your Anthropic API Key in your account settings -
# https://console.anthropic.com/settings/keys
# You only need this environment variable set if you want to use Claude models
ANTHROPIC_API_KEY=

# Get your OpenRouter API Key in your account settings -
# https://openrouter.ai/settings/keys
# You only need this environment variable set if you want to use OpenRouter models
OPEN_ROUTER_API_KEY=

# Get your Google Generative AI API Key by following these instructions -
# https://console.cloud.google.com/apis/credentials
# You only need this environment variable set if you want to use Google Generative AI models
GOOGLE_GENERATIVE_AI_API_KEY=

#################### OLLAMA CONFIG ####################

# You only need this environment variable set if you want to use oLLAMA models
# EXAMPLE http://localhost:11434
OLLAMA_API_BASE_URL=http://localhost:11434/

########################################################

# You only need this environment variable set if you want to use OpenAI Like models
OPENAI_LIKE_API_BASE_URL=

# You only need this environment variable set if you want to use Together AI models
TOGETHER_API_BASE_URL=

# You only need this environment variable set if you want to use DeepSeek models through their API
DEEPSEEK_API_KEY=

# Get your OpenAI Like API Key
OPENAI_LIKE_API_KEY=

# Get your Together API Key
TOGETHER_API_KEY=

# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
MISTRAL_API_KEY=

# Get the Cohere Api key by following these instructions -
# https://dashboard.cohere.com/api-keys
# You only need this environment variable set if you want to use Cohere models
COHERE_API_KEY=

# Get LMStudio Base URL from LM Studio Developer Console
# Make sure to enable CORS
# Example: http://localhost:1234
LMSTUDIO_API_BASE_URL=

# Get your xAI API key
# https://x.ai/api
# You only need this environment variable set if you want to use xAI models
XAI_API_KEY=

# Include this environment variable if you want more logging for debugging locally
VITE_LOG_LEVEL=debug

# Example Context Values for qwen2.5-coder:32b
# 
# DEFAULT_NUM_CTX=32768 # Consumes 36GB of VRAM
# DEFAULT_NUM_CTX=24576 # Consumes 32GB of VRAM
# DEFAULT_NUM_CTX=12288 # Consumes 26GB of VRAM
# DEFAULT_NUM_CTX=6144 # Consumes 24GB of VRAM
DEFAULT_NUM_CTX=

Note: I have tried building for both production and development and changed the filename appropriately each time (i.e., .env.local when building for development and .env when building for production). The results are the same for each build.

I have checked that Ollama is running, and that it is running on http://localhost:11434/ specifically. This has consistently come back affirmative ("Ollama is running").

I run it with straight pnpm personally, or build for Cloudflare Pages, and you don't need to worry about it executing code or whatnot, because it executes exclusively in an isolated WebContainer. So it's perfectly safe, and it technically works the same whichever way you deploy it.

I think I see your issue, though: from inside Docker you need a different URL than localhost, depending on how you are running it. Someone could say better, but I believe if you use Docker you want to use http://host.docker.internal:11434

Or, if it's set up in the Docker Compose file: http://ollama:11434
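One caveat I'm aware of: on Linux, host.docker.internal doesn't resolve inside containers by default, so you usually have to map it yourself when starting the container, roughly like this (the image name is just an example; in Compose the equivalent is an extra_hosts entry with host-gateway):

docker run --add-host=host.docker.internal:host-gateway -p 5173:5173 ottodev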

Watch this for a better explanation (timestamped where you need it):


I wonder if he just said you can install and run Ollama INSIDE the container? Because it sounded like it at 10:40.

Containers never cease to fascinate me. I always seem to forget how there's this whole separate world in there (so to speak) that you can do stuff with.

Sounds like pnpm is the way to go? I'm sure I heard that right. Now that I know it's safe I'll give it a try. Thanks.

What exactly is meant in the README by the statement "…Ollama doesn't need an API key because it runs locally on your computer"? Does that mean that if the only AI I am going to use is locally installed Ollama model(s), then I can leave that blank?

Yup, you just need to provide the base URL if it's different from the default.

Then it seems like that would be http://localhost:11434/, and I don't know what this http://host.docker.internal:11434 is for. I'm guessing it's only applicable with OTHER applications (like Flowise), but I'm not sure.

At the end of the day, the best I can ascertain is that I need to change the filename from .env.example to .env.local, add http://localhost:11434/ to that file (or maybe add nothing to it at all), then use pnpm to install? I'm kind of reading and rereading things.

use pnpm to install,

A lot of people love Docker for everything, but I'm on the fence and maybe old school. Most things are easy enough to run without it, and turning everything into containers has its limitations too. That's just the choice I made; you are free to make whichever one you want.

I did the same for Flowise and n8n, and it's the same steps I used to install them on a VPS to try out, so to me it's consistent. So I don't know, it just comes down to preference and what's easiest for you.

Best of luck🤞

http://host.docker.internal:11434/ should be the address that lets the container reach your machine. And given that you'd need to get everything talking to each other, I'd assume that's the correct base URL? But I'm not honestly sure, just a guess.

But yes, just run:
npm install -g pnpm (if pnpm is not already installed)
pnpm install (in the repo folder)
pnpm run dev

Should just work.
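Putting it together with the Ollama part, the whole flow outside Docker would look roughly like this (the folder name and port are just what I'd expect, not gospel):

cd ottodev                    # wherever you cloned the repo
cp .env.example .env.local    # fill in OLLAMA_API_BASE_URL if yours isn't the default
npm install -g pnpm           # if pnpm isn't installed yet
pnpm install
pnpm run dev                  # should come up on http://localhost:5173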

Yes, OK, everything is installed fine; and now, with the pnpm install, I no longer get the model selections for Ollama (the thing there was a patch for that I thought had already been applied to the codebase). This UI element works when running it in the container, but that is not my problem. I cannot seem to get a local Ollama AI model connected with bolt.diy.