Docker on Linux Does Not Work

Initial Setup

Docker does not seem to work at all on Linux. Instructions followed:

git clone https://github.com/coleam00/bolt.new-any-llm.git
cd bolt.new-any-llm
vim .env.local # Add API keys
cp .env.local .env
docker build . --target bolt-ai-production -t bolt-ai:production
docker-compose --profile production -f docker-compose.yaml up -d
  • I have tested this by adding (so far) Ollama, Mistral, and Google. None of them are detected or working.

The following logs are produced by the docker container initially:

2024-11-23 19:22:33 bolt-ai-1  | 
2024-11-23 19:22:33 bolt-ai-1  | > bolt@ dockerstart /app
2024-11-23 19:22:33 bolt-ai-1  | > bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings --ip 0.0.0.0 --port 5173 --no-show-interactive-dev-session
2024-11-23 19:22:33 bolt-ai-1  | 
2024-11-23 19:22:33 bolt-ai-1  | ./bindings.sh: line 12: .env.local: No such file or directory
2024-11-23 19:22:36 bolt-ai-1  | 
2024-11-23 19:22:36 bolt-ai-1  |  ⛅️ wrangler 3.63.2 (update available 3.90.0)
2024-11-23 19:22:36 bolt-ai-1  | ---------------------------------------------
2024-11-23 19:22:36 bolt-ai-1  | 
2024-11-23 19:22:37 bolt-ai-1  | ✨ Compiled Worker successfully
2024-11-23 19:22:38 bolt-ai-1  | [wrangler:inf] Ready on http://0.0.0.0:5173
[wrangler:inf] - http://127.0.0.1:5173
[wrangler:inf] - http://172.18.0.2:5173
⎔ Starting local server...
  • Note: I have tested with both localhost and host.docker.internal as the base URL for Ollama. Neither worked (a quick reachability check is sketched below).
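
For what it’s worth, here is a minimal reachability check that can be run from inside the container (just a sketch, assuming Node 18+ with the global fetch is available in the image; /api/tags is Ollama’s standard model-list endpoint, nothing bolt-specific):

// check-ollama.mjs — hypothetical helper, not part of the repo.
// Run inside the container with: node check-ollama.mjs
const baseUrl = process.env.OLLAMA_API_BASE_URL ?? 'http://host.docker.internal:11434';

try {
  const res = await fetch(`${baseUrl}/api/tags`);
  const data = await res.json();
  console.log(`Ollama reachable at ${baseUrl}, models found:`, data.models?.length ?? 0);
} catch (err) {
  console.error(`Ollama NOT reachable from this container at ${baseUrl}:`, err);
}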

Trying to Fix Environment Variables

Because the .env.local file is not being detected, I figured I would try to create a bind-mount volume in the docker compose file. This fixed the .env detection issue but gave me mixed results on my PC and laptop (both running the same OS).

The following was added to the docker compose file for the production profile:

volumes:
  - type: bind
    source: .env.local
    target: /app/.env.local

This changed the logs to the following:

2024-11-23 19:39:41 bolt-ai-1  | > bolt@ dockerstart /app
2024-11-23 19:39:41 bolt-ai-1  | > bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings --ip 0.0.0.0 --port 5173 --no-show-interactive-dev-session
2024-11-23 19:39:41 bolt-ai-1  | 
2024-11-23 19:39:44 bolt-ai-1  | 
2024-11-23 19:39:44 bolt-ai-1  |  ⛅️ wrangler 3.63.2 (update available 3.90.0)
2024-11-23 19:39:44 bolt-ai-1  | ---------------------------------------------
2024-11-23 19:39:44 bolt-ai-1  | 
2024-11-23 19:39:44 bolt-ai-1  | ✨ Compiled Worker successfully
2024-11-23 19:39:44 bolt-ai-1  | Your worker has access to the following bindings:
2024-11-23 19:39:44 bolt-ai-1  | - Vars:
2024-11-23 19:39:44 bolt-ai-1  |   - GROQ_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - HuggingFace_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - OPENAI_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - ANTHROPIC_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - OPEN_ROUTER_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - GOOGLE_GENERATIVE_AI_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - OLLAMA_API_BASE_URL: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - OPENAI_LIKE_API_BASE_URL: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - DEEPSEEK_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - OPENAI_LIKE_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - MISTRAL_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - COHERE_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - LMSTUDIO_API_BASE_URL: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - XAI_API_KEY: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  |   - VITE_LOG_LEVEL: "(hidden)"
2024-11-23 19:39:44 bolt-ai-1  | [wrangler:inf] Ready on http://0.0.0.0:5173
[wrangler:inf] - http://127.0.0.1:5173
[wrangler:inf] - http://172.18.0.2:5173
⎔ Starting local server...

This had a different effect on each platform:

  • PC: The environment variables were detected; the models were not.
  • Laptop: The environment variables were detected; the models were detected.

However, on both platforms, I could not send a single message. They all failed with an error:

There was an error processing your request: No details were returned

Because nothing was working, I decided to just try a plain Mistral API key, and that actually did work. However, I immediately got this error in the bolt terminal:

Failed to spawn bolt shell

Failed to execute 'postMessage' on 'Worker': SharedArrayBuffer transfer requires self.crossOriginIsolated.

I’m not sure how so many people are using this, but it’s not working in the slightest for me.

I have bolt and Ollama both installed on my server as well. I set the .env and can access both from my local machine via their web interfaces. Both are on the same network, but I get the same result. I set it to Ollama in the env, but still get the same error as well.

I think what we came to in the other thread is that it’s http vs https.
If you are serving it over http, Chrome blocks usage of SharedArrayBuffer, which is needed to communicate with the web workers that the WebContainer runs in.
The reason it works on localhost without https is that localhost is an exception for Chrome.

Can you guys try that?
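
A quick way to check is to open devtools on the Bolt tab and run this in the console (plain browser JS, nothing repo-specific; just a sketch of the check):

// If crossOriginIsolated logs false, the postMessage/SharedArrayBuffer error above is expected.
// It only becomes true on a secure context (https, or the localhost exception)
// that is served with the COOP/COEP isolation headers.
console.log('crossOriginIsolated:', self.crossOriginIsolated);
console.log('SharedArrayBuffer available:', typeof SharedArrayBuffer !== 'undefined');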

I figured out how to get the models to show. It was actually a very simple fix that a lot of people must be missing (myself included at first).

The new issue is the second half of the original post. I have seen so many people showing the models listed in their videos, images, etc., but no one is actually running them. I have not seen a single person actually use Ollama, only list the models in the drop-down. Trying to use the custom models is not currently working for me.

It requests Sonnet 3.5 for every request. I am going to try changing it in the codebase when I get home, but it’s definitely still an issue.

I am not sure if you saw this article, and I am not sure if this is your issue, but it’s worth a look: Failed to spawn bolt shell - Failed to execute 'postMessage' on 'Worker': SharedArrayBuffer transfer requires self.crossOriginIsolated

@grk9993 had a possible solution.

What did you do to fix the issue of it trying to default to Claude Sonnet every time you select an Ollama model? That’s where I’m at now.

I’ve done essentially everything I’ve seen on this forum.

From the line below it seems Claude-3.5-… is the default model loaded if no other model is found. Make sure you get the Ollama models working and this may fix your issue. As stated, I am new to Linux and may not be of much help. Here is the relevant part of the file:

import type { ModelInfo, OllamaApiResponse, OllamaModel } from './types';
import type { ProviderInfo } from '~/types/model';

export const WORK_DIR_NAME = 'project';
export const WORK_DIR = `/home/${WORK_DIR_NAME}`;
export const MODIFICATIONS_TAG_NAME = 'bolt_file_modifications';
export const MODEL_REGEX = /^\[Model: (.*?)\]\n\n/;
export const PROVIDER_REGEX = /\[Provider: (.*?)\]\n\n/;
export const DEFAULT_MODEL = 'claude-3-5-sonnet-latest';

const PROVIDER_LIST: ProviderInfo[] = [
  {
    name: 'Anthropic',
    staticModels: [
      ...

File location: bolt.new-any-llm/app/utils/constants.ts

I’m not sure what happened, but I launched the dev version and it suddenly started working. I did add in some of what you said, so maybe it was that; I don’t know. Thanks either way! I have literally spent days trying to get Ollama to work and was starting to believe everyone was lying, lol.

UPDATE:

I think I figured out what’s causing all of the issues here. I decided to manually debug it by going to the file you mentioned earlier, /app/utils/constants.ts, and then to the getOllamaModels() function. I am about 90% sure this is the cause. I copy-pasted the portion that returns a value and put it above the return as a console.log, purely for debugging. It turns out that fixed it. I asked Cursor why that might be happening, and it said that this function is causing a race condition: because it’s asynchronous, the models are not loaded yet, so the app falls back to the default model of Claude Sonnet 3.5. I changed it to the following (per Cursor’s instructions) and it’s working fine now.

Original:

async function getOllamaModels(): Promise<ModelInfo[]> {
  try {
    const baseUrl = getOllamaBaseUrl();
    const response = await fetch(`${baseUrl}/api/tags`);
    const data = (await response.json()) as OllamaApiResponse;
    return data.models.map((model: OllamaModel) => ({
      name: model.name,
      label: `${model.name} (${model.details.parameter_size})`,
      provider: 'Ollama',
      maxTokenAllowed: 8000,
    }));
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
  } catch (e) {
    return [];
  }
}

With console.log:

async function getOllamaModels(): Promise<ModelInfo[]> {
  try {
    const baseUrl = getOllamaBaseUrl();
    const response = await fetch(`${baseUrl}/api/tags`);
    const data = (await response.json()) as OllamaApiResponse;
    console.log(data.models.map((model: OllamaModel) => ({
      name: model.name,
      label: `${model.name} (${model.details.parameter_size})`,
      provider: 'Ollama',
      maxTokenAllowed: 8000,
    })));
    return data.models.map((model: OllamaModel) => ({
      name: model.name,
      label: `${model.name} (${model.details.parameter_size})`,
      provider: 'Ollama',
      maxTokenAllowed: 8000,
    }));
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
  } catch (e) {
    return [];
  }
}

Cursor-Proposed Fix:

async function getOllamaModels(): Promise<ModelInfo[]> {
  try {
    const baseUrl = getOllamaBaseUrl();
    const response = await fetch(`${baseUrl}/api/tags`);
    const data = (await response.json()) as OllamaApiResponse;
    
    // Ensure data is fully resolved before mapping
    const models = await Promise.resolve(data.models);
    
    // Filter out potentially problematic models and ensure proper formatting
    return models
      .filter(model => {
        // Skip models that might cause GGML assertion errors
        const skipPatterns = ['vision', 'multimodal'];
        return !skipPatterns.some(pattern => model.name.toLowerCase().includes(pattern));
      })
      .map((model: OllamaModel) => ({
        name: model.name,
        label: `${model.name} (${model.details.parameter_size || 'Unknown size'})`,
        provider: 'Ollama',
        maxTokenAllowed: 8000,
      }));
  } catch (e) {
    console.warn('Failed to fetch Ollama models:', e);
    return [];
  }
}
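
To make the supposed race concrete, here is a tiny self-contained sketch of the pattern as I understand it. The names mirror constants.ts (ModelInfo, DEFAULT_MODEL, getOllamaModels), but the wiring is my own simplification, not the actual bolt code:

// Hypothetical illustration of the race; runnable on its own with ts-node or Deno.
type ModelInfo = { name: string; label: string; provider: string; maxTokenAllowed: number };

const DEFAULT_MODEL = 'claude-3-5-sonnet-latest';
let MODEL_LIST: ModelInfo[] = []; // stays empty until the async fetch resolves

async function getOllamaModels(): Promise<ModelInfo[]> {
  await new Promise((resolve) => setTimeout(resolve, 50)); // stand-in for fetch(`${baseUrl}/api/tags`)
  return [{ name: 'qwen2.5-coder', label: 'qwen', provider: 'Ollama', maxTokenAllowed: 8000 }];
}

async function initializeModelList(): Promise<ModelInfo[]> {
  MODEL_LIST = await getOllamaModels();
  return MODEL_LIST;
}

// Racy: the fetch is kicked off but not awaited, so a synchronous read right
// afterwards still sees an empty list and falls back to the default model.
initializeModelList();
console.log(MODEL_LIST.find((m) => m.name === 'qwen2.5-coder')?.name ?? DEFAULT_MODEL); // claude-3-5-sonnet-latest

// Awaiting the list before choosing a model avoids the fallback.
initializeModelList().then((models) => {
  console.log(models.find((m) => m.name === 'qwen2.5-coder')?.name ?? DEFAULT_MODEL); // qwen2.5-coder
});

If the real code reads its model list before that promise has resolved, the default would win every time, which would match what I’m seeing.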

If someone with more experience wants to chime in, please do so!

Thanks, this seems plausible. I just tried the Cursor-proposed fix and it still won’t work for me. In my case I am accessing the Bolt web UI from my laptop, while the production server and Ollama are running on my Linux server.

I might try breaking this out into its own standalone web app and running it both inside and outside of a Docker container to see what it does. In React development I’ve run into these async/await issues before, and they can be tricky.

I ended up swapping to the development version instead of production. Try swapping to development, run it, then paste in the fix and save, and let it refresh (all without restarting anything manually). This fix assumes you got past the part where the models aren’t showing at all, though. For me, that had to do with Chrome having issues: I basically needed to use localhost in the URL in my browser (not in the .env file) instead of 0.0.0.0 or 192.168.0.1, etc.

I am lost when it comes to Linux, but @mahoney may be able to help incorporate this if it works.

OK, your best bet is not to install it yourself if you don’t understand Linux too well. Paste the code into Claude and tell it to write you a bash script to set the whole thing up for you. *** KEY: tell it you want a persistent Postgres database to store your chats in, along with your .env. ***

Your final comment should be: “I don’t want to type anything into the keyboard apart from: 1. nano setup.sh 2. chmod +x setup.sh 3. ./setup.sh”

In theory this really should work. If it does, please share the script with the community so it can be incorporated going forward.

I did some more debugging using console.log statements and a lot of docker build/compose steps. I found that despite pointing bindings.sh at .env.local, the code still does not believe that OLLAMA_API_BASE_URL is being set.

In this bit of code in app/utils/constants.ts it keeps reverting the baseUrl back to localhost:11434:

const getOllamaBaseUrl = () => {
  const defaultBaseUrl = import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434';

I hardcoded this at the end of getOllamaBaseUrl to return my server name:
http://fractal:11434
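
Roughly, the hack was just tacking a hard return onto the end of that function (a debugging shortcut, obviously not a proper fix; fractal is simply my server’s hostname on the LAN):

const getOllamaBaseUrl = () => {
  // ...existing env/default logic left untouched above...
  return 'http://fractal:11434'; // force my LAN Ollama host, bypassing import.meta.env
};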

I can then see in getOllamaModels() that it displays the JSON for each of the models loaded in my server’s Ollama instance; however, the UI dropdown still refuses to show anything and reverts back to Claude Sonnet.

Even after hardcoding an entry in PROVIDER_LIST the dropdown is still empty:

{
  name: 'Ollama',
  staticModels: [{ name: 'qwen2.5-coder', label: 'qwen', provider: 'Alibaba company', maxTokenAllowed: 8000 }],
  getApiKeyLink: 'Download Ollama on macOS',
  labelForGetApiKey: 'Download Ollama',
  icon: 'i-ph:cloud-arrow-down',
},

I think I’ve had it for today on this. If anybody else has any idea I’m all ears.

This seems to work with far less fuss outside of a Docker container, and when everything is entirely enclosed on just my laptop. There seems to be some adverse interaction with Docker, and secondly, even when statically defining the Ollama model I’m still not able to get it to work.

Can you post your:

  1. .env.local file and .env file
  2. Command you’re using to launch the docker container
  3. The output of the docker logs when you first run the container
  4. The output of the console in the tab when you first load it in your browser (as well as with a request)
  5. The url you’re using in your browser to access your docker container
  6. The browser you’re using

I had to overcome all of these issues too to get it working. I didn’t think it was possible.

You’re correct that I didn’t run Ollama, and from what I’ve seen it almost seems that should be included as a prerequisite. I haven’t had time to do a third round, but I suspect getting Ollama running will fix many issues.

The other significant issue I think needs to be resolved is this SSL/advanced-headers requirement for the browser. That’s just silly, and whatever is causing that requirement in the codebase needs to be rethought, urgently.

I should also say it’s silly to have a dev and a production version (which aren’t differentiated well enough in any documentation anyway) when neither one is working well. It just confuses things and adds unnecessary complexity, especially when they function quite differently.