Local-ai-packaged by Cole

Ubuntu 20.x LTS, self-hosted, dedicated server, 8 cores, 32 GB RAM, script start_services.py (latest)

Hi!

I'm trying to use Cole's local-ai-packaged on a dedicated server.
Can someone help with this?

Are these kinds of health checks valid with a real domain name on the server?

    depends_on:
      analytics:
        condition: service_healthy
    environment:
      STUDIO_PG_META_URL: http://meta:8080
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

      DEFAULT_ORGANIZATION_NAME: ${STUDIO_DEFAULT_ORGANIZATION}
      DEFAULT_PROJECT_NAME: ${STUDIO_DEFAULT_PROJECT}
      OPENAI_API_KEY: ${OPENAI_API_KEY:-}

      SUPABASE_URL: http://kong:8000
      SUPABASE_PUBLIC_URL: ${SUPABASE_PUBLIC_URL}
      SUPABASE_ANON_KEY: ${ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SERVICE_ROLE_KEY}
      AUTH_JWT_SECRET: ${JWT_SECRET}

[...]

auth:
    container_name: supabase-auth
    image: supabase/gotrue:v2.170.0
    restart: unless-stopped
    healthcheck:
      test:
        [
          "CMD",
          "wget",
          "--no-verbose",
          "--tries=1",
          "--spider",
          "http://localhost:9999/health"
        ]
      timeout: 5s
      interval: 5s
      retries: 3

[...]

Steps to Reproduce

trigger: python3 start_services.py --profile cpu

Output

Starting Supabase services...
Running: docker compose -p localai -f supabase/docker/docker-compose.yml up -d
[+] Running 14/14
 ✔ Network localai_default                   Created                                                             0.1s
 ✔ Container supabase-imgproxy               Started                                                             0.8s
 ✔ Container supabase-vector                 Healthy                                                             6.3s
 ✔ Container supabase-db                     Healthy                                                            13.1s
 ✘ Container supabase-analytics              Error                                                             125.3s
 ✔ Container supabase-rest                   Created                                                             0.1s
 ✔ Container supabase-meta                   Created                                                             0.1s
 ✔ Container supabase-kong                   Created                                                             0.1s
 ✔ Container realtime-dev.supabase-realtime  Created                                                             0.1s
 ✔ Container supabase-edge-functions         Created                                                             0.1s
 ✔ Container supabase-studio                 Created                                                             0.1s
 ✔ Container supabase-auth                   Created                                                             0.1s
 ✔ Container supabase-pooler                 Created                                                             0.1s
 ✔ Container supabase-storage                Created                                                             0.0s
dependency failed to start: container supabase-analytics is unhealthy
Traceback (most recent call last):
  File "start_services.py", line 242, in <module>
    main()
  File "start_services.py", line 232, in main
    start_supabase()
  File "start_services.py", line 63, in start_supabase
    run_command([
  File "start_services.py", line 21, in run_command
    subprocess.run(cmd, cwd=cwd, check=True)
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['docker', 'compose', '-p', 'localai', '-f', 'supabase/docker/docker-compose.yml', 'up', '-d']' returned non-zero exit status 1.

Hey, yes they are valid, because they only run internally within the server's Docker environment. It's not a health check done from the outside by other software.
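
If you want to verify one of those checks by hand, you can run the exact same command the compose file uses from inside the running container, e.g. for the auth service shown above (container name taken from that snippet):

docker exec supabase-auth wget --no-verbose --tries=1 --spider http://localhost:9999/health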

Did you check your docker compose logs for analytics? I guess there is more than just what you posted above.
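
For example (project name and compose file taken from your output above):

docker compose -p localai -f supabase/docker/docker-compose.yml logs analytics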

You can also run just the analytics container again, without the “-d” option:

docker compose -p localai -f supabase/docker/docker-compose.yml up analytics

Thank you for the answer, leex279… Thanks!

I don't have a config file, but I get this:

 ✔ Container n8n-import  Created                                                                                 0.0s
Attaching to n8n-import
n8n-import  | Permissions 0644 for n8n settings file /home/node/.n8n/config are too wide. This is ignored for now, but in the future n8n will attempt to change the permissions automatically. To automatically enforce correct permissions now set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true (recommended), or turn this check off set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false.
n8n-import  | User settings loaded from: /home/node/.n8n/config
n8n-import  | There was an error initializing DB
n8n-import  | Could not establish database connection within the configured timeout of 20,000 ms. Please ensure the database is configured correctly and the server is reachable. You can increase the timeout by setting the 'DB_POSTGRESDB_CONNECTION_TIMEOUT' environment variable.
n8n-import  | Error: Could not establish database connection within the configured timeout of 20,000 ms. Please ensure the database is configured correctly and the server is reachable. You can increase the timeout by setting the 'DB_POSTGRESDB_CONNECTION_TIMEOUT' environment variable.
n8n-import  |     at Object.init (/usr/local/lib/node_modules/n8n/dist/db.js:57:21)
n8n-import  |     at processTicksAndRejections (node:internal/process/task_queues:95:5)
n8n-import  |     at ImportCredentialsCommand.init (/usr/local/lib/node_modules/n8n/dist/commands/base-command.js:98:9)
n8n-import  |     at ImportCredentialsCommand._run (/usr/local/lib/node_modules/n8n/node_modules/@oclif/core/lib/command.js:301:13)
n8n-import  |     at Config.runCommand (/usr/local/lib/node_modules/n8n/node_modules/@oclif/core/lib/config/config.js:424:25)
n8n-import  |     at run (/usr/local/lib/node_modules/n8n/node_modules/@oclif/core/lib/main.js:94:16)
n8n-import  |     at /usr/local/lib/node_modules/n8n/bin/n8n:71:2
n8n-import  |
n8n-import  | Connection terminated due to connection timeout
n8n-import  | Connection terminated unexpectedly
n8n-import exited with code 1

Can you help me out?

np, the config is not the problem. It says “This is ignored for now…”

The problem is with the db connection, which could not be established.
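
You could also check whether the Postgres container itself came up and stayed healthy, e.g. (project name and compose file taken from your output above; db is, as far as I remember, the service name behind the supabase-db container):

docker compose -p localai -f supabase/docker/docker-compose.yml ps
docker compose -p localai -f supabase/docker/docker-compose.yml logs db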

Did you watch my YouTube video on this? It's for localhost, but this has nothing to do with domains, so maybe you'll see something you missed.


Which video? Where should I look?

This one:

This is the error output from the n8n-import container, but do you have output from the analytics container that could speak to the error?

Ubuntu 20.x LTS, dedicated server (live), local-ai-packaged latest
Hi there,

here is my attempt…

Could you go into the logs for the n8n-import container and see what the error is?

Thank you for the answer, Cole. I'm relatively new to Docker. Where can I find the logs for n8n-import?


@hmeiser1966 here you go (run this from your terminal inside the cloned repo):

docker compose -p localai logs n8n-import

Okay, here are the logs:

n8n-import  | Permissions 0644 for n8n settings file /home/node/.n8n/config are too wide. This is ignored for now, but in the future n8n will attempt to change the permissions automatically. To automatically enforce correct permissions now set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true (recommended), or turn this check off set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false.
n8n-import  | There was an error initializing DB
n8n-import  | Could not establish database connection within the configured timeout of 20,000 ms. Please ensure the database is configured correctly and the server is reachable. You can increase the timeout by setting the 'DB_POSTGRESDB_CONNECTION_TIMEOUT' environment variable.
n8n-import  | Error: Could not establish database connection within the configured timeout of 20,000 ms. Please ensure the database is configured correctly and the server is reachable. You can increase the timeout by setting the 'DB_POSTGRESDB_CONNECTION_TIMEOUT' environment variable.
n8n-import  |     at Object.init (/usr/local/lib/node_modules/n8n/dist/db.js:57:21)
n8n-import  |     at processTicksAndRejections (node:internal/process/task_queues:95:5)
n8n-import  |     at ImportCredentialsCommand.init (/usr/local/lib/node_modules/n8n/dist/commands/base-command.js:98:9)
n8n-import  |     at ImportCredentialsCommand._run (/usr/local/lib/node_modules/n8n/node_modules/@oclif/core/lib/command.js:301:13)
n8n-import  |     at Config.runCommand (/usr/local/lib/node_modules/n8n/node_modules/@oclif/core/lib/config/config.js:424:25)
n8n-import  |     at run (/usr/local/lib/node_modules/n8n/node_modules/@oclif/core/lib/main.js:94:16)
n8n-import  |     at /usr/local/lib/node_modules/n8n/bin/n8n:71:2
n8n-import  | 
n8n-import  | Connection terminated due to connection timeout
n8n-import  | Connection terminated unexpectedly
n8n-import  | Permissions 0644 for n8n settings file /home/node/.n8n/config are too wide. This is ignored for now, but in the future n8n will attempt to change the permissions automatically. To automatically enforce correct permissions now set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true (recommended), or turn this check off set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false.
n8n-import  | User settings loaded from: /home/node/.n8n/config
n8n-import  | There was an error initializing DB
n8n-import  | getaddrinfo EAI_AGAIN db
n8n-import  | Permissions 0644 for n8n settings file /home/node/.n8n/config are too wide. This is ignored for now, but in the future n8n will attempt to change the permissions automatically. To automatically enforce correct permissions now set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true (recommended), or turn this check off set N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false.
n8n-import  | User settings loaded from: /home/node/.n8n/config
n8n-import  | There was an error initializing DB
n8n-import  | getaddrinfo EAI_AGAIN db

Yeah, those are the same logs you posted initially. That's normally the case when something is configured wrong, which can't be the case if you used the defaults from local-ai-packaged (and didn't change anything in there).

Did you?

We can do a web session (Teams, Google Meet, etc.) if you want and take a look together. Maybe I'll spot the problem quickly.

I googled a bit. It seems to be a DNS problem.
Do you know about any DNS-relevant settings in local-ai-packaged? I use DynDNS with ddclient updates.

Can you explain a bit more about what you found out? There shouldn't be any DNS problems, because this communication happens internally between the Docker containers and not externally via domains etc.

I googled the last two lines of the log (“There was an error initializing DB” / “getaddrinfo EAI_AGAIN db”).

OK, but as said above, this doesn't make sense, I guess, since you have everything in one Docker environment and the containers communicate via their container names / internal IPs and not via external domains.
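
If you want to rule Docker's internal DNS in or out, you could try resolving the db hostname from a throwaway container attached to the same compose network (network name taken from your output above; busybox is just an example image):

docker run --rm --network localai_default busybox nslookup db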

Hi all, I am having an issue that is manifesting as a DNS issue on a dedicated server as well.
It is also deployed to a dedicated headless Linux server, so I will not be accessing anything via localhost and everything needs to work through Caddy.

Two issues:
Issue One - Internet access from within the container
The ollama and open-webui containers cannot see the internet. This manifests as the application in the container being unable to reach a DNS server to resolve domain names. For example:
Ollama:

Openwebui:
open-webui | 2025-04-13 14:57:53.484 | ERROR | open_webui.routers.openai:get_models:517 - Client error: Cannot connect to host api.openai.com:443 ssl:default [Temporary failure in name resolution] - {}

I can access Open WebUI using https://URL:3000, but Open WebUI cannot see the internet to interact with Hugging Face or OpenAI (as the error above shows). When attempting to download a model, I get the DNS misbehaving error from Ollama.

I am able to resolve all these issues by adding “network_mode: host” to the docker-compose.yml for the ollama and open-webui services (although this breaks other things).
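
For reference, the workaround looks roughly like this in docker-compose.yml (service names here are illustrative; use whatever the repo's compose file actually calls the Ollama and Open WebUI services):

services:
  ollama:
    network_mode: host
  open-webui:
    network_mode: host

With host networking those containers leave the compose network entirely (and published ports are ignored), which is probably why other things break.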

Leex279 - it feels just like what you described: “all in one docker environment and the containers are communicating via their container name / internal ips and not with external domains.”

Any help would be appreciated 🙂

Also, in the start_services.py script, the profile needs to be passed to the stop_existing_containers function, and then “--profile”, profile added to the run_command call, so that the Ollama containers shut down with the rest of the containers.
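
Something along these lines is what I mean; just a sketch against the function names visible in the traceback above, not the exact code from the repo:

def stop_existing_containers(profile=None):
    # Build the same "docker compose ... down" command the script already runs,
    # but include the profile so the ollama containers are stopped as well.
    cmd = ["docker", "compose", "-p", "localai"]
    if profile:
        cmd.extend(["--profile", profile])
    cmd.extend(["-f", "docker-compose.yml", "down"])
    run_command(cmd)

main() would then call stop_existing_containers with the parsed profile instead of no argument.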

Thanks for putting all this together!!!

Hi All,

For some unknown reason I have lost the .env file and the docker compose file in my local-ai-packaged folder, and I'm not sure if it was because I was trying to do other n8n projects.

Would it be possible for me to rebuild and restore my workflows and credentials?

Thanks in Advance

[SOLVED]
Reinstalled Docker (native instead of the snap version) and switched to Docker Compose V2.
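
(To double-check the switch, docker compose version should now report a v2.x Compose plugin.)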

Tweaked firewall with a little help from my friend Gemini.

Bugs are gone!

That’s magic