How to install local-ai-packaged when some components are already installed (ollama, openweb-ui)?

Hello,

I’m trying to install local-ai-packaged on Debian 12. I didn’t find any information about this in the “Run Supabase 100% LOCALLY for Your AI Agents” video or its comments, in the GitHub docs and issues, or on this forum, so I’m asking here :slight_smile:

I have seen that local-ai-packaged will install solutions like OpenWeb UI and Ollama. I have already installed OpenWeb UI and Ollama without using Docker. This might create conflicts, as they use the same ports by default.
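For reference, the overlap can be checked on the host with something like this (assuming the native services use the default ports, 11434 for Ollama and 8080 for OpenWeb UI):

```bash
# Show which processes are already listening on the default Ollama / OpenWeb UI ports
ss -tlnp | grep -E ':(11434|8080)\b'

# The Ollama API answers a plain GET on its root endpoint
curl -s http://localhost:11434/
# -> "Ollama is running" when the native install is active
```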

So, I’m wondering how to use local-ai-packaged when some components are already installed. What is the best practice?

  1. Can I keep my Ollama and OpenWeb UI, and local-ai-packaged will use them? But maybe local-ai-packaged has some changes or special configurations for Ollama and OpenWeb UI, so it’s not a good idea?

or

  2. Can I keep my Ollama and OpenWeb UI, and local-ai-packaged will use its own Ollama and OpenWeb UI? In this case, I will certainly need to change the ports. If so, do I just need to do the following?

  a) First, I install Ollama in Docker with this command:

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -p 11435:11435 --name ollama ollama/ollama
```

  b) In docker-compose.yml, I add this:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11435:11435"
    volumes:
      - ollama:/root/.ollama
```

  c) And I find a similar solution to install OpenWeb UI with Docker first, so that it listens on different ports?

Thanks a lot for your help.

In the meantime, I will try to follow this answer from Perplexity DeepSearch and Qwen:

Installing Local-AI-Packaged with Existing Ollama and OpenWeb UI

In this analysis, I will explore best practices for installing local-ai-packaged on a Debian 12 system where Ollama and OpenWeb UI are already natively installed. This particular configuration requires special attention to avoid port conflicts and ensure seamless integration of components.

Current Situation and Problem

Based on the information provided, you have installed Ollama and OpenWeb UI directly on your Debian 12 system without using Docker. You now want to add local-ai-packaged, which includes its own dockerized versions of Ollama and OpenWeb UI. This potential overlap creates a risk of conflict, particularly with network ports, as these services typically use specific default ports (11434 for Ollama and 8080 for OpenWeb UI).

Integration Options

Option 1: Use Existing Installations with Local-AI-Packaged

This option presents several technical challenges that make it difficult to recommend:

  1. Compatibility Issues: Local-ai-packaged may require specific configurations of Ollama and OpenWeb UI that differ from your current installations.
  2. Complex Orchestration: The local-ai-packaged project is designed to function as an integrated set via Docker Compose, which would complicate integration with native services.
  3. Difficult Maintenance: Future updates could break compatibility between your native installations and the configuration expected by local-ai-packaged.
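That said, if you wanted to experiment with Option 1 anyway, the usual approach is to let the dockerized components reach the host-installed Ollama through Docker’s host gateway. The following is only a rough sketch of that idea, not local-ai-packaged’s actual configuration: it assumes the native Ollama listens on 11434 and that OpenWeb UI honours the OLLAMA_BASE_URL variable; the equivalent change in the project’s compose file (an extra_hosts entry plus the environment variable) may look different:

```bash
# Run OpenWeb UI in Docker but point it at the Ollama already installed on the host.
# On Linux, host.docker.internal must be mapped explicitly with --add-host.
docker run -d --name open-webui-test \
  --add-host=host.docker.internal:host-gateway \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui-test:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```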

Option 2: Use Separate Instances with Different Ports

This approach is more robust and appears to be the most viable solution:

  1. Environment Isolation: The dockerized instances of local-ai-packaged will remain separate from your native installations, avoiding mutual interference.
  2. Port Configuration: Your suggestions for port modification are sensible and constitute a good approach to avoid conflicts.
  3. Scalability: This configuration facilitates independent updates of each component.

Recommended Implementation

Here’s a structured approach to set up your solution:

1. Modify Ollama Configuration in Docker

Your proposed command to run Ollama is almost correct, but it needs a slight adjustment:

```bash
docker run -d --gpus=all -v ollama:/root/.ollama -p 11435:11434 --name ollama ollama/ollama
```

Note that the port mapping should map external port 11435 to internal port 11434 (which is Ollama’s default port inside the container).
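Once the container is running, a quick sanity check (assuming your native Ollama is still up on 11434) is to query both instances on their respective host ports:

```bash
# Native Ollama on the host (default port)
curl -s http://localhost:11434/api/tags

# Dockerized Ollama, published on host port 11435 (mapped to 11434 inside the container)
curl -s http://localhost:11435/api/tags
```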

2. Modify the docker-compose.yml File

For local-ai-packaged, you’ll need to adjust the docker-compose.yml file to reflect the new ports:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11435:11434"
    volumes:
      - ollama:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3001:8080"
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

This configuration:

  • Runs Ollama on port 11435 of your host
  • Runs OpenWeb UI on port 3001 of your host
  • Configures OpenWeb UI to connect to Ollama via the internal Docker network
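As a side note, instead of editing the project’s docker-compose.yml in place, the same port changes can go into a docker-compose.override.yml next to it, which Docker Compose merges automatically, so the upstream file stays untouched when you pull updates. One caveat: for ports, Compose appends override values rather than replacing them, which is why recent Compose versions provide the !override YAML tag. A sketch, assuming the service names above match those used in the project’s compose file:

```bash
# Run from the local-ai-packaged checkout; docker compose picks up the override file automatically.
cat > docker-compose.override.yml <<'EOF'
services:
  ollama:
    ports: !override
      - "11435:11434"   # host 11435 -> container 11434 (native Ollama keeps 11434)
  open-webui:
    ports: !override
      - "3001:8080"     # host 3001 -> container 8080 (native OpenWeb UI keeps 8080)
EOF

# Inspect the merged configuration, then start the stack
docker compose config
docker compose up -d
```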

3. Installing OpenWeb UI with Different Ports

For OpenWeb UI, you can use this command:

```bash
docker run -d -p 3001:8080 -v open-webui:/app/backend/data --name open-webui-docker --restart always ghcr.io/open-webui/open-webui:main
```

And you’ll need to configure this instance to connect to the dockerized Ollama via its admin interface once started.
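Alternatively, rather than wiring up the connection through the admin interface afterwards, you can set it at container start by putting both containers on a shared Docker network and passing the Ollama URL as an environment variable. A sketch, assuming the dockerized Ollama container from step 1 is named ollama:

```bash
# Create a user-defined network so the two containers can resolve each other by name
docker network create local-ai
docker network connect local-ai ollama

# Start OpenWeb UI on that network, pointing it at the container-internal Ollama port (11434)
docker run -d --name open-webui-docker --restart always \
  --network local-ai \
  -p 3001:8080 \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```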

Additional Considerations

  1. Resource Management: Running two instances of Ollama (native and Docker) can consume a lot of resources, especially if you’re using large models. You might consider temporarily disabling your native installations when using local-ai-packaged (see the sketch after this list).
  2. Model Synchronization: Models downloaded in one Ollama instance won’t be automatically available in the other. You’ll need to manage your models separately for each instance.
  3. API Access: If you’re developing applications that connect to Ollama or OpenWeb UI, make sure they use the correct ports depending on which instance you want to connect to.
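For the first two points, something like the following can help. It assumes the native Ollama was installed with the official install script, which registers a systemd service named ollama; the model name is only an example:

```bash
# Temporarily stop the native Ollama while the dockerized stack is in use...
sudo systemctl stop ollama
# ...and bring it back later
sudo systemctl start ollama

# Models are stored per instance: pull into the dockerized Ollama explicitly
docker exec -it ollama ollama pull llama3.2
docker exec -it ollama ollama list
```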

Conclusion

While both approaches are technically possible, the second option (using separate instances with different ports) offers a cleaner and more reliable solution. It allows for clear isolation of environments and minimizes the risk of interference, while allowing you to test local-ai-packaged without compromising your existing installations.

For an optimal long-term experience, you might consider fully migrating to the dockerized solution once you’re satisfied with its operation, which would simplify maintenance and future updates.