[Help] Local AI on AMD System

Hi Guys,

I think I have a worst-case scenario for running local AI in Docker ;)…

I tried to set up this package: GitHub - coleam00/local-ai-packaged: Run all your local AI together in one package - Ollama, Supabase, n8n, Open WebUI, and more!
and followed these instructions: https://www.youtube.com/watch?v=aj2FkaaL1co

Now I have trouble with 3 services…

First things first → Win 11, Docker Desktop, AMD CPU, 6800 XT

wsl -l -v
  NAME            STATE      VERSION
  Ubuntu-22.04    Running    2

rocminfo found 2 agents (CPU, GPU)

But if I try to run the Docker images, I get these errors:

ollama: (HTTP code 500) server error - error gathering device information while adding custom device "/dev/kfd": no such file or directory

whisper-asr: (HTTP code 500) server error - could not select device driver "amd" with capabilities: [[gpu]]

coqui-tts: (HTTP code 500) server error - failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: initialization error: WSL environment detected but no adapters were found: unknown
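For context: all three errors point at GPU device passthrough. ROCm containers expect the kernel devices /dev/kfd and /dev/dri to be mapped into the container, and Docker Desktop on WSL2 does not expose those devices, which is exactly the "no such file or directory" message above. On a native Linux host with the amdgpu/ROCm driver installed, the Ollama run would look roughly like this (a sketch, not verified on this exact setup):

```shell
# Sketch: run Ollama's ROCm image on a *native Linux* host with an AMD GPU.
# Assumes the host has the ROCm/amdgpu driver installed, so /dev/kfd and
# /dev/dri actually exist. Under Docker Desktop + WSL2 these devices are
# missing, which is why the same command fails there.
docker run -d \
  --device=/dev/kfd \
  --device=/dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
```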

Maybe someone has had the same issues? And if we solve them, I think it could also help other people :smiley:

Thanks!

Hi,
I am running on AMD too, but as shown in my install videos on YouTube, I use Ollama on the host instead of within the container, because this does not work with AMD GPUs. I guess the same goes for Whisper, but I did not try it.

When I run Ollama on the host it works :wink: - though maybe someone can guide me on how to run it within the local AI container (where all the services run)?
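One common workaround for that setup is to keep Ollama on the Windows host and point the containerized services at it via host.docker.internal, which Docker Desktop resolves to the host machine. The exact variable names depend on the package's .env / docker-compose files, so treat the names below as assumptions to check:

```shell
# Sketch: Ollama runs on the Windows host; containers reach it through
# host.docker.internal (resolved by Docker Desktop to the host).
# OLLAMA_BASE_URL is the variable Open WebUI uses; other services may use
# a different name -- check the package's .env / docker-compose files.
# The container name "open-webui" below is an assumption.

# 1. On the host (PowerShell/cmd): start Ollama normally.
#    ollama serve

# 2. In the project's .env, point the services at the host:
#    OLLAMA_BASE_URL=http://host.docker.internal:11434

# 3. Quick check from inside a running container:
docker exec -it open-webui \
  curl -s http://host.docker.internal:11434/api/version
```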

Yeah, that's just not working on Windows in my opinion. I wasted days trying out everything I could find on how to install it with WSL, different ROCm versions, AMD drivers, etc.
Officially it's only supported on Linux, where it probably works with AMD as well, but I have not tried that yet.

I just researched again quickly and found this now:

did you try that out?

Ok cool, thanks for your quick response - maybe I'll try it with Linux or an NVIDIA GPU.