Hi Guys,
I think I have a worst-case scenario for running local AI in Docker ;)
I tried to set up this package: [GitHub - coleam00/local-ai-packaged](https://github.com/coleam00/local-ai-packaged) (run all your local AI together in one package: Ollama, Supabase, n8n, Open WebUI, and more)
and followed the instructions in this video: https://www.youtube.com/watch?v=aj2FkaaL1co
Now I'm having trouble with three services.
First things first, my setup: Windows 11, Docker Desktop (WSL 2 backend), AMD CPU, Radeon RX 6800 XT.
```
wsl -l -v
  NAME            STATE           VERSION
* Ubuntu-22.04    Running         2
```
Inside the Ubuntu distro, `rocminfo` finds 2 agents (CPU and GPU), so ROCm itself sees the card.
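One thing I learned while digging (might be a useful check for anyone with the same setup): Docker Desktop runs the containers in its own `docker-desktop` WSL distro, not in Ubuntu-22.04, and WSL 2 exposes the GPU through the paravirtual device `/dev/dxg` rather than the native ROCm nodes. A quick way to see which nodes actually exist:

```bash
# Run inside the Ubuntu-22.04 WSL distro.
# A ROCm container on native Linux needs /dev/kfd and /dev/dri/renderD*;
# under WSL 2 you typically only get the paravirtual GPU device /dev/dxg.
ls -l /dev/kfd /dev/dri /dev/dxg
```

From what I've read, AMD's ROCm-on-WSL support goes through `/dev/dxg` and officially only covers the newer RDNA 3 cards, so I'm not even sure the 6800 XT can work this way at all, happy to be corrected.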
But when I try to start the containers, three of them fail (I've put a sketch of what I think is going on under each error below):

- **ollama:** `(HTTP code 500) server error - error gathering device information while adding custom device "/dev/kfd": no such file or directory`
- **whisper-asr:** `(HTTP code 500) server error - could not select device driver "amd" with capabilities: [[gpu]]`
- **coqui-tts:** `(HTTP code 500) server error - failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: initialization error: WSL environment detected but no adapters were found: unknown`
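For the ollama error: from what I can tell, the compose file maps the native ROCm device nodes into the container, roughly like the documented ROCm invocation below. Since `/dev/kfd` doesn't exist in the docker-desktop VM, Docker fails before the container even starts:

```bash
# Roughly what the compose file asks for - the documented way to run
# Ollama with ROCm on native Linux: pass the kernel fusion driver (kfd)
# and the render nodes straight into the container.
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm
# Under Docker Desktop on Windows this fails with exactly my error,
# because /dev/kfd is not present in the docker-desktop distro.
```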
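For the whisper-asr error: as far as I know, Docker's GPU reservation mechanism (`--gpus`, or `deploy.resources.reservations.devices` in compose) only ships with a built-in `nvidia` driver, so a request for `driver: amd` can't be satisfied. A sketch of the difference (the image name is just a placeholder):

```bash
# Produces my exact error: Docker is asked for a GPU from a device
# driver called "amd", which it doesn't have.
docker run --gpus 'driver=amd,capabilities=gpu' some-whisper-asr-image

# The usual AMD/ROCm alternative is plain device passthrough instead
# of a --gpus reservation - but that again needs /dev/kfd, so it only
# works on native Linux, not under Docker Desktop.
docker run --device /dev/kfd --device /dev/dri some-whisper-asr-image
```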
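And for coqui-tts: that one is the NVIDIA prestart hook. The container is started with an NVIDIA GPU request, the NVIDIA Container Toolkit that ships with Docker Desktop runs its hook, finds no NVIDIA adapter in the WSL VM (I only have the AMD card), and aborts. Dropping the GPU request at least gets the container running on CPU (image name again a placeholder):

```bash
# Fails on this machine: the NVIDIA hook looks for an NVIDIA adapter
# in the WSL VM and finds none ("no adapters were found").
docker run --gpus all some-coqui-tts-image

# Starts, CPU-only: remove "--gpus" (or the corresponding
# deploy.resources.reservations.devices block in the compose file).
docker run some-coqui-tts-image
```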
Has anyone run into the same issues? If we can solve them, I think it would help other people too.
Thanks!