When I test the flow of the v3 agent, it does create document_metadata rows in the table, but when the flow reaches the Postgres PGVector Store node it fails, saying
Error in sub-node ‘Embeddings Ollama’ - fetch failed
All nodes involved in that step got data in their execution log: the vector store itself, the default data loader, and the text splitter. Only the embedding model fails. I have also run ollama pull for the nomic-embed model.
The error details don't help me, so I would be happy if anyone can suggest a solution for the embedding model node:
n8n version: 1.91.2 (Self Hosted)
Time: 5/6/2025, 11:27:42 AM
Error cause: {}
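For reference, this is roughly the kind of request the Embeddings Ollama node should be making under the hood; running it from inside the n8n container rules out a pure connectivity problem. It is only a sketch, assuming the default Ollama port 11434, a compose service named ollama, and the model pulled as nomic-embed-text:

```python
import json
import urllib.request

# Sketch of the embeddings call; "ollama" is an assumed compose service name,
# 11434 is the default Ollama port, "nomic-embed-text" the assumed model tag.
url = "http://ollama:11434/api/embeddings"
payload = json.dumps({"model": "nomic-embed-text", "prompt": "hello world"}).encode()

req = urllib.request.Request(url, data=payload, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=30) as resp:
    embedding = json.loads(resp.read())["embedding"]
    print(f"got an embedding of length {len(embedding)}")
```

If this call cannot resolve the host or times out, the problem is networking between the containers rather than the node or the model itself.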
Please check the Docker logs for the n8n container and see if there is anything that helps.
Credential with ID “3tjC***K7R” does not exist for type “supabaseApi”. o.O
I didn't see any of this in the .env files. It could also be an artifact, as I have deleted and reinstalled the local AI package a bunch of times. Not sure where to fix that.
getaddrinfo EAI_AGAIN http
fetch failed
(the same two lines repeat several times)
That is all I get on each execution.
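If I read that right, getaddrinfo EAI_AGAIN is a failed DNS lookup, and the name it fails on is literally "http", which looks more like a malformed base URL in the Ollama credential than a missing model. A quick check that can be run from inside the n8n container (the hostnames below are just guesses for this setup):

```python
import socket

# Hostnames to try from inside the n8n container; "ollama" is a guess at the
# compose service name, 11434 is the default Ollama port.
for host in ("ollama", "host.docker.internal", "http"):
    try:
        infos = socket.getaddrinfo(host, 11434, proto=socket.IPPROTO_TCP)
        print(f"{host!r} resolves to {infos[0][4][0]}")
    except socket.gaierror as exc:
        # EAI_AGAIN / EAI_NONAME land here, mirroring the fetch failure in the node
        print(f"{host!r} failed to resolve: {exc}")
```

If only the literal "http" entry fails, the credential's base URL probably needs to be spelled out in full, e.g. http://ollama:11434.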
Another thing to mention: when I stop and relaunch the full stack, I always have to delete the containers in Docker Desktop, because when I simply try to relaunch after stopping, I get:
Error response from daemon: failed to set up container networking: network 9c9fae80b03759e66f178fe6047ca598b526085daced97dcf125bf3904db4096 not found
Traceback (most recent call last):
File "C:\Users\xefe4\local-ai-packaged\start_services.py", line 242, in <module>
main()
~~~~^^
File "C:\Users\xefe4\local-ai-packaged\start_services.py", line 239, in main
start_local_ai(args.profile)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "C:\Users\xefe4\local-ai-packaged\start_services.py", line 74, in start_local_ai
run_command(cmd)
~~~~~~~~~~~^^^^^
File "C:\Users\xefe4\local-ai-packaged\start_services.py", line 21, in run_command
subprocess.run(cmd, cwd=cwd, check=True)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xefe4\AppData\Local\Programs\Python\Python313\Lib\subprocess.py", line 577, in run
raise CalledProcessError(retcode, process.args,
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'compose', '-p', 'localai', '--profile', 'cpu', '-f', 'docker-compose.yml', 'up', '-d']' returned non-zero exit status 1.
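What usually avoids that stale-network error is tearing the project down (which removes its networks) instead of only stopping it, so that up recreates them. A minimal sketch in the same style as start_services.py, reusing the project name, profile, and compose file from the failing command in the traceback above:

```python
import subprocess

def recreate_stack(profile: str = "cpu") -> None:
    """Tear the localai project down and bring it back up with fresh networks.

    Sketch of a workaround only; project name, profile, and compose file are
    taken from the failing command shown in the traceback above.
    """
    base = [
        "docker", "compose", "-p", "localai",
        "--profile", profile, "-f", "docker-compose.yml",
    ]
    # "down" removes the project's containers and networks, so the following
    # "up" recreates them instead of pointing at a network that no longer exists.
    subprocess.run(base + ["down"], check=True)
    subprocess.run(base + ["up", "-d"], check=True)

if __name__ == "__main__":
    recreate_stack()
```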
I had the same issue with the Ollama API yesterday while preparing an install guide video on how to set up a VPS and install the local AI package on it.
For me the solution was simply to reload the browser, and then it worked. If that is not enough, try clearing the cache and trying again.
Network error => I'm not sure about that. How do you restart it?
After docker system prune -a I started with a clean environment, and since then I have had no problems.
The last thing that would need a fix is the supabase/docker/volumes/pooler/pooler.exs file: it has a whitespace line at the end of the file, which causes the pooler to launch and shut down in a loop.
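In case it helps, this is roughly what stripping that trailing line looks like; just a sketch, and the path is the one from the checkout mentioned above:

```python
from pathlib import Path

# Path as mentioned above; adjust it to where your supabase checkout lives.
pooler = Path("supabase/docker/volumes/pooler/pooler.exs")

text = pooler.read_text()
# Drop trailing whitespace/blank lines but keep a single final newline;
# the Elixir content itself is left untouched.
pooler.write_text(text.rstrip() + "\n")
print(f"cleaned {pooler}")
```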