Issue with Ollama Models in "The ULTIMATE n8n RAG AI Agent Template - Local AI Edition"

I’ve been following along with the video tutorial “The ULTIMATE n8n RAG AI Agent Template - Local AI Edition” by Cole. In the video, Cole mentions a bug when using the Ollama Chat Model node with PostgreSQL in this template. After testing, I confirmed that the workflow errors out in the final step after querying the vector store.

The suggested fix is to use the OpenAI Chat Model node instead, create a new credential with a random API key, and set the base URL to http://ollama:11434. However, when I try this, it doesn’t connect, and none of the Ollama models appear in the dropdown menu.
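
Outside of n8n, a quick check like the minimal Python sketch below should confirm the Ollama service itself is up and has models pulled. It assumes the default Ollama port and the `ollama` container hostname from the Local AI starter kit (use `localhost:11434` instead if you run it on the host rather than inside the n8n container):

```python
# Sanity check that the Ollama service is reachable and lists its models.
# This uses Ollama's native HTTP API, not the OpenAI-compatible one.
import requests

BASE = "http://ollama:11434"  # assumption: container hostname from the starter kit

# The root endpoint returns the plain-text message "Ollama is running".
print(requests.get(BASE, timeout=5).text)

# /api/tags lists the models that have been pulled locally.
for model in requests.get(f"{BASE}/api/tags", timeout=5).json().get("models", []):
    print(model["name"])
```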

To clarify:

  • The Ollama Chat Model node credentials work fine, and the models display correctly.
  • The issue seems to be specific to using the OpenAI Chat Model node to connect to Ollama models.

I’ve spent hours troubleshooting this but haven’t been able to resolve it. Has anyone successfully gotten this to work? Any advice or guidance would be greatly appreciated!

Make sure it’s http://ollama:11434/v1, so add in the /v1 at the end! Not sure why it is required for the URL here and not in other places (like the base URL for the Ollama credentials for embeddings) but it is :wink:
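
If you want to see what the OpenAI Chat Model node is doing under the hood, here’s a rough Python sketch against Ollama’s OpenAI-compatible API. It assumes the `openai` package is installed and that a model such as `llama3.1` has already been pulled (swap in whatever model you actually have):

```python
# Rough sketch of the calls the OpenAI Chat Model node makes against Ollama's
# OpenAI-compatible API. Assumes `pip install openai` and a model such as
# "llama3.1" already pulled in Ollama (change the name to match yours).
from openai import OpenAI

client = OpenAI(
    base_url="http://ollama:11434/v1",  # the /v1 suffix is what makes this work
    api_key="anything",                 # Ollama ignores the key, but the client requires one
)

# This is the call that populates the model dropdown in the node.
for model in client.models.list():
    print(model.id)

# And this is the chat completion call the agent makes at runtime.
reply = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```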

@ColeMedin Thank you so much for your help. I didn’t see the /v1 in the address “ollama:11434”. I added /v1, the models displayed correctly, and the workflow completed successfully. I also learned that my laptop doesn’t seem to have enough power to query a 126-page PDF. My specs are said to be AI-worthy, but the results are slow and I get either low-quality results or timeouts even on simple queries. This is my first n8n template. I appreciate you so much for all that you do, and I look forward to your videos.

Glad it’s working now! You are very welcome!

126-page PDFs are going to be tough with n8n; I’m guessing it’s more of a limitation with n8n than with your machine.