Thrilled to have recently discovered Cole’s channel and now this community. I’m a non-technical person trying to pick up n8n…the tutorials are accessible enough to give me (false?) hope!
I’ve experimented with the Ultimate n8n RAG AI Agent template, running n8n locally on a Mac, and have managed to ingest documents into Supabase. However, ingestion doesn’t happen automatically when I create or update files on Google Drive; I have to click “Test step” on the Supabase node manually. It’s also finicky: for example, two PDFs in the folder are ingested properly, but a third PDF added afterwards isn’t picked up unless I delete the first two.
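In case it helps to diagnose, would querying the table directly be a sensible way to confirm what actually got ingested? Here’s a rough sketch of what I mean, assuming the template stores chunks in a “documents” table and using supabase-js (please correct me if the schema is different):

```ts
import { createClient } from "@supabase/supabase-js";

// Assumed: the workflow writes chunks into a table named "documents"
// (the usual Supabase + LangChain convention) -- adjust if the template
// created something different.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// List a handful of stored chunks to see which files actually made it in.
const { data, error } = await supabase
  .from("documents")
  .select("id, metadata")
  .limit(20);

if (error) throw error;
console.log(`Rows found: ${data.length}`);
console.log(data.map((row) => row.metadata));
```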
Also, I live in Hong Kong, where I can’t access the OpenAI/Anthropic APIs, so I’ve tried alternative chat models and use Ollama for embeddings (mxbai-embed-large:latest).
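Is something like this a reasonable way to sanity-check the embedding model outside n8n? A minimal sketch, assuming Ollama is running locally on its default port and using its /api/embeddings endpoint:

```ts
// Minimal check that the embedding model responds outside n8n.
// Assumes Ollama is serving locally on its default port (11434).
const res = await fetch("http://localhost:11434/api/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mxbai-embed-large:latest",
    prompt: "test sentence to embed",
  }),
});

const { embedding } = await res.json();
// mxbai-embed-large should return a 1024-dimension vector.
console.log(`Embedding dimensions: ${embedding.length}`);
```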
With the Gemini models, I get: “Bad request - please check your parameters (Google Gemini requires at least one dynamic parameter”. With the Mistral chat model (apparently the Supabase node only works with the Large model), I occasionally get hallucinated answers that don’t draw on the RAG context. I made the prompt stricter, but that didn’t solve it entirely. Sporadically I also get the error: “Message content must be a string or an array. Received: undefined”.
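To work out whether the hallucinations come from retrieval or from the chat model, would it make sense to query the vector store directly and skip the LLM? Something like the sketch below, assuming the usual Supabase match_documents function and the same Ollama embeddings (these names are my guess at what the template sets up):

```ts
import { createClient } from "@supabase/supabase-js";

// Assumed names: the standard Supabase "match_documents" SQL function over a
// "documents" table, as commonly set up for vector search -- rename to match
// whatever the template actually created.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Embed the question with the same Ollama model used at ingestion time,
// so the query vector lives in the same space as the stored chunks.
const embRes = await fetch("http://localhost:11434/api/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mxbai-embed-large:latest",
    prompt: "a question I expect the PDFs to answer",
  }),
});
const { embedding } = await embRes.json();

// Ask pgvector for the closest chunks directly -- no chat model involved.
const { data, error } = await supabase.rpc("match_documents", {
  query_embedding: embedding,
  match_count: 5,
});
if (error) throw error;

// If the right passages show up here, retrieval is fine and the problem is
// the chat model ignoring them; if not, the ingestion/embedding side is off.
console.log(data);
```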
The gold standard I’ve found for RAG is Google’s NotebookLM, and I wonder whether there’s any hope of an n8n RAG workflow performing similarly. Would it take a better LLM, embedding model, vector database, etc.? Any suggestions are very much appreciated.