I’m still waiting on a different Ollama issue to be resolved before I finally get an agent made, but I wanted to understand what the process would be to add a folder of chunked files. The data looks like this:
Personally I use two different methods, one easy and one a bit more complicated. n8n is great for easy, fast setup and flexible use of models, as you can see. You can still ask that system to write you a Node.js UI for whatever you need, based on examples and information in the Node.js docs I crawled, and it will write the whole thing out beautifully. The other is a more complex Next.js app I expose for use anywhere; it has more advanced features, plus all my crawl4ai, embedding, and Supabase vector storage, as well as a RAG interface and a playground I can generate code right into. I self-host everything, so it was a bit of a chore to get the whole thing orchestrated and fine-tuned, but it's working amazingly.
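To give a feel for the ingestion side of that pipeline, here's a minimal sketch of the chunking step that sits between crawling and embedding. This is just an illustration, not my actual code: `chunk_text` is a hypothetical helper name, and the size/overlap numbers are arbitrary defaults you'd tune for your embedding model.

```python
# Sketch of the chunking stage of a crawl -> chunk -> embed -> store flow.
# chunk_text and its defaults are illustrative, not a real library API.

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap,
    so context isn't lost at chunk boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks
```

Each chunk would then be passed to your embedding model and written to the vector store as its own row.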
The thing of it is, it can be as mild or wild as you want, but you do have to pay careful attention when retrieving: you do of course have to match all your column headers and their types, as well as spacing. And yes, the table will have to be created, though I played around briefly with Cole's Archon and I believe it created a site_pages table and a series of columns in my Postgres DB in Supabase. One way or another, whatever you call for has to be in the shape you are asking for, unless you build in some fuzzy logic, which you can also do, but a simple match is a simple match…hard to go wrong.
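The "match your column headers and types" point can be sketched as a quick validation step before inserting rows. The column set below is an assumption based on what Archon appeared to create in my database; treat the names and types as an example, not the canonical site_pages schema.

```python
# Sketch: check that a row exactly matches an assumed table schema before
# insert. SITE_PAGES_COLUMNS is an assumed example, not Archon's actual DDL.

SITE_PAGES_COLUMNS = {
    "url": str,
    "chunk_number": int,
    "title": str,
    "content": str,
    "embedding": list,  # vector column, typically sent as a list of floats
}

def validate_row(row: dict) -> bool:
    """Simple match is simple match: column names and value types
    must line up exactly with the table definition."""
    if set(row) != set(SITE_PAGES_COLUMNS):
        return False  # missing or extra columns
    return all(isinstance(row[col], typ)
               for col, typ in SITE_PAGES_COLUMNS.items())
```

Anything fuzzier (case-insensitive names, type coercion) is possible, but an exact match like this is the hard-to-get-wrong baseline.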