I know there are several other messages with discussions of similar topics, but I thought we needed one that really focused on the matter at hand. Cole has mentioned several times that he wants to include Archon but he is juggling a lot of issues, so perhaps we can work together.
There are two approaches.
The first is to run Archon as a separate Docker container and reference the local-ai-packaged services from it. For that we need to bridge from one Docker network to the other.
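A minimal sketch of what the first approach could look like, as a stand-alone compose file that joins the existing stack's network. The network name `localai_default` is an assumption (Compose derives it from the project directory), so check `docker network ls` for the real name; the image tag and env var names are placeholders, not Archon's actual config keys:

```yaml
# docker-compose.yml for a stand-alone Archon container (sketch)
services:
  archon:
    image: archon:latest                       # hypothetical image tag -- build or pull your own
    environment:
      - LLM_BASE_URL=http://ollama:11434       # resolvable once we share the network
    networks:
      - localai

networks:
  localai:
    external: true          # join the already-running local-ai-packaged network
    name: localai_default   # assumption -- verify with `docker network ls`
```

With `external: true`, Compose attaches Archon to the existing network instead of creating a new one, so service names like `ollama` resolve directly.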
The second is to add Archon directly to the docker-compose file for local-ai-packaged. That way we can reach the other services by name.
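For the second approach, here is a rough sketch of a service stanza that could be dropped into local-ai-packaged's own docker-compose.yml. The image tag and environment variable names are assumptions to be replaced with whatever Archon actually expects:

```yaml
  archon:
    image: archon:latest              # hypothetical tag
    container_name: archon
    restart: unless-stopped
    environment:
      # Hypothetical variable names -- map these to Archon's real config keys
      - LLM_BASE_URL=http://ollama:11434
      - DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@pooler:5432/postgres
    depends_on:
      - ollama
```

Since the service lives inside the same compose project, no extra networking is needed; `ollama` and `pooler` resolve by service name out of the box.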
For the LLM, the address would be ollama:11434 (inside the compose network) or localhost:11434 (from the host). FYI, when running Ollama, if you have some giant model loaded and you need to use nomic-embed-text, Ollama appears to load and run the embedding model in system memory, leaving your large model loaded and undisturbed.
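In .env terms, the difference between the two addressing styles would look something like this (variable names are my guesses, not Archon's actual keys):

```shell
# Inside the shared compose network: address the service by name
LLM_BASE_URL=http://ollama:11434
EMBEDDING_MODEL=nomic-embed-text

# From the host (or a container using host networking), use localhost instead:
# LLM_BASE_URL=http://localhost:11434
```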
Regarding the database: with either approach I think the connection would go through pooler:5432 or localhost:5432. But the package also includes a faster dedicated vector database, so would that be a better choice?
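Concretely, the two database options might look like this in .env form. The Postgres URL assumes the pooler exposes the default `postgres` database and user; the vector-database lines assume the package's dedicated store is Qdrant on its default port, which is a guess worth verifying against the compose file:

```shell
# Postgres via the pooler (service name inside the compose network)
DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@pooler:5432/postgres

# Or point Archon's vector storage at the dedicated vector DB instead
# (assuming it is Qdrant; name and port are guesses -- check the compose file)
# QDRANT_URL=http://qdrant:6333
```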
Aside from the .env connection settings, everything else should already be set for the stand-alone Docker container; the modifications for adding it to the larger local-ai stack seem to be explained in the “containerizing our local python agent” section of the Ultimate Guide to Local AI… video.
If you have done any of this work already, please respond with what worked for you. I forked local-ai-packaged on GitHub (under the same name, local-ai-packaged) and will be updating the files there as I work through this problem.
Please join me in this effort or follow along; I will also be documenting the bits and pieces so we can contribute anything useful back to the amazing local-ai-packaged. – Bob O