RAG for token reduction

I came across this article yesterday while thinking about token reduction: How to generate accurate LLM responses on large code repositories: Presenting CGRAG, a new feature of dir-assistant | by Chase Adams | Medium

TLDR: this developer uses RAG to pull in context from a large project for code development, but recognized that traditional RAG by itself might miss context from intricate project dependencies. They instead use two rounds of RAG: one to determine dependencies, and a second to pull context for code generation.
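A minimal sketch of that two-round idea, with stand-ins for the real pieces: `retrieve()` fakes an embedding search with word overlap, and `identify_dependencies()` fakes the first LLM call by pulling capitalized identifiers out of the initial hits. The function names and heuristics here are hypothetical, not dir-assistant's actual API.

```python
def retrieve(query, chunks, k=2):
    """Rank chunks by naive word overlap with the query (embedding-search stand-in)."""
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))[:k]

def identify_dependencies(query, chunks):
    """Round 1: find identifiers the answer likely depends on.
    A real system would ask the LLM; here we grab capitalized tokens
    from the top chunks as a crude proxy for symbol names."""
    deps = set()
    for chunk in retrieve(query, chunks):
        deps.update(w for w in chunk.split() if w[:1].isupper())
    return deps

def two_round_rag(query, chunks, k=3):
    """Round 2: re-query with the dependency names folded into the query,
    so transitively related chunks score higher than in a single pass."""
    deps = identify_dependencies(query, chunks)
    expanded = query + " " + " ".join(sorted(deps))
    return retrieve(expanded, chunks, k=k)
```

With chunks like `"def parse uses Tokenizer"` and `"class Buffer wraps bytes"`, asking about `parse` pulls in the `Buffer` chunk via the `Tokenizer` dependency, which a single retrieval pass on the raw query wouldn't score highly.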

We are brainstorming different context optimization approaches.
The article looks cool. I have read a paper on a similar topic: it adds some context about the source document to each snippet before placing it in the vector DB, so that when the LLM reads the snippet it does not read it out of context.
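A rough sketch of that contextualize-before-indexing idea, assuming a chunking pipeline where we control what text gets embedded. `summarize()` is a stand-in for the LLM call that would generate the document-level context; everything here is illustrative, not the paper's actual method.

```python
def summarize(doc_name, doc_text, max_words=12):
    """Stand-in for an LLM-generated summary: just the file name
    plus the document's first few words."""
    return f"[from {doc_name}: {' '.join(doc_text.split()[:max_words])}...]"

def contextualize_chunks(doc_name, doc_text, chunk_size=20):
    """Split a document into fixed-size word chunks and prefix each
    with the document context; the combined string is what would be
    embedded and stored in the vector DB."""
    words = doc_text.split()
    header = summarize(doc_name, doc_text)
    chunks = []
    for i in range(0, len(words), chunk_size):
        snippet = " ".join(words[i:i + chunk_size])
        chunks.append(f"{header}\n{snippet}")
    return chunks
```

The point is that a retrieved snippet like a bare loop body becomes self-describing ("this is from utils.py, which does X"), at the cost of some extra tokens per stored chunk.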

Not sure how much these approaches will work on an actual codebase, though.
These files change very often and would need to be reindexed repeatedly, and that might increase the total token count instead of decreasing it.
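One common mitigation for that reindexing cost is incremental indexing: hash each file's content and only re-embed (and re-contextualize) files whose hash changed since the last index. A minimal sketch, with hypothetical function names; a real system would call its embedding pipeline where noted.

```python
import hashlib

def content_hash(text):
    """Stable fingerprint of a file's current content."""
    return hashlib.sha256(text.encode()).hexdigest()

def files_to_reindex(files, index):
    """Return names of files that are new or changed since the last index.
    `files` maps name -> current text; `index` maps name -> hash
    recorded at last indexing time."""
    return [name for name, text in files.items()
            if index.get(name) != content_hash(text)]

def update_index(files, index):
    """Record current hashes for the changed files. A real system would
    re-embed only these files here, leaving the rest of the vector DB alone."""
    for name in files_to_reindex(files, index):
        index[name] = content_hash(files[name])
    return index
```

So frequent edits only spend tokens on the files that actually changed, rather than forcing a full reindex of the repository.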