i suggest moving all functionality into “capabilities” that are composed into tools:
- these capabilities and tools can either be downloaded (as default capabilities) or created on the fly;
- this is implemented via mcp, calling llama.cpp by default (with an optional openai-compatible url) and a tiny model (llama3.2 1b), using a modular multi-agent architecture - something like this:
- various tiny tools (capabilities) are “registered” via a “tools template” (tool) that defines their dependencies and success / fail criteria: cd into a folder, mkdir a folder, pipe text into a file, … (see the template sketch after this list);
- a manager agent spawns the various capabilities per the template as mcp servers (async agents) that work independently;
- each capability is given start and fail condition(s);
- when a capability completes its work (or times out), it returns a status to the manager;
- the manager then returns a single message to bolt once all capabilities have finished running (however that ends up being implemented) - a rough manager loop is sketched below.
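
to make the “tools template” idea concrete, here is a minimal sketch in typescript of what such a template could look like; every name here (CapabilityTemplate, successCriteria, timeoutMs, the default entries) is an assumption for illustration, not an existing api:

```ts
// a minimal sketch of a "tools template"; all names and fields are
// assumptions for illustration, not an existing api.
interface CapabilityTemplate {
  name: string;            // e.g. "cd-folder", "mkdir-folder"
  dependencies: string[];  // names of capabilities that must exist first
  startCondition: string;  // condition that must hold before spawning
  successCriteria: string; // how the manager decides "done"
  failCriteria: string;    // how the manager decides "failed"
  timeoutMs: number;       // hard upper bound before a forced "timeout" status
}

// a few of the tiny default capabilities mentioned above
const defaults: CapabilityTemplate[] = [
  { name: "cd-folder", dependencies: [], startCondition: "path exists",
    successCriteria: "cwd changed", failCriteria: "path missing", timeoutMs: 5_000 },
  { name: "mkdir-folder", dependencies: [], startCondition: "parent writable",
    successCriteria: "folder exists", failCriteria: "permission denied", timeoutMs: 5_000 },
  { name: "pipe-text-to-file", dependencies: ["mkdir-folder"], startCondition: "target writable",
    successCriteria: "file written", failCriteria: "write error", timeoutMs: 10_000 },
];
```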
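and a rough sketch of the manager loop: spawn each capability as an independent async task (standing in for an mcp server here), race it against its timeout, collect the statuses, and report back to bolt once everything is done. runCapability and notifyBolt are hypothetical stand-ins for the real mcp / bolt plumbing:

```ts
// a rough sketch of the manager agent; runCapability and notifyBolt are
// hypothetical stand-ins, not real mcp or bolt apis.
type Status = { name: string; result: "success" | "fail" | "timeout" };

async function runWithTimeout(
  tpl: CapabilityTemplate,
  runCapability: (tpl: CapabilityTemplate) => Promise<"success" | "fail">,
): Promise<Status> {
  // race the capability against its own timeout from the template
  const timeout = new Promise<"timeout">((resolve) =>
    setTimeout(() => resolve("timeout"), tpl.timeoutMs),
  );
  const result = await Promise.race([runCapability(tpl), timeout]);
  return { name: tpl.name, result };
}

async function manager(
  templates: CapabilityTemplate[],
  runCapability: (tpl: CapabilityTemplate) => Promise<"success" | "fail">,
  notifyBolt: (statuses: Status[]) => void,
) {
  // spawn all capabilities independently; each returns its own status
  const statuses = await Promise.all(
    templates.map((tpl) => runWithTimeout(tpl, runCapability)),
  );
  // single message back to bolt once everything has finished (or timed out)
  notifyBolt(statuses);
}
```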
this would have the benefit that basic code changes to bolt itself become rare: most if not all new functionality could instead be implemented via capabilities and tools.
an example tool would be “connect to ai provider abc using model xyz” - a hypothetical instance is sketched below.
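here is what that example tool could look like, reusing the CapabilityTemplate shape from above; the config values are assumptions that mirror the suggested defaults (llama.cpp, which serves an openai-compatible api on port 8080 by default, and llama3.2 1b):

```ts
// hypothetical provider config + tool instance; values are illustrative only.
interface ProviderConfig {
  baseUrl: string; // llama.cpp server by default, or any openai-compatible url
  model: string;
}

const defaultProvider: ProviderConfig = {
  baseUrl: "http://localhost:8080/v1", // llama.cpp's default openai-compatible endpoint
  model: "llama3.2-1b",
};

const connectTool: CapabilityTemplate = {
  name: "connect-to-provider",
  dependencies: [],
  startCondition: "provider url reachable",
  successCriteria: "model answers a test completion",
  failCriteria: "connection refused or auth error",
  timeoutMs: 15_000,
};
```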