Psyched about this, but there's a gaping hole in the implementation

@ColeMedin or someone should create a LangGraph template with an LLM and tools.

Archon totally inspired me, but as a very low-level coder, it has been frustrating. I am trying to write up a multi-agent LangGraph graph with 1 base agent and 1 tool, just to get the structure down. The problem is that the LangGraph documentation and versions keep changing, and even their online resources and classes are based on several incompatible older versions. I can make functional tools, and I can get LangGraph to use tools, but I cannot get all the balls in the air at one time.

By using Cursor or similar, I get stuff going, then it tries to “help” by changing the code layout to something that worked in the past, and I end up going in circles and destroying my functionality by playing whack-a-mole with little issues.

I now finally can define what I am looking for:

  • A LangGraph template with an orchestrator and tools.
  • Tools should be in LangChain, Python, Pydantic, API, and sub-graph formats.
  • It should use the latest version of ToolNode or similar.
  • It should demonstrate chat memory with local and Supabase (postgres).
  • It should demonstrate OpenAI and Ollama formats for chat and tool calls. Grok, Anthropic, Gemini, and similar are nice-to-haves but are fundamentally the same as OpenAI.
  • At the end, the user should be able to install Cole’s AI Development Suite, install and run Archon, plug the tool from Archon directly into the template, and run it.
  • It would also be nice to have generic UI that can demonstrate things like adding buttons or lists to the UI that would feed back through the API to tools.

Issues with LangGraph: tool calls keep changing, mostly for the better, but it confounds people like me.

  • At this point I believe langgraph.prebuilt with ToolNode and tools_condition is the latest method.
  • They have changed language models, and again I believe that langchain_ollama’s ChatOllama is the latest method.
  • The setup for the web GUI to visualize and monitor graphs is obtuse at best; I get it to work about half the times I try.
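For anyone else confused by the tools_condition mechanics, the routing rule itself is simple and can be sketched in plain Python with no LangGraph install. This is a stand-in sketch, not the real langchain/langgraph classes: `AIMessage` here is a minimal dataclass and `END` a stand-in sentinel. The rule matches what langgraph.prebuilt's tools_condition does: route to the node named "tools" when the last AI message carries tool calls, otherwise end the graph.

```python
# Plain-Python sketch of the routing rule behind langgraph.prebuilt's
# tools_condition. AIMessage and END are stand-ins for the real
# langchain_core / langgraph objects.
from dataclasses import dataclass, field

END = "__end__"  # stand-in for langgraph's END sentinel

@dataclass
class AIMessage:
    content: str
    tool_calls: list = field(default_factory=list)

def tools_condition(state: dict) -> str:
    """Return the next node: 'tools' if the last message requested tool calls."""
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return END

# A model turn that asked for a tool routes to the tool node:
state = {"messages": [AIMessage("", tool_calls=[{"name": "search", "args": {"q": "langgraph"}}])]}
print(tools_condition(state))  # -> tools

# A plain text reply ends the graph:
state = {"messages": [AIMessage("All done!")]}
print(tools_condition(state))  # -> __end__
```

In the real graph you wire this as the conditional edge out of the agent node, with a ToolNode registered under the name "tools".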

My guess is it would take a competent programmer a few hours at most to crank this out and a few more hours to document it. I have spent weeks trying to reach this point. In fact, I failed at submitting my tool for the Hackathon because when I finally tried to integrate it with Agent 0, I could not make it work in spite of it working in my local test setup.

I understand that for Cole, making the amazing new things he can envision is both more fun and more lucrative on YouTube than fundamentals, but it would REALLY help people join this new wave and start applying the tools of AI to real-world problems sooner.

FYI, I have gotten numerous features running, so I have certainly done my homework before reaching out here. I'm not just asking as a frustrated script kiddie:

  • Agent personalities
  • RAG
  • Supabase before Cole added it to the package
  • Ollama before LangGraph made tool integration easier
  • Chat memory with vector from LangGraph (checkpoint does not really work for this purpose)
  • Web access to local tools and resources through free cloudflared.
  • Almost all Google API tools through several different LLM integration formats.

PS: Add the tags Pydantic, LangGraph, Archon, tools, graph, LLM, and database, and let users either add tags themselves or easily suggest new ones.


I appreciate the feedback for Archon a lot! I started it as a project to educate on LangGraph and Pydantic AI versus having it actually be a “production ready” agent, so I agree 100% that there are many hurdles to overcome still before it’s fundamentally solid.

One thing to note is right now Archon uses both LangGraph and Pydantic AI, but it just has the documentation for Pydantic AI. I am adding in LangGraph to its knowledge base soon!

Seems like you’ve got some issues with LangGraph in general - is there something else you are considering better that’s similar?


No, actually, it appears version 4 provides almost exactly what I need. I have installed it and will be running tomorrow. Thanks for the reply, and keep leveling us up!

Since I am having issues with Cursor jumping back and forth between versions of LangGraph and ratholing me with fixes that break other stuff, I have added Copilot AND Gemini chat and code help to my Cursor extensions. I am going to have amazing code or lose my marbles here soon!


Okay perfect! Yeah I’ve been pouring myself into V4 over the last couple of days so I’m glad it’s giving you what you are looking for!

Cursor with Copilot and Gemini? Sounds like quite the workflow haha

FYI
ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
langchain-cli 0.0.35 requires typer[all]<0.10.0,>=0.9.0, but you have typer 0.15.1 which is incompatible.

then

ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastapi-cli 0.0.7 requires typer>=0.12.3, but you have typer 0.10.0 which is incompatible.
langchain-cli 0.0.35 requires typer[all]<0.10.0,>=0.9.0, but you have typer 0.10.0 which is incompatible.
Successfully installed typer-0.10.0
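The two errors above are actually one unsatisfiable constraint: langchain-cli 0.0.35 pins typer>=0.9.0,<0.10.0 while fastapi-cli 0.0.7 requires typer>=0.12.3, so no single typer version can satisfy both and pip will flag whichever side loses. A naive stdlib sketch (no `packaging` dependency; plain dotted versions only) makes the empty intersection visible:

```python
# Naive stdlib-only check that the two typer pins from the log above
# cannot both be satisfied. Handles plain dotted versions only, which
# is enough for this case.
def ver(s):
    return tuple(int(part) for part in s.split("."))

def in_range(version, lo, hi=None):
    """True if lo <= version (< hi, when hi is given)."""
    v = ver(version)
    ok = v >= ver(lo)
    if hi is not None:
        ok = ok and v < ver(hi)
    return ok

# langchain-cli 0.0.35: typer>=0.9.0,<0.10.0
langchain_ok = lambda v: in_range(v, "0.9.0", "0.10.0")
# fastapi-cli 0.0.7: typer>=0.12.3
fastapi_ok = lambda v: in_range(v, "0.12.3")

for candidate in ("0.9.4", "0.10.0", "0.12.3", "0.15.1"):
    print(candidate, langchain_ok(candidate), fastapi_ok(candidate))
# No candidate prints True True: the ranges do not overlap, so pip must
# break one of the two packages no matter which typer it installs.
```

The practical fix is to keep langchain-cli out of the environment entirely (it isn't in the requirements), which is consistent with the wipe-and-reinstall clearing it up later in the thread.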

Interesting… I’m not sure where the LangChain CLI is coming from since that isn’t in the requirements.txt and I believe I included all dependencies of the top level packages.

Are you installing this within a fresh virtual environment?

Yes. I followed the instructions exactly, but just in case I will pull it and start again. FYI, there are no Ollama requirements; I have been using import Ollama or from langchain_ollama import ChatOllama, then llm = ChatOllama(model="llama3.1", temperature=0.0), as in my earlier mod of your earlier version.

Would you possibly be available to switch to email? I don’t want to bury this thread, rather solve it and post that.


Well, on I went. The pip issue cleared up, apparently I had not deleted old files the way I thought I had, so the wipe and reinstall cleared it all up.

I then spent quite some time chasing down a database error:

Error checking table status: {'code': 'PGRST301', 'details': None, 'hint': None, 'message': 'JWSError JWSInvalidSignature'}

The URL and key seem good in the env vars:

{
  "BASE_URL": "http://localhost:11434/v1",
  "SUPABASE_URL": "http://localhost:8000",
  "SUPABASE_SERVICE_KEY": "eyJ…4GE",
  "REASONER_MODEL": "deepseek-r1",
  "PRIMARY_MODEL": "llama3.1",
  "EMBEDDING_MODEL": "nomic-embed-text"
}

and I can pull data from the table with

curl -H "apikey: eyJ…4GE" "http://localhost:8000/rest/v1/site_pages?select=*&limit=1"

I am certain I could just move to the hosted database, but I want it to work the way I set it up, and from the code it appears it should.

Following the Database tab guide, Supabase at :8000/settings/api just causes an invalid-credentials error.
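For what it's worth, my understanding of PGRST301 with "JWSError JWSInvalidSignature" is that PostgREST parsed the token fine but its HS256 signature doesn't match the JWT secret the local Supabase stack was started with; a common cause with self-hosted Supabase is a service key copied from a different instance or the cloud dashboard. A stdlib-only sketch of HS256 sign/verify shows the failure mode (secrets and claims below are made up for the demo):

```python
# Minimal HS256 JWT sign/verify (stdlib only) illustrating
# "JWSError JWSInvalidSignature": the token is well formed, but it was
# signed with a different secret than the verifier uses.
# Secrets and claims below are made up for the demo.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret.encode(), f"{header}.{payload}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    header, payload, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{payload}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"role": "service_role"}, secret="jwt-secret-of-instance-A")

print(verify_jwt(token, "jwt-secret-of-instance-A"))  # True  -> accepted
print(verify_jwt(token, "jwt-secret-of-instance-B"))  # False -> JWSInvalidSignature
```

If that diagnosis is right, the fix would be to regenerate SUPABASE_SERVICE_KEY from the same JWT secret configured in the local stack's .env, rather than reusing a key from another instance.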


Yes we can move to email if you want! Thanks for posting all of these issues so I can work through them with you.

I haven’t tested Archon with a local Supabase instance yet so I’m not surprised that there are a couple issues coming up. Right now the URL it gives to access the API keys is just for the remote Supabase, that is true. I should probably call that out.

For this error:

Error checking table status: {'code': 'PGRST301', 'details': None, 'hint': None, 'message': 'JWSError JWSInvalidSignature'}

Where is that happening exactly?

Pretty soon here I’ll have to try to connect Archon to the local AI stack and see if I run into these things myself.