V6 Tool Suggestions

Hi @ColeMedin

I understand Archon V6 will include a tool library. Below is a list of tools I will need for an upcoming project. Is there any chance they could be included? They all have a wide range of use cases, so they should also be useful for others. They are all free and open source, or offer a free API.

  • CopilotKit - Generative UI, Agent Router, and more. Deep integration with LangGraph.
  • OpenRouter - LLM selection
  • Pydantic Logfire - LLM Metrics
  • Mem0 - Long Term Memory
  • Elasticsearch - Search Locally
  • Tavily API - Search the Web
  • Typst - Document Formatting
  • Docling - PDF Parsing & Conversion
  • python-pptx - PowerPoint Creation
  • Paperless-ngx + Tika - Document Retrieval
  • SurveyJS - Quizzes & Surveys
  • ChartBrew - Graphs & Charts
  • Unsplash API - Images
  • YouTube API - Videos
  • The Guardian API - International News
  • Free Dictionary API - Dictionary
  • WhatsApp API - Messaging

PS. Thanks again for your amazing work!

3 Likes

These are all great suggestions - thanks @Groove! I’m planning on using MCP for a lot of the tool integrations as well, so I’ll have to see which of these services have MCP servers. Tavily, for example, I know already does!
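
For anyone wondering what that wiring might look like, here is a rough sketch of attaching an MCP server to a pydantic-ai agent. It assumes pydantic-ai’s MCP client support (MCPServerStdio) and the community tavily-mcp stdio package; swap in whichever server and model you actually use.

```python
# Rough sketch only: assumes pydantic-ai's MCP client support and the community
# "tavily-mcp" stdio server package - adjust to whichever MCP server you actually use.
import asyncio
import os

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Launch the Tavily MCP server as a subprocess and expose its tools to the agent
tavily_server = MCPServerStdio(
    'npx',
    args=['-y', 'tavily-mcp'],
    env={'TAVILY_API_KEY': os.environ['TAVILY_API_KEY']},
)

agent = Agent(
    'openai:gpt-4o',
    system_prompt='Use the web search tool when you need up-to-date information.',
    mcp_servers=[tavily_server],
)

async def main() -> None:
    # run_mcp_servers() starts the configured servers for the duration of the block
    async with agent.run_mcp_servers():
        result = await agent.run('Find recent news about AI agent frameworks.')
        print(result.data)

if __name__ == '__main__':
    asyncio.run(main())
```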

2 Likes

Thanks @ColeMedin! If you don’t already have it, can I suggest baking in some scaffolding for Archon that makes it easier for users to integrate agents with tools via their APIs?

1 Like

I think I know what you’re saying and the answer is yes! But could you expand a bit more on what you mean by this scaffolding?

I’m saying that if there’s anything you can do to make Archon more competent at setting up tools that are accessed via APIs, that would be great. I’m assuming this would involve a system prompt (the scaffolding) giving Archon general guidance, plus the user submitting the specific API documentation to the LLM.
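
To make that concrete, here is the rough shape of what I’m imagining, sketched with pydantic-ai. The scaffolding prompt and the pasted weather-API docs below are made up purely for illustration.

```python
# Very rough sketch of the "scaffolding" idea: a reusable system prompt that gives
# the agent general guidance for API integration, combined with whatever API
# documentation the user pastes in. The weather API below is invented for illustration.
from pydantic_ai import Agent

SCAFFOLDING_PROMPT = """\
You are helping integrate a third-party API as a tool for an AI agent.
1. Summarize the documentation: endpoints, authentication, parameters, error codes.
2. Propose the tool function(s) to implement, including inputs and outputs.
3. Point out anything the documentation leaves ambiguous before writing code.
"""

# In practice this is whatever documentation the user supplies
user_supplied_api_docs = """\
GET /v1/forecast?city=<name>
Header: X-Api-Key: <key>
Returns JSON: {"city": str, "temp_c": float}
Errors: 401 invalid key, 404 unknown city
"""

agent = Agent('openai:gpt-4o', system_prompt=SCAFFOLDING_PROMPT)

result = agent.run_sync(
    f'Integrate this API as a tool for my agent:\n\n{user_supplied_api_docs}'
)
print(result.data)
```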

FYI (as a side note), educators often use the term “scaffolding” to describe the process of providing students with the necessary support and information to help them complete a task or understand more complex concepts to be introduced later. I’m finding that many principles of human pedagogy, such as scaffolding, chunking (breaking big tasks into smaller, manageable parts), and offering examples, are highly applicable to working with LLMs as well.

1 Like

Yeah I see what you mean! I do have some prebuilt tools/examples/MCP servers now for Archon, which is part of the V6 I just released! That definitely seems to fit what you’re saying with scaffolding.

1 Like

What I’m talking about is a scenario where an Archon user wants to add a tool, but there isn’t something pre-built. In many cases, if all that’s needed is API access, submitting the API documentation via the IDE might be sufficient for the LLM to understand how to use the API. However, could a system prompt be added to Archon, and automatically activated for API integrations, that provides some chain-of-thought (CoT) guidance to make Archon more reliable at creating integrations for new tools?

1 Like

Okay I’m starting to get what you mean more! What would this CoT process look like to you?

LOL, I hoped that’s where you’d come in :stuck_out_tongue_winking_eye: I’m a total noob at this. I would assume it’s a matter of reflecting on your prior experience to consider if any first principles could be applied to a prompt for an agent tasked with integrating APIs.

For what it’s worth, I asked Perplexity if it had any suggestions. This is what it cooked up:

Objective

To design a robust system prompt for guiding an LLM agent tasked with integrating APIs based on provided documentation. The prompt incorporates first principles of API integration, emphasizing clarity, usability, security, and adaptability.


Key First Principles

  1. API Design as a Contract:

    • Treat the API schema (endpoints, request/response formats, parameters) as the authoritative source.
    • Ensure strict adherence to the schema during integration.
  2. Context Awareness:

    • Analyze and summarize API documentation to extract key details before proceeding.
    • Adapt to varying levels of documentation quality.
  3. Error Handling and Debugging:

    • Proactively detect, interpret, and resolve errors during API calls.
    • Provide fallback mechanisms or escalate unresolved issues.
  4. Dynamic Workflow Orchestration:

    • Manage dependencies between multiple APIs.
    • Optimize sequencing and efficiency for complex workflows.
  5. Security and Compliance:

    • Safeguard sensitive data during API interactions.
    • Adhere to authentication protocols and compliance standards (e.g., GDPR, HIPAA).
  6. Testing and Validation:

    • Test requests and validate responses against expected outcomes.
    • Simulate edge cases to ensure robustness.
  7. Documentation of Integration:

    • Generate clear, comprehensive documentation for all integration steps.
    • Include sample payloads, endpoint descriptions, and assumptions.

Refined System Prompt Framework

The final system prompt integrates all identified principles into a structured framework:

You are an expert API integrator tasked with integrating APIs based on provided documentation. Follow these guidelines:

1. **Analyze Documentation**:
   - Extract key details: purpose, endpoints, parameters, authentication methods, response formats, and error codes.
   - Summarize this information before proceeding.

2. **API Integration Logic**:
   - Adhere strictly to the provided schema (endpoints, request/response formats).
   - Use conditional logic to determine when to invoke specific APIs.

3. **Error Handling**:
   - Analyze error messages or codes from API responses.
   - Suggest fixes or fallback mechanisms if issues persist.

4. **Workflow Orchestration**:
   - For multi-API workflows, manage dependencies between calls dynamically.
   - Optimize sequencing and efficiency.

5. **Security Best Practices**:
   - Use secure protocols (HTTPS) for all requests.
   - Manage authentication tokens securely (handle expiration).
   - Ensure compliance with relevant standards (e.g., GDPR).

6. **Testing and Validation**:
   - Test requests against expected formats and validate responses thoroughly.
   - Simulate edge cases to ensure robustness.

7. **Documentation**:
   - Provide clear descriptions of endpoints used and their purpose.
   - Include sample payloads and explain assumptions made during integration.

Always prioritize clarity, usability, security, and reliability in your outputs. Ask clarifying questions if documentation is incomplete or ambiguous.

Why This Prompt is Effective

The refined prompt provides a comprehensive scaffolding that ensures the LLM agent can:

  • Adapt to varying levels of API documentation quality.
  • Handle errors proactively while maintaining reliability.
  • Integrate APIs securely while adhering to compliance standards.
  • Orchestrate complex workflows involving multiple APIs seamlessly.
  • Produce outputs that are well-tested, robust, and easy to maintain.

By grounding the design in first principles of API integration—clarity, usability, security, adaptability—the prompt equips the LLM agent with the tools needed to succeed in diverse integration scenarios.


Conclusion

This optimized system prompt is a powerful framework for guiding an LLM agent in API integration tasks. It balances technical rigor with user-centric design principles to ensure high-quality outputs that meet real-world requirements. By incorporating iterative refinement based on feedback or errors encountered during testing, it ensures continuous improvement in performance over time.

This final report serves as a blueprint for deploying LLM agents effectively in API integration workflows while maintaining high standards of reliability, security, and usability.
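
Just to ground it a bit: steps 3 and 5 of that prompt (error handling and security) would presumably steer the generated tool code toward something like the sketch below. The endpoint, parameters, and environment variable are hypothetical.

```python
# Sketch of what the "security" and "error handling" guidance above might look like
# once the agent turns it into a tool. The endpoint, path, and env var are
# hypothetical - substitute whatever the real API documentation specifies.
import os
import requests

API_BASE = 'https://api.example.com'       # HTTPS only
API_KEY = os.environ['EXAMPLE_API_KEY']    # never hard-code credentials

def fetch_record(record_id: str) -> dict:
    resp = requests.get(
        f'{API_BASE}/v1/records/{record_id}',
        headers={'Authorization': f'Bearer {API_KEY}'},
        timeout=10,
    )
    # Interpret documented error codes instead of failing opaquely
    if resp.status_code == 401:
        raise RuntimeError('Authentication failed - check EXAMPLE_API_KEY')
    if resp.status_code == 404:
        raise LookupError(f'Record {record_id} not found')
    resp.raise_for_status()  # surface anything else unexpected
    return resp.json()
```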

1 Like

I asked Gemini 2.5 Pro for an improved version:

Okay, the provided prompt is already quite good – it’s well-structured, based on sound principles, and covers the essential aspects of API integration. However, we can make it even more effective for an LLM by:

  1. Increasing Actionability: Using stronger, more direct command verbs.
  2. Specifying Output Expectations: Being clearer about what kind of output the LLM should produce at each stage.
  3. Enhancing the Persona: Making the agent’s role and constraints even clearer.
  4. Streamlining: Combining related points for conciseness where possible.
  5. Adding Explicit Iteration/Refinement: Guiding the LLM on how to handle failures during the process.

Here’s an improved version incorporating these points:


Improved System Prompt: LLM Agent for API Integration


Role and Objective

You are an Autonomous API Integration Specialist. Your primary function is to meticulously analyze API documentation and generate the necessary logic, configurations, and explanations to integrate one or more APIs effectively and securely. You must operate based on the provided documentation as the source of truth.


Core Operating Principles (Mandatory Adherence)

  1. Schema is Contract: The API documentation (endpoints, request/response formats, parameters, authentication) is the definitive contract. Adhere to it strictly. Deviate only if documentation is proven incorrect or ambiguous (and state this clearly).
  2. Security First: Prioritize secure practices in all suggestions. This includes data handling, authentication, and transport (HTTPS). Explicitly mention security considerations.
  3. Robustness & Resilience: Design for real-world conditions. Anticipate and plan for potential errors, network issues, and unexpected API responses.
  4. Clarity & Traceability: All outputs (code, explanations, summaries) must be clear, concise, and easy for a human developer to understand and implement. Document your reasoning and any assumptions made.

Integration Workflow & Instructions

Execute the following steps systematically:

  1. Documentation Ingestion & Analysis:

    • Action: Thoroughly read the provided API documentation.
    • Output: Produce a concise summary highlighting:
      • API’s core purpose.
      • Key endpoints relevant to the task.
      • Authentication method(s).
      • Required parameters and data formats (request/response).
      • Known error codes and their meanings.
      • Rate limits or usage constraints, if specified.
    • Constraint: Do not proceed until this summary is complete. If documentation is missing critical details, state what is missing.
  2. Integration Planning & Logic:

    • Action: Based on the integration goal, determine the sequence of API calls needed. For multi-API workflows, map out dependencies.
    • Output:
      • Outline the integration steps or workflow logic.
      • Generate pseudocode or actual code snippets (specify language if context allows/required) for making requests and processing responses.
      • Clearly define conditional logic for invoking different endpoints or handling varying responses.
  3. Security Implementation:

    • Action: Detail the necessary security measures.
    • Output:
      • Specify secure handling of credentials/tokens (e.g., environment variables, secrets management).
      • Confirm usage of HTTPS.
      • Outline data handling practices compliant with relevant standards (mention standards like GDPR/HIPAA if applicable based on context).
  4. Error Handling Strategy:

    • Action: Analyze potential failure points (network errors, API errors, validation errors).
    • Output:
      • Provide specific code/logic patterns for detecting and interpreting API error responses (based on documented codes).
      • Suggest robust error handling mechanisms (e.g., retries with backoff, fallback values, logging, user notifications).
  5. Testing & Validation Plan:

    • Action: Define how to verify the integration’s correctness.
    • Output:
      • Describe test cases, including edge cases (e.g., invalid inputs, unexpected nulls, boundary conditions).
      • Explain how to validate API responses against the expected schema and outcomes.
  6. Assumptions & Clarifications:

    • Action: Identify any ambiguities or gaps in the documentation.
    • Output:
      • Explicitly list all assumptions made during the planning and implementation design.
      • Formulate specific questions to ask for clarification if documentation is insufficient to proceed safely or effectively.
  7. Final Documentation Output:

    • Action: Consolidate the relevant outputs from previous steps into a coherent integration guide.
    • Output: A structured document including endpoint usage, logic explanation, code snippets, security notes, error handling guide, testing plan, and assumptions.

Interaction Guidelines

  • Proactive Questioning: If documentation is unclear or ambiguous, do not guess. State the ambiguity and ask for clarification before providing potentially flawed integration logic.
  • Iterative Refinement: If an integration step encounters an error during testing or implementation based on your output, analyze the error feedback, revise your proposed solution, and explain the reasoning for the change.
  • Context Maintenance: Keep track of the overall integration goal and context, especially in multi-turn conversations or complex workflows.
  • Conciseness: Provide detailed information where necessary, but be concise and avoid unnecessary jargon. Focus on actionable guidance.

Why This Improved Prompt is More Effective

  • More Directive: Uses stronger verbs and clearly separates actions from expected outputs.
  • Output-Oriented: Specifies what the LLM should produce at each stage, making the results more predictable and useful.
  • Clearer Role & Constraints: Reinforces the LLM’s role and emphasizes critical boundaries (adhering to docs, prioritizing security, asking questions).
  • Structured Workflow: Provides a clear, step-by-step process for the LLM to follow.
  • Built-in Quality Checks: Explicitly includes steps for stating assumptions and asking clarifying questions, promoting higher quality and safer integrations.
  • Explicit Iteration: Guides the LLM on how to react to failures, making it more useful in a practical development cycle.

This revised prompt provides a more robust and actionable framework for guiding the LLM, leading to potentially more accurate, secure, and developer-friendly API integration assistance.
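
And as a concrete illustration of what step 4’s “retries with backoff” suggestion could translate to in generated tool code, here is a rough sketch. The endpoint is hypothetical; the retry pattern is the point.

```python
# Rough illustration of step 4 ("retries with backoff") from the workflow above.
# Retry transient failures (timeouts, 429, 5xx) with exponential backoff and
# surface non-retryable errors immediately. The endpoint below is hypothetical.
import time
import requests

RETRYABLE_STATUS = {429, 500, 502, 503, 504}

def call_api_with_backoff(url: str, *, max_attempts: int = 4) -> dict:
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code not in RETRYABLE_STATUS:
                resp.raise_for_status()  # non-retryable errors raise immediately
                return resp.json()
        except requests.exceptions.Timeout:
            pass  # treat timeouts as retryable
        if attempt == max_attempts:
            raise RuntimeError(f'API still failing after {max_attempts} attempts')
        time.sleep(delay)
        delay *= 2  # exponential backoff

# Example (hypothetical endpoint):
# data = call_api_with_backoff('https://api.example.com/v1/status')
```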

1 Like

Looks pretty good! If you were to share this as a feature request for Archon or something like that, though, I would certainly ask the LLM to summarize it further - it’s quite a bit to get through haha.