Dupre
March 10, 2025, 6:19am
1
I have been trying different models all night with OpenRouter, and every single one throws an error. FYI: All these tests were done using the example “Build me an AI agent that can search the web with the Brave API.”
I have yet to find a single working model. Ollama doesn’t work. OpenRouter doesn’t work.
Here are some of the errors:
```
openai.NotFoundError: Error code: 404 - {'error': {'message': 'No endpoints found that support tool use. To learn more about provider routing, visit: Provider Routing | Intelligent Multi-Provider Request Routing — OpenRouter | Documentation', 'code': 404}}

TypeError: 'NoneType' object cannot be interpreted as an integer

pydantic_ai.exceptions.UnexpectedModelBehavior: Received empty model response
```
Dupre
March 10, 2025, 6:58am
2
I finally found a model that works.
When using OpenRouter, you can filter models by “tools” under Supported Parameters in the sidebar on their website.
You should definitely add that to the instructions.
1 Like
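The same “tools” filter can also be applied programmatically. Below is a minimal sketch that filters a models listing for tool support; it assumes OpenRouter's `GET https://openrouter.ai/api/v1/models` response has a `data` array whose entries carry a `supported_parameters` list (verify the field names against the live API before relying on this):

```python
def tool_capable(models_payload: dict) -> list[str]:
    """Return the ids of models whose supported_parameters include 'tools'."""
    return [
        m["id"]
        for m in models_payload.get("data", [])
        if "tools" in m.get("supported_parameters", [])
    ]

# Tiny inline sample mimicking the assumed shape of the /api/v1/models response,
# so the filter can be demonstrated without a network call.
sample = {
    "data": [
        {"id": "google/gemini-2.0-flash-exp:free",
         "supported_parameters": ["tools", "temperature"]},
        {"id": "some/chat-only-model",
         "supported_parameters": ["temperature"]},
    ]
}
print(tool_capable(sample))  # → ['google/gemini-2.0-flash-exp:free']
```

Fetching the real listing and running it through `tool_capable` would give you a quick sanity check before pointing an agent at a model.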
Dupre
March 10, 2025, 7:23am
3
Still getting a lot of errors, primarily this one: `TypeError: 'NoneType' object cannot be interpreted as an integer`
GitHub issue (opened 09:36PM - 22 Dec 24 UTC; labels: bug, refactor):
When the API returns a 429/Rate limit exceeded, pydantic-ai throws a date-time parsing exception instead of surfacing the API's rate-limit-exceeded error message.
This can easily be replicated by using openrouter with one of the free gemini models.
```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
    "google/gemini-2.0-flash-exp:free",
    base_url="https://openrouter.ai/api/v1",
    api_key="key",
)
agent = Agent(
    model=model,
    system_prompt='Be concise, reply with one sentence.',
)
result = agent.run_sync('Who are you?')
print(result.data)
```
The above returns:
```python
Traceback (most recent call last):
File "/Users/sam/dev/openai/openai_demo.py", line 32, in <module>
result = agent.run_sync('Who are you?')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sam/dev/openai/.venv/lib/python3.12/site-packages/pydantic_ai/agent.py", line 327, in run_sync
return asyncio.get_event_loop().run_until_complete(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/sam/dev/openai/.venv/lib/python3.12/site-packages/pydantic_ai/agent.py", line 255, in run
model_response, request_usage = await agent_model.request(messages, model_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sam/dev/openai/.venv/lib/python3.12/site-packages/pydantic_ai/models/openai.py", line 152, in request
return self._process_response(response), _map_usage(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sam/dev/openai/.venv/lib/python3.12/site-packages/pydantic_ai/models/openai.py", line 207, in _process_response
timestamp = datetime.fromtimestamp(response.created, tz=timezone.utc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object cannot be interpreted as an integer
```
This happens because the error response is not correctly handled in `_process_response`:
```python
ChatCompletion(id=None, choices=None, created=None, model=None, object=None, service_tier=None, system_fingerprint=None, usage=None, error={'message': 'Provider returned error', 'code': 429, 'metadata': {'raw': '{\n "error": {\n "code": 429,\n "message": "Quota exceeded for aiplatform.googleapis.com/generate_content_requests_per_minute_per_project_per_base_model with base model: gemini-experimental. Please submit a quota increase request. https://cloud.google.com/vertex-ai/docs/generative-ai/quotas-genai.",\n "status": "RESOURCE_EXHAUSTED"\n }\n}\n', 'provider_name': 'Google'}}, user_id='user_...')
```
The `ChatCompletion` above shows what such an error response may look like: the usual fields are all `None` and the provider's message is tucked into `error`. We should check for the presence of the `error` object and handle the missing fields appropriately.
Note: I have noticed this with both Google's OpenAI-compatible API and OpenRouter's Gemini API.
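A minimal sketch of the kind of guard the issue is asking for. The processing-step name mirrors `_process_response` from the traceback above, but the exception type (`ProviderError`) and the exact surfacing behavior are assumptions for illustration, not pydantic-ai's actual fix:

```python
from datetime import datetime, timezone


class ProviderError(Exception):
    """Hypothetical exception type for surfacing provider-side errors."""

    def __init__(self, status_code: int, message: str):
        self.status_code = status_code
        super().__init__(f"{status_code}: {message}")


def process_response(response) -> datetime:
    # Some providers (e.g. OpenRouter) return an `error` payload with every
    # normal ChatCompletion field set to None, so check for it *before*
    # touching `response.created` (which is what raises the TypeError).
    error = getattr(response, "error", None)
    if error is not None:
        raise ProviderError(
            error.get("code", 500),
            error.get("message", "unknown provider error"),
        )
    return datetime.fromtimestamp(response.created, tz=timezone.utc)
```

With this guard, the 429 from the repro above would surface as `ProviderError: 429: Provider returned error` instead of the misleading `TypeError`.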
1 Like
Yeah, I should definitely add to the instructions that the LLM has to support tools. I appreciate the suggestion!
Where are you getting that NoneType error?
Dupre
March 10, 2025, 8:23pm
5
From within the Archon chat tab. Regardless of what the prompt is, it returns this error.
Gotcha… do you know which part of the graph process behind the scenes is producing this?
For local hosting, I have found that QwQ has tool capabilities.
1 Like