Webhook as model

Hi everyone,
If I wanted to use a webhook as a model through an .env variable, would it be as simple as changing this part?

```
# You only need this environment variable set if you want to use Ollama models
# EXAMPLE http://localhost:11434
OLLAMA_API_BASE_URL=http://my_localwebhook_ip:port_of_choice
```

As long as the webhook is capable of processing requests and returning responses, will code still be generated in OttoDev?
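In other words (correct me if I'm wrong), I think whatever is listening at that URL would have to answer the same requests Ollama itself does, not just accept a generic POST. Here's my rough sketch of the two calls I'd expect an Ollama client to make, with the address and model name as placeholders:

```ts
// Placeholder address standing in for whatever goes in OLLAMA_API_BASE_URL.
const BASE = "http://my_localwebhook_ip:11434";

async function probe() {
  // Ollama clients discover models via GET /api/tags...
  const tags = await fetch(`${BASE}/api/tags`);
  console.log(await tags.json()); // expected: { models: [{ name: "..." }, ...] }

  // ...and run chat via POST /api/chat.
  const chat = await fetch(`${BASE}/api/chat`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // placeholder model name
      messages: [{ role: "user", content: "hello" }],
      stream: false,
    }),
  });
  console.log(await chat.json()); // expected: { message: { role, content }, ... }
}

probe();
```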

So a webhook endpoint isn’t going to work out of the box, because it would have to support all the endpoints that an “OpenAI-compatible API” exposes. I would do some research on that! At a minimum, your API would have to support the chat-completions endpoint (/v1/chat/completions).
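If you wanted to experiment anyway, the usual trick is a small shim that exposes the chat-completions route and forwards to your webhook behind the scenes. A minimal sketch, assuming a webhook that takes `{ prompt }` and returns `{ text }` (WEBHOOK_URL, the port, and those payload shapes are all placeholders you'd adapt):

```ts
import express from "express";

// Minimal sketch: wrap an arbitrary webhook in an OpenAI-compatible
// /v1/chat/completions route. Assumes the webhook accepts { prompt } and
// returns { text } -- adjust to whatever your webhook actually speaks.
const WEBHOOK_URL = process.env.WEBHOOK_URL ?? "http://localhost:5678/webhook/chat"; // placeholder

const app = express();
app.use(express.json());

app.post("/v1/chat/completions", async (req, res) => {
  const { model, messages } = req.body;

  // Flatten the chat history into a single prompt for the webhook.
  const prompt = messages
    .map((m: { role: string; content: string }) => `${m.role}: ${m.content}`)
    .join("\n");

  const hook = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { text } = await hook.json();

  // Answer in the OpenAI chat-completions response shape.
  res.json({
    id: "chatcmpl-local",
    object: "chat.completion",
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [
      { index: 0, message: { role: "assistant", content: text }, finish_reason: "stop" },
    ],
  });
});

app.listen(8787, () => console.log("shim listening on :8787"));
```

You'd then point OttoDev's OpenAI-like provider at the shim's address (OPENAI_LIKE_API_BASE_URL in the .env, if I remember the variable name right).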

The cleanest fit is to use the AnythingLLM API as a “model” in OttoDev. OttoDev can call the AnythingLLM API endpoints to interact with models, workspaces, and other features without needing to convert AnythingLLM itself. That keeps the integration straightforward and modular.
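As I understand it, AnythingLLM exposes an OpenAI-compatible endpoint where the “model” you pass is a workspace slug, so anything that speaks the OpenAI API can treat a workspace like a model. A rough sketch, with the base URL, path, and key all placeholders to verify against the AnythingLLM docs:

```ts
// Hedged sketch of calling AnythingLLM's OpenAI-compatible endpoint.
// The base path, auth header, and "workspace slug as model" detail should
// be checked against AnythingLLM's own API docs -- all values here are
// placeholders.
const BASE = "http://localhost:3001/api/v1/openai";
const API_KEY = process.env.ANYTHINGLLM_API_KEY ?? "replace-me";

async function ask() {
  const res = await fetch(`${BASE}/chat/completions`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "my-workspace", // AnythingLLM treats the workspace slug as the model name
      messages: [{ role: "user", content: "hello" }],
    }),
  });
  console.log(await res.json()); // standard chat-completions response shape
}

ask();
```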

From GPT, I won’t lie.

So does AnythingLLM turn webhooks into OpenAI-compatible APIs? I’m a little confused here, but this seems like something really cool!

I was thinking that way until you introduced Flowise. Just to clarify: can the JSON that allows Flowise to communicate with n8n be fed into OttoDev, and how?
