Hello, is there a way to use bolt.diy with GitHub Copilot? At the moment, I have a local server that can programmatically call the GitHub Copilot API. Almost any AI app similar to bolt works well with it. This gives me access to Claude and GPT-4 with unlimited token usage for just $10. But it's not working with bolt, and I'm not sure why. Could it be because I only use the chat completions endpoint and bolt uses something else?
At the moment there is no native integration of GitHub Copilot.
Can you explain in a bit more detail what you configured and what is not working? (Maybe add some screenshots.)
My current configuration
I believe no additional integration is needed, since the Copilot API is almost fully OpenAI-compatible, which is why it works well with other agent-like applications similar to bolt.
However, when I try sending a message in Bolt, I encounter the following error:
"There was an error processing your request: No details were returned"
While monitoring the live backend logs of my local server, I noticed that no requests are received when I send a message from Bolt. The only time my server gets a request is when I add the Base URL on the Bolt settings page. After that, though, sending a message through Bolt doesn't forward any requests to my local server.
How can I configure Bolt to ensure that it sends requests to my local server?
For the record, my local server supports the following API. Any request to these endpoints should work:
GET /                     : Home page
GET /healthz              : Health check
GET /v1/models            : Get model list
POST /v1/chat/completions : Chat API
POST /v1/embeddings
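For anyone debugging a setup like this: a frequent cause of "the server never receives a request" is a mismatch in how the client joins the configured base URL with the endpoint path (with or without `/v1`). This is a hypothetical sketch of that URL-joining logic, not Bolt's actual code; the port is an assumption:

```python
def chat_completions_url(base_url: str) -> str:
    """Build the chat completions URL the way many OpenAI-compatible
    clients do: the base URL is expected to end in /v1."""
    base = base_url.rstrip("/")
    # Hypothetical normalization: append /v1 if it is missing.
    if not base.endswith("/v1"):
        base += "/v1"
    return base + "/chat/completions"

# Both forms resolve to the same endpoint under this normalization:
print(chat_completions_url("http://localhost:8080"))      # http://localhost:8080/v1/chat/completions
print(chat_completions_url("http://localhost:8080/v1/"))  # http://localhost:8080/v1/chat/completions
```

If Bolt does not normalize this way, entering the base URL with a trailing `/v1` (or without it) can point requests at a path your server does not serve, so checking both variants in the settings page is a cheap first test.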
That configuration can’t possibly work… All those services can’t run off of one port. I’d suggest using the defaults. Unless I’m missing something…
Edit: I meant configuration, not confusion. Apologies.