Hello, I’m new here.
Just cloned the repo and read the README, but I didn’t find any instructions on how to use the models, or links to download them (for Ollama there is a link; I presume we need to install it ourselves).
For ChatGPT, I went to the API key page, created one with “All” permissions, and pasted it into my .env.local file, but it still isn’t working. I also tried adding it through the bolt.new form.
Also, there was some troubleshooting at startup:
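In case it helps, this is roughly what I ended up with in .env.local (variable names taken from the repo’s .env.example, so double-check yours match):

```
# Key created on the OpenAI API Keys page
OPENAI_API_KEY=sk-...

# Point Ollama at the IPv4 loopback explicitly (relevant to the error further down)
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
```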
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h + enter to show help
Error getting Ollama models: TypeError: fetch failed
at node:internal/deps/undici/undici:13484:13
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at Object.getOllamaModels [as getDynamicModels] (C:/dev/Bolt-Local/bolt.new-any-llm/app/utils/constants.ts:318:22)
at async Promise.all (index 0)
at Module.initializeModelList (C:/dev/Bolt-Local/bolt.new-any-llm/app/utils/constants.ts:389:9)
at handleRequest (C:/dev/Bolt-Local/bolt.new-any-llm/app/entry.server.tsx:30:3)
at handleDocumentRequest (C:\dev\Bolt-Local\bolt.new-any-llm\node_modules\.pnpm\@remix-run+server-runtime@2.15.0_typescript@5.7.2\node_modules\@remix-run\server-runtime\dist\server.js:340:12)
at requestHandler (C:\dev\Bolt-Local\bolt.new-any-llm\node_modules\.pnpm\@remix-run+server-runtime@2.15.0_typescript@5.7.2\node_modules\@remix-run\server-runtime\dist\server.js:160:18)
at C:\dev\Bolt-Local\bolt.new-any-llm\node_modules\.pnpm\@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm\node_modules\@remix-run\dev\dist\vite\cloudflare-proxy-plugin.js:70:25 {
[cause]: Error: connect ECONNREFUSED ::1:11434
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1615:16)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
errno: -4078,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 11434
}
}
To fix this, I went to app > utils > constants.ts, looked for “OLLAMA_API_BASE_URL”, and put http://127.0.0.1:11434 there instead of “localhost”. (The `connect ECONNREFUSED ::1:11434` in the trace suggests Node was resolving localhost to the IPv6 address ::1, which Ollama wasn’t listening on.)
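To double-check that Ollama was actually reachable on that address, I ran a quick curl against its /api/tags endpoint (which lists the locally installed models):

```shell
# Should return a JSON list of installed models if Ollama is listening on IPv4
curl http://127.0.0.1:11434/api/tags

# If this is also refused, Ollama itself isn't running - start it with:
# ollama serve
```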
Then for ChatGPT, I got this error:
responseBody:
{
  "error": {
    "message": "The model `gpt-4o` does not exist or you do not have access to it.",
    "type": "invalid_request_error",
    "param": null,
    "code": "model_not_found"
  }
}