Some bugs and improvements

Hello, I'm new here.
I just cloned the repo and read the README, but at first I didn't find any instructions for using the models or links to download them. (For Ollama there is a link; I presume we need to install it ourselves.)
For ChatGPT, I went to the API key page, created a key with "All" permissions, and pasted it into my .env.local file, but it still isn't working. I also tried adding it through the bolt.new form.
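For reference, this is roughly what my .env.local looks like (variable names taken from the repo's .env.example; values redacted):

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...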

There was also some trouble at startup:

➜  Local:   http://localhost:5173/
  ➜  Network: use --host to expose
  ➜  press h + enter to show help
Error getting Ollama models: TypeError: fetch failed
    at node:internal/deps/undici/undici:13484:13
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
    at Object.getOllamaModels [as getDynamicModels] (C:/dev/Bolt-Local/bolt.new-any-llm/app/utils/constants.ts:318:22)
    at async Promise.all (index 0)
    at Module.initializeModelList (C:/dev/Bolt-Local/bolt.new-any-llm/app/utils/constants.ts:389:9)
    at handleRequest (C:/dev/Bolt-Local/bolt.new-any-llm/app/entry.server.tsx:30:3)
    at handleDocumentRequest (C:\dev\Bolt-Local\bolt.new-any-llm\node_modules\.pnpm\@remix-run+server-runtime@2.15.0_typescript@5.7.2\node_modules\@remix-run\server-runtime\dist\server.js:340:12)
    at requestHandler (C:\dev\Bolt-Local\bolt.new-any-llm\node_modules\.pnpm\@remix-run+server-runtime@2.15.0_typescript@5.7.2\node_modules\@remix-run\server-runtime\dist\server.js:160:18)
    at C:\dev\Bolt-Local\bolt.new-any-llm\node_modules\.pnpm\@remix-run+dev@2.15.0_@remix-run+react@2.15.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_zyxju6yjkqxopc2lqyhhptpywm\node_modules\@remix-run\dev\dist\vite\cloudflare-proxy-plugin.js:70:25 {
  [cause]: Error: connect ECONNREFUSED ::1:11434
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1615:16)
      at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -4078,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 11434
  }
}

To fix this, I went to app>utils>constants.ts, looked for "OLLAMA_API_BASE_URL", and put http://127.0.0.1:11434 there instead of "localhost".
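In other words, the line ends up looking something like this (a sketch; the rest of constants.ts is unchanged):

export const OLLAMA_API_BASE_URL = 'http://127.0.0.1:11434'; // was 'http://localhost:11434'

Since the repo's .env.example also lists OLLAMA_API_BASE_URL, setting it there instead of editing the source might be the cleaner fix, assuming the app reads it from the environment.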

Then for ChatGPT, I got this error:

responseBody:

{
    "error": {
        "message": "The model `gpt-4o` does not exist or you do not have access to it.",
        "type": "invalid_request_error",
        "param": null,
        "code": "model_not_found"
    }
}

That's an interesting one: localhost is generally the same as 127.0.0.1, but the "ECONNREFUSED ::1:11434" in your log suggests Node resolved localhost to the IPv6 loopback (::1) while Ollama was only listening on IPv4, which would explain why hard-coding 127.0.0.1 helped. What OS are you on? And have you done anything to tweak the networking on your machine?

For that GPT error, did you set up billing in the OpenAI developer console?
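You can also check which models your key can actually see by listing them; a minimal sketch in TypeScript (assuming Node 18+ so fetch is built in, with the key in OPENAI_API_KEY):

// list-models.ts — standalone check, not part of the bolt code base
const res = await fetch('https://api.openai.com/v1/models', {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});
if (!res.ok) {
  // 401 usually means a bad key; other errors can point at billing or tier limits
  console.error('Request failed:', res.status, await res.text());
} else {
  const { data } = (await res.json()) as { data: { id: string }[] };
  console.log(data.map((m) => m.id).sort().join('\n'));
}

If gpt-4o doesn't show up in that list, the key's account simply doesn't have access to it yet.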

Hi, you're right about the billing: I didn't set it up for OpenAI, but I did for Anthropic.
With Anthropic it is working, but bolt just stops partway through creating the files...
For now the best results I've had were with Ollama, and it's free.
Can we use any model locally (after downloading it from Hugging Face?) without being forced to use the APIs in general?

Just asking, but is there a problem with this line in app>utils>constants.ts: export const DEFAULT_MODEL = 'claude-3-5-sonnet-latest'; ?

Another thing I found: pasting the API keys directly into bolt doesn't save them to my .env file. Is that the normal behaviour?

Sorry for all those questions; I'm still new and trying to understand how things work.