Describe the bug
When I use the OpenAILike provider, I can fetch the Qwen model list normally, but when I select a model and send a request, the browser console shows POST http://localhost:5173/api/llmcall 500 (Internal Server Error) and the terminal prints the following:
APICallError [AI_APICallError]: <400> InternalError.Algo.InvalidParameter: Range of max_tokens should be [1, 2048]
at file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@ai-sdk+provider-utils@2.1.2_zod@3.24.1/node_modules/@ai-sdk/provider-utils/src/response-handler.ts:59:16
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at postToApi (file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@ai-sdk+provider-utils@2.1.2_zod@3.24.1/node_modules/@ai-sdk/provider-utils/src/post-to-api.ts:81:28)
at OpenAIChatLanguageModel.doGenerate (file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@ai-sdk+openai@1.1.2_zod@3.24.1/node_modules/@ai-sdk/openai/src/openai-chat-language-model.ts:367:50)
at fn (file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/ai@4.1.2_react@18.3.1_zod@3.24.1/node_modules/ai/core/generate-text/generate-text.ts:331:30)
at file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/ai@4.1.2_react@18.3.1_zod@3.24.1/node_modules/ai/core/telemetry/record-span.ts:18:22
at retryWithExponentialBackoff (file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/ai@4.1.2_react@18.3.1_zod@3.24.1/node_modules/ai/util/retry-with-exponential-backoff.ts:37:12)
at fn (file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/ai@4.1.2_react@18.3.1_zod@3.24.1/node_modules/ai/core/generate-text/generate-text.ts:291:32)
at file:///Users/admin/Desktop/bolt.diy/node_modules/.pnpm/ai@4.1.2_react@18.3.1_zod@3.24.1/node_modules/ai/core/telemetry/record-span.ts:18:22
at llmCallAction (/Users/admin/Desktop/bolt.diy/app/routes/api.llmcall.ts:104:22)
at Object.callRouteAction (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+server-runtime@2.16.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/data.js:36:16)
at /Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:4934:19
at callLoaderOrAction (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:4998:16)
at async Promise.all (index 0)
at defaultDataStrategy (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:4807:17)
at callDataStrategyImpl (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:4870:17)
at callDataStrategy (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:4027:19)
at submit (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:3790:21)
at queryImpl (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:3719:22)
at Object.queryRoute (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+router@1.23.0/node_modules/@remix-run/router/router.ts:3664:18)
at handleResourceRequest (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+server-runtime@2.16.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:420:20)
at requestHandler (/Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+server-runtime@2.16.0_typescript@5.7.2/node_modules/@remix-run/server-runtime/dist/server.js:166:18)
at /Users/admin/Desktop/bolt.diy/node_modules/.pnpm/@remix-run+dev@2.16.0_@remix-run+react@2.16.0_react-dom@18.3.1_react@18.3.1__react@18.3.1_typ_p5vtzgbgkqnkvdtz7nu6nrsb2i/node_modules/@remix-run/dev/dist/vite/cloudflare-proxy-plugin.js:70:25 {
cause: undefined,
url: 'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions',
requestBodyValues: {
model: 'qwen-vl-max',
logit_bias: undefined,
logprobs: undefined,
top_logprobs: undefined,
user: undefined,
parallel_tool_calls: undefined,
max_tokens: 8000,
temperature: 0,
top_p: undefined,
frequency_penalty: undefined,
presence_penalty: undefined,
response_format: undefined,
stop: undefined,
seed: undefined,
max_completion_tokens: undefined,
store: undefined,
metadata: undefined,
prediction: undefined,
reasoning_effort: undefined,
messages: [ [Object], [Object] ],
tools: undefined,
tool_choice: undefined,
functions: undefined,
function_call: undefined
},
statusCode: 400,
responseHeaders: {
'content-encoding': 'gzip',
'content-type': 'application/json',
date: 'Sun, 23 Mar 2025 14:48:25 GMT',
'req-arrive-time': '1742741305230',
'req-cost-time': '163',
'resp-start-time': '1742741305393',
server: 'istio-envoy',
'set-cookie': 'acw_tc=d1461275-0c16-99bc-a0d2-fa67891ff92c306386e7fdbb01f208c48a073ce93826;path=/;HttpOnly;Max-Age=1800',
'transfer-encoding': 'chunked',
vary: 'Origin,Access-Control-Request-Method,Access-Control-Request-Headers, Accept-Encoding',
'x-dashscope-call-gateway': 'true',
'x-envoy-upstream-service-time': '162',
'x-request-id': 'd1461275-0c16-99bc-a0d2-fa67891ff92c'
},
responseBody: '{"error":{"code":"invalid_parameter_error","param":null,"message":"<400> InternalError.Algo.InvalidParameter: Range of max_tokens should be [1, 2048]","type":"invalid_request_error"},"id":"chatcmpl-d1461275-0c16-99bc-a0d2-fa67891ff92c","request_id":"d1461275-0c16-99bc-a0d2-fa67891ff92c"}',
isRetryable: false,
data: {
error: {
message: '<400> InternalError.Algo.InvalidParameter: Range of max_tokens should be [1, 2048]',
type: 'invalid_request_error',
param: null,
code: 'invalid_parameter_error'
}
},
[Symbol(vercel.ai.error)]: true,
[Symbol(vercel.ai.error.AI_APICallError)]: true
}
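From the responseBody above, the root cause looks like bolt.diy sending max_tokens: 8000 while qwen-vl-max only accepts values in [1, 2048]. Below is a minimal sketch of the kind of clamp I would expect somewhere around the generateText call; the wiring and constant names here are my own assumptions, not the project's actual api.llmcall.ts code.

```typescript
// Hypothetical workaround sketch, not bolt.diy's actual route code:
// clamp max_tokens to the limit DashScope reports for this model.
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const QWEN_VL_MAX_TOKEN_LIMIT = 2048; // limit taken from the 400 error above
const DEFAULT_MAX_TOKENS = 8000;      // value shown in requestBodyValues above

const openAiLike = createOpenAI({
  baseURL: process.env.OPENAI_LIKE_API_BASE_URL, // e.g. https://dashscope.aliyuncs.com/compatible-mode/v1
  apiKey: process.env.OPENAI_LIKE_API_KEY,
});

const { text } = await generateText({
  model: openAiLike('qwen-vl-max'),
  // Keep the value inside the provider's documented [1, 2048] range.
  maxTokens: Math.min(DEFAULT_MAX_TOKENS, QWEN_VL_MAX_TOKEN_LIMIT),
  messages: [{ role: 'user', content: 'hello' }],
});
console.log(text);
```

With a clamp like this the same prompt goes through without the invalid_parameter_error, at the cost of shorter completions for this model.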
Separately, how do I configure an HTTP base URL for a privately deployed endpoint?
Also, the official adding-new-llms documentation mentions a MODEL_LIST constant in app/utils/constants.ts, but I couldn't find it.
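For reference, this is roughly the static entry I expected to be able to add after reading that doc. The ModelInfo shape below is my own guess, so the field names are assumptions, not confirmed against the current codebase:

```typescript
// Guessed shape of a model entry; I could not find the MODEL_LIST constant
// the docs refer to, so treat these field names as assumptions.
interface ModelInfo {
  name: string;            // model id sent to the provider
  label: string;           // display name in the UI
  provider: string;        // provider key, e.g. 'OpenAILike'
  maxTokenAllowed: number; // per-model cap on max_tokens
}

const qwenStaticModels: ModelInfo[] = [
  {
    name: 'qwen-vl-max',
    label: 'Qwen VL Max (DashScope)',
    provider: 'OpenAILike',
    maxTokenAllowed: 2048, // stay inside the [1, 2048] range from the error above
  },
];
```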
Link to the Bolt URL that caused the error
Steps to reproduce
- Configure OPENAI_LIKE_API_BASE_URL and OPENAI_LIKE_API_KEY in .env.local (see the example after these steps)
- Select the OpenAILike provider, load the model list, and pick any model
- Type in some random text and send it
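For completeness, this is roughly what my .env.local looks like. Only the variable names come from the steps above; both URLs below are examples, and the private-gateway host is a placeholder:

```dotenv
# .env.local — variable names from the steps above; URLs are examples only.
OPENAI_LIKE_API_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
# or, for a privately deployed gateway (placeholder host):
# OPENAI_LIKE_API_BASE_URL=http://llm-gateway.internal.example.com/v1
OPENAI_LIKE_API_KEY=sk-xxxxxxxxxxxxxxxx
```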
Expected behavior
A privately deployed HTTPS (or HTTP) address can be configured, and requests to the selected model return a normal response instead of a 500 error.
Screen Recording / Screenshot
No response
Platform
- OS: macOS
- Browser: Chrome
- Version: 134.0.6998.118 (arm64)
Provider Used
No response
Model Used
No response
Additional context
No response