Writes the code but Preview not working: [RSS] Invalid or missing options

I just installed and started using bolt.diy, but I'm already stuck.

I gave it a prompt just for a test, “build blog” being one of them, and it wrote the code (in normal Chrome and Canary) and stopped. It did not run dev automatically, so I ran it in the terminal.

npm run dev

astro-blog@0.0.1 dev
astro dev

[config] Astro found issue(s) with your configuration:
! integrations.1 Expected object, received promise.

Error: [RSS] Invalid or missing options:
“” is required. ()
at validateRssOptions (file:///home/project/node_modules/@astrojs/rss/dist/index.js:57:26)
at async Module.getRSS (file:///home/project/node_modules/@astrojs/rss/dist/index.js:47:31)

Node.js v18.20.3

~/project 25s

~/project
❯ npm run dev

astro-blog@0.0.1 dev
astro dev

Is this a problem with Node.js?

I had an issue when I installed it, but I repaired it (reinstalled it, more or less) and moved on to installing bolt.diy.

I did check for updates:
pnpm install
git pull origin main

All checked. I'm using GPT-4o and Gemini 2.0 Flash in the normal Chrome browser; it looks the same in Canary, with the same terminal error.

Please help me :wink:

Hey,

Use the stable branch if you have trouble:

git checkout stable
git pull
pnpm install
pnpm run dev

I tested your prompt and it worked fine for me.

Thank you, and I tried it, but I get the same error, astro rss and so on…

The other test was not the blog-making one but an app. It writes the code, but the preview doesn't change; the UI looks like something from the old internet. The prompt doesn't affect the preview, although the button animation works when the cursor hovers over it. Also, a snake game looks OK.

Any ideas???

OK, just saw that it is Astro. I tested it as well and got the same problems you did. It looks like Astro in general is not handled very well.

I had some luck when using more advanced prompts with more instructions on what to do, but the output was still very bad.
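
For anyone debugging this, both errors come from the files bolt generates, not from Node.js. “Expected object, received promise” means something async ended up in the integrations array of astro.config.mjs (it has to hold plain integration objects, not an un-awaited import(…)), and the RSS crash means the generated rss.xml endpoint calls @astrojs/rss without its required options. A minimal sketch of a valid endpoint (placeholder names, assuming a current Astro version with GET exports) would be:

```ts
// src/pages/rss.xml.ts — a minimal sketch, not what bolt actually generated.
import rss from '@astrojs/rss';
import type { APIContext } from 'astro';

export function GET(context: APIContext) {
  return rss({
    // title, description, site, and items are all required;
    // a missing or empty one triggers "[RSS] Invalid or missing options".
    title: 'My Blog', // placeholder
    description: 'A test blog generated by bolt.diy', // placeholder
    site: context.site ?? 'https://example.com', // context.site comes from `site` in astro.config.mjs
    items: [
      {
        title: 'First post', // placeholder entry
        pubDate: new Date(),
        description: 'Hello world',
        link: '/blog/first-post/',
      },
    ],
  });
}
```

So until Astro is handled better, it might help to tell bolt explicitly to fill in the RSS title, description, and site fields.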

Maybe someone else has more ideas/insights on this (@thecodacus or @aliasfox maybe?)

Oh, thank you for confirming it's not only me. I think I might go back to normal bolt.new. I was thinking it's way too expensive with the error fixing, and it doesn't follow my instructions (a bit stressful), but the quality is way better than bolt.diy. I don't know if it's me or maybe the API? Are OpenAI and Google not good for this job?

This build really has a plain look, and the prompt doesn't affect it; I can't figure it out. Maybe I'll try again… Gemini 2.0 writes code really fast, by the way.

Anyway, thank you for your help :wink:

I could be wrong, but it seems you have built it with Vite.
It doesn't work in bolt.diy for some reason.
I mean, it will, if you ask bolt to fix and fix and fix…

But if you ask it to use Next.js, for example, it will work perfectly.

This prompt was done in 8 seconds, with no errors or anything.


@Arka I don't think the problem is Vite. It is “astro”, as mentioned in my post.


My reply was about the last prompt, with the BugShare app, not the blog :slight_smile:


I didn't see an error in this. I thought he just wanted to show this working :smiley:


Yes, it wasn't able to build anything with Vite and React. I will try with “next”, thank you!


Haha, no, I wasn't. It looks ugly, and I just wanted people to know about this “bug” or error. So now I can try another round! Thank you :wink: Do you know how to hook Ollama up to bolt.diy? In my command window it keeps showing me “fetching failed”, and that causes some disconnects or glitches. And do you know what the orange dot in the code tab means? It keeps showing while the AI writes the code.

Thanks guys, I confirm that it's working even on an old laptop. There's a little glitch when I use it in public on free Wi-Fi, but normally it is super fast with Gemini 2.0 (free so far, I guess, otherwise I'd be bankrupt). Appreciate it!


That's not the fault of bolt :slight_smile: => it depends on your prompt. Try to do something like:

Implement a simple bug tracker app. Apply a clean, modern UI design with rounded corners, subtle shadows, and a minimalistic color scheme using soft gradients and ample white space.

which will lead to a much cleaner result (Gemini 2.0 Flash in my case).

See also the Tips and Tricks in the Docs:
https://stackblitz-labs.github.io/bolt.diy/#tips-and-tricks


Ollama => check this thread: How to run an ollama model with locally installed OttoDev?
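
The short version, as a sketch (based on the docs at the time, so double-check the thread): point bolt.diy at your local Ollama server via .env.local in the project root, and make sure Ollama itself is running.

```
# .env.local — assuming a default local Ollama install on port 11434
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
```

Then start Ollama (ollama serve) before pnpm run dev; “fetching failed” usually just means nothing is answering at that URL.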


I used a GPT-made prompt as well, and it only gave me that ugly UI too (caused by Vite and React?). It works fine in Next.js projects, and yes, my prompts aren't good enough, so most of the time I use GPT :wink: And thank you for the Ollama instructions. By the way, did you ever get this warning: “WARN Constants Failed to get Ollama models: fetch failed WARN Constants Failed to get Ollama models: fetch failed WARN Constants Failed to get Ollama models: fetch failed”? I keep getting this notification in the CMD window.


Ollama warning => yes, I get them too, even when Ollama is disabled. This could be improved, I guess, and not shown when Ollama is disabled anyway.

You could create a ticket on GitHub for this.


Hmm, I don't know how to create a ticket; does that mean creating a post like this one? It is super annoying; it keeps trying to fetch, which affects my network? I'm not sure, but I feel it's laggy, and my system is already old… I just want to disable it but can't find a simple answer :wink: Thank you for helping me.

You can create an issue ticket on GitHub: https://github.com/stackblitz-labs/bolt.diy/issues

But I don't think this is the cause of a laggy system. It just tries to fetch the available models, which is a very simple call and should not cause a laggy system, but maybe @thecodacus or @aliasfox have more insights on this.
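
For context, the lookup is roughly this (a sketch with assumed names, not the actual bolt.diy source; /api/tags is Ollama's real endpoint for listing installed models):

```ts
// Sketch of the startup model lookup (assumed names, not the real source).
// Ollama lists its installed models at GET <base-url>/api/tags.
const OLLAMA_API_BASE_URL =
  process.env.OLLAMA_API_BASE_URL ?? 'http://127.0.0.1:11434';

async function getOllamaModels(): Promise<string[]> {
  try {
    const response = await fetch(`${OLLAMA_API_BASE_URL}/api/tags`);
    const data = (await response.json()) as { models: { name: string }[] };
    return data.models.map((model) => model.name);
  } catch {
    // This is the branch behind the warning: nothing answered on that port.
    console.warn('Failed to get Ollama models: fetch failed');
    return [];
  }
}
```

It fails fast when no server is listening, so it should not make the system laggy.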


Yes, that makes sense.
