This new version of bolt.diy is very bad at generating interfaces with all LLM models, much inferior to oTTodev, which was the initial fork of the project. Has anyone tried the version before bolt.diy? That version worked well. In this image I show you tha…

I’ve never had Gemini 1.5 Flash work for me, but I’ve had reasonable success with Flash 2.0.

I have done the following to explain a couple of fundamentals to you - entirely from my perspective.

Firstly, I converted the Spanish to English so I understood the logic in the request. This is important, I believe. While the LLM might be able to translate language, it’s much more difficult for the LLM to understand a request if the prompt doesn’t make sense. Similarly, if you translate, it may seem you are making a statement rather than asking a question. Happy to be educated better around this also.

podrias crear un dashboard con muchos graficos con valores inventados

You could create a dashboard with many graphs with invented values

I changed the prompt as below…

create a dashboard with many graphs with random values
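(As an aside: with a prompt like this, the model typically has the generated component fabricate its own data. A minimal sketch of the kind of helper it might produce — hypothetical, not Bolt’s actual output:)

```javascript
// Hypothetical helper for "graphs with random values":
// builds an array of {x, y} points with y in the range 0-100.
function makeRandomSeries(points) {
  return Array.from({ length: points }, (_, i) => ({
    x: i,
    y: Math.round(Math.random() * 100),
  }));
}

// Example: data for six graphs, twelve points each.
const graphs = Array.from({ length: 6 }, (_, id) => ({
  id,
  data: makeRandomSeries(12),
}));
```

The point is only that “random values” asks the model to invent data inline, so no backend or dataset is needed.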

I will document my approach and how you may still need to help Bolt complete some tasks. I found a couple of issues here around JSX files and missing code elements; it happened twice, as you’ll see.

Cheat Help - I actually used ChatGPT to assist with the error determination because I find it’s very fast and often provides insight where I just missed something very simple. Also helps keep my chat history in Bolt nice and neat rather than asking it to fix several issues manually.


Had a code completion issue with Dashboard.jsx - went to ChatGPT to fix

import React from 'react';
import Dashboard from './components/Dashboard';

function App() {
  return (
    <div>
      <h1>Dashboard</h1>
      <Dashboard />
    </div>
  );
}

export default App;

Another code issue - this time with App.jsx for the same reason - no wrapping, as below…

error:
Expected "}" but found "."
11 | return (
12 |
13 | {graphs.map((graph) => (
   |  ^
14 |
15 | ))}
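For anyone hitting the same thing: JSX requires a single parent element around sibling nodes, so the fix is to wrap the mapped graphs in one container — roughly like this (the `Graph` component name is illustrative, not the exact code Bolt produced):

```jsx
// Wrap the mapped children in a single parent so the JSX parses.
return (
  <div className="dashboard">
    {graphs.map((graph) => (
      <Graph key={graph.id} data={graph.data} />
    ))}
  </div>
);
```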

Then another error launching…


Now the graph is working as below…

So, in short: I don’t believe your initial statement about the earlier version of Bolt.diy. At the least, you haven’t explained which LLM you used with Ottodev, or what your previous prompt was.

I’m not being critical, just trying to help you in the right direction. What I’ve posted above is all correct, and LLMs still ‘think’ and deconstruct your requirements using English-styled logic. Once again, happy for you to prove me wrong and show some random prompts and how successful they are.

You also need to take some time to work out which prompts give the results you’re after. If you want to use free LLMs like Google Gemini or DeepSeek, then be aware of the abilities of each. The smaller LLMs can be utilised for local builds if that was your desire, but other than that I don’t see why you’d want to use the smaller LLMs for online development. Their code completion always requires more manual assistance. Live your best life and use the right LLM for your needs.

This is why Bolt.new uses Anthropic’s Claude 3.5 and is tailored with enhancements and optimizations. It’s very good at code completion on its first pass - but you have to pay for it.

Also, don’t forget to switch on context optimization in the system settings. This will help as your build starts to increase in size along with the requests.

All the best…



Create the same dashboard with jdoodle.ai


And similar first pass for Bolt.new

And again jdoodle.

Haha. Funny thing: I thought I’d try Bolt.diy using Claude 3.5 Sonnet, so I bought some credits and updated the API, and it completed on the first pass OK - but with half the results.

I’ll try back-to-back with a couple of other LLMs through OpenRouter. That’s the idea with Bolt.diy: you have options. It’s not always the same result, and as newer solutions come onto the field you can take a look and see what you find.


Is this your website? Is it a fork of Bolt?