Proof of concept for returning to an older chat state, demonstrating how to work with Bolt.New chat history

Here is the PR

I just made it quickly, as it sounded like super low-hanging fruit.

It demonstrates that the single source of truth in Bolt/oTToDev is the chat history.

If you manipulate it and reload the page, the project “history” changes, including the files.

I think it’s a powerful abstraction that should stay.
Want to change files? Manipulate the chat history.

This way the AI is aware of changes, and if messages modify files or run commands, those commands are executed and the files are updated on replay.
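A minimal sketch of what “manipulate chat history” means here, assuming a simplified message shape (the real store lives in the browser’s IndexedDB and its schema may differ):

```typescript
// Assumed, simplified shapes for illustration; not the actual oTToDev schema.
interface ChatMessage {
  id: string;
  role: 'user' | 'assistant';
  content: string; // may embed artifacts: file writes, shell commands, etc.
}

interface Chat {
  id: string;
  messages: ChatMessage[];
}

// Revert the chat to the state just after `messageId` by truncating the
// message list. Reloading the page then replays the remaining history,
// which recreates the files and re-runs the commands up to that point.
function revertToMessage(chat: Chat, messageId: string): Chat {
  const index = chat.messages.findIndex((m) => m.id === messageId);
  if (index === -1) throw new Error(`message ${messageId} not found`);
  return { ...chat, messages: chat.messages.slice(0, index + 1) };
}
```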

Even file upload can (and in my view should) be done through that mechanism.

This raises a concern about context size.

To address that, my plan for the future is to introduce modes that allow sending the chat to the AI selectively. Basically, add chat context curation targeted at better token usage.

Before that, though, I want to add token usage to chat messages (maybe even with cost).
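For the token counts, a rough heuristic would be enough to start. A hypothetical sketch (the ~4 characters-per-token ratio and the flat price constant are assumptions; a real implementation would use the provider’s tokenizer and price list):

```typescript
// Very rough, provider-agnostic estimate: ~4 characters per token for
// English text. Real counts come from the model's own tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Hypothetical flat price per million tokens; actual pricing varies by model.
function estimateCostUSD(tokens: number, usdPerMillionTokens = 3): number {
  return (tokens / 1_000_000) * usdPerMillionTokens;
}
```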

Thoughts, counter arguments, proposals?

2 Likes

This is interesting. What will happen when you reply on a rewound history?

1 Like

I have so many thoughts, and you address most of the technical aspects here. In short, since this will definitely include things that land on the roadmap:

Good thoughts:

  • Chat history intact, action runner behavior intact
  • Local directory/zip considerations (I think honestly these remain separate featurewise, but would utilize this process)
  • Replay, which doesn’t seem to work well here currently, or even in the commercial service.

Hesitations:

  • I don’t know enough about where the LLM token cost impact hits here, for things like project imports. We should avoid any impact of that sort, which can have real costs outside of local AI. Anyone thinking about this solution is also thinking about that, I know. I would love to hear input here so I can learn.
  • As much as it is important to maintain the chat as an intuitive user interface, I think it is equally important for us to maintain a UX behavior pipeline from web frontend → oTToDev core code → WebContainer modification. This is especially important to me for large injections such as project imports, or the injection of context (PDF, Markdown docs, etc.) that we may be able to skip ingesting via the LLM, instead providing that context in current/future forms including RAG, etc.

This is a huge help to me personally in understanding the codebase better, and in beginning to answer more questions than I ask around these important tools. Thanks to you, @thecodacus, and the other community members on related PRs for focusing on this important topic. :raised_hands:

I’d really like to hear what people think about how this user experience should develop, and if anyone is interested in this part of the work, by all means come help contribute!

Edit: I said “in short” at the beginning, sorry about that haha

1 Like

I would propose that the original stem of that history needs to remain, in order to go back to it if path B didn’t work out. This would probably require a branching structure for the message history, which feels significant if it’s linear at the moment. I haven’t looked into that yet.

I have mixed feelings about this approach. It’s great when you’re trying to undo a step you just did and want to see a different outcome.

But at the same time, I don’t like the fact that when we reopen a big project after some time, it starts from the beginning and:

  • writes all the files
  • executes all the commands
  • rewrites files n number of times
  • then finally reaches the point where the user can start working

It’s powerful but less scalable, in my opinion. Maybe we can have a combination of both.

2 Likes

At the moment the original branch will remain until you make a change; then it will be overwritten.

That’s my issue as well. I’d suggest we prioritize the bootstrap that takes place when revisiting a chat on the roadmap, while a compromise solution is worked out for this one. Both are complex, require a collective decision, and deserve a good amount of time to get figured out. :raised_hands: I’ll be poking at my small piece of the puzzle later today.

Target and locked files, a recent feature of the commercial service, might be a good place to start for what @thecodacus mentions.

Got it, that makes sense.

To some extent that is how Git works: at the storage level it keeps changes (deltas) rather than full copies of every snapshot, and it does so efficiently.

In the case of Bolt and chat messages, it is not super efficient at the moment.

But storing whole snapshots for each chat message would also be inefficient, in a different way.

There is also an issue with snapshots around commands.
For simple cases we can just keep something like the last command of each type, run in the sequence they were encountered.

So we do not run npm install multiple times.

But there could be issues: if we did not run commands on the right version of the files, it would produce a different result than if we had played out the chat history as-is. That is problematic.

Also, this proposal so far just uses the way Bolt already works.

1 Like

BTW, about @mahoney’s question on branching:
that is a big change.

Instead I propose forking.
There would be two buttons instead of one: Revert, to roll back the history in the current chat, and Fork, which would create a copy of the current chat ending at the forked message.

That would allow keeping the older version of the chat while starting an alternative chat/history.
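A sketch of the difference, reusing the simplified Chat shape from earlier (names are illustrative, not the actual code):

```typescript
interface ChatMessage { id: string; role: 'user' | 'assistant'; content: string; }
interface Chat { id: string; description?: string; messages: ChatMessage[]; }

// Fork: copy the chat up to (and including) the forked message into a new
// chat, leaving the original history untouched. Revert would instead
// truncate the current chat in place (see revertToMessage above).
function forkChat(chat: Chat, messageId: string, newId: string): Chat {
  const index = chat.messages.findIndex((m) => m.id === messageId);
  if (index === -1) throw new Error(`message ${messageId} not found`);
  return {
    id: newId,
    description: `${chat.description ?? chat.id} (fork)`,
    messages: chat.messages.slice(0, index + 1),
  };
}
```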

I would not add UI/UX around “branching trees” at this point. Branching is a feature people want at times and would be good to deliver eventually, and we can iterate on it further as needed in follow-up PRs.

I am for small incremental changes in that sense.

1 Like

What if we store the files and a snapshot in browser storage at each AI reply, then write the latest snapshot id to a pointer (like a cookie or a db entry)?
The history keeps working as it does now, with undo and the other features, but when we refresh the browser it looks up the latest snapshot id from the pointer and loads that snapshot directly into the workbench.

The snapshot can also store the status of execution and the messages, so that the state can be recreated without re-running the commands each time we open the project.
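Something like this, as a rough sketch (the storage keys and the Snapshot shape are made up; localStorage is used for brevity, while file contents would realistically go to IndexedDB):

```typescript
// Assumed snapshot shape: tracked files plus which commands already ran.
interface Snapshot {
  id: string;
  files: Record<string, string>; // path -> content (node_modules excluded)
  executedCommands: string[];    // e.g. ['npm install', 'npm run dev']
}

const POINTER_KEY = 'bolt:latest-snapshot'; // hypothetical pointer key

// At each AI reply: persist the snapshot and move the pointer to it.
function saveSnapshot(snapshot: Snapshot): void {
  localStorage.setItem(`bolt:snapshot:${snapshot.id}`, JSON.stringify(snapshot));
  localStorage.setItem(POINTER_KEY, snapshot.id);
}

// On page refresh: follow the pointer and load the snapshot directly into
// the workbench instead of replaying the whole chat history.
function loadLatestSnapshot(): Snapshot | null {
  const id = localStorage.getItem(POINTER_KEY);
  if (id === null) return null;
  const raw = localStorage.getItem(`bolt:snapshot:${id}`);
  return raw === null ? null : (JSON.parse(raw) as Snapshot);
}
```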

Without running commands, it will not do things like npm install and other setup.

Or do you want to store the whole node_modules in snapshots?
I think that is crazy :smiley: No stab at you, just imagining storing multiple states of node_modules in Git-like fashion… Ugh :smiley:

1 Like

I don’t think anyone favors branching node_modules, haha. These are great lines of discussion. I’ve been exploring git that can run in the container in a fashion that doesn’t incur context-length waste or loss of context. I will draw some diagrams of my thoughts when I have time this week, not prescriptive at all but rather to help foster the conversation :raised_hands: Anyone else from the community who wants to chime in, please feel welcome!

Ohh no, we definitely should not add node modules :laughing:… Also, node_modules won’t show up in the file store, so we are good. There is a simple solution: add a message to the chat that says the project was restored, plus a message for the AI so that it knows it needs to run npm install. We can then exclude those first two messages from the history sent to the LLM, if that makes sense… or just keep them as a reference. A sketch of that idea follows below.
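A hypothetical sketch of that “restored project” marker (the message texts and ids are made up):

```typescript
interface ChatMessage { id: string; role: 'user' | 'assistant'; content: string; }

// When restoring from a snapshot, prepend a synthetic exchange so the AI
// knows the files are already in place but setup commands still need to run.
// These two messages could be excluded from the context sent to the LLM,
// or kept purely as a reference.
function restoredProjectMessages(snapshotId: string): ChatMessage[] {
  return [
    {
      id: `restore-${snapshotId}`,
      role: 'user',
      content: 'Project restored from a snapshot. Dependencies are not installed yet.',
    },
    {
      id: `restore-ack-${snapshotId}`,
      role: 'assistant',
      content: 'Understood. I will run `npm install` before executing anything else.',
    },
  ];
}
```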

1 Like

Just for context: node_modules hides in the background, and any discussion about it is just funny :upside_down_face:

On a serious note, execution queueing doesn’t behave well when someone reloads a /chat/[chat-slug-id] page. We should align that with how well it works on initial project start.

Yes, the file store ignores node_modules by default, so we only need to consider whatever is being tracked in the file store and snapshot that.

About the execution order… I am working on a fix.

2 Likes

Can you make a PR so that others can see what you are working on?

As for commands, I am thinking of making a dictionary with the command as the key and its order of execution as the value, so that at least the same command doesn’t run twice, and then running the commands against the latest state of the files, in order. A sketch follows below.

But I see these as optimisations on top.
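A minimal sketch of that dictionary idea (assuming commands are plain strings extracted from the history):

```typescript
// Map each command string to the index of its last occurrence, then keep
// only those last occurrences, in order. 'npm install' runs once even if
// it appears in many messages. The caveat from earlier still applies: a
// command may behave differently when run against a later version of files.
function dedupeCommands(commands: string[]): string[] {
  const lastIndex = new Map<string, number>();
  commands.forEach((cmd, i) => lastIndex.set(cmd, i));
  return commands.filter((cmd, i) => lastIndex.get(cmd) === i);
}

// dedupeCommands(['npm install', 'npm run dev', 'npm install'])
// -> ['npm run dev', 'npm install']
```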

I will add forking capability next and submit my PR for review and merge.

It seems to be parallel to your efforts, @thecodacus.

Yes, I just reverted the changes… There is an independent action runner for each artifact, which causes them to run in parallel; that was a miss on my side.
I am making an execution queue at the workbench level so that these actions can be queued.
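A minimal sketch of such a queue, assuming each action is an async function (the class and names are illustrative, not the actual implementation):

```typescript
// Chain all actions onto a single promise so they execute one at a time,
// in submission order, instead of in parallel per artifact.
class ExecutionQueue {
  private tail: Promise<void> = Promise.resolve();

  enqueue(action: () => Promise<void>): Promise<void> {
    // Run the action whether or not the previous one failed, and swallow
    // rejections on the stored tail so one failure cannot poison the queue.
    const next = this.tail.then(action, action);
    this.tail = next.catch(() => {});
    return next;
  }
}

// Usage: every artifact's action runner submits through the shared queue.
declare function runShellCommand(cmd: string): Promise<void>; // stand-in

const queue = new ExecutionQueue();
queue.enqueue(() => runShellCommand('npm install'));
queue.enqueue(() => runShellCommand('npm run dev'));
```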

2 Likes

@wonderwhy.er, check the updated implementation.

1 Like