We are running into timeouts when we test with many files changing at the same time (20-30 or more). With the delays from the model queries and other steps, some of the later workflow executions stay running forever and never finish.
As I understand it, the local file trigger fires once per file changed/created/deleted and reports a single file, so I am also not sure why the loop was needed (I understand it was not there at the beginning).
The ideal scenario for us would be that each file waits for the whole process to finish before the next one starts, and so on. I also understand that in n8n I cannot limit the maximum number of concurrent executions of a specific workflow.
What we would need is either for the workflow to run only once per file, or a way to queue the executions so each one starts in order only after the previous one has finished, which might require something like Redis and a queue?
The idea is that if XX files suddenly change, only one (or x) workflow executions run at the same time, even if the whole process ends up taking longer.
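To make it concrete, this is roughly what I am picturing: the file watcher only pushes the changed path onto a Redis list, and a single worker pops one path at a time and calls the heavy workflow (e.g. via a webhook), only moving on once that call returns. Just a sketch, not something I have running; the queue name, webhook path and port are placeholders.

```python
import redis
import requests

r = redis.Redis()

# Producer side: instead of starting the heavy workflow directly,
# the file-change trigger just pushes the path onto a Redis list.
def enqueue(file_path: str) -> None:
    r.rpush("file-queue", file_path)

# Consumer side: a single worker drains the queue one item at a time,
# so only one workflow execution is in flight at any moment.
def worker() -> None:
    while True:
        _, file_path = r.blpop("file-queue")  # blocks until an item arrives
        requests.post(
            "http://localhost:5678/webhook/process-file",  # placeholder webhook URL
            json={"path": file_path.decode()},
            timeout=600,  # allow for the slow model queries
        )

if __name__ == "__main__":
    worker()
```

Is something along these lines the recommended approach, or is there a more n8n-native way to do it?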
Any help would be appreciated.