Folder priorities / dynamic throttling

So, not necroing, just following up on the earlier topic: Folder Priorities - #3 by AudriusButkevicius

Folder priorities would be nice. If I have a folder with a more critical 1 GB of unsynced data, I want it synced with priority, throttling the 1 TB background chugger folder. Actual network speed is 5-6 GB/hr.

Since the 1 TB folder has (already) been chugging along, it tends to take priority. It's annoying and creates sync conflicts when the smaller / more frequently changed folders get bogged down. I tend to go in and pause the syncs manually (which risks forgetting to unpause, losing progress and wasting potential bandwidth).

I’d propose¹ implementing it in ± three steps:

  1. Allow throttling folders.
  2. Allow throttling folders percentage-wise, as a share of the available bandwidth. (How to determine that? One option is the device-wide bandwidth limit, but what about local devices? It would be better to look at the total bandwidth and restrict there. With units, the idea is: for every byte a folder with unit 1 syncs, a folder with unit k should be allowed to sync/request k bytes. And what about syncs that are stuck and not using their allocated bandwidth? See the sketch below.)
  3. Allow setting units on folders (from which the percentages are derived, or not).

¹ I might be motivated to implement it at some point, but I'm not making any commitment or taking an assignment :wink:
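
To make the "units" idea a bit more concrete, here is a minimal sketch of what I mean — my own illustration, not Syncthing code; names like `folderWeight` and `buildLimiters` are made up — turning per-folder weights into per-folder rate limits derived from a total bandwidth budget, using golang.org/x/time/rate:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// folderWeight is a hypothetical per-folder "unit": a folder with weight 10
// should get roughly ten times the bandwidth of a folder with weight 1.
type folderWeight struct {
	id     string
	weight float64
}

// buildLimiters splits a total bandwidth budget (bytes/s) across folders in
// proportion to their weights and returns one token-bucket limiter per folder.
func buildLimiters(totalBytesPerSec float64, folders []folderWeight) map[string]*rate.Limiter {
	var sum float64
	for _, f := range folders {
		sum += f.weight
	}
	limiters := make(map[string]*rate.Limiter, len(folders))
	for _, f := range folders {
		share := totalBytesPerSec * f.weight / sum
		// Burst of one maximum-size block (16 MiB) so a single large
		// request can still pass through WaitN.
		limiters[f.id] = rate.NewLimiter(rate.Limit(share), 16<<20)
	}
	return limiters
}

func main() {
	// ~5.5 GB/hr from the post above, expressed as bytes per second.
	total := 5.5e9 / 3600
	limiters := buildLimiters(total, []folderWeight{
		{id: "critical-1G", weight: 10},
		{id: "archive-1T", weight: 1},
	})
	for id, l := range limiters {
		fmt.Printf("%s: %.0f bytes/s\n", id, float64(l.Limit()))
	}
	// Before each block request of n bytes, a puller would then call
	// limiters[folderID].WaitN(ctx, n) to stay within its share.
}
```

The awkward part the parenthesis in step 2 hints at is still open: when a folder is stuck or idle, its unused share should ideally flow back to the busy folders rather than sit in its bucket.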

OK, isn't this just another one of those "the world could be better, but hey, look at it, it isn't, and it's not going to be, is it?" topics?


Internally in Syncthing, folders are just threads (-ish) that throw requests at the available connections to other devices. So, absent data pointing in another direction, I'd presume that two syncing folders get roughly 50% each of the available I/O and network capacity. That is, I think things should be fair by default.

(There are of course details; larger files use larger blocks, so given equal shares of requests, the folder with larger files will consume a larger share of the available bandwidth and I/O for the responses. Perhaps this is what you're seeing.)
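
A rough back-of-the-envelope illustration of that last point (assuming the variable block sizes recent Syncthing versions use, from 128 KiB for small files up to 16 MiB for very large ones; the scenario itself is hypothetical):

```go
package main

import "fmt"

func main() {
	// Two folders issue requests at the same rate, but one syncs small
	// files (128 KiB blocks) and the other syncs huge files (16 MiB
	// blocks). Equal request shares != equal bandwidth.
	const (
		smallBlock = 128 << 10 // 128 KiB
		largeBlock = 16 << 20  // 16 MiB
	)
	small := float64(smallBlock)
	large := float64(largeBlock)
	fmt.Printf("large-file folder gets %.1f%% of the bytes\n",
		100*large/(small+large))
	fmt.Printf("small-file folder gets %.2f%% of the bytes\n",
		100*small/(small+large))
	// Roughly 99.2% vs 0.8% -- which could look a lot like the big
	// archive folder "taking priority" over the small one.
}
```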


Perhaps take the existing global bandwidth limiter and allow it to be set at the per-folder level?
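
For anyone who wants to picture it, a minimal sketch of that suggestion — illustrative only, not Syncthing's actual internals; `limitedWriter` and `NewFolderWriter` are invented names — applying a limiter per folder by wrapping the connection's writer, in the same spirit as the existing global limiter:

```go
// Package folderlimit is an illustration only; the names are invented and
// this is not Syncthing's actual implementation.
package folderlimit

import (
	"context"
	"io"

	"golang.org/x/time/rate"
)

// limitedWriter funnels a folder's outgoing block data through that folder's
// own *rate.Limiter, scoped per folder instead of per device.
type limitedWriter struct {
	w   io.Writer
	lim *rate.Limiter
}

func (lw *limitedWriter) Write(p []byte) (int, error) {
	// Block until the folder's limiter allows len(p) more bytes.
	// (WaitN requires len(p) <= burst, so burst must cover the largest write.)
	if err := lw.lim.WaitN(context.Background(), len(p)); err != nil {
		return 0, err
	}
	return lw.w.Write(p)
}

// NewFolderWriter wraps a connection's writer with a per-folder limit given
// in KiB/s, mirroring the units of the existing global bandwidth limiter.
func NewFolderWriter(conn io.Writer, kibPerSec int) io.Writer {
	return &limitedWriter{
		w:   conn,
		lim: rate.NewLimiter(rate.Limit(kibPerSec*1024), 16<<20),
	}
}
```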


Hopefully you don’t mind if I revive the topic.

I suffer from the same problem. Rather than suggesting solutions to you, who are undoubtedly the experts, perhaps I'd better share my observations and problems.

My device group consists of three main devices, plus five others that share only a few folders (two of the five are Android devices sharing ~40 GB in ~17,000 files; recently they have also been affected by the issues mentioned below). Syncthing versions range from 1.26.1 to 1.27.2-preview.1. A senile Windows laptop and, recently, a Raspberry Pi 4 share ~4 TB of data in ~700,000 files with my desktop PC (all NTFS). Especially since I set up the Pi as another "backup device" (to one day replace the laptop), I've had numerous issues with "out of sync" and "local additions" on all three devices, and when I rename a file on my desktop computer, the old version of the file often reappears within a few seconds.

I'm really worried that detection of the most recent change on the network has become unreliable at my site. Because the wimpy backup devices take days rather than hours to rescan the folders in full (the full rescan interval is set to 1 week), I wonder whether file system watcher events might be processed late, causing such conflicts. These devices are all on the local network, so it doesn't look like a transfer bottleneck to me, but rather a processing bottleneck, or perhaps differences in local system time, or a harmful delay between detection and processing of changes.

Like the other posters, I have small but vital folders with frequent updates, and large archive folders with little change. I trust that with just the small folders, the reliability and responsiveness of mirroring would be fine (the responsiveness of the web interface is mostly fine). I will try pausing the archive folders manually.

I'd better admit I did not read the code in advance, but I still dare to ask: would it be an option to manually set copiers and hashers to 1 for the archive folders, to shift processing capacity towards the more frequently changing folders?
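
For reference, copiers and hashers are already per-folder settings; they can be changed in the GUI under the folder's Advanced options or directly in config.xml. A purely illustrative snippet of the relevant part (folder id/path are placeholders, 0 means "automatic", other attributes and elements omitted):

```xml
<folder id="archive" label="Archive" path="/data/archive" type="sendreceive">
    <!-- 0 = automatic; setting these to 1 for the big archive folder leaves
         more CPU/IO headroom for the frequently changing folders -->
    <copiers>1</copiers>
    <hashers>1</hashers>
</folder>
```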