I seem to have a love/hate relationship with concurrency. On the one hand, with the data being on USB, it’s great to limit the number of live scans, but on the other, nothing happens until the scan and sync have taken place.
I have 4 very large jobs, all between 400 GB and 1.2 TB. If I set concurrency too low and all those jobs are scanning, then nothing else is ever going to happen for days, if not weeks. If I set concurrency high, then there’s a huge amount of disk thrashing.
However, if concurrency were tweaked so that the limit applied only to the number of jobs being scanned, and the syncing operation did not count towards the concurrency limit, everything would be more efficient: syncing plods along at its own speed per job where necessary, while scanning continues through the rest of the jobs.
The situation is many more folders than the limit, so the mutual exclusivity for a single folder isn’t the issue. The problem is that a few huge operations can delay small ones for a really long time. Maybe an improvement would be to limit at a lower level (e.g. single file operations), but that would be much more complicated and the same thing could still happen (4 huge files blocking everything). If you have jobs with vastly different sizes and change ratios, a workaround might be to separate those into two different Syncthing instances.
I don’t really want to separate the jobs into separate St instances, but it had crossed my mind. I was looking at ways to optimise how things scan; for instance, under “Waiting to Scan” I can’t force a scan, as the option is greyed out. But there could (for me) be times when some scan jobs are being performed on one USB drive while the other two drives sit idle waiting. Perhaps an ability to override the concurrency could be an option?
At the moment I still have not had a full sync since 1.4 came out!!
I don’t think this is really about whether syncing is or isn’t counted in the concurrency, as syncing is similar to scanning I/O wise. Rather I guess it’s that since you have multiple USB disks (?) there really isn’t a relationship between folders on different disks and those should be in separate “concurrency groups”.
I’ve been separating the jobs over 4 disks, with the largest files now stored on RAID 10 and 2 PCs, but due to the sheer number of files and jobs it still takes a long time to scan all the folders.
So, thinking of concurrency again, the scanning is the single biggest slowdown. The thought is: concurrency is set as a min/max pair of concurrent folder operations.
So let’s say 3 is the minimum and 5 is the maximum. St starts and 3 folders run their scan. At the end of the scan, all 3 find files that need to be synced. The sync operation doesn’t need much I/O, so St then moves to folder 4 (whilst the 3 are syncing), scans it, and, for example, finds it’s up to date, so starts scanning a 5th folder. If that one is up to date too, it goes no further, as the max is 5. Ideally St scans at the lowest number (min) but syncs at the highest (max).
With a large collection of folders, I find that scanning takes forever while the syncing plods along, as that’s limited to internet speed. My current way around it is to pause everything after a new version or a PC restart, then resume one folder for scanning, and when it’s done scanning, resume another, and so on.
I have just upgraded Syncthing to 1.4.0, and unfortunately I must say that I am not a fan of the change from maxConcurrentScans to maxFolderConcurrency.
In my case, I have a mixture of HDDs and SSDs. Previously, I limited maxConcurrentScans in order to reduce the overhead on the HDDs. After the change, both syncs and scans are limited.
However, I personally do not care how many syncs are taking place at the same time, because my main problem is the bandwidth, not CPU or disk usage (especially for the SSDs). Right now, many of my folders are stuck. A few large files being slowly synced in separate folders can block scans and syncs of the other folders forever.
This was not the case before, when only scans were limited. I would personally prefer to have the scan and sync limits completely separated. At the moment, it seems that the only solution is to completely unlimit maxFolderConcurrency, which I will likely do.
No no, what I meant was that because of low bandwidth, syncing folders with large files takes a very long time. In some cases, they are basically being synced all the time. Due to maxFolderConcurrency, other folders are unable to even start scanning/syncing before those folders are “Up to Date”, which may take days. The only solution seems to be to set maxFolderConcurrency to -1.
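For reference, this is the workaround in config.xml form. A fragment like the following is my understanding of the setting (a sketch, not copied from my actual config; to my knowledge, -1 disables the limit entirely, while the default of 0 limits concurrency to the number of CPU cores):

```xml
<options>
    <!-- -1 = unlimited; 0 = default (number of CPU cores); N > 0 = hard cap -->
    <maxFolderConcurrency>-1</maxFolderConcurrency>
</options>
```

The same value can be set from the GUI under Actions → Advanced → Options.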
That is why I do not want to limit syncing at all, but I would still like to limit just scanning, for the sake of the slow HDDs. This has become impossible right now.
Also, with this change, if a file gets stuck in syncing, e.g. due to being locked by the operating system or similar, then the folder will never reach being “Up to Date”, which may prevent other folders from scanning/syncing at all.
Ah, got it, thanks. That seems like a valid, aka non-niche, scenario: low bandwidth making sync operations not I/O intensive. Maybe we need an additional switch to enable/disable I/O limiting for syncing (@AudriusButkevicius, @calmh).
How would that happen? The folder need not get up to date to release the I/O limiter lock. When a sync happens and fails, it releases the locks and pauses for an (increasing) while before retrying the sync. Maybe if there is a non-time-limited write operation somewhere in syncing and the OS blocks that write without generating an error somehow - however I’d not expect such nastiness, and if it exists, much more than just Syncthing should break down (maybe I am being naive here).
Yes, that would be great… In my case, a lot of syncing happens between machines located in different countries, which is usually very slow and totally not I/O intensive. The only I/O intensive syncing operations here are probably those happening on LAN. Scanning is, of course, very I/O intensive, hence my wish to limit it a little bit, without limiting syncing at the same time.
Just for the record, I did set maxFolderConcurrency to -1 on my desktop machines for now, as I do not want to have the folders stuck on “Waiting to Scan”, which is especially a big deal on a machine with a dual-core CPU.
Hmm, I did have some problems with locked files in Windows before, but I have likely added all of those files to my .stignore. I will try to create a reproducible scenario for this one later. The above was more of a guess based on my experience, so I apologise if that is not the case.
It’s not about no concurrency limiting; it’s about a scenario where the premise that syncing is I/O intensive, and thus should be limited like scans, does not hold (low bandwidth). I.e. you still want I/O concurrency limiting, but you don’t want a few super slow syncs with almost no I/O impact at all (pulling huge files at a few kB/s) blocking everything else.
The syncing bandwidth is tiny: it can never exceed my maximum incoming internet speed of 80 Mbps, and usually only a few folders are syncing at any one time, often at around 15 Mbps, so the data transfer is minimal compared to the scanning throughput.
Due to the slow sync speed, I have to have all the folders scanning in order for the handful of syncing folders to make progress. With concurrency, whilst x folders scan, when they sync they hold up every other folder until they are up to date; but with folders of around 600 GB each and my incoming speed very low, those concurrency-locked folders could take days before the concurrency moves on to the next folder. Hence, since 1.4.x came out, I literally still have not completed a full scan. I fully accept this is partly because the receiving drives are on USB and scanning takes a long time; however, again, under 1.3.4 I could scan all the folders within 2 days.
From my side it’s clear what your situation is and why the new concurrency limit is detrimental. The remaining question is whether it warrants a config switch or whether this is considered “too niche” (you could work around it with two Syncthing instances, one for LAN and one for internet, which would be ugly though). I personally think it is not too niche, and it is also a super small code change, so a config switch to exclude syncing from the limiter seems reasonable.
All my folders are syncing from other offices across the country; nothing syncs locally / over LAN. I have split it up into two Syncthings / PCs, both set to connect via port forwarding. It’s helped inasmuch as some jobs complete sooner.
Not sure if what myself and tomasz describe is unique; since we both appear to have very similar setups, it’s likely to be beneficial for others to have a little extra control.
Your avatars are too similar (T), I was thinking I am speaking to the same person again all the time - apologies.
I am bad enough with names of “analog individuals”, I am most definitely not capable of keeping track of them here…
(sorry for quoting the unique bit out of context, it just fit way too well).
Yes, both. I never intended to imply that the new behaviour of the limiting isn’t the right choice. I am saying that situations like the one @tomasz86 outlines (very slow syncs taking a long time without much disk I/O, preventing anything else from happening) are real, and thus it might make sense to provide a switch to not include syncing in the limiter, as before. That would be a non-standard, advanced-settings-only switch.