How to sync files in parallel

Hello everyone. I'm running Syncthing on two Linux servers to sync deep learning datasets. There are a large number of small files to process, about 2 million. But I found that only two files are ever transferred at the same time. Too slow.
Is there any configuration option to transfer more files at the same time? Thanks.

There’s a page in the docs about configuration tuning: Configuration Tuning — Syncthing documentation.

See, among others, copiers and hashers. The docs specifically mention a potential performance increase from raising them when there are a lot of files to sync.

But do read through it and try some of them out, that is probably the best starting point here.
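As a rough sketch, those options live per folder in config.xml (the folder id and path below are made up, and the exact element names should be double-checked against the tuning page; 0 means "choose automatically"):

```xml
<!-- Hypothetical excerpt of a folder entry in Syncthing's config.xml -->
<folder id="datasets" path="/data/dl">
    <copiers>8</copiers>
    <hashers>8</hashers>
</folder>
```

The same settings can also be changed from the GUI under the folder's advanced options, which avoids hand-editing the XML.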

It will still be slow, however, as we fsync after every file.

I kind of wonder if there’s a way to improve this via batching. I know it’s about keeping the database and file system in sync, but I wonder if you could queue the fsync and database writes when many small files are being transferred and still maintain the integrity of both, which of course is of critical importance.

I’m too ignorant of the details to know if this is possible.

Thanks for your suggestions. I have adjusted the configuration file according to the Configuration Tuning page in the docs. The transfer speed is a little faster than before, but still slow.

I kind of know what you mean. Syncthing syncs files one by one rather than packing multiple small files into one big file for transfer.

I didn’t mean that specifically. I meant that if fsync was recently called and more files are transferring, or more files are near completion, we could delay the fsync and the database updates. Then after 2-4 seconds, call fsync once and do all the database updates. Fewer fsync calls, faster transfers.

Anyway, an uninformed comment because I don’t know enough about how it works, but that was my thought.

IIRC, fsyncing 100 files takes about the same time whether we do it “along the way” or in a batch, but it’s something that could be tried. We fsync parent directories in the db updater routine (which has the above-desired behavior: it processes things in batches), and we could in principle fsync the files there as well.
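The batching idea could be sketched like this. This is a hypothetical standalone example, not Syncthing's actual code: write all the files first while keeping the handles open, then do one fsync pass over them, followed by a single fsync of the parent directory to persist the new directory entries.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeBatch writes several small files, deferring fsync until the end,
// then syncs the parent directory once. Illustrative only; names and
// structure are invented for this sketch.
func writeBatch(dir string, names []string, data []byte) error {
	files := make([]*os.File, 0, len(names))
	for _, name := range names {
		f, err := os.Create(filepath.Join(dir, name))
		if err != nil {
			return err
		}
		if _, err := f.Write(data); err != nil {
			f.Close()
			return err
		}
		files = append(files, f) // keep handle open; fsync later
	}
	// Batched fsync pass: the Sync calls are grouped together instead
	// of interleaved with the writes.
	for _, f := range files {
		if err := f.Sync(); err != nil {
			f.Close()
			return err
		}
		f.Close()
	}
	// One fsync of the parent directory covers all new entries.
	d, err := os.Open(dir)
	if err != nil {
		return err
	}
	defer d.Close()
	return d.Sync()
}

func main() {
	dir, err := os.MkdirTemp("", "batch")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	if err := writeBatch(dir, []string{"a.txt", "b.txt", "c.txt"}, []byte("hello")); err != nil {
		panic(err)
	}
	fmt.Println("synced 3 files")
}
```

Whether this actually wins anything depends on the file system; as noted above, some file systems take about the same time either way.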

You can always disable fsync as well, if nothing else then just to confirm that it’s causing the bottleneck.
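If I remember the option name correctly, that's the advanced per-folder option disableFsync, e.g. in config.xml (folder id and path are placeholders, and note this trades away crash-safety, so it's best used only for benchmarking):

```xml
<!-- Hypothetical folder entry; disableFsync skips the per-file fsync -->
<folder id="datasets" path="/data/dl">
    <disableFsync>true</disableFsync>
</folder>
```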

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.