Impact of (f)sync on performance

(Simon) #1

Prompted by @AudriusButkevicius:

If you comment out the fsync, what happens then?
https://github.com/syncthing/syncthing/pull/5680#issuecomment-491198495

I did that and ran TestBenchmarkTransferManyFiles with 10000 instead of 50000 files (otherwise it exceeds the 10 min timeout), plus a small patch that changed the reported times from total to total / MiB.

The results on my laptop (spinning disk, system in use but not under heavy load) were astounding: about 70 times faster wall time / transfer rate, and roughly 5 times lower utime and stime for both receiver and sender. Being a bit suspicious of the magnitude, I retried on my home server on an SSD and the speedups were even bigger.

Then I moved the .Sync calls to the dbUpdaterRoutine to batch them up (even though they are still individual calls) - that showed an improvement of ~15% in wall time and factors of 2-3 in utime/stime. Then I realized that we previously synced the temp file, while now I sync the "real" file. However, shouldn't that be equivalent, since the rename only changes FS metadata while the data itself stays the same? If that's correct, I propose we make this change, as it adds no complexity and has quite a sizeable performance impact.

(Audrius Butkevicius) #2

There is an os.Sync() with no arguments (or perhaps that's a syscall) or something which flushes all caches; essentially we could skip fsyncing individual files and instead fsync the whole OS on batch update, which would help a lot too.

(Simon) #3

There is a sync syscall on unix (didn’t look at other systems): https://linux.die.net/man/2/sync. However that doesn’t seem optimal, as it affects all, not only “our”, outstanding data to disk and it schedules a write and returns immediately, which defeats the purpose of not committing to database before we actually committed to disk. So that isn’t suitable.