File updates are incredibly slow

Let’s say I replace a few files, several GB in size (or even TB). On the target machine Syncthing starts reading them, very slowly, at about 15 MB/s. It takes forever. It is 10x faster if I delete the changed files and restart Syncthing, so it has nothing to read before overwriting them. I have two targets, and both do this: Windows machines with an i3-10105 and a J4105. The reported hashing speed is several hundred MB/s. There is no other activity, and the spinning disk is capable of doing 200 MB/s.

Maybe not while copying, though, which is what happens in this step. Syncthing is essentially reading a block, hashing it for verification, writing it, and then moving on to the next block.
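
To make that concrete, here is a minimal Go sketch of the per-block loop, assuming a fixed 128 KiB block size and a precomputed list of expected SHA-256 hashes. It is not Syncthing’s actual code, just the shape of the work done for each block:

```go
package blockcopy

import (
	"bytes"
	"crypto/sha256"
	"io"
	"os"
)

const blockSize = 128 << 10 // hypothetical fixed block size

// copyVerified reads the old file block by block, hashes each block, and
// writes the blocks whose hashes still match into the temp file. It returns
// the indexes of blocks that no longer match and must come from elsewhere.
func copyVerified(old, tmp *os.File, expected [][]byte) ([]int, error) {
	var missing []int
	buf := make([]byte, blockSize)
	for i, want := range expected {
		off := int64(i) * blockSize
		n, err := old.ReadAt(buf, off) // read the block
		if err != nil && err != io.EOF {
			return nil, err
		}
		sum := sha256.Sum256(buf[:n]) // hash it for verification
		if !bytes.Equal(sum[:], want) {
			missing = append(missing, i) // changed block, fetch it instead
			continue
		}
		if _, err := tmp.WriteAt(buf[:n], off); err != nil { // write it
			return nil, err
		}
	}
	return missing, nil
}
```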

Can you explain the process more? This is what I see.

  1. Replace the source file
  2. Syncthing creates an empty ~syncthing~…tmp file, no activity to this file
  3. Syncthing starts reading the target file (this is the painfully slow part)
  4. Writes to the tmp file from the network (very fast)
  5. Replaces the original with the tmp file

These steps do not overlap in time. I believe steps 2–4 could run concurrently, which would make the whole process take only as long as the slowest step, step 3.

Question: is there a setting to make Syncthing automatically overwrite any changed file?

Roughly this (see the Go sketch after the list):

  1. Create the temp file
  2. Compute two lists of blocks: those that are the same as in the previous version on disk, and those that have changed
  3. Concurrently:
    • Copy unchanged blocks from old file, one by one
    • Look in the database for changed blocks – do we have them in some other file locally?
      • If so, copy the block from there
      • If not, request it from the network and write it when it arrives (multiple blocks are requested concurrently)
  4. Once all blocks are in place, close the file and rename it.
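
To make the concurrency in step 3 concrete, here is a rough Go sketch. The blockInfo classification, the requestBlock callback, and the limit of four requests in flight are all hypothetical stand-ins; the real code additionally consults the block database for matching blocks in other local files:

```go
package puller

import (
	"io"
	"os"
	"sync"
)

const blockSize = 128 << 10 // hypothetical fixed block size

type blockInfo struct {
	index     int
	unchanged bool // identical to the old version on disk
}

// assemble fills the temp file: unchanged blocks are copied from the old
// file while changed blocks are requested over the network, several at a
// time, each written at its offset as it arrives.
func assemble(old, tmp *os.File, blocks []blockInfo,
	requestBlock func(index int) ([]byte, error)) error {

	var wg sync.WaitGroup
	errs := make(chan error, len(blocks))
	sem := make(chan struct{}, 4) // limit concurrent network requests
	buf := make([]byte, blockSize)

	for _, b := range blocks {
		off := int64(b.index) * blockSize
		if b.unchanged {
			// Copy from the old file, one block at a time.
			n, err := old.ReadAt(buf, off)
			if err != nil && err != io.EOF {
				return err
			}
			if _, err := tmp.WriteAt(buf[:n], off); err != nil {
				return err
			}
			continue
		}
		wg.Add(1)
		go func(index int, off int64) {
			defer wg.Done()
			sem <- struct{}{} // acquire a request slot
			defer func() { <-sem }()
			data, err := requestBlock(index)
			if err == nil {
				_, err = tmp.WriteAt(data, off)
			}
			errs <- err
		}(b.index, off)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}
```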

I wrote up some more details once upon a time.

There is no option to overwrite in place.

Now that we have a temporary file, we copy all the unchanged blocks from the existing, old version of the file. After reading each block we calculate the hash and make sure it is what we expect.

I’m 100% sure that none of the blocks match in my case, since the file was recompressed, and the block database should tell Syncthing this as well. It still tries to hash/merge the old file: there are no writes to the .tmp file while the old file is being read, so presumably all the blocks really are different.

If the blocks are changed, they are of course not copied. In this case there would also be no point to an in-place overwrite, since every block in the file has changed.

However, what you might be running into is that Syncthing is trying to find shifted blocks using a rolling weak hash. This kicks in when a significant fraction of the file has changed. Set the weak hash threshold to 101% for the folder in question and see if that helps.
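
For context, “finding shifted blocks” means scanning the old file with a rolling weak hash at every byte offset, which is far more work than checking blocks in place. A toy Go version of the rsync-style rolling checksum (an illustration of the idea, not Syncthing’s actual weakhash package) looks roughly like this:

```go
package weakhash

const mod = 1 << 16 // classic rsync-style 16-bit components

// find returns every byte offset in data where the n-byte window has the
// given weak hash. The hash rolls: sliding the window one byte is O(1),
// which is what makes scanning every offset feasible at all. Candidates
// would still be confirmed with a strong hash, since weak hashes collide.
func find(data []byte, n int, target uint32) []int {
	if n <= 0 || len(data) < n {
		return nil
	}
	// Hash the first window: a is the byte sum, b the position-weighted sum.
	var a, b uint32
	for i := 0; i < n; i++ {
		a = (a + uint32(data[i])) % mod
		b = (b + uint32(n-i)*uint32(data[i])) % mod
	}
	var hits []int
	for k := 0; ; k++ {
		if b<<16|a == target {
			hits = append(hits, k)
		}
		if k+n >= len(data) {
			break
		}
		// Slide the window: drop data[k], add data[k+n].
		a = (a + mod - uint32(data[k]) + uint32(data[k+n])) % mod
		b = (b + mod - uint32(n)*uint32(data[k])%mod + a) % mod
	}
	return hits
}
```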

I know nothing about Go, but I tried commenting out this line in folder_sendrecv.go:

blocks, reused = f.reuseBlocks(blocks, reused, file, tempName)

The slow disk read disappeared, but there was still a long delay with high single-threaded CPU usage before the tmp file was written.

Then I tried your suggestion, setting that threshold to 101%, and even with the unmodified code it started copying immediately. That’s exactly what I wanted!
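
For anyone finding this later: the setting is the folder’s weakHashThresholdPct, reachable under the folder’s Advanced settings in the GUI or directly in config.xml. Assuming I have the element placement right, it looks something like this (the folder id, label, and path here are placeholders; values above 100 effectively disable the shifted-block scan):

```xml
<folder id="default" label="Default Folder" path="/data/default">
    <weakHashThresholdPct>101</weakHashThresholdPct>
</folder>
```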
