Sync over LAN is very slow

I have moved ~/.config/syncthing (which also holds the index database) outside of /home, hoping to take some load off sda. Let’s see how that goes…
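
In case anyone wants to do the same: Syncthing can be started against a non-default configuration directory with the -home flag, something like

syncthing -home=/mnt/otherdisk/syncthing

(the path here is just an example); a symlink from ~/.config/syncthing to the new location also works.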

Now it’s syncing at around 9 MiB/s. Interestingly, it seems sda is now the one getting most of the busy time (see the %util column in the stats below).

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            136.00    2.00    932.00     28.00     0.00     5.00   0.00  71.43   17.44   30.00   2.34     6.85    14.00   7.25 100.00
sdb              0.00    1.00      0.00      4.00     0.00     0.00   0.00   0.00    0.00   24.00   0.02     0.00     4.00  24.00   2.40
sdc              4.00    1.00    512.00      0.00     0.00     0.00   0.00   0.00    2.00  112.00   0.04   128.00     0.00   7.20   3.60
dm-0             4.00    0.00    512.00      0.00     0.00     0.00   0.00   0.00    2.00    0.00   0.06   128.00     0.00  15.00   6.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.02   60.71   14.80   22.96    0.00    0.51

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            154.00    1.00   1056.00      0.00     0.00     0.00   0.00   0.00   17.79   20.00   2.78     6.86     0.00   6.45 100.00
sdb              0.00    2.00      0.00     56.00     0.00    13.00   0.00  86.67    0.00    2.00   0.00     0.00    28.00   2.00   0.40
sdc              3.00   20.00    384.00  11336.00     0.00     0.00   0.00   0.00   38.67   74.00   2.04   128.00   566.80  21.04  48.40
dm-0             4.00   21.00    512.00  11396.00     0.00     0.00   0.00   0.00   29.00   25.71   1.10   128.00   542.67  19.36  48.40

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.98   63.37   16.83   16.83    0.00    0.99

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            108.91    0.99    708.91      0.00     0.00     0.00   0.00   0.00   13.67    0.00   1.46     6.51     0.00   6.81  74.85
sdb              0.00    1.98      0.00      3.96     0.00     0.00   0.00   0.00    0.00    4.00   0.01     0.00     2.00   4.00   0.79
sdc              1.98    1.98    380.20      0.00     0.99     0.00  33.33   0.00  328.00  416.00   1.03   192.00     0.00 130.00  51.49
dm-0             1.98    0.99    253.47      0.00     0.00     0.00   0.00   0.00  546.00  844.00   1.47   128.00     0.00 177.33  52.67

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.03   53.33   15.90   26.67    0.00    3.08

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            113.00    1.00    720.00      0.00     0.00     0.00   0.00   0.00   16.42   24.00   1.90     6.37     0.00   8.00  91.20
sdb              0.00    2.00      0.00     52.00     0.00    12.00   0.00  85.71    0.00    8.00   0.02     0.00    26.00   8.00   1.60
sdc              2.00   22.00    256.00  11040.00     1.00     0.00  33.33   0.00   12.00   68.91   2.61   128.00   501.82  34.67  83.20
dm-0             4.00   20.00    512.00  11096.00     0.00     0.00   0.00   0.00    6.00   30.40   1.88   128.00   554.80  34.83  83.60

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.31   33.15   17.68   30.39    0.00   15.47

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda             32.00    3.00    184.00      0.00     0.00     0.00   0.00   0.00    8.62    0.00   0.26     5.75     0.00   6.17  21.60
sdb              0.00   10.00      0.00     84.00     0.00    14.00   0.00  58.33    0.00    6.00   0.06     0.00     8.40   6.00   6.00
sdc              2.00   48.00    384.00  23404.00     0.00     0.00   0.00   0.00  492.00   63.08   2.94   192.00   487.58  18.88  94.40
dm-0             1.00   32.00    128.00  23484.00     0.00     0.00   0.00   0.00 1700.00   61.12   2.41   128.00   733.88  29.94  98.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.00   59.20   12.44   26.87    0.00    0.50

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            139.00    3.00    976.00      0.00     1.00     0.00   0.71   0.00   16.83   17.33   2.60     7.02     0.00   6.99  99.20
sdb              0.00    7.00      0.00    100.00     0.00    21.00   0.00  75.00    0.00   10.29   0.07     0.00    14.29  10.29   7.20
sdc              4.00   40.00    512.00  18224.00     0.00     0.00   0.00   0.00   16.00   26.90   1.13   128.00   455.60   6.55  28.80
dm-0             4.00   36.00    512.00  18324.00     0.00     0.00   0.00   0.00   16.00   18.67   0.74   128.00   509.00   8.70  34.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.53   73.98    9.69   14.80    0.00    0.00

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            129.00    2.00    876.00     16.00     0.00     2.00   0.00  50.00   18.57   24.00   2.24     6.79     8.00   7.33  96.00
sdb              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
sdc              4.00    0.00    512.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00   128.00     0.00   0.00   0.00
dm-0             4.00    0.00    512.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00   128.00     0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.52   67.68    9.60   21.21    0.00    0.00

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            138.00    1.00    920.00      0.00     0.00     0.00   0.00   0.00   16.72   12.00   2.34     6.67     0.00   7.05  98.00
sdb              0.00    2.00      0.00     36.00     0.00     8.00   0.00  80.00    0.00    6.00   0.01     0.00    18.00   6.00   1.20
sdc              3.00    0.00    384.00      0.00     0.00     0.00   0.00   0.00    4.00    0.00   0.67   128.00     0.00 144.00  43.20
dm-0             4.00   10.00    512.00     40.00     0.00     0.00   0.00   0.00    3.00    0.00   0.67   128.00     4.00  30.86  43.20

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.08   45.31   19.79   32.81    0.00    0.00

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            137.62    1.98    843.56      3.96     0.00     0.00   0.00   0.00   16.09  202.00   2.60     6.13     2.00   7.06  98.61
sdb              0.00    4.95      0.00     51.49     0.00     8.91   0.00  64.29    0.00   11.20   0.06     0.00    10.40  11.20   5.54
sdc              5.94   29.70    447.52  11263.37     0.99     8.91  14.29  23.08  234.00  242.93   7.96    75.33   379.20  21.44  76.44
dm-0             5.94   33.66    320.79  11314.85     0.00     0.00   0.00   0.00  334.67  296.47  11.32    54.00   336.12  19.50  77.23

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.02   63.13   14.14   20.71    0.00    0.00

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            132.00    3.00    844.00      0.00     0.00     0.00   0.00   0.00   17.21   12.00   2.35     6.39     0.00   7.11  96.00
sdb              0.00    9.00      0.00     96.00     0.00    18.00   0.00  66.67    0.00   12.44   0.12     0.00    10.67  12.89  11.60
sdc              4.00   46.00    512.00  21768.00     0.00     0.00   0.00   0.00   20.00   37.22   1.79   128.00   473.22   6.80  34.00
dm-0             4.00   36.00    512.00  21864.00     0.00     0.00   0.00   0.00   20.00   27.22   1.06   128.00   607.33  10.70  42.80

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.50   58.00   16.00   24.00    0.00    0.50

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda            149.00    0.00   1132.00      0.00     0.00     0.00   0.00   0.00   17.99    0.00   2.66     7.60     0.00   6.71 100.00
sdb              0.00    0.00      0.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     0.00     0.00   0.00   0.00
sdc              4.00    0.00    512.00      0.00     0.00     0.00   0.00   0.00    6.00    0.00   0.02   128.00     0.00   6.00   2.40
dm-0             4.00    0.00    512.00      0.00     0.00     0.00   0.00   0.00    6.00    0.00   0.02   128.00     0.00   6.00   2.40

If you are able to compile Syncthing yourself (the docs explain how), you could comment out the block shuffling and see if that helps. You could also try commenting out the fsync.
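
For reference, building from source is roughly this (the docs have the details):

git clone https://github.com/syncthing/syncthing.git
cd syncthing
go run build.go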

How much code is this block shuffling? Does Syncthing know that the other device is on the LAN? Maybe it could disable this part automatically in that case. Just an idea, since a lot of people have regular spinning disks for big data storage at home, and it may be a while until TB-sized SSDs are affordable for the average Joe.

It’s 3 lines, literally shuffling the blocks.

This happens before we know anything about which devices might have the blocks.
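
For reference, the loop in question in lib/model/folder_sendrecv.go is just:

        // Shuffle the blocks
        for i := range blocks {
                j := rand.Intn(i + 1)
                blocks[i], blocks[j] = blocks[j], blocks[i]
        }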

I have removed the shuffle block. Unfortunately, now that the scanning has completed, there are no big files (>256 MiB) left for me to test on, but so far I haven’t seen much of a difference. :sweat_smile:

Just create one with dd, as I’m now curious whether it will help :smiley: If it doesn’t, try removing fsync.

Great idea! But unfortunately I’ll be going away for a few weeks. Maybe I will try that out when I come back. All files are synced for now. :stuck_out_tongue:

Okay, I’m back. One question: should I remove the shuffling on the receiver or on both sides?

Receiving side.

I made the following changes.

diff --git a/lib/model/folder_sendrecv.go b/lib/model/folder_sendrecv.go
index edb0ea91..f56d4335 100644
--- a/lib/model/folder_sendrecv.go
+++ b/lib/model/folder_sendrecv.go
@@ -10,7 +10,6 @@ import (
        "bytes"
        "errors"
        "fmt"
-       "math/rand"
        "path/filepath"
        "runtime"
        "sort"
@@ -1012,12 +1011,6 @@ func (f *sendReceiveFolder) handleFile(file protocol.FileInfo, copyChan chan<- c
                }
        }

-       // Shuffle the blocks
-       for i := range blocks {
-               j := rand.Intn(i + 1)
-               blocks[i], blocks[j] = blocks[j], blocks[i]
-       }
-
        events.Default.Log(events.ItemStarted, map[string]string{
                "folder": f.folderID,
                "item":   file.Name,
@@ -1596,20 +1589,6 @@ func (f *sendReceiveFolder) dbUpdaterRoutine(dbUpdateChan <-chan dbUpdateJob) {
                        lastFile = job.file
                }

-               // sync directories
-               for dir := range changedDirs {
-                       delete(changedDirs, dir)
-                       fd, err := f.fs.Open(dir)
-                       if err != nil {
-                               l.Debugf("fsync %q failed: %v", dir, err)
-                               continue
-                       }
-                       if err := fd.Sync(); err != nil {
-                               l.Debugf("fsync %q failed: %v", dir, err)
-                       }
-                       fd.Close()
-               }
-
                // All updates to file/folder objects that originated remotely
                // (across the network) use this call to updateLocals
                f.model.updateLocalsFromPulling(f.folderID, files)

Now it doesn’t seem to be syncing anything any more. The D/U rate is 0 (and stays that way), and 78,189 items, ~13.4 GiB, are out of sync. No errors in the log.

Can you post UI screenshots? Also, check whether anything looks fishy when running with model debugging enabled.

Also make sure the folder is shared on both sides.

Okay I did a reboot and everything is synced now. So I tried creating a 4GB file via FSUtil File CreateNew temp 0x100000000 and it just synced instantly. I guess I’ll try harder to outsmart syncthing. :sweat_smile:

UPDATE: dd if=/dev/zero didn’t work either. Wow. (I guess that makes sense: the sync is hash-based, and all-zero blocks hash identically, so the receiver can reuse blocks it already has locally instead of pulling them over the network.)

Use /dev/random or /dev/urandom
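
Something like this should do (path and size are just placeholders):

dd if=/dev/urandom of=/path/inside/the/shared/folder/random.bin bs=1M count=2048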

Yes. Now the not-syncing problem is back (more than 5 minutes with no progress at all). The log just keeps repeating these two lines every 5 seconds:

...:04:54.621315 progressemitter.go:62: DEBUG: progress emitter: timer - looking after 1
...:04:54.621364 progressemitter.go:81: DEBUG: progress emitter: nothing new

EDIT: This is on the receiver side.

EDIT: After a long wait, it’s now syncing at 150 Mbps, which is definitely an improvement. :smiley:

The long wait was probably the time it took to hash the file.

That’s way too long for a 2GB file IMO.

Also, I just tried adding fsync back, and I seem to be getting similar performance.

Okay, here are the results for syncing an already-hashed 5 GB random file…

Running ada5ab74d26b7c0b1c2e564cf64d4713cf9735a7: ~175 s. With the shuffle removed: ~190 s. With fsync removed: ~215 s.
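
(Back-of-the-envelope: ~5 GB in ~175 s is roughly 30 MB/s, i.e. somewhere around 230-245 Mbit/s.)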

Anyway, I guess those don’t make a difference. EDIT: But it is significantly faster than before, maybe due to some changes in Syncthing during the past two weeks?

So no > 5min wait times?

And off the top of my head, I can’t think of any changes from the past two weeks that would influence performance.