File transfer initiated with delay

Overhead from IPsec plus the relatively low bandwidth at the office will impact sync times, but at the moment they don’t look like the main cause.

Here’s a filtered view of the log snippet you provided earlier, from your deletion test of the file prova.093550 in the Syncthing folder dati-cl:

nov 09 09:42:51 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:51.476411 aggregator.go:296: DEBUG: aggregator/dati-cl: Tracking (type remove): prova.093550
nov 09 09:42:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:59.381095 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:42:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:59.381156 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:09 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:09.381496 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:09 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:09.381543 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:19 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:19.381946 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:19 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:19.381993 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:29 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:29.382289 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:29 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:29.382344 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:39 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:39.382474 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:39 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:39.382542 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:49.382652 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:49.382715 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383710 aggregator.go:347: DEBUG: aggregator/dati-cl: Notifying about 1 fs events
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383761 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383824 folder.go:218: DEBUG: sendreceive/dati-cl@0xc00029e800 Scan due to watcher
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383852 folder.go:432: DEBUG: sendreceive/dati-cl@0xc00029e800 scanning
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.384817 folder.go:496: DEBUG: sendreceive/dati-cl@0xc00029e800 finished scanning, detected 1 changes
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.385051 indexhandler.go:231: DEBUG: indexHandler@0xc000037100 for dati-cl to LXHJPLL at 10.8.0.10:22000-192.168.1.207:22000/tcp-server/TLS1.3-TLS_CHACHA20_POLY1305_SHA256: Sending 1 files (<74 bytes)
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.452168 model.go:1126: DEBUG: Index update (in): LXHJPLL-KCKGP2V-NTFCREJ-ZKEHNGV-6KNFV3F-UTDWCL7-5Z3DSQR-ABCDEFG / "dati-cl": 1 files
nov 09 09:44:01 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:01.058007 progressemitter.go:285: DEBUG: progress emitter: bytes completed for dati-cl: 0
nov 09 09:44:01 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:01.058188 model.go:925: DEBUG: model@0xc0003eb380 Completion(LXHJPLL-KCKGP2V-NTFCREJ-ZKEHNGV-6KNFV3F-UTDWCL7-5Z3DSQR-ABCDEFG, "dati-cl"): map[completion:100 globalBytes:324115070807 globalItems:671876 needBytes:0 needDeletes:0 needItems:0 remoteState:valid sequence:4995014]

Note how Syncthing’s notifyTimer is reset 6 times, each cycle adding another 10 seconds (60 seconds total) before Syncthing’s scanner is finally triggered at timestamp 09:43:59.383710.

I haven’t done a deep dive into Syncthing’s source code, but if I’m interpreting the log entries correctly, additional file system events kept arriving between when your test file prova.093550 was deleted and when the deletion was finally conveyed by the cloud server (WKR5X) to the office server (LXHJPLL), causing the aggregator to keep postponing the scan.

If I were dealing with the same sync delay problem, my next step would be to temporarily shut down Syncthing on the cloud server and use a monitoring tool to see what the extra file system events are.

There are several such tools for Linux, but one that’s widely packaged, and also cross-platform, is fswatch. To watch the directory /home/dati-cl and all of its subdirectories:

fswatch --timestamp --recursive /home/dati-cl

If /home/dati-cl is idle, fswatch should settle into silence shortly after its initial startup scan; a steady stream of events would point to the source of the extra activity.
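If events do show up, it helps to know what kind they are. Assuming a reasonably recent fswatch build, the --event-flags (-x) option prints the type of each event next to the path:

```shell
# Same watch as above, but --event-flags (-x) adds the event type
# (Created, Updated, Removed, ...) after each reported path
fswatch --timestamp --recursive --event-flags /home/dati-cl
```

That should make it obvious whether something (a backup job, an indexer, etc.) is touching files in the folder.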

How does the office server provide the network share to the client machines (e.g. Samba)?

What’s on the client machines (e.g. Windows, Linux, macOS)?

You should also check for packet loss in addition to the RTT. While 26 ms is pretty good for an internet connection, latency can add up quickly, even for small files.
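A plain ping run from the cloud server measures both at once; note the address below is just a placeholder to replace with the office server’s actual VPN address:

```shell
# 100 probes; the summary line reports "% packet loss"
# plus min/avg/max RTT.
# OFFICE_IP is a placeholder -- substitute your office server's address.
OFFICE_IP=203.0.113.10
ping -c 100 "$OFFICE_IP"
```

Anything above 0% loss on an idle link is worth chasing down, since retransmissions multiply the per-packet latency cost.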

Since the typical MTU is 1500 bytes, a 20 MB file would need to be diced up into over 13,333 packets. If those packets were sent strictly one after another (no pipelining or parallelism), at 13 ms (half of the 26 ms RTT) of network travel time per packet, that adds up to roughly 2 minutes 53 seconds in the worst case, assuming no packet loss.
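The back-of-envelope arithmetic above can be checked in the shell; the file size and per-packet payload are round-number assumptions that ignore IP/TCP header overhead:

```shell
FILE_BYTES=20000000   # 20 MB file (assumed round number)
MTU_PAYLOAD=1500      # typical MTU, ignoring header overhead
RTT_MS=26

# Packets needed (rounded up), one-way delay, and serialized total
packets=$(( (FILE_BYTES + MTU_PAYLOAD - 1) / MTU_PAYLOAD ))
oneway_ms=$(( RTT_MS / 2 ))
total_s=$(( packets * oneway_ms / 1000 ))
echo "packets=$packets one_way_ms=$oneway_ms total_seconds=$total_s"
# total_seconds=173, i.e. about 2 min 53 s
```

In practice TCP keeps many packets in flight at once, so the real transfer is far faster; the point is only that a naive one-packet-at-a-time model makes latency dominate.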
