File transfer initiated with delay

Hi. I have two Linux servers running ST 1.22, monitoring 4 folders on ext4.

I have a question about a sync delay I’m seeing, which can range from 1 minute up to, on rare occasions, 10 minutes (customer reported).

In some quick tests, the hosts usually aligned quickly, but often only after at least 60 s. One case I monitored was the deletion of a file, which was propagated after one minute.

In the log below, I deleted the file prova.093550 at 09:42:51, but the change was synced to the remotes at 09:43:59.

I still don’t know much about Syncthing’s internals, so this might be considered normal. If so, are there settings that can reduce this gap? And if it is normal, how can I track down what the customer reported, i.e. files that are synced nearly 10 minutes later? Are the model and watchaggregator debug facilities enough to sort it out, or could it be a network issue?

Thanks

nov 09 09:42:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:49.379954 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:42:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:49.380016 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:42:51 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:51.476411 aggregator.go:296: DEBUG: aggregator/dati-cl: Tracking (type remove): prova.093550
nov 09 09:42:53 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:53.184298 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:42:53 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:53.184347 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s
nov 09 09:42:57 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:57.066463 aggregator.go:334: DEBUG: aggregator/dati-ncdata: No old fs events
nov 09 09:42:57 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:57.066557 aggregator.go:315: DEBUG: aggregator/dati-ncdata: Resetting notifyTimer to 10s
nov 09 09:42:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:59.381095 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:42:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:59.381156 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:03 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:03.184705 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:43:03 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:03.184781 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s
nov 09 09:43:07 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:07.066671 aggregator.go:334: DEBUG: aggregator/dati-ncdata: No old fs events
nov 09 09:43:07 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:07.066757 aggregator.go:315: DEBUG: aggregator/dati-ncdata: Resetting notifyTimer to 10s
nov 09 09:43:09 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:09.381496 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:09 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:09.381543 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:13 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:13.185005 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:43:13 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:13.185070 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s
nov 09 09:43:17 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:17.067480 aggregator.go:334: DEBUG: aggregator/dati-ncdata: No old fs events
nov 09 09:43:17 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:17.067528 aggregator.go:315: DEBUG: aggregator/dati-ncdata: Resetting notifyTimer to 10s
nov 09 09:43:19 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:19.381946 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:19 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:19.381993 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:23 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:23.185144 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:43:23 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:23.185205 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s
nov 09 09:43:27 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:27.068112 aggregator.go:334: DEBUG: aggregator/dati-ncdata: No old fs events
nov 09 09:43:27 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:27.068176 aggregator.go:315: DEBUG: aggregator/dati-ncdata: Resetting notifyTimer to 10s
nov 09 09:43:29 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:29.382289 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:29 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:29.382344 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:33 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:33.186288 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:43:33 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:33.186335 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s
nov 09 09:43:37 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:37.069103 aggregator.go:334: DEBUG: aggregator/dati-ncdata: No old fs events
nov 09 09:43:37 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:37.069151 aggregator.go:315: DEBUG: aggregator/dati-ncdata: Resetting notifyTimer to 10s
nov 09 09:43:39 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:39.382474 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:39 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:39.382542 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:43 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:43.186974 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:43:43 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:43.187032 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s
nov 09 09:43:47 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:47.069320 aggregator.go:334: DEBUG: aggregator/dati-ncdata: No old fs events
nov 09 09:43:47 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:47.069384 aggregator.go:315: DEBUG: aggregator/dati-ncdata: Resetting notifyTimer to 10s
nov 09 09:43:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:49.382652 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:49.382715 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:53 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:53.187380 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:43:53 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:53.187466 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s
nov 09 09:43:57 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:57.069916 aggregator.go:334: DEBUG: aggregator/dati-ncdata: No old fs events
nov 09 09:43:57 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:57.069965 aggregator.go:315: DEBUG: aggregator/dati-ncdata: Resetting notifyTimer to 10s
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383710 aggregator.go:347: DEBUG: aggregator/dati-cl: Notifying about 1 fs events
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383761 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383824 folder.go:218: DEBUG: sendreceive/dati-cl@0xc00029e800 Scan due to watcher
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383852 folder.go:432: DEBUG: sendreceive/dati-cl@0xc00029e800 scanning
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.384393 folder.go:783: DEBUG: marking file as deleted File{Name:"prova.093550", Sequence:7609480, Permissions:0666, ModTime:2022-11-09 09:43:59.271652068 +0100 CET, Version:{[{WKR5X3I 1667983439}]}, VersionHash:, Length:0, Deleted:true, Invalid:false, LocalFlags:0x0, NoPermissions:false, BlockSize:131072, Blocks:[], BlocksHash:, Platform:{<nil> <nil> <nil> <nil> <nil> <nil>}}
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.384817 folder.go:496: DEBUG: sendreceive/dati-cl@0xc00029e800 finished scanning, detected 1 changes
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.385051 indexhandler.go:231: DEBUG: indexHandler@0xc000037100 for dati-cl to LXHJPLL at 10.8.0.10:22000-192.168.1.207:22000/tcp-server/TLS1.3-TLS_CHACHA20_POLY1305_SHA256: Sending 1 files (<74 bytes)
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.452168 model.go:1126: DEBUG: Index update (in): LXHJPLL-KCKGP2V-NTFCREJ-ZKEHNGV-6KNFV3F-UTDWCL7-5Z3DSQR-ABCDEFG / "dati-cl": 1 files
nov 09 09:44:01 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:01.058007 progressemitter.go:285: DEBUG: progress emitter: bytes completed for dati-cl: 0
nov 09 09:44:01 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:01.058188 model.go:925: DEBUG: model@0xc0003eb380 Completion(LXHJPLL-KCKGP2V-NTFCREJ-ZKEHNGV-6KNFV3F-UTDWCL7-5Z3DSQR-ABCDEFG, "dati-cl"): map[completion:100 globalBytes:324115070807 globalItems:671876 needBytes:0 needDeletes:0 needItems:0 remoteState:valid sequence:4995014]
nov 09 09:44:03 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:03.188148 aggregator.go:334: DEBUG: aggregator/homes: No old fs events
nov 09 09:44:03 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:03.188204 aggregator.go:315: DEBUG: aggregator/homes: Resetting notifyTimer to 10s

Modifications and deletions are separate operations in Syncthing, and some delay when using file watching is normal. The default is 10s for modifications and 6 times that (i.e. 60s) for deletions. You can tweak https://docs.syncthing.net/users/config.html#config-option-folder.fswatcherdelays if you need even faster syncing. However, the lowest value for deletions is still going to be 6s, and please keep in mind that very low values can have a negative performance impact.
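
For illustration, this is roughly how the setting appears in config.xml, as an attribute on the folder element (the id and path are taken from the log above; the value 5 is just an example, and the other folder attributes are elided):

<folder id="dati-cl" path="/home/dati-cl" fsWatcherEnabled="true" fsWatcherDelayS="5" ...>

With fsWatcherDelayS="5", modifications would be aggregated for 5 seconds and deletions for 6 times that, i.e. 30 seconds. The same value can also be changed in the GUI under the folder’s Advanced settings.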

You shouldn’t experience delays longer than 1m in any case though, and if you do, I’d personally suggest looking at system resource usage, as I’ve seen everything get delayed more when the machine was heavily bogged down (e.g. 100% CPU and disk usage, etc.).
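
For example, on most Linux systems the following would show whether CPU or disk were saturated around the time of a delay (iostat is part of the sysstat package; this is just one way to look, not the only one):

iostat -x 5    # per-device utilization and I/O wait, sampled every 5 seconds
vmstat 5       # CPU, memory and run-queue pressure, sampled every 5 seconds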

My test was on a deletion, but the 7-minute delay the user reported happened on file creation (a PDF). I’ve enabled debug logging and asked the user to report the file name when it happens again, so I can check in detail what went on.

As for the CPU, the graph shows a load average below 0.1 over the last few hours, including the timespan when the user reported the delay.

If there’s anything else I can check in the meantime… Thanks

The snippet from your Syncthing log contains the following line:

nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.385051 indexhandler.go:231: DEBUG: indexHandler@0xc000037100 for dati-cl to LXHJPLL at 10.8.0.10:22000-192.168.1.207:22000/tcp-server/TLS1.3-TLS_CHACHA20_POLY1305_SHA256: Sending 1 files (<74 bytes)

So one of your Linux servers is at IP address 10.8.0.10 while your other Linux server is at 192.168.1.207. Could you provide details as to the network path? (e.g., connection type, speed, etc.)

A few questions…

  • Are all of the files being created and deleted local to each server?
  • Are the users creating and deleting files while logged onto the two Linux servers? (i.e., the Linux servers aren’t providing network shares to user workstations)
  • Could you post images of the Syncthing configuration panels?

One of the servers is at the office, while the other is at a cloud provider. There’s an IPsec connection linking the two nodes, and I’ve configured ST to exchange data only over the tunnel. The office bandwidth isn’t much; I think it’s at most 1 MB/s through the VPN.

The clients connect to the cloud server (10.8.0.10). The software they use generates PDF files that are usually very small; the biggest can be around 20 MB. Those files are then accessed through a share on 192.168.1.207. They use that server for reading files and nothing more; they don’t create data there, so it currently shouldn’t generate sync events.

I have a ping monitor between the two hosts. There are occasional interruptions, but the last time they reported the 7-minute sync delay, the ping average was steady at around 26 ms, with no interruptions.

This is the panel of the cloud host (primary):

This is the panel of the office host (secondary):

Thank you

Overhead from IPsec plus the relatively low bandwidth at the office will impact sync times, but at the moment it doesn’t look like the main cause.

Filtering the snippet of log data you provided earlier from your deletion test of file prova.093550 in Syncthing folder dati-cl:

nov 09 09:42:51 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:51.476411 aggregator.go:296: DEBUG: aggregator/dati-cl: Tracking (type remove): prova.093550
nov 09 09:42:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:59.381095 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:42:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:42:59.381156 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:09 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:09.381496 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:09 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:09.381543 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:19 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:19.381946 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:19 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:19.381993 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:29 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:29.382289 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:29 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:29.382344 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:39 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:39.382474 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:39 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:39.382542 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:49.382652 aggregator.go:334: DEBUG: aggregator/dati-cl: No old fs events
nov 09 09:43:49 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:49.382715 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383710 aggregator.go:347: DEBUG: aggregator/dati-cl: Notifying about 1 fs events
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383761 aggregator.go:315: DEBUG: aggregator/dati-cl: Resetting notifyTimer to 10s
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383824 folder.go:218: DEBUG: sendreceive/dati-cl@0xc00029e800 Scan due to watcher
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.383852 folder.go:432: DEBUG: sendreceive/dati-cl@0xc00029e800 scanning
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.384817 folder.go:496: DEBUG: sendreceive/dati-cl@0xc00029e800 finished scanning, detected 1 changes
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.385051 indexhandler.go:231: DEBUG: indexHandler@0xc000037100 for dati-cl to LXHJPLL at 10.8.0.10:22000-192.168.1.207:22000/tcp-server/TLS1.3-TLS_CHACHA20_POLY1305_SHA256: Sending 1 files (<74 bytes)
nov 09 09:43:59 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:43:59.452168 model.go:1126: DEBUG: Index update (in): LXHJPLL-KCKGP2V-NTFCREJ-ZKEHNGV-6KNFV3F-UTDWCL7-5Z3DSQR-ABCDEFG / "dati-cl": 1 files
nov 09 09:44:01 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:01.058007 progressemitter.go:285: DEBUG: progress emitter: bytes completed for dati-cl: 0
nov 09 09:44:01 localhost syncthing[1015]: [WKR5X] 2022/11/09 09:44:01.058188 model.go:925: DEBUG: model@0xc0003eb380 Completion(LXHJPLL-KCKGP2V-NTFCREJ-ZKEHNGV-6KNFV3F-UTDWCL7-5Z3DSQR-ABCDEFG, "dati-cl"): map[completion:100 globalBytes:324115070807 globalItems:671876 needBytes:0 needDeletes:0 needItems:0 remoteState:valid sequence:4995014]

Note how Syncthing’s notifyTimer is reset 6 times, each cycle adding another 10 seconds (60 seconds total) before Syncthing’s scanner is finally triggered at timestamp 09:43:59.383710.

I haven’t done a deep dive into Syncthing’s source code, but if I’m interpreting the log entries correctly, 6 file system events were detected between when your test file prova.093550 was deleted and when the file deletion event was conveyed by the cloud server (WKR5X) to the office server (LXHJPLL).

If I were dealing with the same sync delay problem, my next step would be to temporarily shut down Syncthing on the cloud server and use a monitoring tool to see what the extra file system events are.

There are several tools available for Linux, but one that’s widely available – and also cross-platform – is fswatch. To watch the directory /home/dati-cl and all of its subdirectories:

fswatch --timestamp --recursive /home/dati-cl

If /home/dati-cl is idle, the output would just show fswatch’s initial scan followed by silence.
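
To make the output easier to interpret, fswatch can also print the event type alongside each path (assuming a reasonably recent fswatch build that supports the flag):

fswatch --timestamp --recursive --event-flags /home/dati-cl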

How does the office server provide the network share to the client machines (e.g. Samba)?

What’s on the client machines (e.g. Windows, Linux, macOS)?

You should also check for packet loss in addition to the RTT. While 26 ms is pretty good for an internet connection, latency can easily add up, even for small files.

Since the typical MTU is 1500 bytes, a 20 MB file would need to be diced up into over 13,333 packets. If those packets were sent strictly one at a time (no pipelining or parallelism), at 13 ms (half of 26 ms) of network travel time per packet that adds up to roughly 2 minutes 53 seconds in the worst case, assuming no packet loss.
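
Spelled out (a rough back-of-the-envelope calculation that ignores TCP windowing and header overhead):

20 MB ÷ 1500 bytes/packet ≈ 13,334 packets
13,334 packets × 13 ms/packet ≈ 173 s ≈ 2 min 53 s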


Hi maxxer,

After thinking about the problem you’re having, I need to make a correction to my earlier post regarding the 60-second delay between the deletion of the file prova.093550 on your cloud server and the change being propagated to your office server.

I forgot that it’s normal behavior: Syncthing has a (default) 60-second delay for deletions. So, at least with regard to your Syncthing log sample above, it all looks normal.

Without a Syncthing log sample that shows a 7-minute delay, I think the problem is likely due to the network connection and/or the network file share (if you can let me know the answers to the questions regarding the office server and client hosts, it’ll help a lot).

Yes, we have a Samba fileserver.

All Windows 10.

I understand. I’ve asked them to report as soon as it happens again, with a specific file name. I have debug enabled for model and watchaggregator; is this enough to collect the needed information?

Thanks again

A few things to check in your Samba configuration:

  • server max protocol = SMB3
  • change notify = yes
  • kernel change notify = yes

If the three parameters above are set to something different – especially the notify parameters – client machines will not be notified when there are file and/or directory changes. Users will only see changes when they click a folder or perform some other action that causes Windows Explorer to refresh the display. The server protocol has to be a minimum of SMB2 in order to support client requests for file system events.
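
For reference, a minimal sketch of the corresponding smb.conf lines (they can go in [global] or per share; run testparm afterwards to confirm the effective values):

[global]
    # Allow clients to negotiate up to SMB3 (SMB2 is the minimum for change notifications)
    server max protocol = SMB3
    # Notify clients of file and directory changes
    change notify = yes
    # Use kernel notifications to catch changes made outside Samba (e.g. by Syncthing)
    kernel change notify = yes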

Yes, it should be enough.
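
In case it helps, the debug facilities can also be toggled at runtime without a restart, e.g. via the REST API (replace <apikey> with your actual API key; 8384 is the default GUI port):

curl -X POST -H "X-API-Key: <apikey>" "http://localhost:8384/rest/system/debug?enable=model,watchaggregator"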

In the meantime, I recommend running some network tests using iperf to root out any potential network issues such as packet loss, latency that occurs with certain packet sizes, etc. Test both the local network and the VPN connection.
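
For example, a basic round of tests over the tunnel might look like this (iperf3 assumed installed on both ends; 10.8.0.10 is the cloud server from your log):

iperf3 -s                             # on the cloud server (listener)
iperf3 -c 10.8.0.10 -t 30             # on the office server: 30s TCP throughput test
iperf3 -c 10.8.0.10 -u -b 10M -t 30   # UDP test to surface packet loss and jitter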


Hi. I haven’t reported back because I haven’t heard from the customer about new events.

In the meantime, I added the suggested settings to smb.conf, although on our current version they should be enabled by default.

Thanks again
