Windows 10 Sync Delayed

Syncthing is installed on two systems: an Ubuntu 18.04 server and Windows 10 Pro. Everything is working as expected except file watch syncing on Windows. Here are my observations:

  1. After creating a new file on Ubuntu, “Recent Changes” shows the file as “added” after 12-ish seconds – as expected per documentation. As soon as Ubuntu “Recent Changes” shows the file as “added”, Windows 10 UI updates showing the file in “out of sync items” where it sits for 92-ish seconds before Windows 10 “Recent Changes” shows “added”. My expectation is that “Out of sync items” will be immediately “added”. Is this a Syncthing bug, undocumented behavior, or Windows 10 issue?

  2. Reversing the above, when I create a file on Windows 10, Windows 10 UI doesn’t show “Recent Changes”'s “added” for 92-ish seconds. Once “added” appears on Windows 10, the file is immediately created on Ubuntu – no “out of sync items” delay. Again, I’m not expecting Syncthing on Windows 10 to take 92-ish seconds. Same questions as above. I assume the issue is the same.

Not expected. Please check the log and, if it doesn’t contain anything related, enable model debug logging and reproduce the problem.

Seems like the Windows client is repeatedly syncing. It will sync, then show “Up to Date”. I immediately create a new file. It might copy the file quickly to Ubuntu, but then the Windows client immediately starts a sync. Seems like copying files slows down greatly during a sync so subsequent files are slow to copy. Does file monitoring get lowered priority during a sync?

I’ve pasted in the end of the Windows log file (STTRACE=model). Also attached the log file section starting at execution startup.

[3OBTO] 2020/04/21 20:00:04.907502 folder.go:597: INFO: Completed initial scan of sendreceive folder "sw" (9rawa-euph5)
[3OBTO] 2020/04/21 20:00:04.907502 folder.go:247: DEBUG: sendreceive/9rawa-euph5@0xc000a9c000 next rescan in 46m24.554898582s
[3OBTO] 2020/04/21 20:02:25.869590 folder.go:184: DEBUG: sendreceive/9rawa-euph5@0xc000a9c000 Scan due to watcher
[3OBTO] 2020/04/21 20:03:02.881287 model.go:1971: DEBUG: indexSender@0xc001e678f0 for 9rawa-euph5 to  at [2a01:e35:2fbe:faa0:a1b9:849b:dc3b:5a8b]:52026-[2a01:e35:2fbe:faa0:618b:65d3:7658:6fd7]:22000/tcp-client/TLS1.3-TLS_AES_128_GCM_SHA256: Sending 1 files (<156 bytes)
[3OBTO] 2020/04/21 20:03:02.885304 model.go:1497: DEBUG: model@0xc00044c480 REQ(in): VDV2UK6-LKETEJJ-ABVVMF2-HLETYGD-BZWMTR2-TGHTJIG-6V6QQTJ-XACC4AK: "9rawa-euph5" / "ai\\syncthingtest97.txt" o=0 s=10 t=false
[3OBTO] 2020/04/21 20:03:02.903229 model.go:944: DEBUG: Index update (in): VDV2UK6-LKETEJJ-ABVVMF2-HLETYGD-BZWMTR2-TGHTJIG-6V6QQTJ-XACC4AK / "9rawa-euph5": 1 files
[3OBTO] 2020/04/21 20:03:03.659233 progressemitter.go:269: DEBUG: progress emitter: bytes completed for 9rawa-euph5: 0
[3OBTO] 2020/04/21 20:03:07.360330 model.go:829: DEBUG: model@0xc00044c480 Completion(VDV2UK6-LKETEJJ-ABVVMF2-HLETYGD-BZWMTR2-TGHTJIG-6V6QQTJ-XACC4AK, "9rawa-euph5"): 100.000000 (0 / 109370673598 = 0.000000)

syncthing.2.log (106.1 KB)

That changes the situation completely. Please read, and if your question remains, please provide a simple numbered list of the steps taken/happening and then how that differs from your expectation.

I should add that when I use a subset of the data, syncing seems to work ok.

My expectations are “Therefore it is expected that you experience a slight delay between making the change and it appearing on another device.” but 92-ish seconds isn’t a “slight delay”.

Do I understand correctly that a file change is designed to trigger an entire resync? If so then isn’t there a scale issue?

I’m trying to sync 1M files, 64K folders, 1GB total disk space. The subset which seems to be working is 1/3 that size. Are my expectations too grand?

Syncing and scanning are mutually exclusive operations, so it’s either looking for new files/changes or downloading new files.

Syncthing will suck for many small files, because for every file there is database overhead which becomes dominant.

We try to target scans at the files we were notified about as changed, but if many files change at the same time, we may fall back to rescanning everything. A rescan can mean just checking the metadata on the filesystem (if a file has not changed) or potentially re-reading the file and recomputing its hashes, which takes time.
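A rough sketch of that fallback decision, for illustration only (the function names and threshold here are assumptions, not Syncthing’s actual internals):

```go
package main

import "fmt"

// aggregateEvents collapses a burst of change notifications into scan
// targets. If more distinct paths changed than maxPerBatch allows, it
// falls back to a single scan of the folder root ("."), mirroring the
// "rescan everything" fallback described above.
func aggregateEvents(paths []string, maxPerBatch int) []string {
	seen := make(map[string]bool)
	var targets []string
	for _, p := range paths {
		if !seen[p] {
			seen[p] = true
			targets = append(targets, p)
		}
	}
	if len(targets) > maxPerBatch {
		return []string{"."} // cheaper to rescan the whole folder
	}
	return targets
}

func main() {
	// A few changes: scan only the affected files.
	fmt.Println(aggregateEvents([]string{"ai/a.txt", "ai/a.txt", "ai/b.txt"}, 10))
	// A flood of changes: fall back to scanning the folder root.
	fmt.Println(aggregateEvents([]string{"a", "b", "c", "d"}, 3))
}
```

The point of the fallback is that tracking thousands of individual paths costs more than one walk of the folder, but it also explains why one change in a huge folder can trigger a long scan.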

Anyway, if you are trying to implement some sort of real-time sync system, then I think you are looking at the wrong tool, as we definitely do not optimise for latency.

Also, your logs contain failures to sync, in which case all operations are aborted and delayed.

The logs are riddled with:

[3OBTO] 10:35:27 INFO: Folder "OneDrive" (hhzfc-fsuvq) isn't making sync progress - retrying in 2m0.0049603s.
[3OBTO] 10:35:27 INFO: "OneDrive" (hhzfc-fsuvq): Failed to sync 1 items
[3OBTO] 10:35:27 INFO: Folder "OneDrive" (hhzfc-fsuvq) isn't making sync progress - retrying in 4m0.0059898s.
[3OBTO] 10:40:38 INFO: "OneDrive" (hhzfc-fsuvq): Failed to sync 1 items
[3OBTO] 10:40:38 INFO: Folder "OneDrive" (hhzfc-fsuvq) isn't making sync progress - retrying in 8m0.0069649s.
[3OBTO] 10:51:01 INFO: "OneDrive" (hhzfc-fsuvq): Failed to sync 1 items
[3OBTO] 10:51:01 INFO: Folder "OneDrive" (hhzfc-fsuvq) isn't making sync progress - retrying in 16m0.0059566s.
[3OBTO] 11:07:01 INFO: "OneDrive" (hhzfc-fsuvq): Failed to sync 1 items
[3OBTO] 11:07:01 INFO: Folder "OneDrive" (hhzfc-fsuvq) isn't making sync progress - retrying in 32m0.0059155s.
[3OBTO] 11:39:01 INFO: "OneDrive" (hhzfc-fsuvq): Failed to sync 1 items
[3OBTO] 11:39:01 INFO: Folder "OneDrive" (hhzfc-fsuvq) isn't making sync progress - retrying in 1h4m0.0069535s.
[3OBTO] 11:50:08 INFO: "OneDrive" (hhzfc-fsuvq): Failed to sync 1 items
[3OBTO] 11:50:08 INFO: Folder "OneDrive" (hhzfc-fsuvq) isn't making sync progress - retrying in 1h4m0.0291176s.

Sorry for confusing my terminology. I’m changing one file. It appears that the one change triggers a rescan, and the rescan causes the second file change to take 92-ish seconds. Is this reasonable given the situation?

I don’t need real-time but 92-ish seconds is not at all what I’m looking for.

Would it be better to reduce the amount of data by 50% and break up the root folder into two or three synced folders? Or is Syncthing not able to handle even that?

I suggest you fix the errors I pointed out first, redo your test, and come back if you have issues.

Syncthing is able to handle a lot, yet you are testing while it’s in a bad state, so any advice I give you is meaningless.

The errors that you point out are for a folder named OneDrive that was removed prior to the test. You can see from the log file that the last run used a new folder named sw, which has no errors.

Go ahead and verify that sw is totally clean. So what’s the issue? How do we discover why the second file takes 92-ish seconds to copy? I’m willing to stick around a while; I think this will lead to either finding a bug or a documentation change.

Can you please reproduce the issue in isolation (potentially new folder/installation) and provide the logs for that specific run with the debug logging enabled as suggested?

You’re providing logs of hours of operation and then just throwing the number “92 seconds” out there, with no file names, no folder names, nothing.

I am willing to try and help, but so far your responses just seem like a stuck record on “92 seconds, 92 seconds” repeated in every post, with very little other meaningful information that could be used to debug this.

I’ve spent another day trying to understand the problem. Here’s what I’ve confirmed.

I easily duplicated the slowness on a second Windows computer. Using a directory of 300,000 files totaling 29 GB, Linux takes a few seconds to process a single file change while Windows takes minutes because it rescans. The file change often isn’t synced until the rescan is done. Windows is perhaps 100+ times slower than Linux. The disparity between Windows and Linux makes me think something is wrong with Windows scanning.

However, when changing a file in a nearly empty directory, both Windows and Linux are instantaneous (after the 10 s delay).

I’ll spend another day on trying to discover any further information. I’ll also try to find a public dataset that exhibits the problem so that you guys can duplicate it.

I don’t see why it’s scanning at all in order to process a change. One of the errors indicated you might have a case conflict (something having been renamed from foo to Foo or such). Syncthing doesn’t deal well with that.

Also, if you enable scanner logging as suggested, you would see what it’s scanning and how much time it’s spending doing that, and then you’d be able to compare what on Linux looks different from Windows.

Both Windows and Linux run the same code, so I’d be surprised if it’s us; there might be a difference in how Linux and Windows report file changes, however, which is somewhat out of our control. You can also enable watchaggregator logging to see how file change notifications are aggregated.
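As an aside, a sketch of why the granularity of those notifications matters so much: if the backend names the exact file, only that one path needs walking; if it only names the containing directory, every entry underneath must be checked. All names below are illustrative stand-ins, not Syncthing’s actual watchaggregator code, and dirEntries fakes a filesystem listing:

```go
package main

import "fmt"

// scanTargets expands a change notification into the paths a scanner
// would have to visit. A file-level event yields one target; a
// directory-level event forces a stat of every child to find what
// actually changed.
func scanTargets(eventPath string, isDir bool, dirEntries map[string][]string) []string {
	if !isDir {
		return []string{eventPath} // exact file known: walk just this path
	}
	// Directory-level event: check every child.
	var out []string
	for _, name := range dirEntries[eventPath] {
		out = append(out, eventPath+"/"+name)
	}
	return out
}

func main() {
	fs := map[string][]string{
		"ai": {"syncthingtest310.txt", "other1.txt", "other2.txt"},
	}
	fmt.Println(len(scanTargets("ai/syncthingtest310.txt", false, fs))) // file-level event
	fmt.Println(len(scanTargets("ai", true, fs)))                      // directory-level event
}
```

With hundreds of thousands of entries under the reported directory, the difference between the two cases is the difference between one stat and a walk of the whole subtree.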

There’s no errors reported in my new test folders.

On Windows, a single file change always triggers a rescan. It looks to me like the rescan scans all subdirectories of the changed file’s directory, so a change in the root directory can be expensive.

I think that’s intended, and we do the same for both Linux and Windows, but scanner logs as suggested should confirm that.

What does the scan do? For every file, doesn’t it check the database’s hash against a newly generated hash? So if the change is in the root, possibly 100% of the files will be opened, read, and hashed?
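As the earlier reply noted, a rescan does not rehash every file; it first compares cheap metadata and only re-reads files that look changed. A minimal sketch of that per-file decision (field names are assumptions, not Syncthing’s actual database schema):

```go
package main

import "fmt"

// fileRecord holds the two cheap-to-compare attributes a rescan checks
// before deciding to rehash a file.
type fileRecord struct {
	size    int64
	modTime int64 // Unix nanoseconds
}

// needsRehash compares the size and modification time stored in the
// database against the filesystem's current values. Only on a mismatch
// is the file re-read and rehashed; otherwise scanning that file is
// just a stat call.
func needsRehash(db, fs fileRecord) bool {
	return db.size != fs.size || db.modTime != fs.modTime
}

func main() {
	stored := fileRecord{size: 30, modTime: 100}
	fmt.Println(needsRehash(stored, fileRecord{size: 30, modTime: 100})) // unchanged: stat only
	fmt.Println(needsRehash(stored, fileRecord{size: 31, modTime: 200})) // changed: rehash
}
```

So even a full rescan of an unchanged tree is mostly stat calls and database lookups, which is why the per-file database overhead mentioned earlier can dominate with many small files.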

I’ll have a look at the scanner log files tomorrow. I suspect the issue is going to be Windows either having far slower database operations per minute, far more files being scanned, or a huge file-caching difference.

Another clue is that a file change in directory /ABC is instantaneous, but when the same directory sits several levels deep inside a much larger folder, the same file change takes minutes. This suggests to me that database speed or caching is the difference. A scanner log file might help refute that theory.

Excerpt from log files, with scanner switch enabled, is shown below for Linux followed by Windows. The excerpts were produced by creating a tiny file. Both entries appear after the initial 10s aggregation delay.


  1. [VDV2U] 2020/04/23 12:52:03.946575 walk.go:104: DEBUG: Walk [ai/syncthingtest351.txt] Matcher/[]@0xc006958870
  2. [VDV2U] 2020/04/23 12:52:03.947018 walk.go:372: DEBUG: to hash: ai/syncthingtest351.txt File{Name:“ai/syncthingtest351.txt”, Sequence:0, Permissions:0644, ModTime:2020-04-23 12:51:53.942600848 +0200 CEST, Version:{[{VDV2UK6 1}]}, Length:30, Deleted:false, Invalid:false, LocalFlags:0x0, NoPermissions:false, BlockSize:131072, Blocks:[]}
  3. [VDV2U] 2020/04/23 12:52:03.947062 walk.go:193: DEBUG: real to hash: ai/syncthingtest351.txt
  4. [VDV2U] 2020/04/23 12:52:03.948287 walk.go:171: DEBUG: Walk progress done 3c5ue-lvtpx [ai/syncthingtest351.txt] Matcher/[]@0xc006958870

Linux behaves exactly as we expect. syncthing is immediately ready to sync another file. The UI immediately updates to “Up to Date”.


  1. [U2RFB] 2020/04/23 13:25:13.824587 walk.go:104: DEBUG: Walk [ai] Matcher/[]@0xc00a76d710
  2. [U2RFB] 2020/04/23 13:25:49.634773 walk.go:372: DEBUG: to hash: ai\syncthingtest310.txt File{Name:“ai\syncthingtest310.txt”, Sequence:0, Permissions:0644, ModTime:2020-04-23 13:25:03.0214323 +0200 CEST, Version:{[{U2RFBF3 1}]}, Length:10, Deleted:false, Invalid:false, LocalFlags:0x0, NoPermissions:false, BlockSize:131072, Blocks:[]}
  3. [U2RFB] 2020/04/23 13:25:51.467870 walk.go:193: DEBUG: real to hash: ai\syncthingtest310.txt
  4. [U2RFB] 2020/04/23 13:25:51.467870 walk.go:177: DEBUG: Walk 3c5ue-lvtpx [ai] current progress 0/11 at 0.0 MiB/s (0%)
  5. [U2RFB] 2020/04/23 13:25:51.468851 walk.go:171: DEBUG: Walk progress done 3c5ue-lvtpx [ai] Matcher/[]@0xc00a76d710
  6. [U2RFB] 2020/04/23 13:28:05.255682 walk.go:104: DEBUG: Walk [ai] Matcher/[]@0xc00a76d710

Timings are very different.

  1. Line 1: Linux shows “DEBUG: Walk [ai/syncthingtest351.txt] Matcher” whereas Windows shows “DEBUG: Walk [ai] Matcher”. Why are they different?
  2. Line 2: Windows took 36 seconds longer for “to hash” to appear.
  3. Line 3: Windows took 2 seconds longer for “real to hash” to appear.
  4. Line 4: Windows has a “current progress” line. Maybe it’s just informative?
  5. Line 5: Windows outputs “real to hash” and “walk progress done” at the same instant, same as Linux.
  6. Line 6: Windows line 6 is shown only to indicate when another file is ready to be synced. It shows that Syncthing can only sync the next file around 3 minutes after the previous file’s creation, so there’s a mysterious 130-second gap between “Walk progress done” and ready to sync (the UI’s “Up to Date”).

Summary of issues:

  1. Why is there a difference in filename in line 1 (matcher)?
  2. Why does “to hash” take 36 seconds longer on Windows?
  3. Why does “real to hash” take 2 seconds longer on Windows?
  4. Why does Windows take 130 seconds after appearance of “Walk progress done” for the sync to be ready for the next file?