Preparing to sync query / issue

I have one receive-only folder of 960 GB. On an earlier version this folder would eventually sync, but since 1.4 it plods along syncing and then, for no apparent reason, decides to go back to ‘Preparing to Sync’. The receiving end has no watcher or rescans enabled, and initially the send-only end had the watcher disabled with a 10-day rescan interval, but that’s all now disabled in case it was invoking the recheck.
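For reference, that combination of settings corresponds roughly to a folder entry like the sketch below in config.xml (the folder ID, label and path are made-up placeholders; a rescan interval of 0 and fsWatcherEnabled="false" are what disabled rescans and a disabled watcher look like in the config):

```xml
<!-- Excerpt from config.xml on the receiving end: receive-only folder,
     periodic rescans disabled (rescanIntervalS="0"), filesystem watcher off. -->
<folder id="backup-offsite" label="Server Backup" path="D:\Backups\Offsite"
        type="receiveonly" rescanIntervalS="0" fsWatcherEnabled="false">
    <!-- shared devices, versioning, etc. omitted -->
</folder>
```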

On both ends the data is on RAID 10 drives, and neither end is touched by anyone other than myself; I use this to save an offsite copy of their server backup, which I tend to do once a month.

I can see in Resource Monitor’s disk activity that it’s going through the file, but I have no idea why it keeps stopping the sync and rescanning itself when nothing has changed.

Is this by design, or is there something I can tweak to keep it syncing all the time?

Cheers

Syncing happens in cycles, so it might take a few cycles to fully sync.

If it’s making progress, I don’t see an actual issue here?

Maybe some background info: previously there was only “Syncing”, which prompted many complaints that it shouldn’t show that when the data is already identical. However it still needs to sync, i.e. compare files to know that they are identical, with no file transfer happening. To address that, there are now two phases: “Preparing to Sync” is the phase that happens locally only, comparing what is in the database and on disk and updating the database as needed to get in sync, and then “Syncing” once there’s actual network transfer involved.

I’m finding that with a very large file the progress is slowed considerably, mainly due to the long time spent rescanning. “Preparing to Sync” can take several hours, then it will sync for a day or two, and then the cycle repeats.

I was really asking why it would need to stop and resync if nothing has changed in the file and the databases are tracking all the data coming in. It seems odd that it doesn’t just keep running until a change is detected, and only then stop the sync to check what has changed in the file.

When it needs to retry, there should be failed items with relevant error messages displayed. If that is not the case, please enable model debug logging and share those logs; something isn’t working as expected then.

Will do. It might be a few days before I reply, as I’m guessing you will want to see the log from when it changed from syncing to preparing.


If it’s syncing then something has changed.

To check Audrius’ point (whether something changed on the remote, as opposed to a local problem), also enable db debug logging (it prints information about internal versions, which I’m not sure are included in model).
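For anyone following along, a rough sketch of how the model and db debug facilities can be switched on: either via the GUI under Actions > Logs > Debugging Facilities, with the STTRACE environment variable at startup, or at runtime over the REST API. The API key and address below are placeholders for your own values:

```sh
# Start Syncthing with the model and db debug facilities enabled
# (facilities are comma-separated in STTRACE).
STTRACE=model,db syncthing

# Or toggle the facilities at runtime via the REST API without a restart.
curl -X POST -H "X-API-Key: YOUR_API_KEY" \
  "http://127.0.0.1:8384/rest/system/debug?enable=model,db"
```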

I have a folder on a remote server that I drop a backup into; it’s not shared by anyone, and I only update it once I have a full copy on the remote end. Unless the server’s built-in AV scans it, nothing will have changed.

On the receiving end I run a dedicated server for receiving files, and other than looking at Syncthing, I don’t touch the files.

It only seems to happen with very large files; another folder did the same, but as it was smaller it eventually synced.

Sure, yet I stand by my argument that if it is syncing, something has changed.

I wiped the receive-only files to start again. On a restart on 1.4.2 it started syncing. Then 1.5 rc1 popped up, so it updated, and now it’s gone back to ‘Preparing to Sync’ but appears to be doing nothing: no disk activity, no data transfer on the network.

I will send the current log to Simon.

Just to clarify, “preparing to sync” is essentially everything about the old “syncing” except actually downloading data from the network. That includes database operations, potential rehashing of existing partial data, preparing and copying destination files, etc.

I’m not sure what it’s doing if there is no real I/O or CPU load going on, but otherwise I wouldn’t be surprised if a previously ten-minute “syncing” pass is now a nine-and-a-half-minute “preparing” pass followed by a shorter “syncing” while it gathers a few missing blocks from the network.

It’s just started syncing, although with zero data transfer. I’ll leave it alone and watch what it’s doing for the next day or so.

Cheers and thanks

Just a side observation: you seem to be syncing Hyper-V images and forcing compression on binary data, which seems a waste of resources, as it is unlikely to have any effect.

If you recall, my setup consists of two Syncthing installations on two devices on the same network, with port forwarding to 22000 / 22001. Initially I had all 39 folders on one device, but concurrency caused issues with delays in starting to scan, or, with -1 set, extreme I/O delays, so I split the folders across the two. I mention this as I think it’s relevant to the issues in this thread.

On device two (22001), which is where I had the ‘Preparing to Sync’ messages, the message would often appear on several folders, but it never appeared on device one (22000). On device two I would also get a lot of red I/O timeout warnings under the remote device name (not the folder name).

So a few days ago I re-merged all the folders back onto the original device (port forwarded to 22000) and I have not seen any further errors. The affected massive jobs have not stalled or waited; they have been syncing without issue. There have been no further red timeouts.

This is now making me wonder whether port 22000 is somehow more important to the functionality, and whether Syncthing struggles if another port is used or multiple installations are on the same network. I turned settings such as local discovery, relaying and UPnP on and off, but none had any effect.

Another observation: on the send-only side I set all jobs to tcp://ip:22000 or 22001. For those set to 22001 I would often see other ports being used in the remote device’s ‘address’ field, but there are forum threads indicating that this is normal and can be dismissed. It probably did the same for the folders on port 22000, but I tended to ignore those as everything was working normally.

I’ve also reset the compression to metadata. At the time I set everything to compress, as I figured it would help speed things up; in reality I doubt it made much difference.
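For what it’s worth, the address and compression changes described above end up in the device entries of config.xml roughly like the sketch below (the device ID, name and IP are placeholders; compression accepts always, metadata or never, and multiple address entries can be listed alongside dynamic):

```xml
<!-- Excerpt from config.xml: remote device pinned to a fixed address,
     with compression limited to metadata only. -->
<device id="DEVICE-ID-PLACEHOLDER" name="offsite-server" compression="metadata">
    <address>tcp://192.168.0.20:22000</address>
    <address>dynamic</address>
</device>
```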
