Syncthing only syncing folders (not files)

I’ve been testing Syncthing as a potential replacement for CrashPlan across multiple devices. I was very happy with the setup and feature set, but after setting it up on two machines and leaving them overnight, I’ve noticed some strange behavior: Syncthing appears to be syncing only folders, not the files inside them.

Some special notes about the configuration:

  • Machine A is set up to Send and Receive a folder (User folder on Windows) from Machine B
  • Machine B is set up to Send Only and is on the same LAN as Machine A
  • The connection is direct (not Relayed)
  • Machine A is set up to have the received folder saved to a network folder (addressed via a UNC path)
  • Both machines are running Syncthing via Task Scheduler (a sketch of one way to set that up follows this list)

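For reference, one way to create such a task from an elevated command prompt is with schtasks; a minimal sketch, assuming Syncthing lives at C:\syncthing\syncthing.exe (the install path and task name are just placeholders):

    schtasks /Create /TN "Syncthing" /SC ONLOGON /TR "C:\syncthing\syncthing.exe -no-console -no-browser"
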
Neither machine seems to be reporting any errors, and no data is being transferred (other than the folders, which are being created on the network drive). The network drive is a NAS to which I can typically get ~30 Mb/s from Machine A (which has a wired connection to the NAS).

I’m wondering if the issue has to do with the size of the file tree (lots of small files spread out through a big hierarchy).

Version Information

Syncthing Version: v0.14.36 (Windows 64bit)

OS Version: Windows 10 64bit

Browser Version: Google Chrome Version 60.0.3112.101 (Official Build) (64-bit)

From which side is the screenshot?

What does the dialog that opens when you click the out-of-sync items link say?

The screenshot is from Machine A (i.e. the one which is attempting to receive the files and store them on a network drive).

The out of sync items dialog shows this: [screenshot]

Edit: One additional note. Folders are still being created (it’s not frozen), but at an incredibly slow pace. Neither machine shows more than 20% CPU usage (with the uploading laptop consistently showing more usage).

To anyone who ends up seeing this with a similar issue: I’m not sure exactly what changed, but after a few restarts of both Machine A and Machine B, things started to speed up. I’m still not sure why Syncthing creates the entire directory tree first rather than doing some sort of depth-first traversal, which would let it start using the network bandwidth while the remainder of the tree is constructed (in this configuration, building the tree apparently takes ~24 hours).

I’m not sure I would call this a bug, but perhaps it would be a nice feature to make the first-time sync a bit more efficient. The reality is that with Windows Explorer the transfer would likely have been done by now - so something is bottlenecking things.

I think we can safely say this issue is resolved though.

I’m glad to hear the sync was successful. My guess is that the NAS is on the lower end of computing power for creating files/folders, handling Syncthing’s encryption, etc.

It also looks like you want to replace the backup tool CrashPlan (and yes, I’m also very disappointed about the discontinuation of the home products!) with the sync tool Syncthing. If you are always transferring data one way, maybe you should take a look at robocopy (command line) or other software for backing up your data to your NAS.
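
For example, a one-way robocopy mirror of a user folder to the NAS might look roughly like this (the paths here are placeholders; note that /MIR deletes files on the destination that no longer exist on the source, so test it first):

    robocopy C:\Users\YourName \\nas\backup\YourName /MIR /Z /R:2 /W:5 /LOG:C:\logs\backup.log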

The NAS is definitely slow (I previously tried running Plex on it, but it couldn’t transcode on the fly because of CPU limitations), so I’m sure that’s what’s going on. I still think it would be good for Syncthing not to build the whole directory structure up front - it bottlenecks things unnecessarily. Two days in and I’ve got 25% synced - so it’s going, just slowly. We’ll see how well it works once the first-time sync is done.

Re: CrashPlan. I appreciate the clarification! I’m still intending to use CrashPlan as my backup service, but since the business plan now charges per machine (and I have 4: 2 laptops and an offsite desktop in addition to my main desktop), my thought was to use Syncthing on all but my main machine and just pay the $10/mo for CrashPlan on that one machine. I’m already using a similar workaround to back up my NAS, which isn’t supported (I got it working briefly, but it wasn’t reliable). Syncthing has basic versioning, so I figured that might provide a little extra protection given the potential for slow transfers of files to the main machine.
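
For what it’s worth, simple versioning can be turned on per folder in the web GUI, or directly in config.xml; a minimal sketch, where the folder id, path, and keep count are just examples:

    <folder id="user-folder" path="C:\Users\YourName">
        <versioning type="simple">
            <param key="keep" val="10"></param>
        </versioning>
    </folder>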

Perhaps I’m misunderstanding the use case for Syncthing. I thought it could function similarly to the peer-to-peer “backup” (really a sync with very simple versioning) CrashPlan used to provide. I’m open to suggestions! Still working out how I want to handle the changes to CrashPlan’s service.

Thank you!

Deletion on one node is propagated everywhere. Sounds like a lousy backup.

I don’t think this is a meaningful bottleneck. Syncthing applies the operations it can perform locally before pulling data from the network - this includes creating directories (which it needs to do before putting files in them) and handling renames and deletes. The directories need to be created at some point; it’s just that creating 50,000 of them apparently took a while on your system, given its constraints in disk and CPU performance. It should have shown the state “Syncing” with the numbers in your screenshot increasing every few seconds, so I’m not sure I see the issue.
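
Conceptually, the ordering described above is something like this (a simplified sketch in Go, not Syncthing’s actual code):

    package main

    import "fmt"

    // Simplified sketch of the pull ordering described above: directory
    // creations and deletions are applied locally first, and only then is
    // file data pulled over the network.

    type kind int

    const (
        directory kind = iota
        deletion
        fileData
    )

    type item struct {
        name string
        kind kind
    }

    func main() {
        needed := []item{
            {"docs", directory},
            {"docs/a.txt", fileData},
            {"old.txt", deletion},
        }

        var pullQueue []item
        for _, it := range needed {
            switch it.kind {
            case directory, deletion:
                fmt.Println("apply locally:", it.name) // no network transfer needed
            default:
                pullQueue = append(pullQueue, it) // fetched from peers afterwards
            }
        }
        for _, it := range pullQueue {
            fmt.Println("pull from network:", it.name)
        }
    }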

Deletion propagation can be disabled or, better, handled by filesystem snapshots. Syncthing is no worse a backup solution than rsync, and lots of people (me included!) use that for backups.
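
For anyone who does want deletes ignored on the receiving side, there is an advanced per-folder option for it; a minimal sketch of the relevant bit of config.xml (the folder id and path are examples, and the option is generally discouraged, since the two sides then intentionally diverge):

    <folder id="user-folder" path="C:\Users\YourName">
        <ignoreDelete>true</ignoreDelete>
    </folder>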


The issue was just that the network connection was virtually unused for the first 24 hours of the sync, which seems a little wasteful. I’m sure it simplifies the process, but as a first-time user it definitely violated my expectations (enough to make me google the “problem” for an hour and then write this post). The folder creation rate was obviously not Syncthing’s fault (although if building the directory structure is a distinct first step when adding a folder, a UI message like “Building directory structure…” would have told me it was an intended operation), but it’s not hard to imagine Syncthing beginning file transfers into directories as soon as they are created on the remote system.

As far as deletions propagating goes, I agree it’s a non-issue. CrashPlan will still be backing up versions and keeping a history of all these files on the main system (that’s sort of the point). Yes, it’s possible that because of the added delay a file could be changed before the change propagates to CrashPlan, but I don’t see this as any different from what can happen on a local system with just a backup tool running (it just involves a potentially longer delay). CrashPlan propagates file changes in batches, not continuously with every file save.
