Syncthing Configuration for one-way sync with large numbers of small files

Hello Syncthing Community!

I’m trying to build a system to move files from one server to another. Let’s say it looks like this:

ServerA → ServerB

Where ServerA is set to send only and ServerB is set to receive only and to ignore deletes (as we will clean up files from ServerA as they get to a certain age to keep disk usage down).

After a certain point, say 8 million files, Syncthing on ServerB becomes very slow to start a sync, and it begins falling behind on files from ServerA, which are created continuously. I suspect this has to do with the size of the database, especially since I deleted all of the files from ServerB and it didn’t improve the “preparing to sync” time.

Given all of this, is there a way to remove files from Syncthing’s database to keep its size down and manageable? It appears the size of the database is causing the slow sync issue. I’ve looked through the REST API documentation, but there doesn’t seem to be a way to do that; Syncthing just considers a deleted file “Locally Changed” and leaves it in the multi-gigabyte database.

Thank you for any help y’all can give!

Would you mind sharing some hardware specs and OSes on the two servers?

Since you deleted the files from ServerB while Syncthing is set to ignore deletes from ServerA, would setting both servers to send-receive and using a scheduled task to delete/move files from the Syncthing folder on ServerB (thereby causing ServerA to delete them) be an option? (It would result in Syncthing pruning its database sooner.)
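As a rough sketch, the scheduled task could be a one-line cron job (the folder path and 7-day retention here are made-up placeholders):

```shell
#!/bin/sh
# Hypothetical prune job for ServerB, run from cron (e.g. hourly).
# SYNC_DIR is a placeholder; point it at the Syncthing folder on ServerB.
SYNC_DIR="/srv/syncthing/data"

# Delete regular files older than 7 days. With both sides set to
# send-receive, Syncthing propagates the deletion back to ServerA,
# which lets it prune the corresponding database entries sooner.
find "$SYNC_DIR" -type f -mtime +7 -delete
```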

The following might be useful:

With regards to the database, Google’s LevelDB can handle a large amount of data – billions of entries and multi-terabyte-size databases.

If Syncthing’s files are on the same storage device as the 8 million files it’s syncing, that can be a potential bottleneck, especially if the storage medium is a conventional spinning hard drive.

Hardware specs are likely not the bottleneck. They’re virtualized Linux servers with 12 cores and 64 GB of RAM, and the disks the files are stored on are an all-flash RAID array.

Since you deleted the files from ServerB while Syncthing is set to ignore deletes from ServerA, would setting both servers to send-receive and using a scheduled task to delete/move files from the Syncthing folder on ServerB (thereby causing ServerA to delete them) be an option? (It would result in Syncthing pruning its database sooner.)

I like this, but another part I didn’t mention is that we have two ServerAs and two ServerBs, which complicates the problem. I was in the mindset of “find a way to manipulate the Syncthing database” rather than finding another way to do it.

Here’s how the “cluster” is set up: ServerA1 and ServerA2 are producers; files are placed within their Syncthing directories, and we don’t need the files synced between them. Both ServerA1 and ServerA2 are configured as Send Only. They are both connected to ServerB1 and ServerB2, which are also connected to each other and set to Receive Only with ignore deletes. This architecture definitely complicates things, and each ServerB is responsible for managing its files independently; each will run an independent copy of our scripting system.

There’s also a future idea for a ServerC that’ll be set up like the ServerBs but will be long-term storage.

Ah, the hardware and software specs should be fine even with 8 million files. :nerd_face:

There are a sizable number of Syncthing users on this forum with Raspberry, Orange and other “Pi”-like SBCs plus NAS appliances with limited hardware specs (one setup had 512MB of RAM).

If you haven’t already, definitely check out Syncthing’s configuration tuning page:

The continuous addition of files on the pair of ServerA’s pushing changes to the interconnected pair of ServerB’s is going to require some performance tuning.
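For example (a hedged sketch; the folder ID, path, and values below are placeholders, not recommendations), raising the watcher delay and the full-rescan interval on the folder in config.xml can reduce scan pressure when changes arrive continuously:

```xml
<!-- Hypothetical folder entry on a ServerB; id/path are made up. -->
<folder id="data" path="/srv/syncthing/data" type="receiveonly"
        rescanIntervalS="86400" fsWatcherEnabled="true" fsWatcherDelayS="60">
</folder>
```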

What’s the filesystem on the ServerB’s?

I’ll take another look at it. The ServerAs and ServerBs are using XFS as the filesystem on top of LVM. ServerC will likely be NTFS, but it’s a future thing and we don’t have a requirement for the files to get there as quickly as possible.

If I understand the topology correctly, with continuous updates from the ServerAs plus the pair of ServerBs syncing with each other, I think the primary issue might be filesystem performance. ServerBs are going to be rescanning every 10 seconds as change notifications from inotify are processed.

XFS is great, but I’ve found that it doesn’t handle lots of small files well (especially directories with lots of files).

I almost always skip LVM on virtual machines. Unlike bare-metal servers, it’s very easy to resize a virtual disk and/or add additional disks, so most of the advantages of LVM aren’t necessary. Performance-wise, since your VMs are on a flash-based RAID, LVM doesn’t add much overhead, so it’s mostly for simplifying maintenance.

Even on Windows, NTFS performance tends to get sluggish as the number of files increases (especially when there are lots of small files).

If ServerC is going to be running Linux, there’s a choice of ntfs-3g (FUSE-based) or the newer NTFS Linux kernel module (kernel 5.15 and above). Depending on how ServerC is going to be used, either one might be okay.

Have you considered using something else for this?

Your use case is not really what Syncthing is targeting. It targets continuous bidirectional sync, ideally with content that doesn’t change much (i.e., personal files). That doesn’t seem to be your use case, and you are asking how to make Syncthing do what it’s not made to do.

In your case I’d just run rsync in a loop. It supports one-directional sync out of the box without hacks, it doesn’t have a database that grows, and without the database overhead, tons of small files are fine.
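As a minimal sketch of that loop (the paths and host name are placeholders, assuming SSH access from ServerA to ServerB):

```shell
#!/bin/sh
# One-way push loop from ServerA to ServerB; everything here is a placeholder.
SRC="/srv/outbox/"
DEST="serverb:/srv/inbox/"

while true; do
    # -a preserves times/permissions; --remove-source-files deletes each
    # file on the sender only after it has been transferred successfully,
    # which also keeps the source directory from growing without bound.
    rsync -a --remove-source-files "$SRC" "$DEST"
    sleep 10
done
```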

Syncthing is just a really poor choice for what you are trying to do.


If ServerC is going to be running Linux, there’s a choice of ntfs-3g (FUSE-based) or the newer NTFS Linux kernel module (kernel 5.15 and above). Depending on how ServerC is going to be used, either one might be okay.

ServerC will be Windows, which is the only reason we’re using NTFS. It doesn’t need to be anywhere near real time, since we’ll be using it to store the files long term.

Even then, it’s still better than what we’re currently using to do the same thing (Azure File Sync). We also like that we can leverage the Syncthing database to ensure that files have been copied over before cleaning them up on the sender side.

Sounds like you could still get away with just rsync


If you do decide to use rsync, it has a handy set of exit codes. I use them to automatically prune files after they’ve been successfully uploaded to a NAS.
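A small sketch of exit-code-gated pruning (the host name and paths are made up):

```shell
#!/bin/sh
# Push the outbox to a NAS, then prune only if rsync reports full success.
# "nas" and the paths are placeholders.
rsync -a /data/outbox/ nas:/backup/outbox/
status=$?

if [ "$status" -eq 0 ]; then
    # Exit code 0 means every file transferred; safe to prune old files.
    find /data/outbox -type f -mtime +1 -delete
else
    # Any non-zero code: skip pruning and log; a later run will catch up.
    logger "outbox rsync failed with exit code $status"
fi
```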

Another option that may be of interest is Lsyncd:

We really like Syncthing for delivery confirmation. I’m trying out some different configurations and we’re going to look into moving files out of the directory to keep the number of files Syncthing is managing at a reasonable level.

If it doesn’t work, I’ll look into Lsyncd. The issue I may have with it is the lack of certainty that a file got picked up and moved. From what I was reading, it can get overwhelmed and miss files, and that is going to be a no-go for our use case.

Thank y’all for the help, and if you have any recommended tuning values or a rearchitected layout of Syncthing instances I could try, please let me know.

Currently I’ve set it up so that both ServerAs sync to each other and each is tied to its respective ServerB, but this will likely expand to a mesh allowing them all to sync. I still have the ServerBs ignoring deletes, which lets each manage its own file directory and automatically clean up the ServerAs as files are moved out.

Fortunately, it’s not as bad as it sounds. The “problem” isn’t actually unique to Lsyncd, but to all apps that use filesystem notifications.

The event queue has a finite size; that’s the crux of the problem. Let’s suppose the queue has enough slots to hold 10 items (each a directory or file event). If the inotify subsystem in the Linux kernel generates events faster than the process responsible for handling them can read them, the queue fills up and further events are simply dropped (the kernel signals this with a single IN_Q_OVERFLOW event), so some changes will be missed.

Or in other words, if a program is able to process 1 file change every second, but the rate of change is 2 files per second, the program will “miss” 1 file per second (50%).

Whether it’s Syncthing or Lsyncd, if both are configured for a 10-second delay before a scan-and-sync, and a file is created then deleted in under 10 seconds, it’s as if it never existed (except for its event queue entry).

Lsyncd uses inotify to watch for changes, then calls rsync (or another suitable program) to do the actual sync. If the sync delay is 10 seconds on ServerA, don’t prune files for 20 seconds so that there’s been enough time for it to be picked up by rsync within two successive runs.

Or in other words, if one new file is created every second and the sync interval is every 30 seconds, only deleting files older than 60 seconds would ensure that there’s a 30-second buffer.

Rsync has --remove-source-files, which deletes files after they have been transferred. I think that’s what you are after; that’s your confirmation bit.

I have a question for you on this: can Lsyncd handle syncing to multiple endpoints? It doesn’t seem like it can in one daemon, so I may need two, and they can’t clash.

If I understand the question correctly, yes. A single Lsyncd daemon can invoke multiple instances of rsync (or other sync program) to sync changes from a source directory to multiple destinations.
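As an untested sketch (host names and paths are placeholders), a single Lsyncd config with two sync blocks, one per destination, might look like:

```lua
-- Hypothetical lsyncd.conf: one daemon, one source, two destinations.
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/run/lsyncd.status",
}

-- Each sync{} block spawns its own rsync runs independently.
sync {
    default.rsync,
    source = "/srv/outbox",
    target = "serverb1:/srv/inbox",
    delay  = 10,
}

sync {
    default.rsync,
    source = "/srv/outbox",
    target = "serverb2:/srv/inbox",
    delay  = 10,
}
```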

Sounds like I have a new direction for this, especially if I can have it delete files once they’re verified sent. The question then is how to use rsync’s --remove-source-files while still ensuring files get to each individual server. I’ll have to research all of this; one thing I saw was using a script that deletes the sent files once it fully completes. If you happen to have any configuration that could help with this, please share.

Thank you both for your help.

So for my particular use case, a continuous stream of new data files are generated and moved through a chain of servers: A → B → C

A (multiple unrelated servers) collects, processes, and uploads data to B (a server cluster), which in turn transfers files to C, a long-term archive (> 50 million files).

For diagnostic, testing, and fail-safe reasons, B must temporarily cache 24 hours’ worth of data files, so rsync’s --remove-source-files option couldn’t be used. Instead, I used rsync’s exit codes like this:

rsync --recursive --checksum --times --inplace --partial /src/* user@c:archive/ && find /src/ -type f -mtime +1 -delete

Rsync only returns exit code 0 when the source files and directories have been successfully transferred.

So the only time find is called to delete files older than 24 hours is if rsync is successful. If rsync encounters any errors, a future run will eventually sync any “missed” files. As long as B doesn’t run out of storage space, it doesn’t matter if C is temporarily offline.

(Additional error checking sends an email alert if there are repeated failures over several days.)


I see. Are you running rsync there from a cron job, or is it done as part of the Lsyncd configuration?

Do you have any issues with Lsyncd getting overwhelmed by the deletions? I had attempted something similar: starting with a single directory, then moving new files into individual directories to be synced to their destination servers, allowing file deletions in those directories while keeping all of the source files intact in their own directory. This caused a problem with Lsyncd getting overwhelmed by all of the file operations.

In the example above, it’s run via cron because there wasn’t any need for updates to propagate in real-time. However, if there ever is a future need, it’d also work fine with Lsyncd.

For a different project, I’ve been using incron. At a basic level, it’s like Lsyncd, waiting for filesystem events and executing an action based on defined triggers, but incron isn’t nearly as flexible as Lsyncd.

No, I haven’t run into any issues with Lsyncd or incron getting overwhelmed with file operations.

If 10,000 files are deleted in one pruning operation but your /proc/sys/fs/inotify/max_queued_events is set to 8192, roughly 1,800 events are going to be missed, not because Lsyncd can’t handle it, but because inotify’s event queue is too small.

One unknown is your rate of changes on the ServerA’s. Invoking rsync once every second on a source directory with 8 million files isn’t likely to work well.