OK, so like many people, I’m moving over from Sync. Not a philosophical issue – I even bought the Pro (now that it’s one-time, and not a subscription). It came down to two things:
Sync doesn’t handle massive things very well – and as you’ll see below, I have massive things that need handling.
At least at a large scale, reliability breaks down. I’ve had ghost deletions, and zombie files, and things that wouldn’t sync, and all sorts of other issues. It’s a headache for me, but since I’m IT for my mother and sister, it becomes untenable. Particularly because we live 13 timezones apart, and I work 90 hours a week, so it’s difficult to respond to issues quickly.
I use it for a few different things, mainly the following:
Server syncing. One folder for torrents (both manual and automated) and one for the downloads. This goes to both my PC and my mother’s PC.
Phone. This is just for me (it goes to both my PC and my laptop). I find it easier to have a sync set up so that I can just drop a file into my phone folder on my PC, rather than screwing around with FTP on the phone. There’s also photo sync from the phone to the PC; doing housecleaning on my pictures is easier on my PC than on my phone.
Development. I do a lot of programming (C++, PHP) and materials creation (GIMP, video mastering, A/V transcoding, etc.) for my company, and I like being able to seamlessly transition between my desktop (at home) and my laptop (in the office, on the train, etc.).
Backups/duplication. Both my mother’s files and mine are backed up to each other’s main PCs (and my sister’s are mirrored there as well). Additionally, 2 other computers in her house have her music/movie/TV collections mirrored to them so that she doesn’t have to go over a network share for them. My mother’s laptop also has everything she wants to take with her when she travels mirrored to it, so she doesn’t have to dump a bunch of stuff over every time she leaves for somewhere.
As for scale, altogether it’s about 7 TB and 4,000,000 individual files. The smallest is a single byte, and the largest is almost 200 GB. The connected devices are 1 phone, 4 desktops, 3 laptops, and 1 rackmount server, spread between 4 cities on 3 continents and running Windows (7, 8, 10), Linux (Gentoo), and Android.
I’m still (slowly) migrating over, but so far, it looks like it will at least match, if not beat, Sync on RAM usage, trounce it on CPU usage (after initial hashing), but lose on transfer speed.
Yes, first BTSync and then Resilio.
I can confirm that Syncthing is performing FAR better than either of them.
I’m currently syncing 4.275 TB and 200,000 items, in 14 Syncthing folders on my main PC, to 4 different locations.
The speed is a bit lower, and I have to use a direct IP connection (resolved through DNS) to maintain stable connections, but it’s using <1% CPU and ~350 MB RAM. Resilio used ~2x the RAM and 5-10x the CPU. Even the scanning/hashing was faster with Syncthing.
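For anyone wanting to replicate the direct-connection setup: Syncthing lets you replace the default “dynamic” discovery address for a device with a fixed one (also settable in the GUI under the device’s Addresses field). A rough sketch of the relevant config.xml fragment – the device ID, hostname, and port here are placeholders, not my actual values:

```xml
<!-- config.xml: pin a remote device to a fixed address instead of
     relying on discovery/relays. Hostname and port are placeholders. -->
<device id="AAAAAAA-BBBBBBB-..." name="server">
    <address>tcp://server.example.com:22000</address>
</device>
```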
Actually, it would be interesting to have information on Syncthing’s network performance. Have there been any tests?
After a bit more tweaking, the server connection is now exceptionally fast – about the practical maximum (~98 Mbit/s). This is, in fact, faster than Resilio (which usually hit about 40 Mbit/s). The original post has not been altered. This implies that, while Syncthing is still a bit harder on the CPU than Resilio, it probably won’t make a difference, assuming you’re willing to dig around under the hood a bit. The server is an older dual-core Atom with 4 GB RAM, a single 5400 rpm drive (XFS), and a true 100/100 Mbit connection.
Not as far as I know, and it would depend on a lot. However, in my experience: I get a couple of MiB/s from my server (CPU-limited on the send side) – slower than Resilio; about 1 MiB/s from another desktop (likely a connection issue) – the same as Resilio; and 4-5 MiB/s to my phone over the local network – 30-40% slower than Resilio.
So, in my experience, it’s a little slower in ideal conditions, probably a wash in most situations over the internet, but it can hit a CPU cap more easily (despite lower idle CPU usage).
Can you describe what you tweaked to achieve the faster connection? Maybe someone else can profit from your findings.
Definitely agree with Simon, would love to know what you tweaked to improve your sync speed.
Basically, bump up the pullers significantly and also the copiers. It’s likely a similar problem that FTP often has – some point between the peers limits per-connection speeds, but global speed isn’t limited (or has a much higher limit).
I’m currently running between 128 and 256 pullers and 4 copiers per folder. Note that you should NOT do this if you’re approaching your pipe’s limit, as each puller adds overhead. As a rule of thumb: add pullers until you hit ~50% of your bandwidth or 512 pullers, and don’t add copiers if your disk utilization is over 50% (copiers also add overhead, particularly on platter-based drives, due to seeking). Additional pullers also greatly help with large numbers of smaller files – Syncthing is fairly slow at processing them, so more pullers help minimize the per-file latency.
Also note that if you’re using relays, this may well hurt as you might overload them. I’m not entirely sure about how relays work, as I don’t use them.
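For reference, the pullers/copiers tweak lives in the per-folder advanced options. In Syncthing versions of that era it looked roughly like this in config.xml (folder id and path are placeholders; newer releases have since replaced the raw pullers count with a pullerMaxPendingKiB setting, so check what your version exposes):

```xml
<!-- config.xml: per-folder advanced options, values from the post.
     Folder id/path are placeholders; 0 would mean "automatic". -->
<folder id="example" path="/data/example">
    <copiers>4</copiers>
    <pullers>256</pullers>
</folder>
```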
Note: these settings are on a custom water-cooled, overclocked Ryzen 1800X with 32 GB RAM, a 4-drive RAID 10 of Samsung Pro SSDs plus a 4-drive RAID 10 of 7200 rpm HDDs, on a true gigabit up/down fiber line, with 5 TB / 280k files synced (300 MB RAM utilization while idle, using SyncTrayzor’s watcher rather than timed rescans). YMMV.