I’ve installed syncthing on a MacBook Air and an iMac. When trying to synchronise a large repository (135,000 files, 5.6 GiB) the remote machine reports an error in the orange box saying:
`Connection to MacbookAir closed: element size exceeded`
What does this mean exactly and can it be rectified?
Ouch. It means you’ve exceeded my naive expectations. There’s a limit of 100,000 files per index. I’ll bump that by an order of magnitude in the next release; in the meantime, that’s the limit.
I’m getting this error when trying to sync with one node.
I tried different repos and even one without files but the sync won’t start.
### Update: “Element size exceeded” means one of these is longer than the allowed size.
I think the issue is related to the version string: if I use a build compiled with ./build.sh, the version is v0.8.16-3-gfee8289, while the tagged release I can download from GitHub is v0.8.17. After trying the precompiled version, the sync went just fine.
First, the obvious: there are limits on the number of files (1 000 000) and the number of blocks per file (100 000; with 128 KiB blocks, the maximum file size currently supported is 100 000 × 128 KiB ≈ 12.2 GiB). Are any of those exceeded? If not, do you have more than 64 repositories, or file paths that are over a kilobyte in length…? If not, it might be a bug. You could run syncthing as `STTRACE=xdr syncthing` and paste me the last 100 lines or so of output leading up to the crash.
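The maximum-file-size arithmetic above is easy to check; a minimal shell sketch using the limits quoted in this post (the variable names are illustrative, not identifiers from Syncthing’s source):

```shell
#!/bin/sh
# Limits quoted above: 100,000 blocks per file, 128 KiB per block.
max_blocks=100000
block_kib=128

# Maximum file size in bytes: 100,000 * 128 * 1024 = 13,107,200,000.
max_bytes=$((max_blocks * block_kib * 1024))
echo "max file size: ${max_bytes} bytes"

# Integer conversion to GiB; the exact value is ~12.2 GiB.
echo "max file size: ~$((max_bytes / 1024 / 1024 / 1024)) GiB"
```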
You’re running the master branch, which is currently halfway to 0.9 and not compatible with anything but itself. You want to compile a tag or use a prebuilt binary.
I’m trying to sync my entire home directory (well over 1 million files). Is it easy to raise this limit in the source by simply recompiling with a new constant?
Yes, but it’s going to suck anyway for performance reasons. I suggest trying the v0.9 beta. It might still be a bit heavy, but that’s where the development is.
OK, thanks! I gave it a try and it works pretty well, except for the scanning killing performance. If I let it do its scan, will it eventually stabilize and revert to fsnotify-style events to detect changes, or do full disk scans kick in every now and again no matter what?
It’s definitely a lot quicker than BitTorrent Sync for transferring files, so keep up the good work.
There is no fsnotify mechanism, unfortunately, so there’s going to be a periodic scan. Once it knows about the files it doesn’t actually read them, but just getting metadata for over a million files is still going to be a somewhat heavy operation. For fewer files it’s a non-issue because all the file metadata will be in the filesystem cache, but I doubt that’s the case for a tree this large. You can tweak the interval at which this happens, though.
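For reference, in later Syncthing releases the scan interval is configurable per folder via the `rescanIntervalS` attribute in config.xml (or through the GUI). In the 0.8/0.9 era discussed here folders were still called repositories, so the element name may differ; treat this as an illustrative sketch with a made-up folder id and path:

```xml
<!-- Fragment of Syncthing's config.xml: raise the periodic scan
     interval to one hour for a very large folder. Attribute names
     follow current Syncthing; older releases used a <repository>
     element instead of <folder>. -->
<folder id="home" path="/Users/example" rescanIntervalS="3600">
</folder>
```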