I have installed Syncthing on a Synology NAS and a Windows 7 PC. My use case is quite simple: backing up NAS data to “external” devices (the PC's local drive and a USB drive).
Directory sizes range from 400 GB to 900 GB. During scanning, the NAS is on its knees and transfers are very slow (I think because the NAS CPU is at 100%; about 700 KB/s of transfer).
I guess it's the hash computation that takes a lot of CPU and disk I/O.
As I just want to back up some files (only one source), would it be possible to base change detection (creation/deletion) on size/modification date etc… to avoid hashing?
Or to add an option for the method used to detect file changes?
Thanks for this promising project!!
As far as I know:
- Add the directories one by one (wait until the initial scan has finished before adding the next one).
- Wait until it’s all scanned before sharing the folders.
- Use syncthing-inotify with a high scan interval to detect changes.
Edit: And depending on your files, you might want to disable compression.
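For reference, compression can be turned off per device in Syncthing's config.xml (or in the GUI when editing the device). A sketch of the relevant fragment — the device ID and name here are placeholders, not from this thread:

```xml
<!-- Fragment of Syncthing's config.xml (hypothetical device ID/name).
     compression="never" skips compression for data sent to this device,
     which saves CPU on weak hardware like a NAS. -->
<device id="ABCDEFG-..." name="w7-pc" compression="never">
    <address>dynamic</address>
</device>
```

The default is to compress metadata only, so disabling it mainly helps if your data is already compressed (media, archives) and the sender's CPU is the bottleneck.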
Hashing is only done when the mtime changes or a file becomes available.
Though we have to build up the initial index to be able to track changes, so it's a one-off cost, given that the data is not changing.
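To illustrate the point above: once the initial index exists, a rescan only stats each file and re-hashes it when its size or mtime differ from the recorded values. A minimal sketch in Python (a hypothetical helper, not Syncthing's actual code):

```python
import os


def needs_rehash(path, index):
    """Return True if the file at `path` must be re-hashed.

    `index` maps path -> (size, mtime) recorded at the last scan.
    A rescan just stats the file; the expensive hashing step only
    happens when the file is new or its size/mtime changed.
    """
    st = os.stat(path)
    recorded = index.get(path)
    if recorded is None:
        return True  # new file: no index entry yet
    size, mtime = recorded
    return st.st_size != size or st.st_mtime != mtime
```

This is why the first scan hammers the CPU and disks (everything gets hashed to build the index) while later rescans are comparatively cheap.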
If you search the forums, there is a thread that explains how to perform indexing on a more powerful machine and then move the index to the NAS, which can just use it (the data has to be identical on both devices).
Thanks for your answers…
I will look into building the index on a more powerful machine…
My NAS is on its knees, and some services become very slow. But it seems it is mostly disk I/O that is painful. I ran into this problem in the early days of BTSync, and they made a change to their software to avoid it.