Problem syncing 10+ TB from Mac to Windows over the internet

Hello,

I am trying to sync 10+ TB from a 16 TB RAID 5 external USB drive connected to a Mac Mini, over the internet, to a 16 TB RAID 5 external USB drive connected to a Windows 7 machine.

The problem is that Syncthing fills up both the internal HD of the Mac Mini and the internal HD of the Windows 7 PC with *.dbl files, and then stops/crashes when the internal (C:) drives are full. No files are synced.

Is there a way to stop Syncthing from building this many *.dbl files and just start syncing the external USB drives? Or am I using the wrong tool for my sync job?

I hope there is a solution :smile:

You probably mean *.ldb files in the index-v0.11.0.db folder. Those contain information about and hashes of your files; for 10 TB that will be quite a lot. You could move them (and your config files) to another location with more storage by using the -home=<dir> command-line option, see the documentation.
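For example, assuming you attach an extra drive that shows up as /Volumes/ExtraDrive on the Mac (a placeholder path, adjust to your setup), you could start Syncthing with `syncthing -home=/Volumes/ExtraDrive/syncthing-home` so that the database and config end up there instead of on the internal disk.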

And no, there is no way to prevent generating them, because the database is essential for Syncthing.

Any idea how much space I need to store those files for my 10+ TB? I'll buy an extra external USB drive.

At least 32 bytes (SHA-256 hash) for every 128 KiB block, as described in the BEP protocol docs, plus filenames and metadata. So the minimum is about 256 KiB for every GB of synced data; 10 TB will use at least a ~2.5 GB index.
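A quick back-of-the-envelope sketch of that math (just the hash storage from the figures above, not an official Syncthing number, and ignoring filenames and other metadata):

```python
# Rough lower bound for Syncthing's index size: one 32-byte SHA-256
# hash per 128 KiB block, ignoring filenames and other metadata.
BLOCK_SIZE = 128 * 1024  # 128 KiB per block
HASH_SIZE = 32           # bytes per SHA-256 hash

def index_bytes(data_bytes):
    """Minimum hash storage for the given amount of synced data."""
    blocks = data_bytes / BLOCK_SIZE
    return blocks * HASH_SIZE

one_gib = 1024 ** 3
ten_tib = 10 * 1024 ** 4

print(index_bytes(one_gib) / 1024)       # ~256 KiB of hashes per GiB
print(index_bytes(ten_tib) / 1024 ** 3)  # ~2.5 GiB of hashes for 10 TiB
```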

I have around 900 MB for 63233 items (~39.3 GiB), but it probably depends on your data: a few big files should need less, because there the main data are the hashes, like @kisolre mentioned; for a lot of small files, metadata like file path, mtime, etc. can be more than the block data.

If your 16 TB drive has enough space left, I would put it there for a test, because I have no idea how well Syncthing works for that amount of data, and you may want to test it before buying anything. The maximum reported on https://data.syncthing.net/ is ~20 TB, so in theory it works, but CPU and RAM usage could be pretty high :wink:

OK, will test it. The files are mainly 4 GB video files. Of the 16 TB I have about 4 TB left, so I can try to use that for now.

It becomes rather more than that on disk, unfortunately. You'll see those 256 MiB per TB, but multiplied by the number of devices connected, plus the block index (roughly folder/hash => filename/offset), which is probably another couple of GB per TB. And the database will grow during changes and then shrink back during compaction.
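As a very rough worked example, assuming just the two devices in this thread: 10 TB × 256 MiB/TB × 2 devices ≈ 5 GiB for the file index, plus roughly 2 GB per TB × 10 TB ≈ 20 GB for the block index, so something on the order of 25 GB on disk, and temporarily more while the database grows before compaction.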
