Cannot find max file size in the documentation or other specs

I need to transfer 0.5-4 TB files. Is it doable with Syncthing? No versioning, just remote copy (bidirectional).

Does it use multiple connections to increase transfer speed if single-connection bandwidth is limited?


There is no file size limit. It only uses a single connection, but it will have to rescan (re-read and re-hash) the whole file on every change, so if the file changes frequently and the CPU is weak, it may end up permanently scanning.

Using multiple connections is not a feature that exists, as we don’t believe it is generally useful for a well configured network.

There is no file size limit.

Very good!

it will have to rescan (re-read and re-hash) the whole file on every change

Not a problem, these are immutable files (media, archives, etc.).

Using multiple connections is not a feature that exists, as we don’t believe it is generally useful for a well configured network.

The main issue is that this is for geo-distributed sync, and many ISPs limit the bandwidth of a single connection. I cannot control that; for example, in Europe some providers limit a single connection to 50 Mbit/s, so even if you have 200-300 Mbit/s it cannot be fully utilized over a single connection (I tested this with speedtest, which lets you choose single- or multi-connection mode, and with iperf).

Is it a single connection per file or per instance? If it is per file, then multiple files could be scanned and sent in parallel, which would help.


Per device.

I remember seeing a thread in the forums, where someone was running multiple local instances to “simulate” multiple connections, so there is a way to hack around this. Clunky, but would probably work in this case. Of course, each of those instances would have to transfer different files/folders, and not the same one.

Any chance of getting native support for some kind of parallelism in the application?

I don’t think it makes sense in most cases, and I don’t think anyone is planning to spend time on the few cases where it would matter.

It could work as a trick, but most likely you will have to create your own unit files for each instance, separate config files, and maybe custom builds.

All of that would be outside the main update channel, and it does not look like a good option for regular users.

Of course not :wink:. That’s why I mentioned “hacking around”. Regular users aren’t likely to sync “0.5-4 TB files” either.

There is no need for any custom builds though. You only need to run Syncthing instances using different -home folders for each of them. The separate config files will be created automatically, and the autoupdate will still work fine.
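
For illustration, launching two instances could look roughly like the sketch below. The home directories, GUI addresses and extra flags are just placeholders of mine, not something from this thread, and each instance would also need its own sync listen port so they don’t clash.

# Rough sketch: two independent Syncthing instances with separate -home folders.
# The home directories and GUI addresses are made-up examples.
import subprocess

instances = [
    ("/srv/syncthing/home-a", "127.0.0.1:8384"),   # instance A: its own config + DB
    ("/srv/syncthing/home-b", "127.0.0.1:8385"),   # instance B: different GUI port
]

procs = [
    subprocess.Popen([
        "syncthing",
        "-home", home,          # separate config/database directory per instance
        "-gui-address", gui,    # keep the GUI ports from clashing
        "-no-browser",
    ])
    for home, gui in instances
]

for p in procs:
    p.wait()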

Sad… Though I cannot agree that this is a rare situation. Every piece of software I have used before has a competitor offering parallelism, including network tools; for example, aria2c supports parallel SFTP (natively up to 16 connections). It is a great and much-wanted feature. I have even read requests to increase the connection limit ))

You are free to use tools that do the job better for you.

If we are talking about a housewife, then yes ))
But if we are talking about IT geeks, then it is absolutely normal to store large amounts of data.
Torrents, media libraries, backups, whatever…

Unfortunately, I do not know of a tool that could be called “Parallel Syncthing”.
Though I will have to look around more…

I think you over-estimate your need for “parallel”.

If you have more than 2 devices, things already become “parallel”. Syncthing also has to do other things with the data, namely hash it to verify it and write it to disk without buffering insane amounts of memory, so the need for “parallel” fades quite quickly, unless you genuinely have TCP connections capped at 1 Mbps per connection, which I don’t think you do. 20 Mbps is still considered “good enough” internet in many parts of the world, so if you get that per TCP connection, I don’t see why you should be worried, other than some mental stigma that it’s not going as fast as possible.

You could also try to find some tunneling software that uses UDP and does not suffer from these limits, and then run Syncthing through that.


Or try forcing QUIC connections, which is usually slower but may act differently depending on the network circumstances. But yeah, no, parallel connections are not on the roadmap.
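
For illustration only, forcing QUIC towards one device could look roughly like the sketch below if done by editing config.xml directly; the device ID, host and config path are placeholders, and the same change can be made through the GUI by setting the device address to a quic:// URL.

# Rough sketch: replace a remote device's addresses with a fixed quic:// address.
import xml.etree.ElementTree as ET

CONFIG = "/home/user/.config/syncthing/config.xml"  # hypothetical config location
DEVICE_ID = "REMOTE-DEVICE-ID"                      # hypothetical remote device ID
QUIC_ADDR = "quic://203.0.113.10:22000"             # example host, default port

tree = ET.parse(CONFIG)
for device in tree.getroot().findall("device"):
    if device.get("id") == DEVICE_ID:
        for addr in list(device.findall("address")):
            device.remove(addr)                     # drop "dynamic"/tcp addresses
        ET.SubElement(device, "address").text = QUIC_ADDR
tree.write(CONFIG)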

Only 2 devices (2 home servers with many HDDs in desktop enclosures) in 2 countries (ping ~50 ms).

Let’s assume I am sharing House MD in BDRemux quality, which is 1.3 TB. Say I downloaded it at one location and want to copy it to the second one (for backup, to improve further seeding, and so on).

20 Mbit/s is around 2.5 MB/s. That is about 545,259 seconds, or 151 hours, or nearly 1 week. So each 1 TB needs about 5 days of synchronization (24/7). If I have more than 100 TB, then the initial synchronization takes about 1.5 years.

If I could fully utilize at least 200 Mbps, it would be ~10 times less (around 50 days), which is much better. Even a 4x difference (50 vs 200 Mbps) would make sense: 200 days versus 50 days.
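
For concreteness, a small Python sketch of that arithmetic (my assumption: decimal TB and Mbit, no protocol overhead, so the results come out slightly below the TiB-based figures above):

def transfer_days(size_tb: float, rate_mbps: float) -> float:
    # size in decimal terabytes, link rate in megabits per second
    seconds = (size_tb * 1e12 * 8) / (rate_mbps * 1e6)
    return seconds / 86400

print(transfer_days(1.3, 20))    # ~6 days for the 1.3 TB example at 20 Mbit/s
print(transfer_days(100, 20))    # ~463 days (~1.3 years) for 100 TB at 20 Mbit/s
print(transfer_days(100, 200))   # ~46 days for 100 TB at a full 200 Mbit/s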

HDD traveling is not an option )))

While I cannot claim experience with every single ISP on this planet, I’ve talked to a few in central Europe. Cases where a single connection is actually throttled by an ISP aren’t that common, and where they do exist they are often based on packet inspection: if a certain type of traffic (HTTPS, likely video, etc.) is detected, a limiting measure (e.g. packets per second, traffic shaping) is applied.

This means that generic or unclassifiable transport streams are rarely affected by throttling; if they are, chances are the limit is applied to the customer as a whole, i.e. more streams would just cause more throttling.

More often, full TCP performance is not reached due to suboptimal network conditions, e.g. unstable latency, rather than deliberate ISP throttling. Protocols like QUIC (implemented by Syncthing) can help here.

Are you 100% confident you are dealing with ISPs that throttle unclassified network streams? If that is really the case, you might want to talk to your ISP and see what they can offer, because running every app with multiple connections is not a viable solution in the long run. They might be able to offer you an unthrottled VPN connection or something.

… followed by assumptions and calculations based on guesses.

I suggest you just try Syncthing for a week between the two locations and see what happens. Then you will know for sure. :slight_smile: Just my 2 cents.

ADDENDUM: You also wrote:

If the one week testing I mentioned above is not to your satisfaction, you could use some multi-connection tool for the initial copy and THEN use Syncthing to keep the files synchronized.


Do you know how speedtest.net works? You have made wrong assumptions and did not read carefully.
I could also be wrong, but in that case it would be my biggest technical mistake in years.

P.S. I have never used unencrypted channels; I tried WireGuard, OpenVPN, SFTP, and plain HTTPS with nginx.

Yes, I already considered that.
But 5 days per newly added TB is not OK, so it does not fully solve the problem.
And what if, for any reason, I need to recreate a full backup of some partition?
I did that many times when I managed the system locally.
So the reason can be anything; I am trying to automate this with Syncthing (I used it before for some other needs). Otherwise, in the worst case, I will have to do most of the work manually.

P.S. If you have a BDRemux of season 4, let me know, so I can give the application a bit more work.

ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help
--- /.../Cinema/Series/House MD ------------------------------------
  196.2 GiB [############################] /Season 7
  185.5 GiB [##########################  ] /Season 6
  185.1 GiB [##########################  ] /Season 8
  181.3 GiB [#########################   ] /Season 3
  177.7 GiB [#########################   ] /Season 2
  175.0 GiB [########################    ] /Season 5
  162.7 GiB [#######################     ] /Season 1
   51.7 GiB [#######                     ] /Season 4