I guess I have bad news for you in general. The Internet is moving on, and with things like HTTP/2 and HTTP/3 you will pretty much always end up with a single connection for everything, which is used to multiplex multiple downloads/requests.
Does the throttling also apply to UDP traffic?
It seems like you still haven't tested and are just doing calculations based on assumptions. For everyone's sake, please install and test with a realistic data set.
Hard to say: speedtest uses TCP, WireGuard uses UDP, and it was slow, but I cannot be sure of the reason. I just tried it with default out-of-the-box settings.
The one tool that can really tell us something is iperf, but I did not check it with UDP. I only played with the TCP window size, and nothing changed significantly.
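For a fair comparison it would be worth testing both protocols explicitly. A minimal iperf3 sketch (the address 192.0.2.10 is a placeholder for the far end):

```shell
# On the far end, start a server first:  iperf3 -s

# TCP test: 10 seconds, custom window size (-w), 4 parallel streams (-P).
iperf3 -c 192.0.2.10 -t 10 -w 512K -P 4

# UDP test (-u) at a 100 Mbit/s target rate (-b), to compare against TCP.
iperf3 -c 192.0.2.10 -u -b 100M -t 10
```

Comparing the UDP run against the TCP run on the same path would show whether the shaping is protocol-specific.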
I cannot repeat the tests immediately because nobody can help on the other side at the moment. I need to set up the environment myself before I can test again.
QUIC is not very popular; HTTP/2 is much more popular. Though I do not see any benefit from HTTP/2 if a site's resources are well bundled.
In any case it does not matter for surfing. If the limit is 50 Mbit/s, then ~6 MB/s is enough to load even a big site in a couple of seconds. Well-written web applications are usually less than 2-3 MB in size.
And to download huge files we can use download managers based on multi-segment downloading (GetRight, FlashGet; personally I just use aria2c, and newer wget can also do that). But usually it is not needed: files hosted at big ISPs/CDNs are usually not throttled by providers.
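For example, a multi-segment download with aria2c could look like this (the URL is a placeholder; -x caps connections per server, -s sets the number of segments):

```shell
# Download one file over up to 8 parallel connections / 8 segments.
aria2c -x 8 -s 8 "https://example.com/big-file.iso"
```

The "new wget" mentioned above is presumably wget2, which can also fetch over multiple threads.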
I have only seen such shaping with P2P connections, when it touches the "client" network (last mile).
Of course I will check, but I cannot do it right now. Though that does not mean I should not plan before copying.
I just tested Syncthing on a 10GbE connection and its speed is really not good.
100 MiB/s on a connection that Samba can fully utilize (930-980 MiB/s).
So I am sure a multi-connection (and multi-threaded) data flow is really not a bad idea and could increase speed up to 10x.
You don't need multiple TCP connections to saturate a link. If Syncthing caps around 100 MB/s, then the bottleneck is probably CPU or IO on the receiving node, e.g. Syncthing's database operations can be quite taxing.
What often turns out to be the bottleneck is CPU for TLS, which is inherently single threaded for a single connection. In principle multiple connections would help with this, though not for any actual network related reasons. It's not something I really feel it's worth optimizing for, though.
Syncthing uses at most 2.65 vcores, usually up to 2.
And it is obviously not IO if Samba can do much more on the same instance (the slowest endpoint can read/write 1+ GiB/s on an encrypted LUKS partition).
I am not sure whether it is CPU/TLS related (as Jakob said). But if this is not a target for optimization, then what is?
The weakest endpoint provides the following performance with openssl:
| type   | 16 bytes  | 64 bytes   | 256 bytes  | 1024 bytes | 8192 bytes | 16384 bytes |
|--------|-----------|------------|------------|------------|------------|-------------|
| sha1   | 99855.91k | 234210.19k | 480898.23k | 636355.95k | 694640.23k | 688616.74k  |
| sha256 | 59384.46k | 134106.03k | 232840.82k | 290554.16k | 312560.54k | 318299.47k  |
| sha512 | 46687.52k | 186717.09k | 302304.45k | 434753.04k | 509025.89k | 510114.19k  |

|                          | op      | op/s    |
|--------------------------|---------|---------|
| 224 bits ecdh (nistp224) | 0.0002s | 5882.2  |
| 256 bits ecdh (nistp256) | 0.0001s | 9056.8  |
| 384 bits ecdh (nistp384) | 0.0015s | 665.6   |
| 521 bits ecdh (nistp521) | 0.0008s | 1259.2  |
| 253 bits ecdh (X25519)   | 0.0001s | 10226.0 |
| 448 bits ecdh (X448)     | 0.0013s | 799.3   |
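For reference, figures like these come from OpenSSL's built-in benchmark; the tables above look like the output of:

```shell
# Digest throughput for several buffer sizes (16 bytes .. 16384 bytes).
openssl speed sha1 sha256 sha512

# ECDH operations per second for the supported curves.
openssl speed ecdh
```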
And what is the purpose of Threadrippers and 5950Xs then?
In the near future I do not expect single-core performance to grow much.
So LUKS with the default AES-XTS can be multithreaded, compilation can be multithreaded, and even SSH is increasing its level of parallelism (HPN-SSH).
And those guys clearly think it is worth the effort (HPN-SSH UPDATED 2021 | PSC).
So maybe it is not worth it for you, considering the limited resources you have.
But in general it is worth it, and the amount of data only increases each year.
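On the LUKS point above: whether AES-XTS itself is fast on a given box can be checked with cryptsetup's built-in benchmark (note the measurement is per-thread; dm-crypt parallelizes across CPUs via kernel workqueues):

```shell
# Benchmark the default LUKS2 cipher; key size 512 means two 256-bit XTS keys.
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512
```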
On the other side of the coin we have the dude arguing vehemently for a way to limit Syncthing's CPU usage, to make things slower and take longer. In the middle, everyone else, for whom Syncthing spends most of the time idling and only now and then transfers short bursts of updates.
Apples and oranges. Syncthing needs to update its database and compute hashes. I'd check IO using iostat to rule out this potential bottleneck.
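A quick way to rule IO in or out while a transfer is running (iostat comes from the sysstat package):

```shell
# Extended device stats (-x) every second, 5 samples; watch %util and await.
iostat -x 1 5
```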
For low-spec devices, yes, but I'd expect Syncthing to use an AES-based cipher here, which would use hardware acceleration.
> Apples and oranges. Syncthing needs to update its database and compute hashes. I'd check IO using iostat to rule out this potential bottleneck.
I am sure you either did not understand or did not read carefully enough.
> On the other side of the coin we have the dude arguing vehemently for a way to limit Syncthing's CPU usage, to make things slower and take longer. In the middle, everyone else, for whom Syncthing spends most of the time idling and only now and then transfers short bursts of updates.
Reasonable if he has a very slow endpoint, maybe a Pi or some mobile device.
Though if he uses "Connections: 1", the requested option would not affect his case.
It is like MS Word: almost nobody uses its full power; a typical user touches maybe 5% of all features.
But each user finds in Word his own 5% of features that he needs.
In any case performance matters ))
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.