Hi, I have this set in advanced options:
My understanding is this should limit upload to ~100 kbps, and download as fast as possible.
However, the client then seems to download at a limited rate as well when this is set, and I'm not sure why. Is this a bug or something else weird? Transfer overhead, maybe?
In principle you’re correct. In practice the download protocol is request-response, and the requests require a nonzero amount of bandwidth to go out. They’ll take some of those 100 kbps, together with outgoing index updates and so on.
Cool, I assumed so. Is it a bit of a wontfix kind of issue because it would require a good chunk of refactoring? Especially since I don’t think too many people use it, considering it’s an “advanced option”.
It’s not that advanced, it’s available in the regular settings. To me this is how it’s supposed to work - it would be weird if you requested a cap of 100 Kbps outgoing traffic and we exceeded that by sending requests?
Most software I know of with rate limiting (e.g. torrent clients) ignores transport overhead by default in the upload buckets. The overhead usually isn’t large enough that people notice or care much about it.
The purpose of rate limiting, at least for me, is to avoid saturating the connection’s uplink. Allowing the overhead of downloading won’t change that much (unless the rate is set extremely close to your maximum anyway).
I think we’re talking past each other somehow. It’s not transport overhead, it’s actual data going out - requests and index updates. It would not make sense to send 500 Kbps of requests when the limit is set at 100 Kbps.
Ignoring my bad wording, it kind of is the same thing though, sort of? The request for more files is still really small (I haven’t researched BEP, but I’d think sending a request isn’t very costly)?
- Client 1 has a share (Share X) with 50 GB of stuff in it, and has global upload capped to 100 kbit and download unlimited.
- Client 2 with unlimited/unlimited joins Share X. It downloads at 100 Kbps, and all is well.
- Client 2 now puts some of their own big files into a different share (Share Y).
- Client 1 joins Share Y. Now the trouble starts when Client 1 begins to download Share Y: it will download at a rate much lower than its connection could handle, simply because the upload bucket doesn’t leave enough bandwidth for its requests to go out unhindered.
So I don’t see why the requests from Client 1 in step 4 should count against Client 1’s upload bucket; the data in a request is probably so tiny anyway that the pros outweigh the cons.
But I could just be totally missing the point and writing bad English. So feel free to just close the topic if I’m totally wrong, since I probably still won’t get it then (but thanks!)
In your example, Client 1 is limited to 100 Kbps. It’s uploading data. It’s also downloading data. In effect it has a 100 Kbps uplink through which it must send data, requests, and index updates. It’s going to be quite a congested uplink; requests will be queued behind the data it’s uploading, and hence those requests will get answered more slowly than if it were unlimited.
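The shared-bucket effect above can be sketched with a toy simulation. This is not Syncthing’s actual limiter; the 12.5 kB/s rate (≈ 100 kbps), the 12.5 kB data messages, and the 64-byte requests are all made-up assumptions just to show requests queueing behind upload data:

```python
# Toy token bucket shared by ALL outgoing traffic (data messages, requests).
# A minimal sketch, not the real implementation; sizes/rates are assumptions.

class TokenBucket:
    def __init__(self, rate_bytes_per_s, capacity):
        self.rate = rate_bytes_per_s
        self.capacity = capacity
        self.tokens = capacity   # start with a full one-burst bucket
        self.time = 0.0          # simulated clock in seconds

    def take(self, n):
        """Advance simulated time until n tokens are available, then spend them."""
        if n > self.tokens:
            wait = (n - self.tokens) / self.rate
            self.time += wait
            self.tokens += wait * self.rate
        self.tokens -= n

def simulate(upload_chunks, request_size=64, rate=12_500):
    """Interleave outgoing data chunks with small download requests
    through one shared bucket; return (elapsed seconds, requests sent)."""
    bucket = TokenBucket(rate, capacity=rate)
    requests_sent = 0
    for chunk in upload_chunks:
        bucket.take(chunk)         # outgoing data drains the bucket first...
        bucket.take(request_size)  # ...so each download request waits its turn
        requests_sent += 1
    return bucket.time, requests_sent
```

With 100 data chunks of 12.5 kB each, the 100 tiny requests still take roughly 99.5 simulated seconds to get out, i.e. about one request per second. If each request can only pull one fixed-size block in response, the download rate is capped by that request rate no matter how fast the downlink is.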
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.