Why don’t we use DHT instead of central server for global node announce?
Because it’s more complex, no one has written the code, and there are no obvious tangible advantages that I’ve been convinced of. Note that the “no single point of failure” argument is false.
> there are no obvious tangible advantages that I’ve been convinced of
Well, syncthing can be used in corporate networks without Internet access. Local discovery won’t work because the network can be complex and consist of different subnets, and global discovery won’t work because the global server is inaccessible. With a DHT and a custom bootstrap-node setting, a node inside the network could be added.
> and global discovery won’t work because the global server is inaccessible.
Just set up your own “global” server in your network; in the config file there is a setting for that:
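For instance, a minimal sketch of what that could look like in syncthing’s `config.xml` (the element names are from memory and the address is a placeholder; they may differ between syncthing versions, so check your own config for the exact spelling):

```xml
<configuration>
  <options>
    <!-- Point global discovery at an announce server you run
         inside your own network instead of the public one. -->
    <globalAnnounceServer>udp4://announce.example.internal:22026</globalAnnounceServer>
    <globalAnnounceEnabled>true</globalAnnounceEnabled>
  </options>
</configuration>
```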
DHT could be something for the far future, but I also don’t see an important reason why we would want it now.
This is not the same. DHT discovery works even when the server is down; the master node is needed only for the first bootstrap.
As they say, “Patches Welcome”
A DHT works without a central server if you’re already connected to it, say, because you’re downloading another torrent. But when the client starts up, it needs a way to join the DHT. This isn’t done by magic or by throwing out packets at random hoping to find another node. This is done either by connecting to a well-known address of a root node (i.e. a central, global server) or by getting a seed list of addresses from a tracker (i.e. a central, global server).
Edit: I see that you know this well and noted “The master-node is needed for the first bootstrap only”; however, that is also precisely when the discovery protocol is needed.
For Bittorrent this is all awesome, because a client will talk to a tracker when starting up and will, from that point on, keep sessions alive to tens of other nodes just for DHT purposes, even if there are no longer any active torrents. If there are active torrents, we get the DHT for free.
For syncthing, this means we could join and piggyback on the Bittorrent DHT. It would mean depending on talking to the root node at startup (so no real gain in resiliency; rather, we’d be depending on an external, unknown quantity) and then maintaining sessions towards a whole bunch of fairly chatty other hosts that really have nothing to do with syncthing, except we’ll have to stash a bunch of torrent metadata on their behalf. Add to this that the servers or protocols in question might be blocked by policy decisions unrelated to syncthing.
We could also build our own DHT that is just used by syncthing. At this point that’s a bit questionable due to the lack of critical mass. It would mean that you would connect to and get connections from a bunch of other people running syncthing, not at all related to you. This might be OK for some people, others might question the information leakage.
The code involved in all of this is a serious complexity increase.
At this point I think the balance of having a known single point of failure that can be easily replaced by user-owned infrastructure works out quite well.
Thanks for clarifying.