Sync routing to multiple instances in Kubernetes

Hello folks! First of all, Syncthing is amazing. I had been searching for a solution and kept hitting the rsync, lsyncd, unison, and FTP roadblocks. I am really surprised this did not show up in my Google results.

In my setup, I am planning to run a Syncthing instance as a sidecar container in every pod that has a volume that needs to be shared. We have HAProxy as our Ingress Controller, which can do TCP proxying as well, but it is also the SSL termination point in our system.
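For illustration, a sidecar like that might look roughly like the sketch below. The image tag, names, and mount paths are assumptions, not a tested manifest:

```yaml
# Hypothetical sidecar sketch; names and image tag are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sync
spec:
  containers:
    - name: app
      image: my-app:latest            # the main workload (placeholder)
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: syncthing
      image: syncthing/syncthing:latest
      ports:
        - containerPort: 22000        # sync protocol port
      volumeMounts:
        - name: shared-data
          mountPath: /var/syncthing/data
  volumes:
    - name: shared-data
      emptyDir: {}
```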

How can I expose only a single port via HAProxy so that the other Syncthing instances (same setup, but in a different cluster) can communicate over that one port, while still being able to distinguish the requests and route them to the correct instance in the backend? Is this possible? I did read up on SNI-based routing in another thread, but it seems that does not work. Any help on this would be appreciated.
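For context, SNI-based TCP passthrough routing in HAProxy would in general look roughly like this (hostnames and addresses are hypothetical). As the replies explain, Syncthing does not send SNI, so this particular approach does not apply to it:

```haproxy
# Hypothetical sketch of generic SNI-based TCP routing in HAProxy.
# This does NOT work for Syncthing, which sends no SNI.
frontend sync_tcp
    bind *:22000
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend pod_a if { req_ssl_sni -i pod-a.example.internal }
    use_backend pod_b if { req_ssl_sni -i pod-b.example.internal }

backend pod_a
    mode tcp
    server s1 10.0.1.10:22000

backend pod_b
    mode tcp
    server s1 10.0.1.11:22000
```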


I don’t think you can front Syncthing with a load balancer or route based on SNI, as Syncthing doesn’t use SNI, nor is there anything in the certificate to say which device the connection is going to or coming from (the device ID is a hash of the certificate bytes).

I am also not sure why you’d want to do that.

I think it’s best to start off by explaining the problem you are trying to solve, as I think you are over-engineering.

I think running a small NFS server for shared storage is a better approach. Or using some sort of object store.

Thanks. The problem I am trying to solve is avoiding opening multiple ports in the Kubernetes LoadBalancer for external access, i.e. one port for each backing instance.

We cannot use NFS/Ceph/Gluster, as the nodes are not spread across the two Kubernetes clusters between which we want to sync the files. Think of it as geographic replication for an on-prem setup.

Assuming you want your Syncthing instances to sync with something outside the cluster, I’d turn it around so they are the ones making an outbound connection instead of expecting an inbound one via a load balancer or ingress controller. This should even happen automatically, eventually.
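For example, the in-cluster instance’s config.xml can pin a static address for the external device, so that the sidecar is the one dialing out. The device ID and hostname below are placeholders:

```xml
<!-- Placeholder device ID and address; the in-cluster instance dials out. -->
<device id="AAAAAAA-...-AAAAAAA" name="remote-cluster">
    <address>tcp://sync.example.com:22000</address>
</device>
```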

For incoming connections if it’s cluster-to-cluster you might have to designate an instance or two and give them port mappings.
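A designated instance could be exposed with a per-instance port mapping, e.g. a NodePort Service. All names and port numbers below are hypothetical:

```yaml
# Hypothetical: one Service (and external port) per designated instance.
apiVersion: v1
kind: Service
metadata:
  name: syncthing-ingest
spec:
  type: NodePort
  selector:
    app: syncthing-ingest
  ports:
    - name: sync
      port: 22000
      targetPort: 22000
      nodePort: 32000   # externally reachable port on every node
```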

Or use relays.

Yeah, it is cluster to cluster. Anyway, thanks for your response; for now I can manage with one port exposed per instance.

Do you think it would make sense to provide proxy routing for such use cases? This is a really good tool for scenarios where public cloud replication utilities are not available.

I don’t think you can provide proxy routing. The protocol is end-to-end mutual TLS, so the proxy has zero knowledge of what’s going on and would not be able to route anything.


In theory, we could add SNI support, there isn’t really a good reason why we’re not sending the SNI.

Perhaps privacy, given you’d be leaking who you are connecting to.

I think there may be a valid privacy concern nowadays.

It used to be the case (TLS 1.2) that the certificate exchange was in plaintext and then it was anyway obvious that Syncthing is Syncthing and who the devices were – perhaps you had to run the hash over the certificate manually to get our device ID, but the data was there.
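That manual hashing can be sketched like this. It is a simplified sketch: real Syncthing device IDs additionally insert check digits and dash grouping, which are omitted here, and the input bytes below are a stand-in rather than an actual certificate:

```python
import base64
import hashlib

def device_id_sketch(cert_der: bytes) -> str:
    """Hash the raw (DER) certificate bytes, as the post describes.

    Simplified: real Syncthing device IDs additionally insert check
    digits and dash-separated grouping, omitted in this sketch.
    """
    digest = hashlib.sha256(cert_der).digest()            # 32-byte SHA-256
    return base64.b32encode(digest).decode().rstrip("=")  # 52 base32 chars

# Stand-in bytes; a real computation would use the certificate's DER bytes.
print(device_id_sketch(b"example certificate bytes"))
```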

Nowadays (TLS 1.3) it’s my understanding that all this happens after handshaking the cipher, so the eavesdropper doesn’t get to see our certificates. Apart from the port number and other circumstantial evidence, it might not even be obvious that it’s Syncthing speaking. I’m not 100% certain of this, though. I can confirm that the handshake looks like gibberish at least, and doesn’t contain our CN=syncthing that used to be visible, etc.

It would be a shame to expose the identity of the speakers unnecessarily, but there could of course be an opt-in switch for this anyway, for those who want to run Syncthing behind a proxy of some sort. (I’m routing gRPC through Traefik based on SNI, without it touching the actual connection, so this could work for Syncthing as well.)
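The Traefik setup mentioned uses a TCP router that matches on SNI with TLS passthrough, so the proxy never terminates the connection. A sketch with placeholder names and addresses:

```yaml
# Hypothetical Traefik dynamic configuration; names are placeholders.
tcp:
  routers:
    grpc:
      rule: "HostSNI(`grpc.example.com`)"
      entryPoints:
        - sync            # must match an entrypoint defined elsewhere
      service: grpc-svc
      tls:
        passthrough: true # route on SNI only, never decrypt
  services:
    grpc-svc:
      loadBalancer:
        servers:
          - address: "10.0.0.5:4443"
```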

openssl s_client -connect localhost:22000

Connects via TLS 1.3 and displays the certificate.

Sure, the question isn’t whether the certificate is available to someone who connects – it obviously must be.


If the first thing that happens is the cipher negotiation, how does SNI work in general in that case? I assume that to negotiate ciphers you need certificates, and to have the right certificate, you have to know where the client wants to connect. So it seems like a chicken-and-egg problem?

I guess if cipher negotiation somehow happens without certs, then including SNI would not leak anything in plain text. It would leak stuff inside the TLS stream, but I am not sure that is really a big privacy concern?

I can confirm that TLS 1.3 does exchange certificates after rolling its initial set of crypto keys (the handshake keys), so it is technically encrypted.

It is however not authenticated at this point, so an active attacker can obtain certificates via a MITM attack. A passive attacker cannot see certificates (in TLS 1.3).

Yes, right now you can fingerprint Syncthing connections via Client Hello fingerprinting* (Syncthing’s configuration of cipher suites, ALPN, …). It does require specific targeting of the application, though.
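Client Hello fingerprinting is often done with the JA3 scheme: hash the advertised TLS version, cipher suites, extensions, curves, and point formats from the plaintext Client Hello. A minimal sketch of the idea; the numeric values in the example are arbitrary placeholders, not Syncthing’s actual ones:

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """JA3-style fingerprint: join each field's values with '-',
    join the fields with ',', then MD5 the resulting string."""
    fields = [str(version)] + [
        "-".join(str(v) for v in vals)
        for vals in (ciphers, extensions, curves, point_formats)
    ]
    ja3_string = ",".join(fields)          # e.g. "771,4865-4866,0-10,29-23,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Placeholder Client Hello parameters, not a real capture.
print(ja3_fingerprint(771, [4865, 4866, 4867], [0, 10, 11], [29, 23], [0]))
```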

I’m not a fan of sending more stuff in plaintext, and sadly everything sent via SNI is currently plaintext. The IETF is currently working on something called Encrypted Client Hello, which would eventually hide this information, but it is a very long road until that is a widely implemented standard**. Previous proposals also required external configuration (like DNS records), so this may not even fit into Syncthing. Anyway, it’s future talk.

*This is actually something done in practice by some entities.

**ECH might also interfere with the idea of having a proxy routing something.


It works differently in TLS 1.3 than it does in 1.2.

1.3 removed a lot of old cruft, including the static RSA key exchange, which is basically what enables the shorter round trip, as all key exchanges are now Diffie-Hellman.

In 1.3, the client initially sends its Client Hello, including its cipher support and a bunch of extensions, notably SNI (if any), plus some public keys for Diffie-Hellman – all plaintext at this point.

The server then chooses one of those public keys* (and generates a random one for its own use) and immediately computes the shared secret used for encryption of the handshake – this is how it now completes in less than 1 RTT.

The server replies with a Server Hello, unencrypted, which contains the public key of the server and the chosen algorithms (cipher suite and stuff). Everything after this message is encrypted in TLS 1.3, including the following messages of the handshake - the handshake continues with certificates, signatures and integrity verifications - all encrypted.

*This can fail, because the client needs to guess which DH algorithms the server supports. If the client has guessed badly, the server will inform the client about this by sending a Hello Retry Request, which basically resets the handshake, and a new Client Hello is sent.

It leaks whatever you write into the SNI field, nothing else. The server then decides what it wants to do based on what the client has written in the SNI field.

It’s usually used to decide which certificate to send. This is the same in 1.2 and 1.3; the point at which the certificates are sent just differs, because 1.3 rolls the crypto much earlier than 1.2.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.