Just checked my firewall logs
and I can see it is blocking discovery probes from syncthing.
They seem to come from 3 IP addresses
(220.127.116.11, 18.104.22.168, 22.214.171.124), all from port 443,
but they are aimed at random local ports:
58872, 42996, 46106, 43030, 46114, 59096,…
I would like to allow syncthing’s discovery probes
and I don’t mind allowing traffic from these 3 IP addresses,
but don’t really like to do a rule
"allow to all ports"
Is there any small subset of ports I can allow traffic to
in order to allow discovery probes?
I think these are actually responses to outgoing connections to the discovery servers. The outgoing (local) port is always random, so you should allow traffic based on the destination port, which is always 443.
No, they are incoming. The firewall doesn’t block outgoing connections.
Detail of 1 entry
(as it is the UFW log on the local machine, 192.168.nnn.xxx is the lan address of this machine running syncthing)
In fact, I wonder how it reaches syncthing, given that it is connecting to an (apparently) random port. Unless these are probes left over from earlier, now-expired connections which set up that port.
That’s just telling you which way the packet is going, not which way the connection was made.
I am positive that the client sent a packet to the discovery server asking “where is X”, from local port 58872 to remote port 443, and the discovery server replied “X is at Y” from remote port 443 back to the port the request came from (58872). That is what you are seeing in the logs.
These are simply responses to outgoing connections the syncthing client is making, part of a normal HTTP request.
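To illustrate that flow, here is a minimal Python sketch, using a loopback stand-in for the discovery server (which really listens on 443): the client’s local port is picked at random by the kernel, and the server’s replies are addressed back to exactly that port, which is why the log shows fixed source port 443 and random destination ports.

```python
import socket
import threading

# Loopback stand-in for the discovery server (which really listens on 443):
# the client side of a TCP connection gets a random ephemeral local port,
# while the destination port stays fixed.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the kernel pick one
server.listen(1)
server_port = server.getsockname()[1]

def accept_one():
    conn, addr = server.accept()
    # addr[1] is the client's ephemeral source port; any reply from the
    # server travels back to exactly that port.
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
ephemeral_port = client.getsockname()[1]   # random, e.g. 58872-style

client.close()
t.join()
server.close()
```

Run it a few times and `ephemeral_port` changes on every run, while the server port stays the same, mirroring the pattern in the firewall log.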
Thanks. I agree this is likely right:
client sent a packet to discovery server saying where is X from local port 58872 to remote port 443
However, if the connection had really been made by the client,
then the packet would be allowed in and would not be blocked.
Only ‘unsolicited’ packets are blocked by a stateful firewall.
However there are some scenarios in which a packet from a client-initiated connection could be blocked:
a. server sent packet after the connection was closed;
b. server sent packet after the connection timed out.
There may be other possibilities,
but they all seem to relate to possible bugs or protocol timing issues.
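Scenario (a) is easy to reproduce locally. The Python sketch below (an illustrative loopback pair, not the real discovery traffic) closes the client first and then has the server write anyway: the client’s kernel answers the late data with a TCP RST and the server’s write eventually fails. Behind a stateful firewall whose connection state has expired, that same late packet is what gets logged as blocked.

```python
import socket
import threading
import time

# Illustrative loopback sketch of scenario (a): the server writes after
# the client has already closed the connection.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

result = {}

def serve():
    conn, _ = server.accept()
    time.sleep(0.3)                   # the client closes during this pause
    try:
        conn.sendall(b"late reply")   # accepted: data sits in the buffer,
        time.sleep(0.3)               # but the closed peer answers with RST
        conn.sendall(b"another one")  # this later write then fails
        result["error"] = None
    except OSError as exc:
        result["error"] = exc         # e.g. BrokenPipeError / ConnectionResetError
    finally:
        conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.close()                        # closed before the server ever writes

t.join()
server.close()
```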
It is somewhat irregular: some days none, today a lot.
Jan — count
10th — 16
11th — 0
12th — 2
13th — 5
14th — 149 (as of a few hours ago)
It’s Audrius, not Andrius.
We use basic HTTP calls (https://github.com/syncthing/syncthing/blob/master/lib/discover/global.go#L140), so I don’t see how there could be protocol timing issues, as HTTP is probably one of the most basic protocols out there.
The best thing I can suggest is to get a tcpdump and see whether it’s caused by TCP retransmits, which might be due to your firewall (or client) not handling a FIN or FIN-ACK.
I don’t see how there could be protocol timing issues, as HTTP is probably one of the most basic protocols out there.
HTTP protocol timing issues do occur: read any firewall log and you will see inbound packets from a server where the client thinks the connection is closed. Sometimes the client (e.g. a browser) is shut down before the server thinks the transaction is complete.
which might be due to your firewall (or client) not handling a FIN or FIN-ACK.
Firewalling is handled by the Linux kernel; UFW is just an interface to it.
It is unlikely the Linux kernel is handling such a basic function incorrectly.
Also, as noted above, some days there are none all day long, so again this is unlikely to be a basic Linux kernel error.
It could possibly be the client (the client here is syncthing), or the server, which is also syncthing.
In any event, it is easiest just to ignore this error
unless it appears to be getting out of control.
Today so far, none.