Repeated ClusterConfig messages

So currently we drop all connections after every folder-related change, namely a folder being un-ignored, paused, unpaused, added, or removed.

The protocol also mandates that there should be only a single ClusterConfig message per connection, which I guess is why we do it.

I’d like to change that, and change the spec.

It seems that if a device receives a second ClusterConfig message, it will close the connection itself:

	switch msg := msg.(type) {
	case *ClusterConfig:
		l.Debugln("read ClusterConfig message")
		if state != stateInitial {
			return fmt.Errorf("protocol error: cluster config message in state %d", state)
		}
		c.receiver.ClusterConfig(c.id, *msg)
		state = stateReady

so I think it’s safe to start sending secondary ClusterConfig messages to new devices; for old devices the end result is effectively the same as today (instead of us disconnecting, the other side will disconnect on a protocol error).
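For illustration, the receiving side’s check could be relaxed roughly like this, reusing the names from the snippet above (the exact guard condition is my assumption, not an agreed spec change):

	switch msg := msg.(type) {
	case *ClusterConfig:
		l.Debugln("read ClusterConfig message")
		// Accept ClusterConfig both as the initial handshake message and
		// as a later update on a live connection; anything else (e.g.
		// after close) remains a protocol error.
		if state != stateInitial && state != stateReady {
			return fmt.Errorf("protocol error: cluster config message in state %d", state)
		}
		c.receiver.ClusterConfig(c.id, *msg)
		state = stateReady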

Also, I know @imsodin did a lot of clever tricks fixing ClusterConfig handling for non-existing devices etc. I guess I’d have to relax those tricks for other-than-first ClusterConfig messages, as I suspect out-of-band, post-close ClusterConfig messages will become possible.

This does not fix anything in particular, it just potentially makes these operations smoother.

Thoughts?

The reason for this originally, in my mind, is that there’s a lot of connection-level state that depends on the data in the ClusterConfig message. Being able to accept another such message means you need to be prepared to tear down and start up index senders, stop pulling, etc. for a connection while it’s up and running. None of that is impossible of course, but it’s an extra complication and an opportunity for things to go wrong. The spec just says you can’t do it because we can’t handle it. :slight_smile:
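To make the complication concrete, here’s a minimal sketch of the kind of teardown/rebuild a receiver would need to do on every ClusterConfig; all types and names below are hypothetical stand-ins, not the actual model code:

	package sketch

	// Hypothetical stand-ins for the real types.
	type DeviceID string

	type Folder struct{ ID string }

	type ClusterConfig struct{ Folders []Folder }

	type indexSender struct{ stop chan struct{} }

	func newIndexSender(dev DeviceID, f Folder) *indexSender {
		return &indexSender{stop: make(chan struct{})}
	}

	func (s *indexSender) Serve() { <-s.stop } // would send index batches until stopped
	func (s *indexSender) Stop()  { close(s.stop) }

	type model struct {
		indexSenders map[DeviceID][]*indexSender // per-connection state
	}

	// ClusterConfig must now handle both the initial and any repeated
	// message: tear down everything derived from the previous one, then
	// rebuild from the new one. (Locking elided for brevity.)
	func (m *model) ClusterConfig(dev DeviceID, cm ClusterConfig) {
		for _, s := range m.indexSenders[dev] {
			s.Stop()
		}
		m.indexSenders[dev] = nil
		for _, f := range cm.Folders {
			s := newIndexSender(dev, f)
			m.indexSenders[dev] = append(m.indexSenders[dev], s)
			go s.Serve()
		}
	}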


I guess doing it properly might be a bigger refactor than I anticipate.

Except for a comment about progress emission getting broken by multiple CCs, it seems pretty straightforward. We’d just have to track index sender tokens instead of stopping the senders on connection close. I don’t think the pulling/folder service in general cares at all about what happens in CCs, unless the config changes, which is handled through CommitConfiguration already. I’m sure I’m not seeing some corner case, and it definitely adds more things that can go wrong, but not losing connections also seems like a pretty nice improvement to me.
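To sketch the token idea, reusing the hypothetical types from the sketch above and assuming a suture-style supervisor with Add/Remove (the registry itself is made up):

	import "github.com/thejerf/suture"

	// indexSenderRegistry keeps the supervisor token per folder, so an
	// index sender can be replaced when a later ClusterConfig arrives,
	// instead of only being stopped on connection close.
	type indexSenderRegistry struct {
		dev    DeviceID
		sup    *suture.Supervisor
		tokens map[string]suture.ServiceToken // folder ID -> token
	}

	// apply reconciles the running index senders with the folders in a
	// newly received ClusterConfig.
	func (r *indexSenderRegistry) apply(cm ClusterConfig) {
		seen := make(map[string]bool, len(cm.Folders))
		for _, f := range cm.Folders {
			seen[f.ID] = true
			if tok, ok := r.tokens[f.ID]; ok {
				r.sup.Remove(tok) // replace the sender from the previous CC
			}
			r.tokens[f.ID] = r.sup.Add(newIndexSender(r.dev, f))
		}
		// Folders dropped from the new ClusterConfig lose their senders.
		for id, tok := range r.tokens {
			if !seen[id] {
				r.sup.Remove(tok)
				delete(r.tokens, id)
			}
		}
	}

The point being that the senders’ lifetime would be tied to the latest ClusterConfig rather than to the connection.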