UI problems - changes are ignored / dropped

Continuation of “UI is not updating nor saving settings”

I’m still having these problems. This time on a different PC though.

I tried to enable config logging:

How can I activate config logging, when no config POST is processed by the server?

My guess is that the large number of requests creates I/O activity, which is the bottleneck on a slow computer like this one. Thus the requests don’t get processed as they should. Maybe some mechanism schedules the (usually short) read actions higher than write actions like the config POST. But since there are so many GETs constantly fired at the server, it won’t ever get around to processing the other requests.

A solution could be to throttle the requests fired at the server if the client detects that the server is answering slowly. One idea could be to not request new events or status updates while another request is still pending, and to only retry after 1 or 2 minutes or so, or once the request has officially failed.
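To make that idea a bit more concrete, here is a minimal sketch of the pattern: skip a polling tick while a request is still in flight and back off after a failure. The real GUI is JavaScript, so the Go form, the endpoint and the intervals below are only illustrative assumptions, not actual GUI code:

// Sketch: don't fire a new poll while the previous one is still pending,
// and back off after a failure. Endpoint, port and intervals are assumptions.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	url := "http://127.0.0.1:8384/rest/system/status" // assumed GUI address

	inFlight := false
	done := make(chan error, 1)
	tick := time.Tick(5 * time.Second) // regular polling interval (assumed)

	for {
		select {
		case <-tick:
			if inFlight {
				continue // previous poll still pending, don't pile up requests
			}
			inFlight = true
			go func() {
				resp, err := client.Get(url)
				if err == nil {
					resp.Body.Close()
				}
				done <- err
			}()
		case err := <-done:
			inFlight = false
			if err != nil {
				log.Println("poll failed, backing off:", err)
				time.Sleep(time.Minute) // crude backoff before polling again
			}
		}
	}
}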

A fix for this would really be appreciated, as I cannot manage my systems with this problem. It’s config hell right now…

Greetings, Fred

I think I could narrow it down:

First off, I have set maxConcurrentScans to one, since my HDD and PC are not very strong, to reduce parallel HDD access.

The problem seems to appear mainly when (several?) folders are scanning and I then try to change any setting in the UI. The config POSTs are then not replied to. Normal status GETs are replied to, as you can see in the screenshots.

Here is an excerpt from the log with config-debug enabled:

2020-02-24 08:45:05 Completed initial scan of sendonly folder "My Share 1" (my-share-1)
2020-02-24 08:45:09 connections.limiter verifying configuration
2020-02-24 08:45:09 connections.Service verifying configuration
2020-02-24 08:45:09 ur.Service verifying configuration
2020-02-24 08:45:09 api.service@0xc00037a160 verifying configuration
2020-02-24 08:45:09 model@0xc0000cd760 verifying configuration
2020-02-24 08:45:09 ProgressEmitter@0xc0000f0180 verifying configuration
2020-02-24 08:45:09 aggregator/"My Shared" (my-shared): verifying configuration
2020-02-24 08:45:09 aggregator/"My Shared" (my-shared): committing configuration
2020-02-24 08:45:09 connections.limiter committing configuration
2020-02-24 08:45:09 connections.Service committing configuration
2020-02-24 08:45:09 ur.Service committing configuration
2020-02-24 08:45:09 api.service@0xc00037a160 committing configuration
2020-02-24 08:45:09 model@0xc0000cd760 committing configuration
2020-02-24 08:45:09 ProgressEmitter@0xc0000f0180 committing configuration
2020-02-24 08:45:09 Connection to XXX at XXX:22000/tcp-client/TLS1.3-TLS_AES_128_GCM_SHA256 closed: restarting folder "My Fotos" (my-fotos)
2020-02-24 08:45:10 Puller (folder "Share 2" (share-2), item "ABC.jpg"): pull: connection closed
2020-02-24 08:45:10 Saving .stignore: remove \\?\F:\Fotos\.stignore: Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird. (German: “The process cannot access the file because it is being used by another process.”)
2020-02-24 08:45:10 Ready to synchronize "My Fotos" (my-fotos) (sendonly)
2020-02-24 08:45:10 Restarted folder "My Fotos" (my-fotos) (sendonly)

2020-02-24 08:45:48 connections.limiter verifying configuration
2020-02-24 08:45:48 connections.Service verifying configuration
2020-02-24 08:45:48 ur.Service verifying configuration
2020-02-24 08:45:48 api.service@0xc00037a160 verifying configuration
2020-02-24 08:45:48 model@0xc0000cd760 verifying configuration
2020-02-24 08:45:48 ProgressEmitter@0xc0000f0180 verifying configuration
2020-02-24 08:45:48 aggregator/"My Shared" (my-shared): verifying configuration
2020-02-24 08:45:48 aggregator/"My Fotos" (my-fotos): verifying configuration
2020-02-24 08:45:48 aggregator/"My Fotos" (my-fotos): committing configuration
2020-02-24 08:45:48 connections.limiter committing configuration
2020-02-24 08:45:48 connections.Service committing configuration
2020-02-24 08:45:48 ur.Service committing configuration
2020-02-24 08:45:48 api.service@0xc00037a160 committing configuration
2020-02-24 08:45:48 model@0xc0000cd760 committing configuration
2020-02-24 08:45:48 ProgressEmitter@0xc0000f0180 committing configuration
2020-02-24 08:45:48 aggregator/"My Shared" (my-shared): committing configuration

There is nothing in the logs indicating that saving the config took a while.

It would be best to get stack traces when it’s stuck, but the only way I can think of getting those is on Linux.

The error in the screenshot is “400 Bad Request”, which we only return on invalid config (and that would be logged by Syncthing too at warning level). For a timeout I’d expect a 408.

You could run with the STPROFILER=:9090 environment variable and use Go’s pprof to get the stack traces, in case it is some sort of locking issue.
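If it helps, the goroutine stack traces can then be fetched over plain HTTP while the UI is stuck, assuming the profiler exposes the standard net/http/pprof endpoints on that port; a browser or curl against the same URL works just as well. A tiny helper as a sketch:

// Fetch a human-readable dump of all goroutine stacks from the profiler.
// Assumes STPROFILER=:9090 serves the standard net/http/pprof handlers.
package main

import (
	"io"
	"net/http"
	"os"
)

func main() {
	// debug=2 returns full stack traces for every goroutine.
	resp, err := http.Get("http://127.0.0.1:9090/debug/pprof/goroutine?debug=2")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // run this while the config POST is hanging
}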

The 400 should not take long to return I guess?

I don’t know what that is nor how it works.

Maybe you are interested in doing a remote desktop session (e.g. AnyDesk) when I have the problem again?

I have the problem again on my local computer (the other one was remote).

I found this in the logs:

[WBZZM] 2020/02/25 21:39:02.175232 folder.go:565: INFO: Completed initial scan of sendreceive folder "Shared" (shared)
[WBZZM] 2020/02/25 21:39:02.176232 api.go:479: DEBUG: http: GET "/rest/events?since=123": status 200, 507 bytes in 1047.29 ms
[WBZZM] 2020/02/25 21:39:02.179226 folder.go:565: INFO: Failed initial scan of sendreceive folder "#Folder1" (Folder1)
[WBZZM] 2020/02/25 21:39:02.186214 model.go:360: INFO: Ready to synchronize "#Folder1" (Folder1) (sendreceive)
[WBZZM] 2020/02/25 21:39:02.186214 model.go:527: INFO: Restarted folder "#Folder1" (Folder1) (sendreceive)
[WBZZM] 2020/02/25 21:39:02.191208 api.go:479: DEBUG: http: GET "/rest/events?since=125": status 200, 506 bytes in 1.00 ms
[WBZZM] 2020/02/25 21:39:02.196201 api.go:479: DEBUG: http: GET "/rest/stats/folder": status 200, 10965 bytes in 9.99 ms
[WBZZM] 2020/02/25 21:39:02.214171 api.go:479: DEBUG: http: POST "/rest/system/config": status 0, 0 bytes in 212666.84 ms
[WBZZM] 2020/02/25 21:39:02.214171 api.go:827: WARNING: Decoding posted config: read tcp 127.0.0.1:8384->127.0.0.1:49393: i/o timeout
[WBZZM] 2020/02/25 21:39:02.215169 api.go:479: DEBUG: http: POST "/rest/system/config": status 400, 54 bytes in 212365.81 ms
[WBZZM] 2020/02/25 21:39:02.216169 api.go:827: WARNING: Decoding posted config: read tcp 127.0.0.1:8384->127.0.0.1:49394: i/o timeout
[WBZZM] 2020/02/25 21:39:02.216169 api.go:479: DEBUG: http: POST "/rest/system/config": status 400, 54 bytes in 212329.79 ms
[WBZZM] 2020/02/25 21:39:02.217166 api.go:827: WARNING: Decoding posted config: read tcp 127.0.0.1:8384->127.0.0.1:49372: i/o timeout
[WBZZM] 2020/02/25 21:39:02.217166 api.go:479: DEBUG: http: POST "/rest/system/config": status 400, 54 bytes in 206834.63 ms
[WBZZM] 2020/02/25 21:39:02.217166 api.go:827: WARNING: Decoding posted config: read tcp 127.0.0.1:8384->127.0.0.1:49417: i/o timeout
[WBZZM] 2020/02/25 21:39:02.218166 api.go:479: DEBUG: http: POST "/rest/system/config": status 400, 54 bytes in 116287.19 ms
[WBZZM] 2020/02/25 21:39:02.543652 api.go:479: DEBUG: http: GET "/rest/system/config/insync": status 200, 27 bytes in 1.00 ms
[WBZZM] 2020/02/25 21:39:02.553645 api.go:479: DEBUG: http: GET "/rest/events?since=127": status 200, 97492 bytes in 13.98 ms
[WBZZM] 2020/02/25 21:39:02.564621 api.go:479: DEBUG: http: GET "/rest/system/config": status 200, 84954 bytes in 22.96 ms
[WBZZM] 2020/02/25 21:39:03.038876 api.go:479: DEBUG: http: GET "/rest/system/discovery": status 200, 599 bytes in 0.99 ms
[WBZZM] 2020/02/25 21:39:03.045869 api.go:479: DEBUG: http: GET "/rest/system/status": status 200, 1056 bytes in 2.99 ms
[WBZZM] 2020/02/25 21:39:03.073820 api.go:479: DEBUG: http: GET "/rest/system/connections": status 200, 2978 bytes in 2.98 ms
[WBZZM] 2020/02/25 21:39:03.077814 api.go:479: DEBUG: http: GET "/rest/system/config/insync": status 200, 27 bytes in 0.00 ms
[WBZZM] 2020/02/25 21:39:03.079808 api.go:479: DEBUG: http: GET "/rest/system/config": status 200, 84954 bytes in 26.96 ms

There you can see the extremely long-running config POST requests and the 400 responses, plus some “i/o timeout” errors. Maybe you can find out why this happens?

I want to point out: it’s a pretty slow computer with an external USB HDD (not an SSD), and it is currently scanning several folders, though with maxConcurrentScans set to 1.

Well at least it feels slow …

[screenshot]

I had another idea about what could be causing the problem:

It seems like the backend gets an i/o timeout when reading the request from the client.

What if the client did not properly finish, terminate or send the request, and the backend is now expecting more data until it runs into the error?

Or maybe there is a bug in the backend which results in the same behavior…
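Either way, here is a self-contained sketch of what that failure mode looks like on the Go side (port, timeout and handler are made up for illustration, this is not Syncthing’s actual code): a handler decoding a JSON body runs into exactly this kind of i/o timeout and answers 400 when the client announces a body but never delivers it.

// Demo: a server with a read deadline decodes a posted JSON body; the client
// sends headers with a Content-Length but withholds the body, so the decode
// fails with "i/o timeout" and the handler returns 400.
package main

import (
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr:        "127.0.0.1:8385",
		ReadTimeout: 2 * time.Second, // deadline for reading the whole request
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			var cfg map[string]interface{}
			if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
				// With the stalled client this prints something like:
				// decoding posted config: read tcp 127.0.0.1:8385->...: i/o timeout
				fmt.Println("decoding posted config:", err)
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			w.WriteHeader(http.StatusOK)
		}),
	}
	go srv.ListenAndServe()
	time.Sleep(200 * time.Millisecond) // give the server a moment to start

	conn, err := net.Dial("tcp", "127.0.0.1:8385")
	if err != nil {
		panic(err)
	}
	// Announce a 100-byte body, then never send it.
	fmt.Fprint(conn, "POST /rest/system/config HTTP/1.1\r\nHost: 127.0.0.1\r\n"+
		"Content-Type: application/json\r\nContent-Length: 100\r\n\r\n")
	time.Sleep(3 * time.Second) // hold the connection open past the server's deadline
	conn.Close()
}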

You have something in between Syncthing and the browser that is causing this, be it a firewall, a proxy or an antivirus.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.