Could we get more frequent FolderCompletion events?

The situation is as follows. We have machines A and B; we copy 6 MB to ./Sync on A and wait for it to be uploaded to B. If I open the web GUI on A, the upload speed is a constant roughly 50 KB/s, but for most of the upload time the progress is not updated.

These are the events reported on A:

{"id":1380,"time":"2015-10-04T19:37:45.992014964+01:00","type":"Ping","data":null},
{"id":1381,"time":"2015-10-04T19:38:20.255840728+01:00","type":"StateChanged","data":{"duration":59.668125132,"folder":"default","from":"idle","to":"scanning"}},
{"id":1382,"time":"2015-10-04T19:38:21.466968425+01:00","type":"LocalIndexUpdated","data":{"folder":"default","items":41,"version":23540}},
{"id":1383,"time":"2015-10-04T19:38:21.844767893+01:00","type":"StateChanged","data":{"duration":1.588922137,"folder":"default","from":"scanning","to":"idle"}},
{"id":1384,"time":"2015-10-04T19:38:23.572953176+01:00","type":"FolderSummary","data":{"folder":"default","summary":{"globalBytes":1263485884,"globalDeleted":5137,"globalFiles":6568,"ignorePatterns":false,"inSyncBytes":1263485884,"inSyncFiles":6568,"invalid":"","localBytes":1263411644,"localDeleted":4557,"localFiles":6568,"needBytes":0,"needFiles":0,"state":"idle","stateChanged":"2015-10-04T19:38:21.84476524+01:00","version":47676}}},
{"id":1385,"time":"2015-10-04T19:38:24.480113424+01:00","type":"FolderCompletion","data":{"completion":99.50020610402072,"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default"}},
{"id":1386,"time":"2015-10-04T19:38:25.003716565+01:00","type":"RemoteIndexUpdated","data":{"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default","items":24,"version":24160}},
{"id":1387,"time":"2015-10-04T19:38:26.049663311+01:00","type":"StateChanged","data":{"duration":4.204889971,"folder":"default","from":"idle","to":"syncing"}},
{"id":1388,"time":"2015-10-04T19:38:26.385118225+01:00","type":"StateChanged","data":{"duration":0.335449327,"folder":"default","from":"syncing","to":"idle"}},
{"id":1389,"time":"2015-10-04T19:38:28.770779363+01:00","type":"FolderSummary","data":{"folder":"default","summary":{"globalBytes":1263485884,"globalDeleted":5137,"globalFiles":6568,"ignorePatterns":false,"inSyncBytes":1263485884,"inSyncFiles":6568,"invalid":"","localBytes":1263411644,"localDeleted":4557,"localFiles":6568,"needBytes":0,"needFiles":0,"state":"idle","stateChanged":"2015-10-04T19:38:26.385115292+01:00","version":47700}}},
{"id":1390,"time":"2015-10-04T19:38:29.254033794+01:00","type":"RemoteIndexUpdated","data":{"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default","items":2,"version":24162}},
{"id":1391,"time":"2015-10-04T19:38:30.100692467+01:00","type":"FolderCompletion","data":{"completion":99.52243945033771,"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default"}},
{"id":1392,"time":"2015-10-04T19:38:30.328207512+01:00","type":"StateChanged","data":{"duration":3.94308391,"folder":"default","from":"idle","to":"syncing"}},
{"id":1393,"time":"2015-10-04T19:38:30.713010811+01:00","type":"StateChanged","data":{"duration":0.384800087,"folder":"default","from":"syncing","to":"idle"}},
{"id":1394,"time":"2015-10-04T19:38:31.205481247+01:00","type":"RemoteIndexUpdated","data":{"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default","items":6,"version":24168}},
{"id":1395,"time":"2015-10-04T19:38:32.272125696+01:00","type":"StateChanged","data":{"duration":1.5591107640000001,"folder":"default","from":"idle","to":"syncing"}},
{"id":1396,"time":"2015-10-04T19:38:32.684031875+01:00","type":"StateChanged","data":{"duration":0.41190101100000004,"folder":"default","from":"syncing","to":"idle"}},
{"id":1397,"time":"2015-10-04T19:38:32.732162927+01:00","type":"FolderSummary","data":{"folder":"default","summary":{"globalBytes":1263485884,"globalDeleted":5137,"globalFiles":6568,"ignorePatterns":false,"inSyncBytes":1263485884,"inSyncFiles":6568,"invalid":"","localBytes":1263411644,"localDeleted":4557,"localFiles":6568,"needBytes":0,"needFiles":0,"state":"idle","stateChanged":"2015-10-04T19:38:32.684029151+01:00","version":47708}}},
{"id":1398,"time":"2015-10-04T19:38:34.273213106+01:00","type":"FolderCompletion","data":{"completion":99.53188847800556,"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default"}},
{"id":1399,"time":"2015-10-04T19:38:44.864864499+01:00","type":"FolderSummary","data":{"folder":"default","summary":{"globalBytes":1263485884,"globalDeleted":5137,"globalFiles":6568,"ignorePatterns":false,"inSyncBytes":1263485884,"inSyncFiles":6568,"invalid":"","localBytes":1263411644,"localDeleted":4557,"localFiles":6568,"needBytes":0,"needFiles":0,"state":"idle","stateChanged":"2015-10-04T19:38:32.684029151+01:00","version":47708}}},
{"id":1400,"time":"2015-10-04T19:38:45.828251488+01:00","type":"FolderCompletion","data":{"completion":99.53188847800556,"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default"}},
{"id":1401,"time":"2015-10-04T19:38:46.033076839+01:00","type":"Ping","data":null},
{"id":1402,"time":"2015-10-04T19:39:46.033248036+01:00","type":"Ping","data":null},
{"id":1403,"time":"2015-10-04T19:40:22.286631138+01:00","type":"RemoteIndexUpdated","data":{"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default","items":9,"version":24177}},
{"id":1404,"time":"2015-10-04T19:40:23.332583627+01:00","type":"StateChanged","data":{"duration":110.648546026,"folder":"default","from":"idle","to":"syncing"}},
{"id":1405,"time":"2015-10-04T19:40:23.681369993+01:00","type":"StateChanged","data":{"duration":0.348781338,"folder":"default","from":"syncing","to":"idle"}},
{"id":1406,"time":"2015-10-04T19:40:25.426858264+01:00","type":"FolderSummary","data":{"folder":"default","summary":{"globalBytes":1263485884,"globalDeleted":5137,"globalFiles":6568,"ignorePatterns":false,"inSyncBytes":1263485884,"inSyncFiles":6568,"invalid":"","localBytes":1263411644,"localDeleted":4557,"localFiles":6568,"needBytes":0,"needFiles":0,"state":"idle","stateChanged":"2015-10-04T19:40:23.681367409+01:00","version":47717}}},
{"id":1407,"time":"2015-10-04T19:40:26.325974771+01:00","type":"FolderCompletion","data":{"completion":100,"device":"TOI77UD-VC7BQCP-3UWJO3J-S5QHDJN-3PRAHKT-PG3EEKX-YEM7UPF-BQC3XQO","folder":"default"}},
{"id":1408,"time":"2015-10-04T19:40:46.033442262+01:00","type":"Ping","data":null}

You can see that between 19:38:45 and 19:40:26 no FolderCompletion events were reported (while the upload speed was correctly shown in the web GUI as roughly 50 KB/s the whole time). In my personal notification app I query /rest/db/completion every two seconds during uploads, but I only seem to get cached values.

So my app, like the web GUI, reports the correct upload speed but an annoyingly outdated number of MB still left to upload. (In the web GUI it’s perhaps less visible because progress is shown as a percentage rather than MB, so when the whole ./Sync folder is 1 GB and, say, 50 MB is changing, roughly 95% is reported the whole time, which in some sense is approximately correct :-). But you can check manually that querying /rest/db/completion during the process returns an outdated number.)

I repeated the same test with a 50 MB set of data and the gaps between FolderCompletion events were even longer (at least 5 minutes).
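
For reference, the loop in my notification app that hits /rest/db/completion looks roughly like this (a minimal sketch; the API key, address, and device ID are placeholders for my local setup):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

const (
	apiKey   = "MY-API-KEY"            // placeholder: taken from the GUI settings
	baseURL  = "http://localhost:8384" // placeholder: instance A
	deviceID = "DEVICE-ID-OF-B"        // placeholder: device ID of B
)

// completionReply mirrors the part of the /rest/db/completion reply I care about.
type completionReply struct {
	Completion float64 `json:"completion"`
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	for {
		req, err := http.NewRequest("GET",
			baseURL+"/rest/db/completion?folder=default&device="+deviceID, nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("X-API-Key", apiKey)
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err)
		} else {
			var c completionReply
			json.NewDecoder(resp.Body).Decode(&c)
			resp.Body.Close()
			fmt.Printf("completion of B: %.2f%%\n", c.Completion)
		}
		// Poll every 2 seconds; in practice the value only moves occasionally.
		time.Sleep(2 * time.Second)
	}
}
```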

FolderCompletion comes from the same service that updates the /rest/db/completion cache, which runs once every 2 seconds by default.

Not sure what you are after.

Wait, you’re contradicting what I reported. Let’s try to find out why; maybe I did my test wrong. I’ll redo it, but in the meantime could you say as of which version the above remark is correct? I use 0.11.26.

Actually I am wrong; they are different and do different things. One returns percentages for a peer + folder, the other returns sizes for a local folder.

We can’t emit FolderCompletion more often than every 2 seconds due to the cost of calculating the values. The event is also not emitted if nothing has changed.

Both /rest/db/completion?device= and FolderCompletion report percentages of a remote folder.
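
For what it’s worth, rather than polling the cached endpoint you can long-poll the events API and react to FolderCompletion as it is emitted. A rough client-side sketch (the API key and address are placeholders, and the filtering for FolderCompletion is done client-side here):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

const (
	apiKey  = "MY-API-KEY"            // placeholder
	baseURL = "http://localhost:8384" // placeholder
)

// event is the generic shape of entries returned by /rest/events.
type event struct {
	ID   int             `json:"id"`
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

func main() {
	since := 0
	for {
		req, err := http.NewRequest("GET",
			fmt.Sprintf("%s/rest/events?since=%d", baseURL, since), nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("X-API-Key", apiKey)
		// The call blocks until new events arrive (long polling).
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println("request failed:", err)
			time.Sleep(time.Second)
			continue
		}
		var evs []event
		json.NewDecoder(resp.Body).Decode(&evs)
		resp.Body.Close()
		for _, ev := range evs {
			since = ev.ID
			if ev.Type == "FolderCompletion" {
				fmt.Printf("FolderCompletion #%d: %s\n", ev.ID, ev.Data)
			}
		}
	}
}
```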

Please note this is not relevant to the problem I reported. To sum up my report: I query /rest/db/completion?device=B every, say, 3 seconds and get the same answer for almost 2 minutes, although there is a constant 50 KB/s transfer from A to B (as evidenced by the web GUI and other means). Similarly, no FolderCompletion events are reported for almost 2 minutes.

The database isn’t updated until a file is completed, so incomplete files currently being downloaded are not included in the API results and don’t cause events to be generated. Once a file is complete you should get an event within a couple of seconds.
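
Put differently, the percentage is derived from whole-file counters in the index database; a rough sketch of the idea (not the actual implementation):

```go
package main

import "fmt"

// completionPct sketches how the percentage appears to be derived from
// whole-file counters: needBytes only shrinks when a complete file is
// committed to the database, which is why the value plateaus while a
// large file is still transferring.
func completionPct(globalBytes, needBytes int64) float64 {
	if globalBytes == 0 {
		return 100
	}
	return 100 * float64(globalBytes-needBytes) / float64(globalBytes)
}

func main() {
	// With the globalBytes from the log above and roughly 6.3 MB still
	// needed, this gives about the 99.5% seen in the FolderCompletion events.
	fmt.Printf("%.2f%%\n", completionPct(1263485884, 6300000))
}
```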

I see, but (since this post is filed under “Feature”) perhaps it wouldn’t be difficult to add more FolderCompletion events? After all, FolderSummary (i.e. local info) is updated correctly even during the download of a single large file (i.e. inSyncBytes changes continuously).

Or, if that’s against the design, maybe a new REST call like RemoteinSyncBytes?device=&folder= which would explicitly ask the remote host to provide its current inSyncBytes.

What do you mean by more FolderCompletion events? It’s emitted as the completion changes, at most every 2 seconds, so I’m not sure what more of them would change.

I’d be against adding additional messages to the protocol just for the sake of reporting more accurate remote percentages.

Perhaps this could be derived from the temp index stuff once that is in place.

Audrius, I’m trying to be as clear as possible, but let me try again. Let’s say you have two machines A and B; A is local, B is remote. The up/down link between A and B is a constant 100 KB/s. Now consider two situations.

  1. Someone on the remote machine B copies a 100 MB file into ./Sync. Then my app on A will tell you

Downloading 1 file (100 MB @ 100 KB/s).

After 10 seconds it will tell you

Downloading 1 file (99 MB @ 100 KB/s).

and so on.

  2. Now you copy a 100 MB file into ./Sync on A. My app will tell you

Uploading to B (100 MB @ 100 KB/s)

After 10 seconds it will tell you

Uploading to B (100 MB @ 100 KB/s)

and so on.

I hope you agree that it is desirable for an app to be able to give correct MB progress in situation 2 as well. As far as I can see, right now it’s only possible to estimate it by counting the outgoing bytes sent to B (a rough sketch of that follows below), which doesn’t seem optimal.

When I say “more FolderCompletion events” I mean, of course, more FolderCompletion events which give a correct percentage even while a file is still being sent, so that situation 2) could be improved. If that’s not possible, then another way to improve situation 2) is to have a REST call which asks for the remote inSyncBytes.
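
(The byte-counting workaround I mentioned looks roughly like this. It assumes /rest/system/connections exposes an outBytesTotal counter per device, as current versions document; the field names, API key, address, device ID, and file size here are placeholders/assumptions, and the estimate drifts because not all outgoing traffic is file data.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

const (
	apiKey   = "MY-API-KEY"            // placeholder
	baseURL  = "http://localhost:8384" // placeholder: instance A
	deviceID = "DEVICE-ID-OF-B"        // placeholder: device ID of B
	totalJob = 100 * 1024 * 1024       // assumed size of the file being uploaded (100 MB)
)

// connectionsReply mirrors the part of /rest/system/connections used here;
// the layout follows the current API docs and may differ on older versions.
type connectionsReply struct {
	Connections map[string]struct {
		OutBytesTotal int64 `json:"outBytesTotal"`
	} `json:"connections"`
}

func outBytes() (int64, error) {
	req, err := http.NewRequest("GET", baseURL+"/rest/system/connections", nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("X-API-Key", apiKey)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var c connectionsReply
	if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
		return 0, err
	}
	return c.Connections[deviceID].OutBytesTotal, nil
}

func main() {
	start, err := outBytes()
	if err != nil {
		panic(err)
	}
	for {
		time.Sleep(10 * time.Second)
		now, err := outBytes()
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		sent := now - start
		// Very rough: subtract bytes sent so far from the known job size.
		fmt.Printf("roughly %d MB left to upload\n", (totalJob-sent)/(1024*1024))
	}
}
```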

This would only be possible once the temp indexes are in place, or we deliberately change the protocol.

And what are those temp indexes? Is it some new feature being developed?

In any case, I think REST calls that query remote data, like inSyncBytes etc., are a very natural thing to have and could be useful for many things in the future. For example (in some distant future), Syncthing could try to actively figure out the best topology for the cluster. What I mean is: in my setup with three machines A, B, C, sometimes it would be best speed-wise if A and B didn’t talk to each other at all and only synced through C.

Currently files are only shared once the full file is available on the peer. Temp indexes would allow parts of the file to be shared while it is still being downloaded.

We already provide you with inSyncBytes for remote nodes; what we don’t provide is the progress of the file the node is currently downloading. Whatever is being downloaded is not considered in sync until the file is actually done.

I think the choice of who to talk to is already solved by the fact that we always fire off requests to the nodes with the fewest requests in flight, so if a node is slow at answering requests it will naturally get fewer of them.

How does inSyncBytes represent the quality of the network link or even latency?

This sounds like a huge thing that isn’t going to be available for many months, if not years. I imagine querying the remote inSyncBytes would take very little time to implement and would thus ameliorate situation 2) described above. But I get it: you’re not going to do it.

For the record, I doubt that’s true. Note that inSyncBytes is updated continuously (i.e. also during the download of a single large file); you can see this by querying /rest/db/status. If you really do provide access to the current value of inSyncBytes for remote nodes, then please tell me how.

This is completely off topic, but it’s not solved optimally. My situation is: A is a remote server; B and C are two machines on a local LAN with a 10 MB/s link between them. The upload speed of A is 1 MB/s. If someone copies a 100 MB file to ./Sync on A, the best solution would be for A to send the whole file to B (or C) and then for B to send it to C. This is far from what Syncthing does.

(If there were an interface for querying remote parameters, and another for telling Syncthing who to talk to, things would get interesting, because your users would start coming up with all sorts of recipes for which node should talk to whom, suited to particular use cases.)

I doubt it will take that long :wink:

I see, I’m looking forward to it then.

Though that’s temp indexes, not more fine-grained stats for remote nodes.

Even then, temp indexes are exchanged once every 10 seconds, meaning that the data wouldn’t be ‘live’ anyway.