Syncthing 2.0.0-beta.5

Dude I’m already fending off what feels like hundreds of people in all the threads and commits hating every improvement I make :stuck_out_tongue: but yeah I do believe that’s on the agenda

3 Likes

It was on 1.29.4

I do have a significant number of files, probably over 6M across 6 drives; some server images are up to 1.5 TB in size. The old index folder is 12 GB in size.

I have to say, the scanning is much faster, often maxing out the IO across all the drives.

2 Likes

Yeah those are some bigass files, that’ll do it. I have the fix committed, will be in the next update, probably tomorrow.

2 Likes

If the feature gets removed, maybe block the update on setups with ignoreDelete enabled? Otherwise, I’m sure we’ll see a flow of very angry users on the forum after having their files deleted, with no backup available.

This is assuming that removing ignoreDelete will cause all those deletions to be synced to the device that had it enabled before.

1 Like

We would definitely not roll out an update that would start surprise deleting people’s files, I promise.

2 Likes

I get your point. I also have to keep up with changes in various dependencies in various projects. Note that this was really not about the command-line flags mentioned earlier by others, but more about the REST API (as API changes probably won’t be trivial to adapt to).

My point was just that Syncthing Tray and other maintained integrations will not be magically spared from breaking in contrast to abandoned ones. It will always take effort (and time) to keep up.

1 Like

If we give you a gRPC contract, code will be auto-generated, and it’s just a matter of swapping old for new, which makes more sense.

Extensions that are not maintained will not work, and I think that’s ok.

I’ve built it for Android and am currently looking for changes that need to be made to the wrapper. Meanwhile, I did upgrade the command line to use the non-two-dash commands and set STHOMEDIR=… instead of using the “--home=” parameter, which is no longer supported on the commands the wrapper used before.

Feedback:

  1. Could the “device-id” command please respect the STHOMEDIR variable? I see in the log that it is accessing the wrong path on Android despite STHOMEDIR being set. This can be reproduced from a shell.
emu64xa:/tmp $ export STHOMEDIR=/tmp/4
emu64xa:/tmp $ ./libsyncthingnative.so generate --no-default-folder
2025/04/02 21:28:45 INFO: Generating ECDSA key and certificate for syncthing...
2025/04/02 21:28:45 INFO: Device ID: MMVWVES-PH4ESZL-PERSOMR-G54BYO2-DYMSTFC-XT2LZXQ-O3TU4VR-34MWMAO
2025/04/02 21:28:45 INFO: We will skip creation of a default folder on first start
emu64xa:/tmp $ ./libsyncthingnative.so device-id
2025/04/02 21:28:58 WARNING: Error reading device ID: open /.local/state/syncthing/cert.pem: no such file or directory
  2. What is the equivalent to “reset-database”?

“serve”, “--reset-database”, “--logflags=0” does not work.

ref: Syncthing-Fork v2.0 beta by Catfriend1 · Pull Request #1333 · Catfriend1/syncthing-android · GitHub

Doesn’t it?

% STTRACE=all STHOMEDIR=/tmp/foo ./bin/syncthing generate --no-default-folder
2025/04/02 23:37:37.894128 debug.go:31: DEBUG: Enabling lock logging at 100ms threshold
2025/04/02 23:37:37.895075 control_unix.go:35: DEBUG: SO_REUSEPORT supported
2025/04/02 23:37:37.895321 filesystem_copy_range.go:26: DEBUG: Registering all copyRange method
2025/04/02 23:37:37.895337 filesystem_copy_range.go:26: DEBUG: Registering standard copyRange method
2025/04/02 23:37:37.896939 logfs.go:83: DEBUG: logfs.go:83 basic /tmp/foo MkdirAll . -rwx------ <nil>
2025/04/02 23:37:37.896961 logfs.go:125: DEBUG: logfs.go:125 basic /tmp/foo Stat . {0x14000446680} <nil>
2025/04/02 23:37:37.896981 utils.go:64: INFO: Generating ECDSA key and certificate for syncthing...
2025/04/02 23:37:37.899914 generate.go:86: INFO: Device ID: AJSYRN4-5JMMYZS-LKZAUN4-IC2IIRO-QYVIALU-JNEWUEG-3MEVGTZ-DG4SXQC
2025/04/02 23:37:37.900189 utils.go:80: INFO: We will skip creation of a default folder on first start
2025/04/02 23:37:37.900389 logfs.go:95: DEBUG: logfs.go:95 basic /tmp/foo OpenFile .syncthing.tmp.439627769 2562 -rw------- &{0x14000049cf0 {0x1400007a3a8 .syncthing.tmp.439627769}} <nil>
2025/04/02 23:37:37.906293 logfs.go:71: DEBUG: logfs.go:71 basic /tmp/foo Lstat config.xml <nil> lstat /tmp/foo/config.xml: no such file or directory
2025/04/02 23:37:37.906406 logfs.go:119: DEBUG: logfs.go:119 basic /tmp/foo Rename .syncthing.tmp.439627769 config.xml <nil>
2025/04/02 23:37:37.906443 logfs.go:89: DEBUG: logfs.go:89 basic /tmp/foo Open . &{0x14000049cf0 {0x1400007a3c8 .}} <nil>
2025/04/02 23:37:37.911333 logfs.go:107: DEBUG: logfs.go:107 basic /tmp/foo Remove .syncthing.tmp.439627769 remove /tmp/foo/.syncthing.tmp.439627769: no such file or directory
syncthing debug reset-database
1 Like

Ah sorry, I mistyped it. It’s the “device-id” command which doesn’t respect the env var.

2 Likes

Ah, and also not the reset-database I think. I’ll clean that up.

2 Likes

Just had a quick look to see how v2 was doing before bed, and it has used all the C drive space. I’m a bit worried that the new database is going to be very disk space hungry…


The first clue was that most of the remote devices had disconnected and there were virtually no logs, but one said the disk was full, so I expanded the VM by 100 GB; I will see in the morning if it needs more.

Thought I would restart St to see if the remote devices would reconnect. However, it’s taking a very long time to load, specifically due to reading the 60 GB index file.

Apparently, 7 minutes…

[start] 2025/04/02 23:12:06 INFO: syncthing v2.0.0-beta.3 “Hafnium Hornet” (go1.24.1 windows-amd64) builder@github.syncthing.net 2025-04-02 10:26:24 UTC

[RTF25] 2025/04/02 23:19:30 INFO:

Yikes. That should hopefully improve with the latest checkpoint and transaction adjustments done by @calmh

1 Like

The 60 GB WAL file has shrunk and the index-v2 has grown to 16 GB. However, whilst the scanner is much faster, it’s becoming too aggressive towards the drives.

The disk queues are getting longer with less throughput, and St is still scanning the drives where the v1 index would have finished by now. I will restart St and adjust the concurrency settings to see if that helps.

@terry this one should hopefully have a bit better WAL handling

3 Likes
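For anyone wondering why the WAL file balloons and then shrinks: the v2 index database uses write-ahead logging, where committed pages first accumulate in a separate “-wal” file and are only folded back into the main database at a checkpoint. Here’s a minimal, self-contained Python sketch of that mechanic using SQLite’s WAL mode (illustrative only, not Syncthing’s actual code):

```python
import os
import sqlite3
import tempfile

# Create a throwaway database in WAL mode.
db = os.path.join(tempfile.mkdtemp(), "wal_demo.db")
con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode=WAL")

# Committed writes land in the -wal file, not the main database file.
con.execute("CREATE TABLE files(id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO files(name) VALUES (?)", [("a",), ("b",), ("c",)])
con.commit()
wal_before = os.path.getsize(db + "-wal")   # non-zero: pages parked in the WAL

# A TRUNCATE checkpoint copies those pages into the main database
# and truncates the WAL file back to zero bytes.
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
wal_after = os.path.getsize(db + "-wal")

print(wal_before, wal_after)    # wal_before > 0, wal_after == 0
```

So a large WAL isn’t lost space per se; it’s work waiting to be checkpointed, which is why checkpoint and transaction tuning directly affects how big it gets.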

I have tested synchronization on a folder that isn’t using the encryption feature and has large files that are updated frequently. Incremental block synchronization works normally on the official 1.29.3 version and is fast after a file is updated, but it does not seem to work on the 2.0 beta. Has the 2.0 beta changed the incremental block synchronization feature?

1 Like

Can you clarify what you mean by that and what you’re seeing? I did a few tests just now with overwriting parts in the middle of a large file and got just the changed pieces transferred, which is what I expect.

1 Like
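For context on what “incremental block synchronization” means here: files are split into blocks, each block is hashed, and only blocks whose hashes differ are transferred. A toy Python sketch of the idea (the fixed 128 KiB block size and SHA-256 are assumptions for illustration; Syncthing’s real block sizes and protocol differ):

```python
import hashlib

BLOCK = 128 * 1024  # illustrative fixed block size

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Indices of blocks in `new` that differ from `old` and need transfer."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

# Overwrite one byte in the middle of an 8-block file:
old = bytes(BLOCK * 8)
new = bytearray(old)
new[3 * BLOCK] = 1
print(changed_blocks(old, bytes(new)))  # [3] — only block 3 needs transfer
```

The test described above (overwriting parts in the middle of a large file) should therefore only re-send the touched blocks, matching what was observed on 1.29.3.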

With the newest beta, I’m seeing the following when trying to --reset-deltas from the command line.

syncthing.exe: error: unknown flag --reset-deltas

Try

syncthing serve --debug-reset-delta-idxs