Just a different observation, but it seems that the memory usage as listed in Anonymous Usage Reporting is wrong when using the new database. Basically, the usage reported there is much lower than the actual memory taken by Syncthing.
For example, on one device, the usage reporting shows just
"memoryUsageMiB": 69,
while in reality Syncthing is using 330 MB of RAM.
I think that’s because the memory usage reported by the runtime package used by Syncthing only counts heap and stack allocations made by the Go runtime. It probably doesn’t know about the C world and the mallocs made by libsqlite.
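To illustrate the general effect (a Python analogue I put together, not anything from Syncthing or Go): a language runtime’s own memory counter only sees allocations made through its allocator, so memory allocated by a linked C library is invisible to it. Here `tracemalloc` barely moves even though the C side of the `sqlite3` module holds several megabytes of row data:

```python
# Sketch: Python-level memory tracking misses C-library allocations,
# analogous to Go's runtime stats missing libsqlite's mallocs.
import sqlite3
import tracemalloc

tracemalloc.start()
db = sqlite3.connect(":memory:")  # the whole database lives in C-allocated memory
db.execute("CREATE TABLE t (blob BLOB)")
payload = b"x" * 4096
for _ in range(2000):              # ~8 MB of row data held inside SQLite's C heap
    db.execute("INSERT INTO t VALUES (?)", (payload,))
current, peak = tracemalloc.get_traced_memory()
# The Python-side count stays far below the ~8 MB SQLite is actually holding.
print(current < 2 * 1024 * 1024)
```

The same reasoning would apply to Go’s `runtime.MemStats`-based numbers: they cover the Go heap and stacks, not memory obtained by cgo code through plain `malloc`.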
However, after the shutdown, when I try to run the current executable, I get these errors about the database:
[start] 2025/03/24 22:24:03 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/24 22:26:38 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
[start] 2025/03/24 22:26:39 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/24 22:26:40 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
[start] 2025/03/24 22:26:41 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/24 22:26:42 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
[start] 2025/03/24 22:26:43 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/24 22:26:44 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
[start] 2025/03/24 22:26:45 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/24 22:26:47 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
Just to be clear, I’ve double-checked that Syncthing isn’t running in the background. Is there anything that I can do in this state, or do I need to wipe the database clean and start from scratch?
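For what it’s worth, the constraint failure looks reproducible in plain SQLite (a standalone sketch; the table shapes are my guesses from the quoted SQL, not Syncthing’s real schema): `MAX()` over zero rows yields NULL, so when an `indexids` row has no matching rows in `files`, `COALESCE(MAX(remote_sequence), MAX(sequence))` is NULL and the NOT NULL column rejects it:

```python
# Sketch: the migration UPDATE fails when an indexids row has no
# matching rows in files, because COALESCE of two NULL MAXes is NULL.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE indexids (
        device_idx INTEGER, folder_idx INTEGER,
        sequence INTEGER NOT NULL DEFAULT 0
    );
    CREATE TABLE files (
        device_idx INTEGER, folder_idx INTEGER,
        sequence INTEGER, remote_sequence INTEGER
    );
    INSERT INTO indexids VALUES (1, 1, 0);  -- no matching rows in files
""")
try:
    db.execute("""
        UPDATE indexids SET sequence = (
            SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
            WHERE files.device_idx = indexids.device_idx
              AND files.folder_idx = indexids.folder_idx
        )""")
    failed = False
except sqlite3.IntegrityError:
    failed = True   # "NOT NULL constraint failed: indexids.sequence"
print(failed)
```

If that reading is right, a final fallback such as `COALESCE(..., 0)` would avoid the NULL, though whether that’s the correct fix for the migration is for the developers to say.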
Unfortunately, I’ve now experienced the same failure on another system. This one has decent hardware, and all I did was reboot Windows. Afterwards, Syncthing fails to start with the same error as on the other device.
[start] 2025/03/25 13:53:49 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/25 13:53:57 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
[start] 2025/03/25 13:53:59 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/25 13:54:04 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
[start] 2025/03/25 13:54:05 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/25 13:54:11 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
[start] 2025/03/25 13:54:12 INFO: syncthing v1.29.4-rc.1.dev.118.g7a475169-tomasz86-v1.29.3-sqlite "Hafnium Hornet" (go1.24.1 windows-amd64) tomasz86@tomasz86 2025-03-24 21:19:38 UTC [noupgrade, stnoupgrade]
[start] 2025/03/25 13:54:17 WARNING: Error opening database: init database: runscripts (UPDATE indexids SET sequence = (
SELECT COALESCE(MAX(remote_sequence), MAX(sequence)) FROM files
WHERE files.device_idx = indexids.device_idx AND files.folder_idx = indexids.folder_idx
)): NOT NULL constraint failed: indexids.sequence
Thanks! I’ve managed to “resurrect” the two Syncthing instances using the current iteration of the PR.
However, after doing so, I’m now experiencing a very weird issue. In one folder, Syncthing keeps “undeleting” files. Basically, if I delete a file, Syncthing immediately downloads it again. I can keep deleting the files, and they keep being restored, with no end to it. The folder’s global and local states also appear corrupted.
I’m not sure what to do. I noticed the problem because I have a script that automatically creates ICS calendar events from text files and, once an event is created, deletes the original text file. As the deleted files keep getting restored, I’ve been flooded with literally thousands of identical calendar events.
Unfortunately, the device is located on a very slow network and the database is huge (1.3 GB packed with 7-Zip). I’m going to send it to you privately, but first I need to upload the file somewhere, so it will take a while.
During the index exchange phase, I’m seeing the WAL file grow to 15 GB (with a database of 1.5 GB). Is there anything that can be done about it? I’m just worried that with much larger setups exchanging indexes for the first time, the WAL file will grow to some extreme size.
I’ve also observed very poor performance during the index exchange phase, especially on slower hardware, and I’ve got a feeling that this may be related to the ever-growing WAL file, which, as it grows, becomes a bottleneck for both reads and writes, especially with slower storage involved. This is just a guess on my part though, as I haven’t done any specific benchmarks to find out why Syncthing feels so slow during that phase.
Just for the record, I’m doing my testing with v2.0.0-beta.2.
The WAL file will grow without bounds until a checkpoint takes place that reaches the very end of the WAL file.
What happens here during the initial index transfer is that we have two competing processes ongoing:
There are incoming index updates, which are a continuous stream of small write transactions
There are outgoing index updates, which are a continuous stream of slightly longer read transactions
From my understanding of WAL and read transactions, a read transaction will note the position in the WAL file where it starts, because its view of the database is that of the “real” database file plus everything in the WAL file as it exists at that point.
The write checkpoints (which will happen after every write transaction once the WAL file has grown a bit) can only truncate the WAL file up to the point of the most recently started read transaction (which holds a mark at the furthest point of the WAL file).
End result: as long as the WAL file grows, the read transactions will progressively keep it from being able to shrink. I thought I could mitigate this by limiting the time the read transactions stay live, but that doesn’t really help at all. Essentially we’d need to make sure there are no read transactions at all, and then let a checkpoint happen. Maybe a periodic process that grabs a lock from all the index handlers and does a checkpoint truncate…
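The mechanism can be demonstrated in a minimal standalone sketch (my own illustration with the stdlib `sqlite3` module, not Syncthing code): one connection holds a deferred read transaction while another keeps writing; a `wal_checkpoint(TRUNCATE)` then reports busy and the `-wal` file keeps its size until the reader ends.

```python
# Sketch: a live read transaction pins the WAL and keeps
# wal_checkpoint(TRUNCATE) from resetting the file.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
wal = path + "-wal"

writer = sqlite3.connect(path, isolation_level=None)  # autocommit
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("PRAGMA busy_timeout=100")  # give up quickly instead of waiting
writer.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT)")
writer.execute("INSERT INTO files (name) VALUES ('seed')")

# The reader opens a snapshot: its view is the database file plus the WAL
# contents as of this moment, so it pins that position in the WAL.
reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")
reader.execute("SELECT COUNT(*) FROM files").fetchone()

# The writer keeps appending, so the WAL grows past the reader's mark.
for i in range(200):
    writer.execute("INSERT INTO files (name) VALUES (?)", (str(i),))

# A TRUNCATE checkpoint cannot reset the WAL while the reader is live.
busy1, _, _ = writer.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
wal_during = os.path.getsize(wal)

# Once the read transaction ends, the checkpoint can reach the end of the
# WAL and truncate the file to zero bytes.
reader.execute("COMMIT")
busy2, _, _ = writer.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
wal_after = os.path.getsize(wal)
print(busy1, wal_during > 0, busy2, wal_after)
```

This is essentially the “grab a lock from all the index handlers, then checkpoint truncate” idea in miniature: the truncate succeeds exactly when no read transaction is pinning the WAL.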