Odd, I don’t see it blocked, and it’s in code that hasn’t changed radically due to the database switch… There’s some potential to improve the context handling there, though for a follow-up PR rather than this one, I think. To be sure, this doesn’t happen under the same circumstances with the current db?
Yeah, just tested with the old database. Removing the folder is instant there. With the new one, it can take up to 30s. Just to be clear, I’m talking about removing it while scanning is still going on.
Yeah, that doesn’t happen for me. I add a folder, let it get through the first phase of scanning so it starts showing a percentage, let it run a little while, then remove the folder. The “saving changes” dialog pops up for a second or two, then goes away, the folder is removed, the logs say “failed initial scan of $folderid” (which is expected).
I think the problem gets more pronounced the further you are into scanning, e.g. here with the test folder:
if I remove the folder at the very beginning of a scan, the wait time is very short; however, if I do it once scanning has reached 50% or higher, it takes much, much longer. This isn’t the case with the old database.
Yeah, I found the issue, my bad. It’s not Windows. The last part of the scan wasn’t cancelled properly after my database changes. (wip · calmh/syncthing@be645b1 · GitHub)
On the two servers where I’m running the v2 beta, the btrfs volume has block-level compression enabled (zstd at maximum supported setting). I’ve been seeing a fairly consistent ~7% savings:
$ ls -l index-v1.db
-rw-r--r--. 1 syncthing syncthing 3605712896 Mar 16 13:38 index-v1.db
$ btrfs filesystem du index-v1.db
     Total   Exclusive  Set shared  Filename
   3.36GiB     3.36GiB       0.00B  index-v1.db
A couple of days ago I did a full vacuum for comparison and was surprised that the difference was almost the same (or sometimes worse). I’d fully expected compression to be much less effective given the type of data.
[start] 2025/03/17 11:54:54.612310 main.go:538: INFO: syncthing v2.0.0-beta.4 "Hafnium Hornet" (go1.24.1 windows-amd64) builder@github.syncthing.net 2025-03-17 09:35:39 UTC [mattn-sqlite, stnoupgrade]
[start] 2025/03/17 11:54:54.612310 utils.go:63: INFO: Generating ECDSA key and certificate for syncthing...
[start] 2025/03/17 11:54:54.615307 main.go:557: WARNING: Failed to load/generate certificate: save cert: write R:\test\syncthing\syncthing1\cert.pem: The process cannot access the file because another process has locked a portion of the file.
[monitor] 2025/03/17 11:54:54.617307 monitor.go:199: INFO: Syncthing exited: exit status 1
The hidden complexity behind CGO_ENABLED=1 is that you need to have a C compiler available, and if it’s not the one Go expects by default you may need to point Go at it via the CC environment variable. But you can explore that on your own; I didn’t go into detail on it, which is why there isn’t a native-SQLite build for 386. The GitHub Actions runner has the required C compiler by default.
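As a concrete illustration, a cross-build for Windows with cgo might look roughly like this. The mingw-w64 compiler name and the `./cmd/syncthing` path are assumptions on my part; adjust them for your toolchain and checkout:

```shell
# Sketch of a native-SQLite cross-build for Windows/amd64.
# Assumes a mingw-w64 GCC is installed; run inside a syncthing checkout.
export CGO_ENABLED=1
export GOOS=windows
export GOARCH=amd64
export CC=x86_64-w64-mingw32-gcc   # the cross compiler Go should invoke
echo "building with CC=$CC for $GOOS/$GOARCH"
# go build -v ./cmd/syncthing     # uncomment once the toolchain is in place
```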
As mentioned by @calmh, you need to specify a C/C++ compiler that can compile for your target. Since we’re talking about C/C++ here, you also need platform headers/libraries and the compiler runtime library for your target. As Syncthing Tray already uses cgo and also targets Windows and Android, I already have some experience:
When compiling for Windows I usually use GCC. On GNU/Linux there’s often a version of GCC targeting Windows available in the package manager. On Windows itself I recommend installing GCC via MSYS2 (where you can also get Go from, btw). Using Clang works as well when targeting Windows and also allows compiling for aarch64. I also tried my luck with MSVC but ran into issues, as it is seemingly not really supported by cgo, so I wouldn’t recommend using MSVC.
When compiling for Android you need to specify the Clang compiler from the Android NDK which you simply download and extract somewhere. I think you don’t need the SDK or Java when just compiling Syncthing and SQLite themselves.
When using Clang you don’t need a different build (of Clang) for different targets. You can just add e.g. --target=aarch64-w64-mingw32 to the compiler arguments. (Of course you still need the compiler rt and platform headers/libraries for the target.)
I mainly tested this using a separately built copy of SQLite by adding -tags libsqlite3 to the Go invocation. It is probably easier for you to let the build system of mattn/go-sqlite3 handle this for you, though. I just mention the possibility of building SQLite separately in case you run into issues with the SQLite build or want to tweak it otherwise.
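For reference, the separate-SQLite variant I mean is roughly the following. This is just a sketch; it assumes sqlite3 development files are already installed where cgo can find them, and the package path is an assumption:

```shell
# Sketch: link against a system/separately built SQLite instead of the
# amalgamation bundled with mattn/go-sqlite3.
export CGO_ENABLED=1
BUILD_CMD="go build -tags libsqlite3 ./cmd/syncthing"
echo "$BUILD_CMD"
# eval "$BUILD_CMD"   # run inside a checkout with libsqlite3 available
```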
I guess one can/should use exFAT as the filesystem on the SD card on Android these days. So this would only be an issue on very old Android versions not yet using/supporting exFAT. For internal storage, Android doesn’t use FAT32 as far as I know, so this shouldn’t be problematic in any case.
It looks like via ATTACH it would be possible to split the database into multiple files. Not sure whether the additional complexity is worth it.
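To sketch what that could look like (the file and table names here are made up for illustration, not Syncthing’s actual schema):

```sql
-- Hypothetical: keep bulky data in a second file attached to the main DB.
ATTACH DATABASE 'index-blocks.db' AS blocks;
CREATE TABLE IF NOT EXISTS blocks.blocklists (
    hash  BLOB PRIMARY KEY,
    data  BLOB NOT NULL
);
-- Queries can then join across 'main' and 'blocks' transparently,
-- at the cost of managing two files and cross-database transactions.
```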
According to the documentation on limits, SQLite will at least behave “sanely” when running into filesystem limits.
That reminds me of when I ran out of disk space while testing in an Android VM. There the situation wasn’t handled sanely. It wasn’t SQLite’s fault, though. The problem was that I stopped Syncthing gracefully, which therefore probably still tried to save the config file. That probably failed as there was no more disk space, so I ended up with a truncated config file, effectively losing my config. It was just a test setup so it wasn’t a problem, but maybe something to be improved.
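One common mitigation, sketched here in shell (the file names are placeholders, not Syncthing’s actual paths, and Syncthing’s real config saving may already work this way): write the new config to a temporary file first and only rename it over the old one after the write fully succeeded, so a full disk leaves the previous config intact.

```shell
# Sketch of an atomic save: a failed write leaves config.xml untouched,
# because the rename only happens after the temp file was written in full.
printf '<configuration version="2"/>\n' > config.xml.tmp \
  && mv config.xml.tmp config.xml
cat config.xml
```

The key property is that rename is atomic on POSIX filesystems when source and destination are on the same filesystem, while rewriting the file in place is not.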
Is this actually needed? I’m asking because in the meantime I’ve already compiled Windows binaries (both amd64 and 386) using the standard procedure from https://docs.syncthing.net/dev/building.html#building-windows (without CGO_ENABLED), and they just built with no errors.
On the other hand, Android has always needed CGO, but here too I’ve simply followed the standard command-line instructions from https://github.com/syncthing/syncthing-android#building to create arm and arm64 binaries, and they also built as usual.
Just for the record, I compile everything under Windows.
@calmh have you been able to reproduce the hang you experienced with the WASM sqlite lib? At least from a support standpoint it might be better if we only have to deal with two instead of three different database drivers.