Syncthing is such a massive resource hog!

Per folder; the folders are essentially completely independent of each other. You don’t want to set pullers to one, though, regardless. And copiers should seldom be set to anything other than one.

At some point, one may possibly be able to … “pause” a folder if this is a bother… :smiling_imp: :grimacing:

I’m not trying to push SQLite over LevelDB, but just to add…

Yes it does, when using WAL journalling. Readers and writers do not block each other. See Write-Ahead Logging.
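For anyone who wants to poke at this themselves, here’s a minimal sketch of turning WAL on through the mattn/go-sqlite3 adapter (the database path is made up for illustration). Once in WAL mode, a writer appends to the log while readers keep reading the last committed snapshot, so neither blocks the other.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// Path is illustrative only.
	db, err := sql.Open("sqlite3", "./index.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Switch to write-ahead logging. The pragma returns the journal mode
	// now in effect, which should come back as "wal".
	var mode string
	if err := db.QueryRow("PRAGMA journal_mode=WAL;").Scan(&mode); err != nil {
		log.Fatal(err)
	}
	fmt.Println("journal mode:", mode)
}
```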

It would be interesting to see the query plan here: EXPLAIN QUERY PLAN SELECT SUM(blah) FROM ....
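In case it helps, here’s roughly how that plan can be pulled out through the Go adapter. The table and column below are placeholders, since the actual query above is elided; substitute the real ones.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// showPlan prints SQLite's plan for a (placeholder) aggregate query.
func showPlan(db *sql.DB) {
	rows, err := db.Query("EXPLAIN QUERY PLAN SELECT SUM(size) FROM blocks;")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		// Each plan row ends with a human-readable "detail" column, preceded
		// by three integer columns whose names vary between SQLite versions.
		var a, b, c int
		var detail string
		if err := rows.Scan(&a, &b, &c, &detail); err != nil {
			log.Fatal(err)
		}
		fmt.Println(detail) // e.g. a full table scan for an unindexed SUM
	}
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Placeholder schema so the example runs end to end.
	if _, err := db.Exec("CREATE TABLE blocks (size INTEGER)"); err != nil {
		log.Fatal(err)
	}
	showPlan(db)
}
```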

You’re right, of course, but I saw otherwise. I tried to reproduce that now and failed, so either I was drunk or something has changed. However, now I instead see some updates taking just over a second (when they should normally be in the microsecond range); not sure what’s going on there.

[quote=“TDA1541, post:14, topic:5494, full:true”] Once the initial scan/hash was complete, I ran into a second issue. Ram usage with Syncthing is, what I would consider to be, abnormally high. My share is around 300gb and after the initial scan (and a few application reboots due to configuration changes), Syncthing was reporting almost 800mb ram usage[/quote]

I’ve noticed this sort of RAM usage before, but after a minute or so a lot of it will be released. 56K items indexed here, totalling 206 GiB, with 215 MB private bytes in use. It was 455 MB a few minutes ago whilst Syncthing was busy (Windows 7).

Interesting. The fact that they’re consistently a fraction above 1s is very suspicious. There is a transaction timeout, but it’s 5 seconds, and I can’t find anywhere in the code where the Go adapter changes it either.

Aha, looking at http://beets.radbox.org/blog/sqlite-nightmare.html, it seems that if SQLite is compiled without HAVE_USLEEP=1, the transaction contention backoff is an entire second (ouch). Maybe check how your build of SQLite was compiled?
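If it helps, the compile options the linked library reports can be listed from the adapter. Note that whether a bare define like HAVE_USLEEP shows up in this list depends on the SQLite version and build, so treat it as a first check rather than a definitive answer.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One row per compile-time option the library reports it was built with.
	// HAVE_USLEEP may or may not appear here depending on the build.
	rows, err := db.Query("PRAGMA compile_options;")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var opt string
		if err := rows.Scan(&opt); err != nil {
			log.Fatal(err)
		}
		fmt.Println(opt)
	}
}
```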

EDIT: Looks like there’s an open PR: https://github.com/mattn/go-sqlite3/pull/211

Ah, yes. Going by the mentioned issue, it seems the “database is locked” thing may have been resolved fairly recently for databases opened in WAL mode. I gather the one-second retry thing is for conflicting writers?

The docs I’ve seen have just said “tries to acquire a lock”, although I’m not sure exactly what locks are taken when concurrent reads/writes are occurring with WAL.

The “database is locked” thing happens when SQLite can’t acquire a lock for 5 seconds due to repeated contention. Not defining HAVE_USLEEP makes this much more likely to occur (since we only get 5 attempts at acquiring the lock instead of presumably thousands, and there’s no backoff, which makes contention more likely), but it isn’t a direct cause, methinks.
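For reference, that 5-second window is SQLite’s busy-handler timeout. Here’s a sketch of setting it explicitly via a pragma (5000 ms chosen to match the figure discussed above); the comments just restate how the window interacts with the HAVE_USLEEP behaviour.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// While the busy timeout hasn't expired, SQLite's busy handler keeps
	// retrying the lock. With HAVE_USLEEP the sleeps between retries are a
	// few milliseconds; without it each sleep is a whole second, so only
	// about five attempts fit into this window before "database is locked"
	// is returned.
	var timeoutMs int
	if err := db.QueryRow("PRAGMA busy_timeout = 5000;").Scan(&timeoutMs); err != nil {
		log.Fatal(err)
	}
	fmt.Println("busy timeout (ms):", timeoutMs)
}
```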

This sleeping and polling thing seems like a really crude way of implementing locking. :frowning: There’s a reason man invented mutexes and semaphores.

Anyway, the pull request above was merged, so the new default for Windows and Mac is one hasher only.

Then again, the low-end iMac is the only current-gen Mac to ship with a spinning disk by default, and it’s probably the least common of the group, so we’d probably be better off defaulting to all cores there as well… Ah well.
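To make the decision concrete, here’s a rough sketch of what that default amounts to. This is not Syncthing’s actual code, just the shape of it: one hasher on Windows and Mac, one per core everywhere else.

```go
package main

import (
	"fmt"
	"runtime"
)

// defaultHashers mirrors the default described above: one hasher on Windows
// and Mac (where spinning disks are common enough to matter), and one per
// CPU core on other platforms. Sketch only, not Syncthing's real code.
func defaultHashers() int {
	switch runtime.GOOS {
	case "windows", "darwin":
		return 1
	default:
		return runtime.NumCPU()
	}
}

func main() {
	fmt.Println("default hashers on this box:", defaultHashers())
}
```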

6 posts were split to a new topic: Improving performance by limiting concurrent I/O

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.