I’m currently experimenting a bit with Syncthing (on amd64/linux), and I’ve noticed something. It appears that Syncthing uses unbuffered I/O, either for large files, for contiguous blocks, or just across the board. That makes a lot of sense, since scanning files then doesn’t evict potentially more relevant blocks/files from the buffer cache.
However, I’d like to know whether it’s possible to tell Syncthing to never use unbuffered I/O, or to use some kernel/process trick to force buffered I/O for the Syncthing processes.
I have a really edge-case situation where it would benefit me a lot.
No, there is no way to do that. I suspect you might be able to force it in the kernel.
Sorry, reading this again, I think I misunderstood. I’d expect all of Syncthing’s I/O to be buffered, up to the point when it closes the file. All the writes are buffered, and it does an fsync before closing the file. No, there is no flag to disable the fsync.
Yeah, this. If your edge case is “millions of small files on rotating storage” I can see where you’re coming from. Otherwise, please explain.
My use case involves a lot of deduplicated data spread across a lot of different files, with the same raw chunks being read from disk again and again; serving them from the buffer cache/memory instead would be preferable.
No small files, no HDDs.
It was also just an inquiry to find out whether it was possible.
Thank you for your answers guys, and for a great product.
Oh, I’m only interested in buffering of reads; the files are placed there by another process, not Syncthing.
We just do regular file reads; if things can be cached by the fs layer it should be so.
However, there may be some interaction with the database and how we read blocks, if you have many copies of the same block. That is, I’m not sure how we pick from where to read a certain block if there are many copies of it. It might vary and then the cache won’t help.