Suggestion: use CryFS to encrypt your files

I have seen some people commenting here on the forum about the security and privacy of files hosted on remote servers.

I have read that some people use tools like TrueCrypt/VeraCrypt and EncFS.

Also, EncFS has been audited, and the audit found some small security weaknesses in it.

Well, I would like to suggest a system that I have been using: CryFS (https://cryfs.org).


Interesting! It’s not yet stable, but this project looks really promising.

Promising, but there is no support for Windows. That might break Syncthing’s OS portability model.

I have only just read about it, but I would also recommend taking a look at gocryptfs: https://nuetzlich.net/gocryptfs/


I have also been looking at CryFS as a replacement for encfs+cryptkeeper, to provide sync-friendly encryption of my data.

encfs has some known weaknesses and leaks file metadata by design, and cryptkeeper was removed from Debian due to a serious bug (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852751)

So I’m currently considering a move to CryFS. It is relatively young and unproven (has it been audited yet?), but it seems less risky than encfs+cryptkeeper from what we currently know.

IMHO CryFS’ main downsides are the lack of a GUI and the lack of a security audit.

Testing CryFS quickly revealed another downside: significant disk usage overhead.

This seems to be explained (in part) by CryFS’ design, where files are split into 32 KiB blocks, with padding where necessary.

Relevant discussion on CryFS’ bug tracker: https://github.com/cryfs/cryfs/issues/11
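To make that concrete, here is a rough back-of-the-envelope model in Python. This is just my own illustration, not CryFS code, and it ignores per-block headers and the extra metadata blocks, so the real overhead is somewhat higher:

```python
# Rough model of disk usage with fixed-size blocks and padding.
# Own illustration, not CryFS code; per-block headers and metadata
# blocks are ignored, so real overhead is somewhat higher.

BLOCK_SIZE = 32 * 1024  # 32 KiB, the block size discussed here

def padded_size(file_size: int, block_size: int = BLOCK_SIZE) -> int:
    """Disk space used when a file is stored as fixed-size blocks."""
    blocks = max(1, -(-file_size // block_size))  # ceiling division, min. 1 block
    return blocks * block_size

for size in (10, 1_000, 40_000, 1_000_000):
    on_disk = padded_size(size)
    print(f"{size:>9} B -> {on_disk:>9} B on disk ({on_disk / size:.1f}x)")
```

A 10-byte file ends up occupying a full 32 KiB block, so a tree full of tiny files blows up dramatically, while a 1 MB file wastes almost nothing.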

Yeah, I was just discussing this with Calmh on another thread. If you pad files out to a 32k, 64k, or 128k block, with each file treated as at least one block, and you have thousands of small files of just a few bytes (not as uncommon as you might think, especially for OS installations and some games), then you get massive gross overhead on disk, possibly more than double the total file size in some situations.

Not really a CryFS-only problem. It’s an issue for anyone who has to tackle the problem of “not giving away file size” by padding. My view is that not a lot of padding is really necessary; just a few extra bytes at most, as long as it’s a non-deterministic amount, should suffice. Of course someone will disagree with me, I’m sure; this is a highly subjective argument. You could always not bother padding and accept leaking the file size, but then the RIAA/MPAA could fingerprint that illegal album of MP3s/movies you have downloaded and stored in some cloud. :wink:
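To show what I mean by “a few extra bytes, non-deterministic”, here’s a minimal sketch; the 64-byte cap is an arbitrary number I picked for illustration:

```python
import secrets

MAX_PAD = 64  # arbitrary upper bound, just for illustration

def pad(plaintext: bytes) -> bytes:
    """Append a random, non-deterministic amount of padding.

    A trailing length byte records how much padding was added, so it
    can be stripped again after decryption. This only blurs the exact
    size; the approximate file size still leaks.
    """
    n = secrets.randbelow(MAX_PAD)  # 0..63 random padding bytes
    return plaintext + secrets.token_bytes(n) + bytes([n])

def unpad(padded: bytes) -> bytes:
    n = padded[-1]  # read back the padding length
    return padded[:-(n + 1)]

data = b"some file contents"
assert unpad(pad(data)) == data
```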

Security is always a trade-off.

The overhead is not only about padding. I investigated further and added my findings on CryFS’ bug tracker: https://github.com/cryfs/cryfs/issues/11

Just as you said, creating many files or directories on disk also adds overhead.

A while ago I worked on a pooling implementation, which reminds me of this problem. In some applications you want to pre-allocate buffers to improve performance. Buffers may be discarded/reused multiple times, so you want to maximize buffer reuse to avoid memory overhead.

I found that restricting buffer sizes to the next larger power of 2 works nicely, and the overhead is typically 20% or lower.

This might be a possible trade-off for our situation. It would leak the approximate file size (approximate within a factor of 2), but would greatly reduce the number of files on disk and minimize padding.
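A quick sketch of the rounding I have in mind (the example sizes are arbitrary; actual overhead depends on your size distribution, with a worst case of just under 2x):

```python
def next_pow2(n: int) -> int:
    """Smallest power of 2 that is >= n (for n >= 1)."""
    return 1 << (n - 1).bit_length()

# Overhead when each file is rounded up to the next power of 2.
for size in (3_000, 20_000, 70_000, 1_000_000):
    alloc = next_pow2(size)
    print(f"{size:>9} -> {alloc:>9}  (+{(alloc - size) / size:.0%})")
```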

Update: in our case we might want to round down to the next smaller power of 2, thus creating multiple blocks of decreasing size, until we reach the smallest block size (4k, 16k, or 32k).

This would avoid creating too many files: O(K * log(size)) blocks instead of O(K * size) with the fixed-block-size approach (K being the number of files). This would also reduce padding overhead.
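Here is a sketch of that decomposition. The 32 KiB floor is just an example parameter, and this is an idea for discussion, not how CryFS currently works:

```python
MIN_BLOCK = 32 * 1024  # smallest allowed block size, e.g. 32 KiB

def split_blocks(size: int, min_block: int = MIN_BLOCK) -> list[int]:
    """Split a file size into power-of-2 blocks of decreasing size.

    Each step takes the largest power of 2 that still fits, so a file
    of size N yields O(log N) blocks; the tail is padded up to one
    final min_block. Sketch only, not how CryFS currently works.
    """
    blocks = []
    remaining = size
    while remaining > min_block:
        b = 1 << (remaining.bit_length() - 1)  # largest power of 2 <= remaining
        blocks.append(b)
        remaining -= b
    blocks.append(min_block)  # final, possibly padded, block
    return blocks

print(split_blocks(100_000))  # [65536, 32768, 32768]
```

A 100 kB file becomes 3 blocks here, where the fixed 32 KiB scheme would need 4, and the gap widens quickly as files grow.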