Disable data encryption

Hi!

Is it possible to disable encryption? I don’t really need it and I don’t want to use my CPU for it :frowning:


Maybe a setting like the one btsync has would be nice for this: “disable encryption in LAN”.

Not going to happen.


How about adding an AES128 option? That might be helpful for mobile devices.

Oh, and crc32c for checksums )

The block hashes need to be cryptographically strong to enable some optimizations (we want to know that two blocks with the same hash are in fact the same block). But we could fairly trivially change to AES128, without any tangible loss of security. I don’t know how much faster it would be; some benchmarking would seem to be in order.
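Something like this minimal Go sketch would do for a first pass (illustrative only; 16- vs 32-byte keys select AES-128 vs AES-256, and the fixed zero nonce is acceptable only because we’re measuring throughput, not protecting data):

package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"time"
)

// measure seals 1 MiB buffers with AES-GCM using the given key size
// and returns rough throughput in MB/s.
func measure(keyLen int) float64 {
	key := make([]byte, keyLen)
	rand.Read(key)
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	nonce := make([]byte, gcm.NonceSize()) // fixed nonce: fine for a benchmark, never for real use
	buf := make([]byte, 1<<20)             // 1 MiB of plaintext per round
	const rounds = 256

	start := time.Now()
	for i := 0; i < rounds; i++ {
		gcm.Seal(nil, nonce, buf, nil)
	}
	return float64(rounds*len(buf)) / time.Since(start).Seconds() / 1e6
}

func main() {
	fmt.Printf("aes-128-gcm: %.0f MB/s\n", measure(16))
	fmt.Printf("aes-256-gcm: %.0f MB/s\n", measure(32))
}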

Looks like about a 7% performance difference, so probably not worth it. Might be more on something like a phone? Not that it’s much of a bottleneck either way: 102 MB/s vs 109 MB/s on my laptop.

Ha! You tricked me. We use AES128 today, specifically TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256.
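For reference, pinning that suite in Go’s crypto/tls looks roughly like this (a sketch; not necessarily how Syncthing actually sets up its connections):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Restrict the handshake to the suite named above. With this
	// config Go negotiates ECDHE key exchange with an RSA
	// certificate, AES-128 in GCM mode, and SHA-256 for the PRF.
	cfg := &tls.Config{
		MinVersion:   tls.VersionTLS12,
		CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256},
	}
	fmt.Println(tls.CipherSuiteName(cfg.CipherSuites[0]))
}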

OK, but what about CRC? ) crc32c is used in iSCSI and in production-ready btrfs (which is fully supported by Oracle and SuSE). crc32c is MUCH faster than SHA-256.
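To put rough numbers on that, a quick Go comparison could look like this (an illustrative sketch; a 128 KiB random buffer stands in for a file block):

package main

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"hash/crc32"
	"time"
)

func mbps(n int, start time.Time) float64 {
	return float64(n) / time.Since(start).Seconds() / 1e6
}

func main() {
	buf := make([]byte, 128<<10) // one 128 KiB "block"
	rand.Read(buf)
	table := crc32.MakeTable(crc32.Castagnoli) // the crc32c polynomial
	const rounds = 2048

	var sum uint32
	start := time.Now()
	for i := 0; i < rounds; i++ {
		sum = crc32.Checksum(buf, table)
	}
	_ = sum
	fmt.Printf("crc32c: %.0f MB/s\n", mbps(rounds*len(buf), start))

	var dig [sha256.Size]byte
	start = time.Now()
	for i := 0; i < rounds; i++ {
		dig = sha256.Sum256(buf)
	}
	_ = dig
	fmt.Printf("sha256: %.0f MB/s\n", mbps(rounds*len(buf), start))
}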

Compression is much more expensive; we might want to make that tuneable.

BenchmarkDeflateRandom	     500	   5293590 ns/op	  24.76 MB/s
BenchmarkDeflateASCII	     500	   6196456 ns/op	  21.15 MB/s
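For context, a reconstruction of how such a benchmark might be written (illustrative, not the actual Syncthing test code; flate.BestSpeed stands in for the speed-over-compression mode mentioned below):

package bench

import (
	"compress/flate"
	"io"
	"math/rand"
	"testing"
)

// BenchmarkDeflateRandom deflates 128 KiB of random data per
// iteration at the fastest compression level; SetBytes makes
// `go test -bench=.` report MB/s as in the output above.
func BenchmarkDeflateRandom(b *testing.B) {
	buf := make([]byte, 128<<10)
	rand.Read(buf)
	b.SetBytes(int64(len(buf)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		w, _ := flate.NewWriter(io.Discard, flate.BestSpeed)
		w.Write(buf)
		w.Close()
	}
}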

crc32 is fine as a quick guard against corruption, but that’s not what we use the hashes for.

Isn’t SHA1 (~2-3 times faster than SHA-256) enough for this?

> CityHash appears to be very nearly as fast as a CRC-32 calculated using the Intel crc32 hardware instruction! I tested three CityHash routines and the Intel crc32 instruction on a 434 MB file. The crc32 instruction version (which computes a CRC-32C) took 24 ms of CPU time. CityHash64 took 55 ms, CityHash128 60 ms, and CityHashCrc128 50 ms. CityHashCrc128 makes use of the same hardware instruction, though it does not compute a CRC

Maybe. But it’s being questioned and is slowly being replaced by SHA-2 and friends. I’m not certain that we won’t appreciate the added security of a good hash; give it a year or so and we’ll have computers twice as fast anyway, and we’ll hopefully be using the same protocol for many years to come.

Stop talking about checksumming algorithms and non-cryptographic hashes. That’s not what we do.

OK. Sorry for taking up your time.

Not at all! This gave a valuable result, namely highlighting how fucking slow the compression is! I had assumed that a light compression algorithm in its speed-over-compression mode would be a quick win.

That’ll be going away, unless I find something brutally wrong with my benchmarks.

Finally, an AES (128/256) test from my laptop:

openssl speed aes-128-cbc aes-256-cbc
...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc     101993.28k   112789.08k   112856.23k   116360.19k   112377.86k
aes-256 cbc      75975.16k    80961.54k    80826.88k    82967.89k    83211.61k

Yep. Transfers are much faster without compression.

I had to stop after this, but it was firmly lodged in my head that SHA-256 is used for file block checksums. I need more sleep )

This is a tricky subject. :slight_smile: Let me see if I can summarize it correctly:

  • For the connection crypto, we use the Go TLS implementation and it chooses the strongest cipher combination it has. Currently that means TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, which in turn means that data is encrypted with AES-128, authenticated by GCM. The “SHA256” in the string is, as far as I understand, only used as a pseudo random number generator…

  • For block hashes, we do use SHA-256. We want to continue doing this, because it gives us the peace of mind of knowing that if we need a block with hash 1234, and we have a block with hash 1234 somewhere on disk, they are the same. It also doesn’t matter that much, since hashing is a one-time cost per file, amortized over however long that file lives (see the sketch after this list).

  • In v0.8 and earlier, we used expensive compression. In v0.9 (current master) I’ve removed that. Large data files are probably in a compressed format already, where it matters.
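As an illustration of the second point, hashing a file block by block might look like this in Go (a sketch, not Syncthing’s actual scanner code; the 128 KiB block size is an assumption based on the fixed block size of that era):

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

const blockSize = 128 << 10 // assumed fixed block size

// hashBlocks returns the SHA-256 hash of each fixed-size block in a
// file. Equal hashes let us treat blocks as identical and reuse a
// block we already have on disk instead of transferring it.
func hashBlocks(path string) ([][sha256.Size]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var hashes [][sha256.Size]byte
	buf := make([]byte, blockSize)
	for {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			hashes = append(hashes, sha256.Sum256(buf[:n]))
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return hashes, nil // final (possibly short) block done
		}
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	hashes, err := hashBlocks(os.Args[1]) // pass a file path as the argument
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d blocks hashed\n", len(hashes))
}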