Questions about Untrusted Device Encryption

Hi folks! When evaluating a solution for syncing of files via an untrusted host, I stumbled upon the “receive-encrypted” mode of Syncthing.

I read through Untrusted Device Encryption — Syncthing documentation and a few questions popped up (in no particular order). I hope feedback and (potentially stupid) questions are welcome in order to review and potentially improve the current in-beta design (and its docs). Note that I’m not a cryptographer, but I do have a little experience with the design of crypto protocols. Note also that I only looked at this documentation page, and not at the source code.

Key Derivation

  • Which Scrypt parameters are actually used for key derivation? They aren’t listed in the docs.
  • Can these parameters be upgraded, for example when you realize in a few years that the chosen parameters are considered insecure by then? (In my experience it’s important to store the parameters somewhere in a way that they can be upgraded by newer code without breaking old code.)
  • Can the KDF itself be replaced in the future without breaking compatibility? (Could be solved by an “encryption format version” mechanism.)
  • Why is HKDF used for the fileKey and Scrypt for the folderKey?
  • To derive the passwordToken from the folderKey, AES-SIV is used. I realize that this is probably fine because the folderKey has sufficiently high entropy, but if you need a KDF, would it hurt to use an actual KDF instead (especially if the same KDF is used consistently)?
  • Note: A switch from Scrypt/HKDF to Argon2id might be worth considering.


  • The docs mention: “The folder key is used to encrypt file names using AES-SIV without nonce”. However, as far as I understand it, AES-SIV does require an IV/nonce. Will a random nonce be generated and appended/prepended to the ciphertext, or something like that? Or is an empty or fixed IV/nonce used, in order to get deterministic ciphertext for a given plaintext?
    • AES-SIV should be one of the few modes able to deal with that securely, but I’m not familiar enough with SIV to know whether nonce reuse is perfectly OK.
    • RFC 8452 notes: We stress that the nonce misuse-resistance property is not intended to be coupled with intentional nonce reuse; rather, such schemes provide the best possible security in the event of nonce reuse. Due to all of the above, it is RECOMMENDED that AES-GCM-SIV nonces be randomly generated.
    • However, they also say: We stress that nonce misuse-resistant schemes guarantee that if a nonce repeats, then the only security loss is that identical plaintexts will produce identical ciphertexts.
    • So it might be fine for your use case? If this was analyzed before, a short explanation in the encryption docs might be a good idea.
  • Why is AES-SIV used in some cases, and XChaCha20-Poly1305 in other cases? Both are used to encrypt blocks (not streams), and both are authenticated encryption algorithms. Often XChaCha20-Poly1305 is used when hardware acceleration is not available, and AES otherwise.
  • Is there an analysis of the max number of bytes that is expected to be encrypted with a certain algorithm? For example, for AES-GCM-256, if I remember correctly the max number of bytes that may be encrypted safely is around 64 GiB.


  • A block diagram in the docs with inputs, key derivations and encryption functions would be great.

It looks like you’re confusing algorithms here. For one, there’s AES-GCM-SIV (described in RFC 8452) which indeed takes a nonce. However, there’s also AES-SIV (described in RFC 5297), where nonces are generally not required for encryption. Syncthing uses the latter where specified.

Yup, it appears you have to dig into the code for this. The current values were apparently taken from the official Go documentation:

The recommended parameters for interactive logins as of 2017 are N=32768, r=8 and p=1
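For reference, a derivation with exactly those parameters can be sketched with Python’s stdlib `hashlib.scrypt` (the password, salt, and output length here are purely illustrative placeholders, not Syncthing’s actual inputs):

```python
import hashlib

# scrypt with the parameters quoted from the Go documentation:
# N=32768, r=8, p=1. Password and salt below are made up for illustration.
key = hashlib.scrypt(
    b"folder password",     # hypothetical user password
    salt=b"example-salt",   # hypothetical salt
    n=32768,                # CPU/memory cost parameter N
    r=8,                    # block size parameter
    p=1,                    # parallelization parameter
    maxmem=2**26,           # N=32768, r=8 needs 128*r*N = 32 MiB; raise the cap
    dklen=32,               # 256-bit output key
)
print(key.hex())
```

Note that these parameters pin scrypt’s memory usage at 128 × r × N = 32 MiB per derivation, which is exactly why an upgrade path matters: a fixed N that is expensive today may be cheap for attackers in a decade.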

I don’t think there’s such an upgrade path at this time.

AES-SIV is deterministic encryption and hence not IND-CPA secure. It is only used where deterministic encryption is necessary (some places require reproducible encryption, otherwise the encrypted device would only get conflicts for the same data). Everywhere this is not absolutely required, IND-CPA/IND-CCA secure encryption is used instead.
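To illustrate the determinism point, here is a toy SIV-style sketch (emphatically not real AES-SIV per RFC 5297, and not secure; the construction is invented for illustration only): the “synthetic IV” is a MAC over the plaintext, so the same plaintext under the same key always yields the same ciphertext, which is exactly the property that makes encrypted file names reproducible.

```python
import hashlib
import hmac

def toy_siv_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy SIV-style construction, for illustration only -- NOT AES-SIV,
    # NOT secure. The "synthetic IV" is a MAC of the plaintext, so equal
    # plaintexts under the same key produce equal ciphertexts.
    siv = hmac.new(key, plaintext, hashlib.sha256).digest()[:16]
    # Derive a keystream from the key and the synthetic IV (toy XOR stream).
    keystream = b""
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(key + siv + counter.to_bytes(4, "big")).digest()
        counter += 1
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return siv + ct

key = b"k" * 32
# Deterministic: identical inputs give identical ciphertexts ...
assert toy_siv_encrypt(key, b"same name") == toy_siv_encrypt(key, b"same name")
# ... which is precisely why such a scheme cannot be IND-CPA secure.
assert toy_siv_encrypt(key, b"same name") != toy_siv_encrypt(key, b"other name")
```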

Well, we can run the numbers here real quick. For example, in TLS 1.2 AES-GCM runs with a 64-bit explicit nonce (note that this is particularly short and no longer modern). According to the birthday paradox, the risk of nonce collisions becomes non-negligible at around sqrt(2^(nonce_length)) random nonces. For a 64-bit nonce and an AES block length of 128 bits = 16 bytes, that means after 16 bytes * sqrt(2^64) = 16 bytes * 2^32 = 64 GiB we have a non-negligible risk of collisions. Due to AES-GCM’s “sudden death” problem, a single nonce reuse is potentially catastrophic. [*, see remark]

XChaCha20-Poly1305 uses a 192-bit nonce; combined with a block length of 64 bytes, that gives you approximately 5.071×10^30 bytes (about 5 quettabytes) before nonce reuse becomes non-negligible.
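The back-of-the-envelope numbers above can be reproduced like this (a rough sketch of the birthday-bound estimate, not an exact security analysis):

```python
import math

def collision_horizon_bytes(nonce_bits: int, bytes_per_nonce: int) -> float:
    # Birthday bound: random nonce collisions become non-negligible after
    # roughly sqrt(2^nonce_bits) nonces; multiply by the data per nonce.
    return bytes_per_nonce * math.sqrt(2 ** nonce_bits)

# TLS 1.2 style AES-GCM: 64-bit explicit nonce, 16-byte AES block.
print(collision_horizon_bytes(64, 16) / 2**30, "GiB")    # 64.0 GiB
# XChaCha20-Poly1305: 192-bit nonce, 64-byte block.
print(collision_horizon_bytes(192, 64), "bytes")         # ~5.07e30 bytes
```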

For the KDF stuff I’ve no idea, someone else needs to chime in here.

PS: * I just realized the way I worded this sounds like TLS 1.2 with AES-GCM cipher suites is totally insecure. The original spec for AES-GCM in TLS 1.2 (RFC5288) already warns about the nonce-reuse danger and basically tells implementations to avoid this:

Each value of the nonce_explicit MUST be distinct for each distinct invocation of the GCM encrypt function for any fixed key. Failure to meet this uniqueness requirement can significantly degrade security. The nonce_explicit MAY be the 64-bit sequence number.

In practice this results in many TLS implementations choosing the nonce deterministically instead of randomly. For a deterministic algorithm the probability of nonce collision is always either 0 or 1, is generally trivial to compute, and has nothing to do with the birthday paradox. RFC 9325 reiterated that implementations should choose the AES-GCM nonce deterministically and reuse-free, and TLS 1.3 fixed the problem altogether by forcing implementations to do it this way.


Thanks @Nummer378 for your reply!

Oof, you’re right! I wrongly assumed that “AES-SIV” and “AES-GCM-SIV” are the same thing :upside_down_face:

Thanks for the other explanations as well!


Actually, I have thought about this and I do have one additional comment:

(Disclaimer: I wasn’t involved with the encryption protocol design or implementation - these are just my 2 cents, the actual reasons may be different)

I think this makes sense because we have two different requirements here:

  1. The HKDF is used for key derivation from a (presumed) high-entropy “master secret” (apologies for the outdated terminology, this is how it’s generally named in the literature). The goal is to derive multiple keys (in this case one key per file, the file keys). It’s generally best cryptographic practice not to reuse keys for different purposes, to avoid unexpected leakage across unrelated parts of the design. The only requirement we have is that the KDF must be one-way: assume one “child” key (a file key in this case) leaks; we want to ensure that unrelated sibling or parent keys (the folderKey and/or the other file keys) are unaffected. Hence we use something that fulfills the requirements of a one-way function. In practice that generally boils down to HKDF: it’s cheap to compute (just 2 hashes in essence) and cryptographic hash functions are the practitioner’s one-way function of choice. Note that the file keys have the same assumed entropy as the “master secret”; there’s no attempt made to increase the “quality” of that secret.
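As a sketch, the HKDF extract-and-expand steps (per RFC 5869, with SHA-256; the folder key and info strings below are placeholders, not Syncthing’s actual labels) really are just a couple of HMAC invocations, and distinct info strings yield independent-looking per-file keys:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869 extract step: PRK = HMAC-Hash(salt, IKM).
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 expand step: iterate HMAC to stretch PRK to `length` bytes.
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

folder_key = b"f" * 32  # stand-in for a high-entropy master secret
prk = hkdf_extract(b"", folder_key)
key_a = hkdf_expand(prk, b"file:a.txt")  # hypothetical per-file context
key_b = hkdf_expand(prk, b"file:b.txt")
assert key_a != key_b  # distinct contexts yield distinct keys
```

Because SHA-256 is preimage-resistant, recovering `prk` or `folder_key` from a leaked `key_a` is infeasible, which is the one-way property described above.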

  2. scrypt, on the other hand, fulfills a different purpose: here the goal is to create the above high-entropy “master secret” from a (presumed) low-quality user-chosen character string. That means we have to somehow “create entropy” out of nothing. The theorists tell you that’s impossible, but the practitioner argues that slowing the attacker down has the same effect: by artificially slowing ourselves down, we also slow the attacker down, making brute force as hard as if we had a higher-quality secret from the start (at least in theory…). This is also generally treated as a once-per-session process: we’re not expected to recompute this value over and over again, unlike the file keys, of which there may be millions and which should be cheap to recompute.
