Does the destination stay encrypted (encryption at rest, like Duplicity) on an untrusted destination server?

Hello,

I’m using ST for various devices and it’s great.

I currently want to back up 1 TB to an untrusted destination server (since I don’t have access to the hardware, I only trust it 99%, not 100% ;)).

Is there a mode in which the data stays encrypted on the destination and is never decrypted there, like with Duplicity?

If so, how do I enable this feature in the ST settings on the destination server?

I read:

but it does not really answer this question.

All the best

That feature is in development and exists for testing in the latest release candidate, which is described in the first thread you linked to.

It is not production ready. I suggest you not use it, except for experimenting with expendable data that you don’t mind being potentially lost, destroyed, or leaked.
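If you do decide to experiment, the rough shape of the setup (as of the current release candidate; details may still change, and the IDs and paths below are placeholders) is: set an encryption password when sharing the folder with the untrusted device, and accept the folder on that device as “receive encrypted”. In config.xml terms it looks something like this:

    <!-- On the trusted source: share the folder with a password for that device -->
    <folder id="backup" path="/home/user/data" type="sendreceive">
        <device id="UNTRUSTED-DEVICE-ID" introducedBy="">
            <encryptionPassword>correct-horse-battery-staple</encryptionPassword>
        </device>
    </folder>

    <!-- On the untrusted destination: file names and data are stored encrypted -->
    <folder id="backup" path="/srv/backup/data" type="receiveencrypted">
        <device id="TRUSTED-DEVICE-ID" introducedBy=""></device>
    </folder>

The password never leaves the trusted side; the destination only ever sees encrypted names and block data.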

Thank you @calmh for your answer.

I’ll give it a try with data that is already replicated in many places, so I don’t mind losing it.

Last question: let’s say I have 1 TB in 500k files that has been successfully synchronized with encryption at rest:

  source  (1 TB) ------->  destination (encrypted at rest) (1 TB)

Then I modify a few files on the source, wait a few days, and start a sync again.

Does it require downloading all the metadata containing the list of files/chunks present on the destination?

If not, then the local computer doesn’t know what is already on the destination, is that right?

How big do you estimate this metadata database would be for 1 TB / 500k files?

I guess at least 100 bytes per file, i.e. 500k × 100 ≈ 50 MB?
So ~50 MB has to be downloaded from the destination to the local computer before each new sync, so that the local computer knows what is already on the destination?

It’s more than 100 bytes per file, but yes, every time something changes, metadata (and potentially data) needs to be transferred.
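As a back-of-envelope check, here is the same arithmetic in Go with a somewhat larger per-file figure (the 200 bytes below is an assumption for illustration, since the only hard statement is “more than 100 bytes”):

    package main

    import "fmt"

    func main() {
        const files = 500_000
        // Assumed average index-metadata cost per file; the real figure
        // depends on path lengths, block counts, version vectors, etc.
        const bytesPerFile = 200
        fmt.Printf("~%.0f MB of metadata\n", float64(files*bytesPerFile)/1e6)
        // Output: ~100 MB of metadata
    }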

In the experimental encryption at rest mode, if a sync has already been done, does starting a re-sync require transferring the whole list of chunks/files already present on the destination from the destination back to the source?

i.e. on each re-sync of 500k files, do more than 50 MB have to be transferred from destination to source before syncing starts?

Or does the local computer keep a database of the files/chunks already present on the destination?

There is no “start sync” or “re-sync”; it’s a continuous process. Only the files that change are synced, and only their metadata is retransferred.
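To illustrate the idea with a simplified sketch (not Syncthing’s actual code): each device remembers, per folder, the highest sequence number it has received from a peer, and on reconnect the peer only sends index entries above that sequence, so unchanged files are never re-listed:

    package main

    import "fmt"

    // indexEntry is a simplified stand-in for one file's metadata record.
    type indexEntry struct {
        name     string
        sequence int64 // bumped every time the file's metadata changes
    }

    // entriesToSend returns only the entries the peer has not seen yet.
    func entriesToSend(all []indexEntry, lastSeenSeq int64) []indexEntry {
        var out []indexEntry
        for _, e := range all {
            if e.sequence > lastSeenSeq {
                out = append(out, e)
            }
        }
        return out
    }

    func main() {
        all := []indexEntry{{"a.txt", 1}, {"b.txt", 2}, {"c.txt", 7}}
        // The peer has already seen everything up to sequence 2,
        // so only the changed file is retransmitted.
        fmt.Println(entriesToSend(all, 2)) // [{c.txt 7}]
    }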

Thanks. Does this mean the local computer keeps a database of the chunks/files/SHA hashes present on a remote? In which directory/file/database is it stored?

(So that when starting syncthing.exe after a computer reboot, it does not need to re-download the whole “file list” from the remote to know which files are on the remote and which aren’t.)

Yes, in the index database beside the configuration etc.
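On Linux that is typically a LevelDB key-value store next to the config, e.g. ~/.config/syncthing/index-v0.14.0.db (the exact path and file name depend on platform and version, so check your own install). With Syncthing stopped (it holds a lock on the database while running), you can poke at it, for example:

    package main

    import (
        "fmt"
        "log"

        "github.com/syndtr/goleveldb/leveldb"
    )

    func main() {
        // Example path for Linux; adjust for your own installation.
        db, err := leveldb.OpenFile("/home/user/.config/syncthing/index-v0.14.0.db", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Count raw keys, just to show it is an ordinary key-value store.
        it := db.NewIterator(nil, nil)
        n := 0
        for it.Next() {
            n++
        }
        it.Release()
        fmt.Println("raw index keys:", n)
    }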