Untrusted machine errors… UI issue? Maybe, partially.

Several times I’ve seen the dreaded “95%, 0B” issue arise, and I’ve tried several fixes.

It turns out that deleting and re-creating the folder on the untrusted machine causes a re-transfer of all files, which is a big deal in my slow-connection/250GB environment. Also, as I discovered previously, trying to restart with --reset-database on the untrusted machine causes the same issue. Every file/folder shows up as a local change.

I’ve seen this dismissed as a UI issue. Yes, it is at least that. But if I’m seeing a bunch of 0B files on the untrusted machine, I have zero confidence that I can successfully decrypt/restore my files from it if a catastrophe occurs and I actually need to. Maybe it’s okay. Maybe it’s not. Either way, it makes me glad I also have a restic repo.

I can understand that the untrusted machine cannot verify the hash of each file the way the trusted machines do–but it does seem as if it should be storing a hash of the encrypted files and checking that before deciding to classify files/folders as local changes. I can see an issue with this approach, since re-creating the folder or resetting the database would obviously lose the hashes…but in that case there really needs to be a user-friendly fix for the 0B issue.

Restarting with --reset-deltas is an interesting alternative. It worked just now, but only when I ran it on the untrusted machine, not on any of the others. I don’t know whether that’s reliable, and in any case it’s not something I want to have to run every day (every hour?) just to reassure myself that the database is okay.

Trying to revert local files does nothing at all on the untrusted machine once the “0B” problem appears.

I really think this should be addressed, so users (like me!) unfamiliar with the codebase don’t see UI elements implying a corrupted database and worry about, among other things, a later decryption creating potentially thousands of 0B files to be sorted through. Even if cleaning those up wouldn’t be hard, what if the 0B files ended up overwriting files with actual content? That would not be good.

In addition, I posted separately that the only way I’ve found to safely share an untrusted folder with several trusted machines is to create the folder on the untrusted machine first, before sending/accepting share requests. Without that safeguard, I have seen encrypted files showing up on the trusted machines and clear-text files showing up on the untrusted machine. I don’t know if this is relevant to the other issue, but I’m mentioning it because (1) it might be, and (2) I very much agree that the untrusted device feature is not ready to move out of beta.

All that said, I do really like the app overall. I just can’t trust the untrusted/encrypted folder to actually work in the event of an emergency, unless I happen to have recently resolved the 0B files issue.

v1.29.2 on all machines

I’m still digesting what the situation is, but on a somewhat related note, you mentioned “catastrophe” and “emergency”, so is Syncthing being used as a backup tool?

Since you’re also using Restic (a great backup tool), instead of Syncthing’s untrusted device feature, an alternative solution is to use Syncthing to mirror your Restic repo (which I assume is encrypted) to your untrusted devices. Any trusted devices can decrypt the Restic repo, so there’s no loss of access.

(If a trusted device is running Linux, FreeBSD, or macOS, restic’s FUSE-based “restic mount” can be used to get transparent, read-only access to snapshot contents.)
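For reference, a typical invocation looks something like this (repo path and mount point are placeholders, not from the thread):

```shell
# Mount a restic repo read-only via FUSE; snapshots appear as a browsable tree.
# Repo path and mount point below are placeholders -- adjust for your setup.
mkdir -p /mnt/restic
restic -r /srv/backups/restic-repo mount /mnt/restic

# In another terminal, snapshots are then browsable like regular directories:
#   ls /mnt/restic/snapshots/latest/
# Unmount when done:
#   fusermount -u /mnt/restic
```

The mount stays in the foreground until unmounted, so it’s handy to run it in a spare terminal or tmux pane.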

Sure, in a sense any sync app is also a backup–thus the use of, for example, Dropbox. I have many times restored files to new computers or previously-broken systems via Dropbox. (Which I no longer use for anything.)

Yes, restic is better/faster for that purpose, but it also needs to be run on a schedule and has the potential to lose work that happened in the interim. Though shouldn’t I be pushing my repos to GitHub more often? Heh. Probably.

I am also using the untrusted device feature, just because I want to figure out how to make it truly work. I like Syncthing. And yeah, I like your idea of syncing the repo–but I’m already storing it with two S3 providers (suspenders AND pants!). Though I think the “eleven nines” durability stuff is mostly nonsense, since the business relationship doesn’t come anywhere close to that level of reliability–so reliability at that level is the primary problem to solve, imo.

I very much appreciate your time and help, but I have the non-Syncthing issues ironed out. I just wish I could rely on the untrusted device/folder feature within Syncthing. And having to run --reset-deltas just to get rid of possibly-meaningless error messages, and having to run it specifically on the untrusted device, seems problematic (again, speaking as a guy wholly unfamiliar with the codebase). I kind of cringe to think of what might happen were I to include more than one untrusted device/folder within a web of trusted machines.

Yes, there is the assumption though that with a new/broken computer the restoration is a deliberate process…

… I recently experienced an incident where a co-worker checked out a years-old version of an important script from our repo and replaced what they thought was an older version in a Syncthing folder – within 15 seconds the “update” rippled across all of the Syncthing peers before they realized what had happened (fortunately we also have a backup system, so it was a quick restore).

So it’s a double-edged sword – Restic on a schedule risks missing interim work, but Syncthing / Dropbox / Box Sync / Google Drive Sync / OneDrive / … alone (i.e. without versioning) risks losing all the work. :smirk:

If you’re using Linux, incron paired with Restic can be a viable alternative to running strictly on a schedule. Use Restic’s --skip-if-unchanged so that a snapshot is only created when there are actual changes to save. (There might be a similar solution for Windows.)
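A minimal sketch of that setup – all paths, filenames, and the watch directory here are made up for illustration:

```shell
#!/bin/sh
# /usr/local/bin/restic-snap.sh -- hypothetical wrapper that incron fires on
# filesystem events. Install the watch with `incrontab -e`, e.g. (one line):
#   /home/me/work IN_CLOSE_WRITE,IN_MOVED_TO /usr/local/bin/restic-snap.sh
export RESTIC_REPOSITORY=/srv/backups/restic-repo   # placeholder repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass      # placeholder password file
# --skip-if-unchanged (restic >= 0.17) skips the snapshot when nothing changed.
exec restic backup --quiet --skip-if-unchanged /home/me/work
```

One caveat worth knowing: incron watches are not recursive, so a deep directory tree needs one entry per subdirectory (or a recursive watcher like inotifywait -r instead).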

Hey man. Real people use Windows! Or so I’ve heard…personally I have a couple of Windows VMs lying around, because of a couple of apps I periodically need (mostly Dragon for speech recognition). But they get no network access; only shared folders via VirtualBox. Don’t get me started on Windows. I won’t stop.

I liked your story about your coworker. Sounds like something I’d do! Luckily there’s nobody in a position to do that in my network besides me…

incron is interesting–thanks for pointing it out. I used to roll my own inotify/rsync system sometimes; it seems reasonable to do something similar with restic. Have to think about that one.
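The old inotify/rsync pattern translates fairly directly; a rough sketch with inotifywait (from inotify-tools), where the watch directory and repo are placeholders and the sleep coalesces bursts of events into one snapshot:

```shell
#!/bin/sh
# Hypothetical event-driven restic runner using inotifywait instead of cron.
WATCH_DIR=/home/me/work                             # placeholder
export RESTIC_REPOSITORY=/srv/backups/restic-repo   # placeholder

# inotifywait blocks until an event fires, then the loop body runs once.
while inotifywait -r -e close_write,move,create,delete "$WATCH_DIR"; do
    sleep 30   # let a burst of changes settle before snapshotting
    restic backup --skip-if-unchanged "$WATCH_DIR"
done
```

Unlike incron, `inotifywait -r` handles the whole tree recursively, at the cost of keeping a process running.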

You know, the fix for this is so simple it bugs the hell out of me that it hasn’t happened yet: send hashes of the encrypted files to the trusted sync partners. Upon a --reset-database, get the list of hashes from a trusted machine/folder, rescan the encrypted data, and proceed from there. OR let --reset-deltas work properly, trusting the trusted folders to be correct, so the untrusted side never develops this 95% problem in the first place.

I just went through a nightmare of an issue where I had run tests on my code, then did a git commit, but in the interim INCORRECT files were written between the passing tests and my commit. So git said there was no problem, I had changed no code, everything was fine–but I pushed bad code to production invisibly. It’s really, really not okay that this can happen so easily when it’s so preventable.
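Speaking as another outsider to the codebase: the check being proposed here can be sketched in plain shell, assuming some out-of-band manifest of encrypted-file hashes exists. The function names and paths are invented for illustration – this is not how Syncthing stores anything internally:

```shell
# record_hashes DIR MANIFEST
#   Hash every file under DIR (sorted for a stable manifest) into MANIFEST.
#   Conceptually what the trusted side would record per encrypted file.
record_hashes() {
    ( cd "$1" && find . -type f -print0 | sort -z | xargs -0 -r sha256sum ) > "$2"
}

# verify_hashes DIR MANIFEST
#   Re-hash DIR and compare against MANIFEST; exits nonzero on any mismatch.
#   Conceptually what the untrusted side could do after --reset-database,
#   before classifying anything as a local change.
#   MANIFEST must be an absolute path, since we cd into DIR first.
verify_hashes() {
    ( cd "$1" && sha256sum --check --quiet "$2" )
}
```

A clean `verify_hashes` run would mean the on-disk encrypted data still matches what the trusted side expects, so nothing needs to be re-transferred.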