Troubleshooting the dreaded spurious "revert local changes"

Hi all:

I am using Ubuntu MATE 22.04.3, which comes with the oldish syncthing v1.18.0-ds1. But I am guessing my questions apply to newer Syncthing versions too.

I have a local Linux PC with an external NTFS USB disk configured as “send only”, and another remote Linux PC with a similar external NTFS USB disk configured as “receive only”.

A few months ago I sent many gigabytes over the Internet, which took a very long time. Now I am getting on the remote, receiving PC a prompt to revert local changes. The thing is, I haven’t changed anything locally there, and the PCs, Linux and Syncthing versions, and USB disks are the same. I searched the Internet, and it seems like I am not alone. However, the answers I found are not entirely satisfactory, so I am trying my luck here.

I am worried about confirming the “revert local changes” operation, for fear it may delete files or retransmit a lot of data. Elsewhere in the web interface, it said something about “local additions”, and not just general “changes”. The list of unsynchronised files on the remote, receiving PC just lists names and file sizes, suggesting it may transmit all the data again.

The sending side apparently also knows which files the receiving side wants to resynchronise: more than 22,000 of them, around half of the files. I picked the first filename and ran “stat” on both sides. The “last modification time” (mtime) is exactly the same. On the sending PC, there is no creation time, which puzzles me, as both systems are nearly identical. The “last change time” (ctime) and the “last access time” differ, but those should not matter, as far as I understand.
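For reference, this is roughly how I compared the metadata on both sides (the path is a stand-in for the real file, and I am assuming GNU stat here):

```shell
# Print the fields Syncthing actually compares: mtime (with sub-second
# precision), size and permission bits. Run on both PCs and diff the output.
f="/mnt/usb/some/synced/file.dat"   # placeholder path
stat --format='mtime=%.Y size=%s perms=%a' "$f"
```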

The permissions are different: (0777/-rwxrwxrwx) on the sending side and (0755/-rwxr-xr-x) on the receiving side. I also wonder why, as both setups are very similar. But this difference may be an issue; more on that below.

The file types, sizes and number of blocks are the same too.

The user and group names and IDs are different, but those shouldn’t matter either, if I understood it correctly. I haven’t enabled any funky options to synchronise such details.

Command “syncthing cli debug file” did not work, because I would have needed to enable debugging first, which I did not have time to do, and now I won’t have access to the remote computer for a while.

I did manage to enable option “ignore permissions” on both sides, but that did not have any visible effect at the time.

In the meantime, I wanted to learn how to troubleshoot this issue once and for all. I have read far more documentation and user posts than should really be necessary.

So let me get the following straight:

  1. There is no way to preview what Syncthing will do with a file it considers changed (an unsynchronised file). Or is there?

    Possibilities would be “I would retransmit the file contents”, “I would only adjust the file permissions”, “I would delete the file as it no longer exists on the sending side”, and maybe more.

  2. There is no “dry run” mode, so that you could see what Syncthing would actually do. Or am I wrong?

  3. There is no way to find out why Syncthing thinks that a file has changed, right? It may be the timestamp, the permissions, the size, the data hash, or whatever Syncthing comes up with.

    You cannot find the “unsynchronised” reason either in the web interface or with the API. You can dump JSON data, manually compare interesting values, and then guess. For each file, of course. Or am I wrong here?

  4. Option “ignore permissions” is only really important on the receiving side. Is that correct?

  5. Once files are marked as “unsynchronised”, changing option “ignore permissions” on either side does not trigger a recalculation of the “unsynchronised” file status, or does it?

    The reason I am asking is that it looks like only the file permissions on both sides are different, but enabling “ignore permissions” had no visible effect at the time.

    If Syncthing knows the metadata on both sides (does it?), then it should have flagged all files immediately as “synchronised” (or so I think).

  6. There is no way to trigger a “reset-deltas” or a “reset-database” operation from the web interface, right?
    I am asking because there is an occasional user at the remote side, but without enough IT skills to use a command console.

  7. There is no easy way to see a file’s metadata, like its last modification time, with the web interface. You have to resort to “stat” on the shell, “syncthing cli debug file” or the API. Correct?

  8. Did Syncthing really run out of ideas about making troubleshooting and fixing spurious “local changes” even harder, or should I hold on tight for yet more refined evilness next time around? Okay, that’s not really a serious question… O8-)

Thanks in advance,

If you’d like to use the latest version, there are official Syncthing packages for Debian/Ubuntu:

Quite a few questions to digest, so let’s cover this in sections… :smirk:

First, using a NTFS volume requires extra care for a few reasons including, but not limited to:

  • NTFS has a timestamp resolution of 100 nanoseconds, while most native Linux filesystems have 1 nanosecond. After rounding, this can sometimes result in timestamps that differ by up to a second.

    Rsync has its --modify-window option while Syncthing has its modTimeWindowS config setting.

  • This isn’t related to the problem you’ve described, but it’s still worth noting. The default NTFS driver usually comes from the “ntfs-3g” package. It’s a mature and well-supported driver. The only issue is that, depending on how many files and how much total data Syncthing is managing, the I/O overhead of FUSE needs to be accounted for. (Search this forum for earlier posts regarding combining USB + NTFS + Linux.)
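Two concrete knobs are worth knowing here; both are sketches, with device names and values as examples only. First, a permission mismatch like 0777 vs 0755 on NTFS under Linux is typically an artifact of ntfs-3g mount options rather than anything Syncthing did: without a umask or permission mapping, ntfs-3g presents everything as 0777.

```
# /etc/fstab sketch: umask=0022 makes entries appear as 0755 instead of
# ntfs-3g's default 0777 (UUID, uid and gid are placeholders)
UUID=XXXX-XXXX  /mnt/usb  ntfs-3g  defaults,uid=1000,gid=1000,umask=0022  0  0
```

Second, Syncthing’s counterpart to rsync’s --modify-window is the per-folder advanced option modTimeWindowS, which in config.xml looks roughly like:

```xml
<!-- inside the relevant <folder> element in config.xml (sketch) -->
<modTimeWindowS>2</modTimeWindowS>
```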

If you want to be certain before reverting changes, use rsync or another one-directional sync tool with a dry-run mode to compare the two drives.

Instead of comparing file size and block count, it’s more reliable to compare their checksums using a tool like md5sum.
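A minimal sketch of that comparison, assuming both drives are mounted on the same machine (paths are placeholders):

```shell
# Hash every file on each drive into a sorted manifest, then diff the
# manifests. Identical trees produce identical manifests.
(cd /mnt/usb-send && find . -type f -print0 | sort -z | xargs -0 md5sum) > /tmp/send.md5
(cd /mnt/usb-recv && find . -type f -print0 | sort -z | xargs -0 md5sum) > /tmp/recv.md5
diff /tmp/send.md5 /tmp/recv.md5 && echo "contents identical"
```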

Syncthing’s logging system is chock full of debugging options under Actions → Logs → Debugging Facilities.

No, unless it’s a hidden feature I’m not aware of.

After clicking on the link to open the out-of-sync panel, hovering a mouse pointer over a file displays the extended info in a pop-up tooltip. It often includes the specific error that is causing the problem.


Toggling “Ignore Permissions” just requires sending some meta-data, and if that meta-data is the only thing that’s causing the difference, the sync status will also be updated.

No. There’s a catch-22 because the database cannot be reset while it’s in use and Syncthing needs to be running to provide the web UI.

Although not entirely related, given your current setup, it’d be well worth looking into setting up SSH, or some other remote access option. Not just for this, but for easier future maintenance. (Remote management is a frequent topic on this forum.)


To be honest, if at all possible, avoid using NTFS volumes when sharing files between two Linux machines. It’s not that it doesn’t work, it’s that it requires more time and attention to set up so that it works well.

FWIW, timestamp precision differences should be handled without setting an explicit window. Lack of permission handling does require checking “ignore permissions” though, and the advanced syncing extras (ownership, xattrs) won’t work, of course.

This is of course always good advice. However, I would rather stay with Ubuntu’s version for the time being. It’s not just fear of new problems, or compatibility issues between different Syncthing versions, it’s also a general lack of time. Upgrading all involved systems takes time, and it’s one more thing to keep in mind when upgrading Ubuntu the next time around.

From reading the documentation, I think that this worry is unfounded, and this thought has also been confirmed by calmh (Jakob Borg) in this thread, so using modTimeWindowS shouldn’t be necessary. I haven’t had the chance to test it yet though (the problematic remote computer is not reachable at the moment).

That does not really help. First of all, using rsync is not easy. I would have to set up SSH access over the Internet, which is much more work than setting up Syncthing, and has security implications to consider. Besides, even if rsync or a similar tool behaves one way, that is still no guarantee that Syncthing will not behave differently. I would rather understand and solve all Syncthing problems within Syncthing.

Yes, that’s right of course. But again, it would be a lot of work, as I haven’t got SSH access over the Internet yet, and it is no guarantee that Syncthing will not decide to delete and/or retransmit the data anyway, no matter what md5sum does.

But that wouldn’t help, for I would hate to see log messages about Syncthing having deleted local files that it shouldn’t have. Or is there a way to make Syncthing dump its possibly evil intentions to the log before actually pressing the red “Revert Local Changes” button?

OK, thanks for confirming this.

Unfortunately, that does not seem to apply to my scenario. On the receiving system, I am not getting an “Out of Sync Items” link to click upon, but a “Locally Changed Items” link. I am guessing this is because the folder is configured as “receive only”.

In the “Locally Changed Items” window, there is no hovering tooltip, or any other indication of why a file is considered to have changed locally.

Or was such a tooltip introduced in a later Syncthing version? I searched the web for screenshots, but that “locally changed” item list apparently looks the same across all Syncthing versions.

OK, thanks for the confirmation.

This did not seem to be the case when I was playing with the remote system. But maybe I did not pay enough attention. Do you expect the change to be immediate? Or at least to trigger the metadata resend operation immediately?

I do not believe that half of the files had actually locally-modified contents. If so, the automatic rescanning on start-up would have taken a long time. Besides, I am pretty sure that the remote USB disk was not in use in the meantime.

There is also the question of why Syncthing would have to re-synchronise the metadata if you just enable the “ignore permissions” setting on the receiving side.

First of all, thanks for the confirmation that such an option does not exist (I haven’t seen the latest Syncthing version yet).

I do not think that providing such reset operations would be an insurmountable catch-22 situation, though. After all, the web interface even has an option to completely restart Syncthing. The web interface can apparently automatically reconnect even if Syncthing gets abruptly killed and restarts later on.

That would be far easier if there were an SSH solution as easy to use as Syncthing. Or at least I haven’t come across anything like that yet.

In fact, it is tempting to suggest using the same Syncthing infrastructure for such purposes. Can you imagine? Syncthing already has automatic peer discovery across the Internet and public server data relaying as fallback. Syncthing could automatically set up a TCP tunnel in order to run SSH through it, or at least tell SSH where the peer is (IP address and TCP port number). Like you mentioned, you could even envision some sort of remote access to the peer Syncthing, or even a complete remote desktop, without having to create and distribute SSH keys, configuring DynDNS, etc. Nice dreams, aren’t they?

I think this is a serious shortcoming, especially because so many users seem to be troubleshooting similar spurious “locally modified” problems. I hope Syncthing gets this feature soon.

You are right again, but you know, I am already very happy that the other system is not running Windows this time… :wink:

I can certainly relate. My primary desktop machine is also running Ubuntu (22.04 LTS), but didn’t switch from Ubuntu’s package to Syncthing’s until about a year later.

I use the LTS edition because of the longer span between major upgrades, but the downside is that Syncthing 1.18.0 will be it for the lifespan of the Ubuntu release. So if I stick with 22.04 LTS until support ends, it’ll be thru 2027, and Syncthing 1.18.0 will be nearly 6 years old.

But the main reason I finally switched was because I’d read about things on the forum, and it was simpler to use a version closer to what everyone is referring to.

I generally agree with what @calmh said, plus Syncthing makes a great effort at avoiding transferring chunks unnecessarily.

To add some context… as a sysadmin managing a lot of data across multiple data centers, I’ve transferred over 100 million files using rsync to a variety of devices and filesystems. So far, FAT and NTFS have been the only ones where time resolution was an issue (resulting in re-transfers of files that had matching checksums). It doesn’t happen on a regular basis, but I’ve seen it happen often enough to be on the lookout for it.

I was actually thinking you’d use rsync locally since it’s a pair of USB drives. It would be faster and less complicated compared to doing it over a network connection.

The reason for suggesting rsync was because you mentioned wanting to know if the drives were truly out-of-sync. So it’s analogous to getting a “second opinion”.

Doing rsync --dry-run, and optionally rsync --checksum, would provide at least one answer you’re seeking.
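Concretely, something along these lines, with both USB drives plugged into the same machine (mount points are placeholders; a sketch, not a recipe):

```shell
# -n = dry run (changes nothing), -i = itemize why each file differs,
# -r -t = recurse and compare mtimes; --modify-window=1 absorbs
# NTFS/ext4 timestamp-resolution rounding.
rsync -rtni --modify-window=1 /mnt/usb-send/ /mnt/usb-recv/
# add --checksum to compare file contents instead of size+mtime (slower)
```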

No, not in the way you described. Syncthing REST API does offer ways to query for info about each file it’s tracking, so perhaps it might fill out some of what you’re looking for.
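For example, the REST endpoint GET /rest/db/file returns Syncthing’s stored global and local metadata for a single file. A sketch, in which the folder ID, file path and API key are placeholders:

```shell
# Query Syncthing's database view of one file; needs a running instance
# and its API key (Actions -> Settings -> API Key).
APIKEY="xxxxxxxx"   # placeholder
curl -s -H "X-API-Key: $APIKEY" \
  "http://127.0.0.1:8384/rest/db/file?folder=default&file=some/file.dat"
```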

Possibly, but I haven’t looked closely enough at the changelog to be certain.

If the file watcher is enabled for a Syncthing folder, by default there’s a 10-second countdown (user configurable) after Syncthing is notified by the OS. Syncthing does that so it can bunch up multiple changes that might occur in rapid succession. Once the countdown ends, Syncthing rescans the folder to determine what changed relative to its stored view in its database.
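That countdown is the per-folder fsWatcherDelayS setting; in config.xml it looks roughly like this (the values shown are the defaults, as far as I know):

```xml
<!-- inside the <folder> element in config.xml (sketch) -->
<fsWatcherEnabled>true</fsWatcherEnabled>
<fsWatcherDelayS>10</fsWatcherDelayS>
```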

Metadata that was sent by the upstream device is normally already available from Syncthing’s local database, so any additional exchange should be minimal.

On Linux, macOS and Windows, permissions are synced by default. On Android, “Ignore Permissions” is on by default because most Android phones and tablets use FAT or one of the specialized filesystems that don’t have file permissions.

Explaining it is more complicated than how it works. :slightly_smiling_face:

When Syncthing starts, it spawns a child process. One of the tasks for the child process is to act as a mini web server.

When a web browser is pointed at the URL, Syncthing’s web server returns a bundle of HTML + CSS + JavaScript that’s rendered by the web browser as Syncthing’s web UI.

The web UI – essentially a program running inside the web browser (this part is important to keep in mind) – sends commands via the web browser to Syncthing’s web server where Syncthing’s REST API responds accordingly.

Because Syncthing’s web UI is a web application running separately (in a web browser) from the Syncthing server, it can send a restart signal and then keep polling for a response.

However, if you were to shut down the Syncthing server with a pkill -9 syncthing or something else suitable, the Syncthing web UI will eventually time out with an error message. Wait longer, and the web browser will eventually clear the web page and present its own error message. But if you restart Syncthing soon enough, the web UI will automatically reconnect – as long as it or the web browser hasn’t timed out yet.


Since Syncthing’s web UI runs inside a sandbox in a web browser, security measures prevent it from interacting directly with the host OS – i.e., it cannot simply run /usr/bin/syncthing because it cannot directly see/access the host filesystem.

Because Syncthing needs to shut down before its database can be reset (it’s an offline operation), and Syncthing’s web UI cannot issue the command syncthing --reset-database (for the reasons described above), somebody else needs to do it. The pair of parent and child Syncthing processes cannot do it because they need to shut down first (with the current design).

Now, it doesn’t mean that it’s impossible. There are all kinds of options including forking an instance of Syncthing that detaches from the parent (sounds easy, but there are caveats); setting up a scheduled task; redesigning Syncthing so that it releases its database on-demand via its REST API, etc. But for certain, the web UI cannot do it.
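So today the reset has to happen outside the web UI, e.g. something like this from a shell or a scheduled task (a sketch: it assumes a systemd user service, and the exact flag spelling may vary between Syncthing versions):

```shell
# Stop Syncthing, reset its database offline, start it again.
# On the next start it rescans everything from scratch.
systemctl --user stop syncthing.service
syncthing --reset-database
systemctl --user start syncthing.service
```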

Of course it still means that there will be instances where a user – even with copious warnings – clicks that [Reset Database] button and then wonders why their system suddenly slows down or some other problems occur. :crazy_face:

There’s a seemingly endless number of choices, but if ease-of-use is the top priority, these three make Syncthing seem difficult to use in comparison: :wink:

AnyDesk and TeamViewer come in portable editions with a single executable. No installation required. Just run and enter an access code.

A very interesting idea, but I suspect it’s way out of scope for Syncthing. :slightly_smiling_face:

However, what you’re describing has already been done. In alphabetical order:

Those aren’t the only options available, and I’m certain that I missed someone’s favorite mesh VPN. Personally, I’m a fan of Tailscale – linking two or more devices is easier than even in Syncthing.

There are quite a few users on this forum who tunnel Syncthing thru a point-to-point or mesh VPN if you need more recommendations, how-tos and/or tips.

While I could also potentially find it useful, having done my share of UI development I know that some of the seemingly simple things can be a tall order.

Right off the bat, I can think of at least one. With small numbers of files, displaying metadata would be fine, but some users have thousands or millions of files being synced. Web browsers just weren’t designed to act as file managers.

For the past few months I’ve been involved in weekly software developer meetings. There’s the programming aspect, but even just as demanding is the UI design aspect. No design choice pleases everyone.

:grin: … I couldn’t agree more.

At least Windows has resulted in millions of jobs over the past ~40 years for me and other IT staff. :moneybag: :money_mouth_face:

I just downloaded and ran the latest version of Syncthing on Windows (v1.25.0), added a receive-only folder, and let it receive a couple of test files. I then updated the file timestamps with Cygwin’s ‘touch’, rescanned, and Syncthing promptly showed the red “Revert Local Changes” button, with the scary warning “files newly added here will be deleted”.
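For anyone wanting to reproduce this, the steps boil down to something like the following (the path, folder ID and API key are placeholders):

```shell
cd /path/to/receive-only-folder        # placeholder path
touch file1.txt file2.txt              # bump mtimes only; contents untouched
# force a rescan instead of waiting for the periodic one:
curl -s -X POST -H "X-API-Key: $APIKEY" \
  "http://127.0.0.1:8384/rest/db/scan?folder=default"
```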

Syncthing even displayed the misleading notice “Local Additions” on the folder status title, suggesting it will probably consider all your precious files “new” and therefore candidates for deletion when you press the red button.

I know the situation from experience, but I can imagine the same happening to many other unwary users around the world: after transferring gazillions of gigabytes for days to finally build your one and only reliable data backup in a remote and safe location, you can feel the cold shower running down your spine as you look at that red button. That’s truly evil. I love it!

And it is a really refined approach. If you open the “Locally Changed Items” list, it shows all files with their full data size, stating “The following items were changed locally”, but not saying exactly what changed. And in this thread, we have determined that there is no easy way to find out what Syncthing thinks and what will really happen when you press that button. Every approach even an advanced user could think of is carefully blocked, very hard to use, and/or not really trustworthy.

To top it all, the sending Syncthing, an older v1.18.0-ds1, suggests it is already “Syncing” your files, but it is not really doing so, for it stays at 0%.

It’s just perfect. I see I still have much to learn… I want to join the Syncthing team now! Please accept me!!!

A long long time ago, syncthing had only one folder type, send and receive, and that was it. This was because syncthing is not a backup application, it makes two locations look the same. If you accidentally deleted files, tough luck, they are gone everywhere.

A lot of people who can’t use (for whatever reason) made-for-purpose backup software complained hard. We kept telling them that this is the wrong application for backups, it’s not a backup application, but the complainers came in large masses and wasted tremendous amounts of our time, having to justify to them why it’s not an application for backups. It wasted such incredible amounts of our time that we eventually caved in and added new half-arsed folder types just to shut them up, with many caveats. The caveats were even worse early on (folders looked like they were permanently out of sync forever, etc.) and got slightly better over time, but the caveats will always stay there, as they are fundamental to how syncthing works (file equality is not based on content alone, it’s based on content, metadata and lineage).

So yeah, we caved in and ended up with a 10x more complicated piece of software because of some features we didn’t want, but the people who can’t run backup software for backups stayed, and just moved on to complaining about the next thing.

So yeah, welcome to the team.

And remember, as part of the team, we didn’t want to add this feature, and we run backup software for backups.

The worry about losing files or having to retransmit large amounts of data is not only valid in a backup situation, but also in a plain mirror situation, or in any other synchronisation scenario actually.

The “tough luck if you accidentally deleted files” is not a reasonable position to defend. There is no reason why Syncthing cannot tell you what files it thinks have changed, before transmitting, receiving, adjusting permissions or deleting files, with any kind of folder sharing. And it should tell you about the nature of the change, per file, should you wish to inspect it. And there is certainly no reason to misrepresent the local changes, like I describe above. Again, even if you have a 2-way sync tool, there is no reason to be unforgiving, especially if the user already suspects something is not quite right.

There is also no reason why Syncthing should not show you the file metadata it holds, easily accessible on the web interface. That would probably benefit everybody, if you need to troubleshoot something. For example, you may want to catch the odd piece of unruly software modifying a file’s contents but reverting its last modification time.

You can do lots of damage with a proper backup tool, and that is why they tend to have built-in features like a dry run or data verification, which Syncthing could also benefit from.

I would personally be more than happy to create a project that grows so that thousands of people find it useful in ways not intended in the beginning. But if you really think that Syncthing should be used only as a 2-way sync tool, you should display a prominent warning in the documentation and in the user interface next to options like “receive only”, mentioning the current and concrete limitations which make it unsuitable for any other scenario.

Syncthing strikes a good balance between features and ease of use. That is why many people try to use it for “non ideal” scenarios. There is probably no good reason why a tool like Syncthing cannot be used for backup purposes too, if you use it correctly and understand its limitations. Like I said, a real backup tool can only protect you from some mistakes, you can still shoot yourself in the foot with it too.

I can understand that, if you just want to have an unforgiving but simple-to-maintain 2-way sync tool, the constant complaining is annoying. I think the best way out of this is to clearly state Syncthing’s limitations, down to the details. The general, vague reasons given in the FAQ under “Is Syncthing my ideal backup application?” are not really enough. Other than that imprecise warning, the general tone is “Syncthing is great otherwise”.

I would add a “Known Problems / Caveats / Limitations” section describing the shortcomings that the user should be aware of. Such an accurate and honest communication strategy has a better chance of cutting down the unwanted feedback. I would start like this:


You should not use Syncthing in “receive only” mode, because it will not let you determine the nature of any eventual local-only changes (file contents, timestamp only, permissions only, …), or how much data would be deleted or retransmitted the next time around (before you actually start the synchronisation), so you may end up inadvertently deleting and/or retransmitting large amounts of data.

For backup purposes, you should use tools XXX, YYY, because a) you can usually recover a file deleted or changed inadvertently, and b) you can verify the data integrity of your backup database.

For mirroring purposes, you should use tools XXX or YYY instead, because of XXX. (rsync has serious limitations too, so I wouldn’t recommend it either; you will probably be hard pressed to suggest anything better than Syncthing even for mirroring purposes.)

Although Syncthing has a database of file data checksums, it lacks the ability to verify data integrity. That is, there is no way to re-read all files and verify that their contents haven’t changed for unintended reasons, like a virus or a hardware defect, when according to their “last modified” timestamps they shouldn’t have.

If you are worried that your disks are not completely reliable (who isn’t?), you should use other means, like ZFS or btrfs scrubbing, or some other file-hashing tool. We understand that such filesystems have issues (there does not seem to be anything mature and/or easy to use yet), and that a separate hashing tool has performance implications (re-reading all data to calculate/verify the same kind of checksums), but the Syncthing project is not willing to implement that feature because the code is getting too complex already / it does not fit the project direction / it is perhaps already planned / it is waiting for volunteers / whatever.
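(As an aside, one can approximate the missing verification feature externally with a checksum manifest. A sketch, with the path as an example:)

```shell
# Build a manifest of content hashes once, store it off the drive...
cd /mnt/usb-recv
find . -type f -print0 | sort -z | xargs -0 sha256sum > ~/manifest.sha256
# ...and later re-read everything, flagging any file whose contents changed:
sha256sum --check --quiet ~/manifest.sha256
```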

… more caveats here …


If I had seen such caveats beforehand, I wouldn’t have posted here. It’s not like I can read all forum posts and GitHub issues before trying to use a new tool! I really thought the issues I had were related to my system, my configuration or my lack of skills with Syncthing.

Best regards, rdiez

It’s not really about whether Syncthing “should” do this, as everyone will likely agree that more detailed information would be nice to have, but at the same time implementing all this requires a lot of work which someone has to do in their free time at no cost. This is true in this case and with many other potentially useful feature requests as well.

There is no reason to discuss hypothetical things about what syncthing “should” do in your opinion. It’s only a “should” because you imagine it to be something it is not.

I’ll reiterate what I already said:

This was because syncthing is not a backup application

I think after hitting some unexpected behaviours, I would have gone and read the FAQ, the docs etc, and in the FAQ I would have found:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.