Lots of “pull: no available source device”

The fact that it has no available source is not really an issue. The device went away while we were in the middle of something; syncing will resume next time the device is around.

Well, that’s exactly what it doesn’t do.

If you have a lot of files, it might take a while to start, as the devices have to exchange indexes first. Also, after it stops because no source is available, it may take a minute before it starts up again.

Looking at your logs, though, I don’t see any disconnection messages, which is strange.

I also get these:

[GYAUK] 16:01:22 INFO: Puller (folder “Things”, dir “Build/Source/Urho3D”): delete: remove /home/frode/Sync/Build/Source/Urho3D: directory not empty
[GYAUK] 16:01:22 INFO: Puller (folder “Things”, dir “Build/Source”): delete: remove /home/frode/Sync/Build/Source: directory not empty
[GYAUK] 16:01:22 WARNING: Folder “Things” isn’t making progress - check logs for possible root cause. Pausing puller for 1m0s.

Could there be a connection?

Check what’s left over in the directory (I am keen to know), but no, I don’t see how these two are connected.
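A quick way to see what is left behind is to list the directory including hidden entries, since dotfiles are a common culprit that a plain listing misses. A minimal Python sketch; the path is the one from the log above, so adjust it for your own setup:

```python
import os

def leftovers(path):
    """Return everything, including hidden dotfiles, still inside a
    directory that Syncthing reported as 'directory not empty'."""
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

# Path taken from the log above; adjust for your own setup.
print(leftovers("/home/frode/Sync/Build/Source/Urho3D"))
```

Hidden files (editor swap files, .DS_Store and the like) often explain why a delete fails with “directory not empty”.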

I have had similar messages, blocking further syncing of the directory.

In my case it occurred after deleting a non-empty folder on the Windows side (a perfectly normal operation).

The Linux side didn’t like this.

I worked around this by manually removing the directory on the Linux side (rm -r xxx).

I’m not sure which version of Syncthing I was on at the time, but it looks like a bug to me.

I am seeing these INFO messages after removing files from another node. They refer to files that no longer exist, which in turn triggers the “Folder isn’t making progress” warning, so I assume its index is temporarily out of sync with reality?

The warning is a bit confusing. (seen in 0.11.2)

Can you raise an issue for this on GitHub?

I had this behavior too, with my Windows PC “one-way-syncing” to my ReadyNAS (read-only on the NAS, both running Syncthing 0.11.3, both connected via Gbit LAN). I think this happens when the files on the Windows PC that Syncthing tries to sync are held open by another application. In my case it could not sync files from my Firefox profile while Firefox was open. As soon as I closed Firefox, almost all of the files were synced (unfortunately not all of them, but maybe there were still open file handles from other applications?).

Hope this helps to nail down the issue.

Best regards, BartManson

Firefox uses memory-mapped files, which don’t get their mtimes updated on writes, and that prevents Syncthing from working correctly. You can try ignoring them, as they are usually temporary and not critical. You can also search the issue tracker and forums; I am sure there is plenty of discussion about it.
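For example, entries along these lines in the folder’s .stignore would skip SQLite’s write-ahead-log companion files, which are rewritten constantly while Firefox is running. The exact file names worth ignoring depend on your profile; these patterns are just an illustration:

```
// Illustrative .stignore entries; unanchored patterns match anywhere in the folder
*.sqlite-wal
*.sqlite-shm
```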

Plus, databases are usually not very syncable to start with as they change many times per second.

I’m not sure if this is related, but I was testing how well Syncthing handles larger files (20 GB). I configured just two nodes, one set up as a master. After Syncthing had been running for a few minutes (I assume it was calculating a checksum for the file) it started syncing. While syncing, I made a change to the file on the master node, and then the “no available source device” messages started appearing. I noticed that the master node appeared to be calculating another checksum (based on the amount of CPU being used). While this was happening, the “no available source device” message appeared a few times. Once the checksum calculation was complete, the sync continued.

I don’t know for sure if these observations are correct, but this pattern seemed to happen fairly consistently.

This is expected.

Yes, we ‘pull’ blocks of files from other nodes. However, if a file changes while it is being pulled, we can no longer find the required blocks on the source. Once Syncthing has finished calculating the new checksums, the pull process updates itself and fetches only the changed blocks (reusing the old blocks where possible).
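This isn’t Syncthing’s actual code, but the block-reuse idea can be sketched as follows. The block size and hash function here are assumptions for illustration only:

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # fixed-size blocks; the real block size is an implementation detail

def block_hashes(data: bytes):
    """Hash every fixed-size block of a file's contents."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def blocks_to_fetch(old: bytes, new: bytes):
    """Indices of blocks that differ and must be pulled from a source;
    identical blocks can be reused from the old local copy."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

# Change one byte in the middle of a four-block file: only block 1 must be fetched.
old = bytes(4 * BLOCK_SIZE)
new = bytearray(old)
new[BLOCK_SIZE + 1] = 0xFF
print(blocks_to_fetch(old, bytes(new)))  # [1]
```

If the source rewrites the file mid-pull, the block hashes the puller is asking for no longer exist anywhere, which is exactly the “no available source device” situation until the source finishes rehashing and announces the new blocks.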

I have the same issue in a setup of two connected machines. I think the following happened: a .zip file was locked for extraction, then closed and deleted. Syncthing wanted to sync the file after it was closed (no longer locked) and informed the other machine; then the file was deleted on the source machine. The target machine still requests the file even though it was deleted.

After a restart and rescan, it keeps saying Folder "TEST" isn't making progress - check logs for possible root cause. Pausing puller for 1m0s. without giving any cause in the log. How can I stop that message?

I can file a bug if desired.

Can you reproduce the case with 100% certainty? It’s best we get the steps and write a test case for it.

With Syncthing v0.11.13 the issues got worse. Syncthing seems to track versions of a file internally. Could it be that the following happens:

  • Machine M1: V1 -> V2 -> V3, and
  • Machine M2: has V1, gets notified about V2, and tries to sync V2; but by the time it starts, V2 is gone and V3 is available, so it starts complaining that the file is not available.

It says

[4QJNC] 14:37:19 INFO: Puller (folder "BTSync", file "Elephant\\Inbox\\Markdown.md"): pull: no available source device

But on the source device that file exists, and it is newer than the copy stored on the complaining machine.

A complete rescan doesn’t stop that output. How can I instruct syncthing to completely reset its internal status about the failed files?

I’m following the hint provided at Useful .stignore Patterns and have Thumbs.db in my .stglobalignore. Syncthing, however, states that Thumbs.db is not synchronized. How can I instruct Syncthing to really ignore that file?

There is no global ignore; I’m not sure where you are finding that.

Restarting Syncthing clears the remote state it stores.

I’m seeing this as well. Opened an issue:

I found the global ignore in the linked post.

One should include the .stglobalignore in the .stignore as follows:

// .stignore
//
#include .stglobalignore