(Sorry, no time for a more substantial contribution.)
Thank you @imsodin for your contribution. Time to downgrade …
" Just to avoid any confusion: The functionality added by @xarx is not following symlinks, but treating directory junctions like directories and detecting path loops."
So, when will the adverse side effects of this newly arrived non-symlink “feature” also be addressed?
I hope this feature will not kill the soul of ST. I also thought that usage patterns would matter, as well as the known limitations.
Have a good day,
You should explain your setup and problem. It seems different than the OP’s.
Thanks for your comments (and for your brilliant Android port). Yes, mounting with Samba or some such thing would probably work, but convenience wins for me, and I’m using vb for simplicity and a (tiny) bit of privacy/security. So, whilst my go-fu is not at all profound, I can rise to a comment or two: kill the return nil, change the logging, and let ST die out of memory if it goes infinite. I promise not to whinge if I lose my files!
It’s brilliant to have the source in git, your whole dev process exposed, modular and clean design, and, above all, the devs on hand to tell you wtf’s going on! Thank you on all those counts.
I’d definitely vote for a switch to turn off infinity checking, but in the meantime I guess I’ll hack the code, which would be far preferable (for me, at least) to spinning up Samba, changing firewalls, and other hacks.
I’ll try to report this issue upstream to vb, as well.
You should explain your setup and problem. It seems different than the OP’s.
Thank you for stepping in and asking.
I have two geographically separate storage setups that are kept in sync with the help of ST. Both use the virtual hard drive pooling software StableBit DrivePool. Both have more than eight NTFS HDDs, in perfect shape, under the virtual filesystem.
You can read some quick info about the DrivePool software in the links below (5 min. read):
–Features: Drivepool Features
I’ve been using this setup for about a year, and it has worked like a charm. I think v1.7 is the first time I have seen a Notice/Warning. I received this Notice, “infinite-filesystem-recursion-detected”, for some folder paths. I deleted one path (folder) involved in the notice, and after a re-scan the notice did not reappear. There were no shortcuts or any links to a different path in the mentioned folder.
All drive volumes are reported clean by “fsutil repair state”.
This is the fsutil reported Drivepool volume output:
fsutil.drivepool.txt (488 Bytes)
I would also like to ask whether this “Warning” stops the scan only for the path where infinite recursion was detected, or stops the scan for the entire Syncthing root folder path?
I assume the software you are referring to implements a virtual filesystem, in which case it depends on how it implements it, and whether it does the right thing with file ids.
It prints a warning and stops descending, but does not abort. However, if you have the same issue as the OP, I suspect it might result in deletions, which, thinking about it now, is probably not ideal.
The DrivePool filesystem software is advertised like this (from the developer’s Features section):
Advanced File System
StableBit DrivePool features CoveFS, an optimized file system specifically designed for disk pooling. It has a virtually unlimited pool size (many petabytes). It's compatible with existing applications*, and is designed to function like NTFS. It's a 100% kernel mode implementation. No user mode service dependencies or any such hacks are involved. It works like a real file system. Advanced features:
- Alternate stream support and extended attributes.
- Full NTFS security.
- Full Windows disk caching support. Read-ahead and lazy writer caching supported along with memory mapped files.
- Full oplock support. Oplocks improve network share performance by allowing a network client to cache files on their end.
- File change notifications, for applications that watch directories for changed files.
- Sparse files.
- Completely parallelized: Reads and writes to duplicated files happen in parallel. An optimized fast directory listing algorithm queries all the disks at the same time and combines the results in memory to return the list of files and directories, in real-time, as they come in from all the disks.
- Zero dependencies on any external metadata: Plug in any pooled disk to any system running StableBit DrivePool and it is instantly visible on the pool. No special RAID-like format, no "tombstones" and no SQL-lite databases are involved. Everything is just plain old files.
- Always shows the actual free space on the pool**. No need to reserve imaginary free space that you can't use.
“*” Some disk imaging applications may not be compatible.
“**” Some space may not be usable for file duplication, depending on drive layout
Data and file management is done, from the user-side experience, entirely through the pool (e.g. the B:\<yourfiles&folder> structure path). The DrivePool driver logic engine takes care of everything, like data balancing, folder duplication etc. (it works like art, flawlessly).
Indeed, fsutil reports the Pool (virtual filesystem) as NTFS, so I expect it to also work the same from a Windows OS point of view:
C:\Windows\system32>fsutil fsinfo volumeinfo B:
Volume Serial Number : 0xddb5743c
Volume Name : DrivePool
Max Component Length : 255
File System Name : NTFS
Is ReadWrite
Supports Case-sensitive filenames
Preserves Case of filenames
Supports Unicode in filenames
Preserves & Enforces ACL's
Supports file-based Compression
Supports Sparse files
Supports Reparse Points
Supports Object Identifiers
Supports Named Streams
Supports Extended Attributes
Supports Open By FileID
Thank you for explaining the behavior when the recursion Warning/Notice occurs. My detected paths are strangely reported without the root folder:
17:32:40 WARNING: Infinite filesystem recursion detected on path ‘PHONES DATA\FULL N73 card copy\Resco\Viewer\Images\200810’, not walking further down
The full path should be “b:\LIBRARY\PHONES DATA\FULL N73 card copy\Resco\Viewer\Images\200810”. Don’t know if it matters.
fsutil file query:
fsutil file queryfileid “b:\LIBRARY\PHONES DATA\FULL N73 card copy\Resco\Viewer\Images\200810”
File ID is 0x00000000000000000000000000005200
All other files under “LIBRARY” are scanned and synced without problems.
As a precautionary measure, I made the setup one-way only (Send → Receive) with File Versioning on the target system, but just yesterday I wanted to change to “Send & Receive” for all the root directories on both sides. Then the recursion “Warnings” came up, and I rolled back to ST 1.6.1.
The error only crops up if two folders report the same file ID. So if you run the query against each directory and group by value, and you end up with two files/directories sharing the same ID, then the filesystem doesn’t respect basic principles that applications rely on.
The marketing material or the technical specs are moot at that point.
With the help of CYGWIN and the above “find” command from @rahrah, I started looking for identical file IDs, and the command does indeed report a filesystem loop on the same paths ST complained about:
find: File system loop detected; ‘./LIBRARY/PHONES DATA/FULL N73 card copy/Resco/Viewer/Images/200810’ is part of the same file system loop as ‘./LIBRARY/PHONES DATA/FULL N73 card copy’.
Now, how this happened, I’m not sure. This is a really old (cold) directory.
As far as the DrivePool implementation of reparse points, junction points and mount points goes, a lot of it was developed in 2013, and it supports the following:
The architecture has these positive aspects to it:
- It supports file / directory symbolic links, directory junction points, mount points, and 3rd party reparse points on the pool.
- It is a 100% native kernel implementation, with no dependence on the user mode service.
- It follows the 0 local metadata approach of storing everything needed on the pool itself and does not rely on something like the registry. This means that your reparse points will work when moving pools between machines (provided that you didn't link to something off of the pool that no longer exists on the new machine).
Some of my previous attempts had these limitations:
- Requires the user mode service to run additional periodic maintenance tasks on the pool.
- No support for directory reparse points, only file ones.
- Adding a drive to the pool would require a somewhat lengthy reparse point pass.
The new architecture that I came up with has none of these limitations. All it requires is NTFS and Windows.
Implementation challenges: Stablebit-drivepool-reparse-points
Here’s the full explanation in the Release 2.1.0.432 BETA post: https://blog.covecube.com/2013/11/stablebit-drivepool-2-1-0-432-beta-reparse-points/ . I’m on StableBit DrivePool 188.8.131.529, so the challenges have been resolved and shouldn’t be a relevant DrivePool issue anymore.
Since the reported loops are not that many (4 directory paths, actually), I’m looking to modify (move/re-copy) the involved data, and leave it to the filesystem tasks to update the file IDs.
I will update the post with the relevant info.
Just checked under both Linux and Windows, and os.Getwd() returns a full path, including, under windows, the drive.
Wouldn’t changing your push/pop stack to hold full paths, rather than these file/inode IDs, be a better check for cycles before you walk into a dir? Am I missing a huge elephant here?
My vote would now be for a folder level switch for infinity checking.
Sorry to belabour this!
The recursion is not represented in the path.
Yup, I spoke too quickly, sorry. All the Unix code I looked at, like realpath(3), expects knowledge of when a file in a path is a symlink. Sorry.
After archiving the data in the reported infinite-recursion paths and deleting the directories/files, no recursion was reported any more. I have upgraded successfully to 1.7.
But… I still think that THIS is a dangerous feature, because of the data loss that can occur. I’m sure there are cases where one will trustfully leave data management in the hands of Syncthing and will not check or see the error, either on the web page or in the log itself. With only “it prints a warning and stops descending”, it could just go unnoticed. Syncthing should just STOP and quit any activity once such infinite recursion paths are found, to prevent unexpected deletion. It’s the user’s responsibility to ensure that the underlying filesystem is in perfect shape.
For the moment, I will keep using Syncthing in one-way only setup. Thank you for all the support and provided answers.
Keep in mind though that the root cause here is a filesystem extension doing something really odd. Syncthing surfaces this oddness, but it’s simply using the Windows APIs as they are designed and suffering from the filesystem effectively lying. That things appeared to work previously is nice, but I would be worried about what’s going on behind the scenes at the filesystem level.
There is a problem that we return no error from the scan when this happens, though. In the case of poor filesystems that invent different file IDs, with many of them being duplicates across scans, this might lead to us deleting files. The files will probably come back on the next scan, but I wonder if the recursion should be a scan error instead.
Hi, I’ll step in.
In previous versions of ST, directory junctions were not synchronized at all, so no data loss should occur, IMO.
If ST stopped with error on first detection of infinite recursion loop, one wouldn’t be able to synchronize e.g. the user profile folders - there is an in-built infinite recursion there (in
Switching off the infinite recursion check is not an option, as that could crash ST. The only possible option would be to turn off directory junction traversal, i.e. return to the pre-1.7.0 behaviour, perhaps via a checkbox in advanced settings.
Edit: I think (but have not tested) that you might also prevent infinite recursion on a particular directory using .stignore.
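An entry along these lines might do it; this is an untested sketch, and the path is just the looping directory from the log earlier in this thread (replace it with whatever loops on your own system):

```text
// .stignore in the Syncthing folder root -- untested suggestion.
// Pattern is rooted with a leading "/" so it only matches this
// exact subtree, keeping the scanner out of it entirely.
/PHONES DATA/FULL N73 card copy/Resco/Viewer/Images/200810
```

Since ignored directories are never descended into, the recursion check should never trigger on them.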
The issues above are not specific to junctions, though. Granted, there are weird filesystem shims involved that lie about the file IDs, but that triggers the infinite recursion detection without there being infinite recursion, in effect rendering some files invisible to Syncthing and hence liable to get deleted remotely or otherwise mishandled. Turning this into an error condition seems correct.
Actual recursion should be correctly handled by ignores, I think. Except perhaps not in the filesystem watcher case?
If this is turned into an error, it’s important to consider whether ST will be able to recover from the error state once the errors get resolved. I have permanent issues with out-of-sync items, locally-changed items, devices permanently stuck at 95% synchronized, etc., where there seem to be no working ways to recover.
Yeah those issues, whatever they are, are different.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.