Syncing the same folder to two locations on the same machine.

I have a laptop with Windows 8.1 installed on its internal HDD. I’m running Xubuntu from an external hard disk. What I’d like Syncthing to be able to do is sync the same folder to two locations on the same machine. So when Xubuntu mounts the internal drive it should sync with the folder on the external drive, and vice versa in Windows.

This doesn’t seem possible, since the FolderID needs to be unique and therefore the same folder cannot be added a second time with a different path on the same machine. Is there a workaround for this other than having a second machine turned on? I can imagine this would require a big change in Syncthing’s code.

Also, thanks for providing this already quite awesome alternative to cloud solutions. :smiley:

Syncthing syncs between two or more computers.
As far as I understand your case, it is not possible, since only one of the systems is ever up and running at a time.

Accessing the Windows HD from Linux should be no big problem; Windows -> ext3/ext4/… is not as easy.

I would set up a virtual machine (with Linux) within Windows 8 to access the external HD.

I don’t really understand what you are trying to do. If you want to be the same device in both OSes and sync it to somewhere remote, you can.

If you are syncing the internal drive to the external drive and vice versa, you are using the wrong tool. Just run rsync on boot in both OSes, or use unison or csync.

I have tried to run multiple Syncthing instances on one Windows machine, and the problem is that you get a “Local discovery unavailable” error – and that is after editing the configuration so that every instance has its own web port and sync port.

So I had to make other plans to get the synced content into multiple places on my system.

As was already said: Syncthing is specifically for syncing files between several devices. What you need is something like rsync. On Windows you can use Qtdsync; on Linux there is more choice. One option would be Lucky Backup.

I hate this keyboard. New one tomorrow. Ugh.

EDIT: Wow, good lord this was unreadable. Let me fix this.

No, that’s not the problem… The problem is complex.

It involves synchronization of latency, and mutex locking on block areas that are being written to, to avoid double-writes… Critically, there is no way for Syncthing to resolve ‘merge conflicts’ in peer filesystems, so to speak… i.e. between the master and another peer… It really all comes down to how we share a view of the FS among peers…

There are problems related to filesystem throughput and network timeouts, there are sometimes RST packets sent to the TCP port inducing session renegotiation, and someone also discovered today that there are essentially ‘birthday collisions’ in the current folder implementation – BECAUSE THERE IS NO SHA function haha.

So anyway, I’m tired now as I’m writing this, but we need to decide how to fix [problems related to file synchronization among peers]. Personally I’m leaning towards a quick patch of the critical issues… If that works, great, let’s do a public release. [edit: As proposed below, an overhaul is a massive project, so perhaps incremental patches make more sense.]

Then we’ll have the time to do the long-term fixes, which will take longer. Let’s see where we are in about four weeks… [edit: I’m going to be slow getting up and running since I don’t know Go]… As for an ideal vision, which is probably not realistic, but if it were…

I’d like to be able to mount a synced filesystem without downloading everything first… Like an NFS mount… Where I can stream my favorite movie to my cousin’s home 100 miles away, where he only has a copy of Syncthing and the PSK…

That is, of course, if my cousin is on Verizon FiOS, we’ve added the ability to mount ‘streaming’ filesystems to Syncthing (like in Bitcasa), and I have duplicated copies of the AVI networked on Syncthing on Amazon EC2, a VPS, or other peers.

@cydron I should have clarified that my answer was specifically to @Modanung, who wants to use syncthing for local file synchronization between two operating systems. This can’t work, because one of them is always offline, so the syncthing instances can’t possibly talk to each other.

The problem @fpbard describes is certainly solvable, but I think there are bigger fish to fry right now.

PS: You have a lot of interesting ideas and appear to be really knowledgeable. It is, however, a bit difficult to get an overview of what you envision, because many ideas are spread across different threads which aren’t strictly related to the ideas you mention. Could you maybe create one thread where you collect all the ideas which are part of your vision from a 10,000 ft view, in a numbered-list style? This would make it a lot easier to get an overview of what you would like to change, improve or discuss.


Yeah, sorry about that. Those posts were absolutely unreadable, thanks to a repeatedly disconnecting wifi adapter and broken keyboard. I went back and edited for clarity.

Okay, I’ve written up the main idea of what I’m proposing here…

https://forum.syncthing.net/t/proposal-for-revision-of-filesystem-transfer-and-sync/2056/1

Click the above link for the details, as far as I’ve thought things out anyway.

But to summarize…

I think some adaptation of a new method of modelling the filesystem and exchanging FS-related messages makes the most sense to me as a sort of ‘must have’ feature… Where each peer has a model of its filesystem as an in-memory object, and every node in the FS tree has a hash. On top of this, the peers have a specific set of packets to ‘share’ their in-memory FS models, as well as to efficiently share changes to those models.

I know it would suck to write this in, so I don’t blame anyone for putting it on the back burner, because it’s a big project. And maybe part of it’s already done, or maybe there’s a better way to accomplish the same goals…

But I like the idea of an FS-tree with ubiquitous hashing of all files, folders, and nodes… I like that from the standpoint of ‘future-proofing’… It also seems like doing this would squash a number of bugs and feature requests simultaneously.

I think this sort of technology really opens doors to what we ‘could’ do… For example, it would solve the feature requested in this thread, among a few others… Also, I think there is probably a middle ground between what I’ve described in that link and what is currently implemented.

The ideal way to do this is via incremental change, as that’s much easier to debug than dispensing with the existing infrastructure entirely.

But to summarize, I suppose the most important points would be as follows…

(1) Model each peer’s filesystem in memory as a tree, where each node is a file or folder, and where each node has a hash value dependent upon (A) the state of its children and/or (B) its data and attributes. (A minimal sketch of this follows the list below.)

(2) Most important point: everything gets a hash! Files get hashes which act as their identifiers AND as change-detection methods. Folders get hashes which act as identifiers AND as change-detection methods.

(3) If we want to get fancy: even file chunks (blocks) each get their own hash, even if each 64 KB block only gets a CRC32 checksum. This lets us exchange only the parts of files that have changed.

(4) Have a method of efficient serialization/deserialization of the FS-tree object (conversion of the in-memory FS tree to a byte-level output) that can be shared among peers and is language-independent, byte-efficient, etc. The FS serialization can even involve tree compression so it’s less chatty.

(5) Take these new FS-tree implementations and put them into new, explicit protocol-level messages which have the sole purpose of sharing models (or ‘peer views’) of their filesystems… This way peers can compare and efficiently determine what’s changed.

(6) By using hash values which automatically ‘propagate’ towards the tree root, we can quickly detect if any file or folder changed (and where the change is in the FS tree!). Meaning that any leaf-node change implicitly changes the node’s hash, as well as the value of the hashes for all nodes ‘above’ the leaf node… This allows efficient detection of changes, because a peer may ignore entire subtrees (perhaps containing thousands of files) if the hashes match. This allows efficient traversal of arbitrarily large FS trees.

(7) Clever implementation can limit traffic overhead via exploitation of hashes as both ‘handles’ (for nodes) and ‘versioning’ (for entire subtrees)… i.e. we don’t exchange the entire FS tree, except at the start of a session – perhaps we only share the subtrees of what’s changed… This would allow incredible scalability, especially in conjunction with block-level/chunk-level hashes or CRCs.
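
To make points (1), (2) and (6) concrete, here’s a minimal Go sketch of the idea – all names and structure are mine, nothing that exists in Syncthing today. A folder’s hash covers its children’s hashes, so any leaf change propagates up to the root:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// Node is one entry in the in-memory FS tree: a file or a folder.
type Node struct {
	Name     string
	IsDir    bool
	Data     []byte  // file contents (empty for folders)
	Children []*Node // folder entries (empty for files)
}

// Hash returns the node's identifier. For a file it covers name + data;
// for a folder it covers name + the sorted hashes of its children, so a
// change anywhere below implicitly changes this value too (point 6).
func (n *Node) Hash() [32]byte {
	h := sha256.New()
	h.Write([]byte(n.Name))
	if n.IsDir {
		// Sort children so the hash is independent of traversal order.
		kids := make([]*Node, len(n.Children))
		copy(kids, n.Children)
		sort.Slice(kids, func(i, j int) bool { return kids[i].Name < kids[j].Name })
		for _, c := range kids {
			ch := c.Hash()
			h.Write(ch[:])
		}
	} else {
		h.Write(n.Data)
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	root := &Node{Name: "/", IsDir: true, Children: []*Node{
		{Name: "movie.avi", Data: []byte("frame data...")},
	}}
	before := root.Hash()
	root.Children[0].Data = append(root.Children[0].Data, '!') // touch a leaf
	after := root.Hash()
	fmt.Printf("root changed: %v\n", before != after) // true: the change propagated
}
```

A real implementation would cache each node’s hash and recompute only along the path from a changed leaf to the root; comparing two roots then lets a peer skip identical subtrees entirely, which is the scalability win of point (7).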

Interestingly, moving down this path enables the possibility of convergent encryption.

Why? Well, a file’s hash is its symmetric crypto key in convergent encryption, so there is some major overlap here. Alternatively, in the more secure version of convergent encryption, we have a global shared secret Global_FS_key, where a file’s symmetric crypto key is HMAC_SHA1(Global_FS_key, file.data)…
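
That key derivation is tiny in Go – this sketch implements exactly the HMAC_SHA1(Global_FS_key, file.data) formula above; the variable names are just illustrative:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"fmt"
)

// fileKey derives a file's symmetric key from the cluster-wide secret
// and the file's own bytes, as in the formula above.
func fileKey(globalFSKey, fileData []byte) []byte {
	mac := hmac.New(sha1.New, globalFSKey)
	mac.Write(fileData)
	return mac.Sum(nil) // 20 bytes; stretch or truncate to fit the cipher
}

func main() {
	secret := []byte("Global_FS_key shared by the cluster")
	k1 := fileKey(secret, []byte("same contents"))
	k2 := fileKey(secret, []byte("same contents"))
	fmt.Printf("equal keys for equal data: %v\n", hmac.Equal(k1, k2)) // true
}
```

Since equal plaintexts yield equal keys (and therefore equal ciphertexts under a deterministic mode), peers holding the shared secret can deduplicate encrypted data without ever seeing each other’s plaintext.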

This direction also enables the possibility of mounting a ‘streaming’ Syncthing filesystem (if we implement the idea of block-level file checksums which act as chunk ‘handles’ or ‘identifiers’, coupled with file hashes and sharing of full FS-tree models among peers).

In the latter case, to stream an AVI, I can go to a brand-new computer (or smartphone)… Then the software requests Block 0 from Peer 1, Block 1 from Peer 2, Block 2 from Peer 1, and so on… (like RAID striping)…

This allows me to stream the AVI without downloading it first, and is made possible by block-level fetching of data from redundant peers… Here I am limited only by (A) the TCP download bandwidth of the destination device, or (B) the combined TCP upload bandwidth of all peers with a copy of the file – whichever value is lower.
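
Here’s a rough sketch of that striping loop in Go, assuming a hypothetical Peer type with a FetchBlock request – neither is part of Syncthing’s actual protocol:

```go
package main

import (
	"fmt"
	"sync"
)

// Peer stands in for a remote device holding a full copy of the file.
type Peer struct{ Name string }

// FetchBlock stands in for a network request for one block of the file.
func (p *Peer) FetchBlock(index int) []byte {
	return []byte(fmt.Sprintf("block %d from %s", index, p.Name))
}

// fetchStriped pulls numBlocks blocks concurrently, striping the
// requests round-robin across all available peers (like RAID striping).
func fetchStriped(peers []*Peer, numBlocks int) [][]byte {
	blocks := make([][]byte, numBlocks)
	var wg sync.WaitGroup
	for i := 0; i < numBlocks; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			blocks[i] = peers[i%len(peers)].FetchBlock(i) // round-robin assignment
		}(i)
	}
	wg.Wait()
	return blocks
}

func main() {
	peers := []*Peer{{"Peer 1"}, {"Peer 2"}}
	for i, b := range fetchStriped(peers, 4) {
		fmt.Printf("%d: %s\n", i, b)
	}
}
```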

The big ‘what’s new’ is that since the protocol has shared the structure of the filesystem prior to exchanging data, we have the option of selectively or globally streaming files by only downloading them upon a call to open()… We’d probably need FUSE (or kernel drivers) to implement Bitcasa-like streaming / NFS-share functionality though, since for this feature we need to intercept the call to open() and initiate a download… We’d also need to intercept calls to fseek to determine which blocks to fetch…
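
Whatever the driver layer ends up being (FUSE or a kernel module), its read handler ultimately needs something like the reader below, which maps a file offset to a block index and fetches each block only on first access. Everything here is a hypothetical sketch, not an existing interface:

```go
package main

import (
	"fmt"
	"io"
)

const blockSize = 64 * 1024 // 64 KB blocks, as in point (3)

// StreamingFile fetches blocks lazily, the moment a read first touches them.
type StreamingFile struct {
	size   int64
	blocks map[int][]byte       // lazily filled block cache
	fetch  func(index int) []byte // stands in for a request to remote peers
}

// ReadAt satisfies io.ReaderAt: a seek followed by a read lands here, and
// the offset tells us exactly which block(s) to fetch from peers.
func (f *StreamingFile) ReadAt(p []byte, off int64) (int, error) {
	if off >= f.size {
		return 0, io.EOF
	}
	n := 0
	for n < len(p) && off < f.size {
		idx := int(off / blockSize)
		b, ok := f.blocks[idx]
		if !ok {
			b = f.fetch(idx) // download on first access only
			f.blocks[idx] = b
		}
		c := copy(p[n:], b[off%blockSize:])
		n += c
		off += int64(c)
	}
	if n < len(p) {
		return n, io.EOF
	}
	return n, nil
}

func main() {
	f := &StreamingFile{
		size:   3 * blockSize,
		blocks: map[int][]byte{},
		fetch: func(i int) []byte {
			fmt.Println("fetching block", i)
			return make([]byte, blockSize)
		},
	}
	buf := make([]byte, 10)
	f.ReadAt(buf, 2*blockSize+5) // "seek" deep into the file: only block 2 is fetched
}
```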

So the idea of a ‘streaming’ filesystem probably needs the FS-model overhaul coupled with either OS-specific kernel-space drivers or FUSE drivers. I don’t really see another way to intercept OS-level system calls… But this would be a cool feature, because it’d essentially make Syncthing an open-source version of Bitcasa – in addition to an open-source version of Dropbox, rsync, and BitTorrent Sync.

On top of that, the idea that we share the filesystem ‘snapshot’ or structure prior to sharing any file data also enables this idea of an NFS-style / Bitcasa-style ‘streaming’ torrent filesystem. Anyway, those are my thoughts at the moment… Subject to change if I’m misunderstanding something important, haha.