Renaming Root Folders to Match Creates Endless sync-conflicts

We’re using Syncthing to share the media from our feature film to our editors’ external hard drives in Los Angeles, New York and London. This has grown to several TB of data across 300,000+ files.

Recently we had to switch from Adobe Premiere’s Team Projects system to their new Productions system, because the former became unstable with the volume of media and sequences involved.

Unlike Team Projects, which supports multiple media paths, Productions requires them all to be identical. So we renamed our editors’ external hard drives to match our NAS paths, i.e. /volume1/Editors_harddrive/[xxxx] becomes /volume1/EDIT/[xxxx]

After re-linking their sync folders, Syncthing seemingly cannot reconcile the differences between the two databases and is generating an endless stream of sync-conflict files. I’ve had to pause everyone’s syncs, but overnight it had already generated 11,000 of them on our server.

My guess is that it’s getting confused by the near-identical folder structures/contents?

Could use any help available. Thanks!

What do you mean exactly by “re-linking” folders? Can you perhaps post some screenshots of the Syncthing GUI?

How did you re-link? Create new shares? Modify the config file?

So, the order of operations was:

  1. Pause all syncs
  2. Delete folders from Syncthing consoles
  3. Rename hard drives; contents are unchanged
  4. Re-accept folder invitations and map them to the same locations inside the newly renamed hard drives.

I am still not 100% sure of the process, but the files themselves do need to differ on each side to cause conflicts…

If I had to do this kind of path renaming with removing and re-adding folders, I would first set all folders to Receive Only on the receiving side, then let everything sync, and finally revert any local changes before switching the folder type back to Send & Receive.
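For reference, that sequence can be sketched against Syncthing’s REST API, using the documented `PATCH /rest/config/folders/<id>` and `POST /rest/db/revert` endpoints. The base URL is Syncthing’s default; the API key and the `edit-media` folder ID are hypothetical placeholders:

```python
import json
import urllib.request

BASE = "http://localhost:8384/rest"  # default Syncthing GUI/API address
API_KEY = "YOUR-API-KEY"             # placeholder; copy yours from Actions > Settings

def api_request(method, path, body=None):
    # Build an authenticated request against the Syncthing REST API.
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method)
    req.add_header("X-API-Key", API_KEY)
    return req

def set_folder_type(folder_id, folder_type):
    # PATCH accepts a partial folder config; valid types include
    # "sendreceive", "sendonly" and "receiveonly".
    return api_request("PATCH", f"/config/folders/{folder_id}",
                       {"type": folder_type})

def revert_local_changes(folder_id):
    # On a receive-only folder this is the "Revert Local Changes" button.
    return api_request("POST", f"/db/revert?folder={folder_id}")

if __name__ == "__main__":
    folder = "edit-media"  # hypothetical folder ID
    # 1. flip to receive-only  2. revert local changes  3. back to send & receive
    for req in (set_folder_type(folder, "receiveonly"),
                revert_local_changes(folder),
                set_folder_type(folder, "sendreceive")):
        urllib.request.urlopen(req)
```

In practice you’d wait for the folder to report “Up to Date” between the first and second steps; the script above only builds and fires the requests, it doesn’t watch sync state.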

Ok, at a guess I would say someone made a change to one of the shared files on the central system while the remotes were offline…

Since the remote locations have no history (ST is treating it as the first time it has seen that folder), any existing files that do not exactly match are going to be in conflict.

If you’re looking to solve this… what I would do is remove the shares on the remote nodes.

Move the data to a different folder and add it to ST as a dummy share that isn’t shared with anyone. ST will index the data.

Re-accept the share to the new, now empty location, and let ST bring it up to sync using the locally indexed data where possible.
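A sketch of the dummy-share step, again via the REST API: after moving the data aside, add it as a folder whose device list is empty, so Syncthing indexes it locally but shares it with no one. The key, folder ID and paths are all invented placeholders:

```python
import json
import urllib.request

BASE = "http://localhost:8384/rest"
API_KEY = "YOUR-API-KEY"  # hypothetical placeholder

def add_unshared_folder(folder_id, path):
    # POST /rest/config/folders adds (or replaces) a single folder.
    # An empty "devices" list means it is shared with no other device,
    # so Syncthing just scans and indexes the data.
    body = {
        "id": folder_id,
        "label": folder_id,
        "path": path,
        "type": "sendonly",  # nothing will pull from it anyway
        "devices": [],
    }
    req = urllib.request.Request(BASE + "/config/folders",
                                 data=json.dumps(body).encode(),
                                 method="POST")
    req.add_header("X-API-Key", API_KEY)
    return req

if __name__ == "__main__":
    # after something like: mv /volume1/EDIT /volume1/EDIT-stash  (hypothetical)
    urllib.request.urlopen(add_unshared_folder("edit-stash", "/volume1/EDIT-stash"))
```

The trick relies on Syncthing reusing blocks it has already indexed locally rather than pulling them over the network, which is why the stash folder has to be scanned before you re-accept the real share.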

Hm, interesting. So basically we’d have to start all over? I’d be okay with this, except in our situation it would be nigh impossible to secure another several TB of storage for everyone on short notice.

One thing I forgot to mention, which this makes me recall: before re-linking the shares, we reorganized the structure on our end, moving one very large folder up a level so it would be less cluttered. We then duplicated this restructuring across the shared computers before re-syncing them.

I’m realizing now that this probably screwed us. Correct me if I’m wrong, but if I had moved the folder after re-linking and resuming the shares instead of before, Syncthing would have made this change for us? Whereas since I did it manually, it wasn’t properly indexed.

Honestly a folder move shouldn’t have done it.

There would have to be a difference in the files for ST to start creating conflict copies. If there is a slight difference in the naming, though, you would end up with multiple copies.

I have never dealt with a team/shared pipeline in Premiere, but it sounds really strange. You aren’t getting conflicting media files, are you? Premiere shouldn’t be making changes to those.

Is it easier to start again from a blank folder?

In my example above you could set their pull order to largest first and delete the files from the temporary folder as they appear in the shared folder… Bit of a manual hack, but for me it would be a lot faster than transferring TBs of data. Once the larger media files have moved and there is enough space, you let the sync complete and delete the temp share later.
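The pull order is set per folder: in the GUI under Edit > Advanced > File Pull Order, or as the folder’s `order` element in config.xml. A hypothetical folder entry (the ID, label and path are invented for illustration):

```xml
<folder id="edit-media" label="EDIT" path="/volume1/EDIT" type="receiveonly">
    <!-- pull the big media files first so the temp copies can be deleted early -->
    <order>largestFirst</order>
</folder>
```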

I’m sure it would be easier to just start all over but it’s way too much data. It took eons to get this data across the world the first time.

I very recently tried a brand new sync to back up some Adobe Premiere projects, linking a virtual NAS (LucidLink) folder to our server, and it corrupted the project in both locations. I had to rescue a version from the .stversions folder to restore it.

Currently, I’m doing something similar (unlinking and re-linking) with some folders that contain .exr files, and this time I’m not having the same problems. Potentially that’s because this time I’m deleting the .stfolder and .stversions folders first.

The EXRs should just be seen as binary files and will be fairly straightforward to sync… unless you are letting Premiere update the XMP data on the media.

Disabled Write XMP ID To Files On Import?

“Disabled Write XMP ID To Files On Import?”

I’ve never even heard of this one. Does this create issues down the line for Premiere? What does it do to Syncthing?

Premiere has a couple of settings that write to the media’s XMP data… If you don’t disable these, you will probably end up with conflicts.

It stops duplicate media appearing in your projects, because each file is given a unique ID. I have disabled it because I usually don’t want to make modifications to the source media. Like when syncing…

I’ve never used Productions so I’m not sure how it will affect you.

Just googled it… the workflow guide from Adobe tells you to disable the XMP settings.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.