I’m considering using syncthing in a recording studio setting and wanted to get some feedback on any potential problems.
Loading projects directly from the server has proven problematic, especially with a mix of OSX/Win workstations and multiple DAW programs.
So instead, I’m thinking of using Syncthing to keep an “active projects” folder synced locally on each workstation, as well as syncing to a server for incremental backups with Time Machine (OSX) or Genie Timeline (Win).
Any obvious issues/pitfalls that anyone sees for this type of setup? Projects are from Pro Tools, Studio One, Ableton Live, and Logic. Any learning experiences?
Backups of course, but ideally there isn’t an issue that causes the need to go into them very often.
I’ve worked with some DAW software previously, and regarding project file formats, nothing springs to mind that could obviously go wrong.
CPU load during scanning could be problematic if you want to keep Syncthing enabled during recording sessions. It generates quite a lot of CPU and I/O load in short bursts when detecting changes. But that could be worked around by pausing the synced folder during sessions and setting the scan interval wisely.
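For reference, the relevant knobs sit on the folder element in Syncthing’s config.xml (also reachable through the GUI’s advanced folder settings); the folder ID, label, and path below are made-up examples:

```xml
<!-- Hypothetical folder entry; only rescanIntervalS, the fsWatcher*
     attributes, and <paused> are the settings being discussed here. -->
<folder id="active-projects" label="Active Projects"
        path="/Users/studio/ActiveProjects" type="sendreceive"
        rescanIntervalS="3600" fsWatcherEnabled="true" fsWatcherDelayS="60">
    <!-- Flip to true before a tracking session, back to false afterwards: -->
    <paused>false</paused>
</folder>
```

With the filesystem watcher enabled, a long rescanIntervalS (an hour or more) is generally fine, since the periodic full scan only acts as a safety net.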
Obviously, concurrent changes are harder to avoid in a decentralized system, as e.g. lock files might get propagated with less predictable timing, especially when pausing is involved. So you’d need policies and disciplined staff who don’t try to touch the same project simultaneously. There is always a limit to what technical solutions can achieve there, and it’s usually somewhat harder without a central server acting as the authority.
Yeah, I would rather get it up and running with projects loading directly from the server, but it’s always one problem or another: I’ve tried shares from both OSX and Win (10 Pro/2019), and it’s always one or two of the programs on one OS or the other that don’t work.
That’s a good point on the CPU. The workstations are either 5950X/Win or 7980XE hackintoshes, so there are usually some extra cores available, but I will have to look into that.
And yeah, projects open simultaneously isn’t a real big concern: it’s rare that anyone opens anyone else’s sessions… but it’s one more factor to consider for sure.
I’ve been using Unison File Synchronizer for such purposes a lot. It’s not a continuous sync; by default it needs to be run manually (there is a watcher option). I trust it 100 percent to yield accurate and complete results, and having a separate reconciliation step before the sync can even add benefits: seeing what has changed (possibly unintentionally) and being able to revert it by switching the sync direction on individual files (like a first-level backup). Combine that with a central server also running Unison, and a proper backup strategy.
It’s harder to set up than Syncthing overall, though. At least in my experience.
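For anyone curious, a minimal Unison profile for this kind of workflow might look like the following sketch (the roots and profile name are invented; check the option names against your Unison version’s manual):

```
# ~/.unison/studio.prf (hypothetical example)
root = /Users/studio/ActiveProjects
root = ssh://server//srv/projects/active

# Keep the interactive reconciliation step; don't auto-accept changes:
batch = false

# Keep copies of replaced/deleted files as a first-level safety net:
backup = Name *
backuplocation = central

# Uncomment for continuous sync (needs the unison-fsmonitor helper):
# watch = true
```

The `backup` and `backuplocation` preferences give you the “revert an unintended change” behaviour described above without touching the main backup system.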
If you have spare cores, I would personally not worry about the CPU that much (as long as Syncthing is run with a lower priority), but the I/O may be an issue. If the data happens to be located on spinning hard drives, then I would definitely not edit and synchronise it at the same time.
If a file is changed on more than one system before syncing, a sync conflict will occur. Syncthing will sync both files, one with a prefix to show it’s in conflict. It’s then up to you to decide which is canon.
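Concretely, the losing copy gets renamed to `<name>.sync-conflict-<date>-<time>-<deviceID><extension>`. A quick sketch (the project filename here is invented) of how you might sweep a tree for unresolved conflicts:

```shell
# Simulate what a conflict leaves behind (filenames are made up):
demo=$(mktemp -d)
touch "$demo/Mix.ptx"
touch "$demo/Mix.sync-conflict-20240101-120000-ABCDEF7.ptx"

# Periodically list unresolved conflict copies in a project tree:
find "$demo" -name "*.sync-conflict-*"
```

Running such a `find` on the active-projects folder now and then ensures conflict copies get noticed and resolved rather than quietly piling up inside sessions.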
The more serious problem is that if a file is modified while a program is using it, you’re now in uncharted waters. The program could crash, could corrupt the data, and/or could overwrite the remotely-changed file with your changes without you knowing. And it could send all of these problems back up the wire to the other user(s).
I do this quite heavily on Logic and Pro Tools sessions amongst ~10 workstations (both on same-site and off-site), and generally it works flawlessly. As has already been noted, staff discipline is important to ensure you’re not trying to modify files at the same time.
I have no problem with CPU or disk I/O load whilst running Syncthing on live project folders - but I’m running with SSDs instead of spinning rust. To be honest, even Time Machine seems to create a higher load than Syncthing.
You might also want to set a ~30-60 second file watcher delay (fsWatcherDelayS), so Syncthing doesn’t attempt to sync temp files - which Logic creates a lot of - or partially-written bounce files.
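An .stignore file in the folder root can also keep such files out of sync entirely; the patterns below are illustrative guesses, so check what your DAWs actually write before relying on them:

```
// .stignore lives in the synced folder's root; lines starting with // are comments.
// OS metadata; (?d) lets Syncthing delete these if they block a directory removal:
(?d).DS_Store
(?d)Thumbs.db
// Ableton Live analysis files (regenerated on open):
*.asd
// Generic temp/partial files:
*.tmp
*.part
```

The watcher delay only batches rapid changes; for temp files that stick around for a while, ignore patterns are the surer fix.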
I was just thinking about the potential performance impact (already mentioned in this thread plenty of times).
This will depend on how demanding your software is. For instance, in my case, if I convert a video file, and that process uses 100% of CPU, then I do not want Syncthing to keep indexing and trying to sync the constantly changing output file. I would only want it to sync the final result.
In addition, if you work on files stored on an HDD, then even without heavy load, there will always be a performance impact if you try to read and write the storage at the same time. However, with a fast multi-core CPU and decent SSD storage, you may not notice a difference.
Any thoughts on how this would work with hybrid setups like StoreMI on AMD/Win or Fusion Drives on Mac? I have an 8 TB HDD + 4 TB U.2 StoreMI setup on my 5950X rig… I’m not sure I’ll be going back to Fusion Drives on OSX though.
My current folder of active clients is about 4 TB and some change… I do think I could get everyone to shift fully active clients to a smaller working directory, and back to a larger repository when done, but it seems like that would increase the risk of a mistake.
As you are doing something similar, I’m curious how you keep everything small enough to fit on SSDs.
I think that is going to be highly dependent on your particular working practices and whether your content is going to get cached on the SSD, or whether it’s so diverse that it’ll remain largely resident on the HDD. You may need to tweak some performance parameters to keep Syncthing throttled down - but at the expense of needing to allow longer times for sync to complete.
We have a number of SSDs on each system, each with their own shared Syncthing folder; the shared projects are just divided between those SSDs - and then archived off when complete.