Hey, have you already thought about an option to stop syncing a specific folder or device? So that when I'm on machine 1, there would be an option to stop syncing a specific folder, without deleting it from the list.
The same with another device: I could choose in the list of devices which ones I want to sync and which not.
This would be nice in case I'm working temporarily on a machine and want no traffic ;)
Another cool option would be to define in which time period a specific folder/device syncs!
It has been proposed before (and I suggest you try searching the forums/issue tracker next time):
Not sure if it will happen due to the reasons explained.
You can achieve that behaviour just by switching Syncthing off, or by setting the transfer speed to 0 for the time being.
Yeah, pausing is easy, but sorting things out without unexpected data loss when resuming is trickier. I haven't fully worked out what the best way is, so nothing's committed yet.
I was thinking about this, and how about a pause that stops scanning and serving content (as if the folder isn't even there, or Syncthing isn't running for that folder), given that we now try multiple peers when fetching blocks.
I think the scenario I'm most worried about is the one mentioned in the ticket:
- I pause a folder.
- Someone changes a file in that folder, somewhere.
- Two days pass.
- I edit that same file on my computer.
- I open Syncthing, notice the repo is paused, and unpause it.
- Boom, my changes are blown away as the changed file from two days ago is synced in.
Sorting out which file is the most recent in that circumstance is something we're not really equipped to do currently. There's basically the same problem if Syncthing was turned off in the meantime - then we force a scan before anything else, and if a file is changed on two sides it's a conflict (which we don't handle…).
So I guess what I’m really saying is that we need conflict resolution, and then most of this stuff will fall into place…
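Roughly, the detection side of "changed on two sides" comes down to comparing version vectors. A minimal sketch (the names and structure here are mine for illustration, not Syncthing's actual internals):

```python
def merge(local, remote):
    """Compare two version vectors (dicts of device -> change counter).
    Returns 'local', 'remote', 'equal', or 'conflict'."""
    devices = set(local) | set(remote)
    local_newer = any(local.get(d, 0) > remote.get(d, 0) for d in devices)
    remote_newer = any(remote.get(d, 0) > local.get(d, 0) for d in devices)
    if local_newer and remote_newer:
        return "conflict"  # concurrent edits: neither version dominates
    if local_newer:
        return "local"
    if remote_newer:
        return "remote"
    return "equal"

# The pause scenario above: both sides start at {A: 1}.
# Device B edits the file while A's folder is paused...
remote = {"A": 1, "B": 1}
# ...and A edits the same file locally before unpausing.
local = {"A": 2}

print(merge(local, remote))  # prints: conflict
```

The point being that with version vectors the paused case is detectable as a conflict rather than silently resolved by "newest wins" - but then you still need an actual resolution policy.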
But the same thing happens if you don't run Syncthing while you modify your file, so I don't see how this is different for the time being.
We could have a warning modal the first time you pause explaining possible problems.
Hi, may I suggest that the cheapest and still very useful option in resource-shortage situations is a pause function that works like shutdown for all intents and purposes, except that it leaves the process running and the UI working. This way one can stop HDD-hungry scanning from the web browser and easily resume it later, when resources are no longer needed elsewhere.
You can already do that by setting the rescan interval to 0.
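In the config that corresponds to the folder's `rescanIntervalS` attribute in `config.xml`; something like this (the folder id, label, and path are placeholders):

```xml
<folder id="abcde-12345" label="Example" path="/home/user/Sync" rescanIntervalS="0">
    <!-- devices, versioning, etc. unchanged -->
</folder>
```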
Yes, but this has to be done for every folder, which is a no-go with many folders, or even just three if custom rescan intervals are used. One central switch to eliminate any HDD and network load is really essential. Shutting Syncthing down is also quite a hassle if you have a cronjob making sure Syncthing is running all the time…
It’s as much of a job as unpausing.
An idea would be to create a "-1" incoming and outgoing rate limit, so that Syncthing keeps running and indexes all changes but does not transfer them. A button would then simply switch between the original rate limits and -1 and back.
You can set it to 1, which gives pretty much the same effect: 1 KiB/s is usually not enough to transfer anything, so connections just keep timing out, doing only the TCP handshake once every reconnect interval.
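For reference, those global rate limits live in the `<options>` section of `config.xml` (values in KiB/s):

```xml
<options>
    <maxSendKbps>1</maxSendKbps>
    <maxRecvKbps>1</maxRecvKbps>
</options>
```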
Is this all about basically shutting down syncthing and starting it again, but being able to do so from the browser?
I'd like to pause one particular folder and have all the others keep working. I have one folder with 140 GB incoming, and I can't use Syncthing with other devices until that large folder is synced. The only problem is my ISP has monthly limits, so I must either delete that folder or not use Syncthing until next month.
How about just ignoring everything in that folder for the time being? Put an * in the ignore rules. Or, if you know where the 140 GB come from, ignore just that particular subfolder.
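Concretely, that's a `.stignore` file in the folder root ("BigSubfolder" is a placeholder name here):

```
// Ignore only the large subfolder:
/BigSubfolder

// ...or ignore everything in the folder:
*
```

Removing the pattern later lets the folder sync normally again.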
Not really a fix, but a workaround.