Maximum folder size

Hello, I was wondering about keeping two servers in sync with Syncthing.

My question really is: what is the limit? Currently I have 15 TB and it just keeps growing.

Can I use Syncthing?

There’s no limit, and there are bigger deployments than that out there: https://data.syncthing.net/. Practical limits depend on your system and use case, mostly how often and how much data changes.

Please correct me if I am wrong, but the OP sounds like it is about a single folder with 15 TB of data. If that is located on an HDD, then scanning will likely take forever (i.e. weeks or even months). If possible, it will probably be much more efficient to sync specific subfolders separately.

How would splitting into several folders help with that? Hashing 15 TB is a lot and will take a long time, but not weeks with appropriate hardware.
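As a rough back-of-the-envelope sketch (assuming hashing keeps up with the disk and reads are mostly sequential at ~150 MB/s):

    15 TB ≈ 15,000,000 MB
    15,000,000 MB / 150 MB/s ≈ 100,000 s ≈ 28 hours

If the effective rate drops to ~10 MB/s (small files, seeks, a busy pool), the same scan takes roughly 17 days, so the file mix and hardware matter far more than the raw size.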

I am not sure about that, unless the folder consists mostly of large files that can be read sequentially and scanned rather quickly. If there are mostly smaller files, and the HDDs are also being used for other tasks at the same time, then scanning will be very slow. At least this has been my experience.

When it comes to splitting into many folders, I thought about per-folder tweaks, e.g. using different scan interval values, as not everything needs to be scanned every hour. Also, some folders may be set to receive only and thus not need periodic scanning at all. Of course, none of this will help with the very first setup, but it may come in handy later.
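For illustration, a minimal sketch of what such per-folder tweaks could look like in Syncthing’s config.xml (the folder IDs and paths here are made up; rescanIntervalS is in seconds):

    <folder id="archive" path="/mnt/tank/archive" type="receiveonly"
            rescanIntervalS="86400" fsWatcherEnabled="false">
        <!-- rarely-changing data: rescan once a day, no watcher -->
    </folder>
    <folder id="projects" path="/mnt/tank/projects" type="sendreceive"
            rescanIntervalS="3600" fsWatcherEnabled="true">
        <!-- active data: hourly rescan plus the filesystem watcher -->
    </folder>

The same settings can also be changed per folder in the web GUI under Edit > Advanced, without touching the XML.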

Henrik can experiment, but there’s no inherent reason multiple folders would perform better, and there are some cases where they will perform worse. The details depend on the underlying hardware and such.

Sure, that all depends on the use case, of which we know nothing yet. The only question is whether 15 TB is syncable with Syncthing, and the answer is: in principle, yes. Anything more specific depends on the use case/system.

Yeah, although the title of the topic seems to imply, at least in my eyes, that the question is about a single folder’s size, not the whole database. I may be wrong, but it is difficult to say anything more concrete with so little information, so I just threw out some ideas.

Thanks to you all! I love how interested people are in helping. I’ve started a sync. To explain a bit about what kind of files the system will be handling: the files are all kinds of “media files”, meaning project files, raw files, and everything in between.

I now have three folders being synced: one is 7 GB, the second is 4 GB (both without any problems), but the third folder of 15 TB does not work. After scanning all folders and around 700 GB, the system grinds to a halt and I have to hard reset the server for it to work again.

I run TrueNAS Core (12.0) and Syncthing v1.8.0

The same thing happened on FreeNAS 11.3-U4, and I believe it was v9.0 of Syncthing.

How large are the largest files, and how much RAM do you have? If the answers to that are measured in TB and MB, respectively, I could see there being a problem. Otherwise I’d be curious to know why it grinds to a halt.

You may also want to re-check the Syncthing version on each of the devices, as the current one is v1.8.0 (i.e. there is no “v9.0”). In particular, make sure you are not running anything older than v1.0 together with the newer ones.
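If in doubt, each instance reports its version on the command line (the web GUI also shows it at the bottom of the page):

    syncthing --version
    # prints the running version, e.g. "syncthing v1.8.0 ..."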

I have 16 GB of ECC RAM.

Well, that answers one third of the questions asked, though the last question is perhaps implicit… 🙂

What’s the reason it “grinds to a halt”? Are you running out of memory? How much memory does Syncthing use, if so? Is it I/O? What’s Syncthing’s CPU utilization? Does it report anything interesting in the logs? What does the UI look like? We can’t help without knowing any details.
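As a starting point, a sketch of what I’d run in a shell inside the jail (plain FreeBSD tools, nothing Syncthing-specific):

    # CPU and resident memory of the Syncthing process
    ps aux | grep '[s]yncthing'
    # overall picture, sorted by resident memory
    top -o res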

I removed all folders from Syncthing, restarted the jail and then added the folder again. The first 8000 folders took a few seconds to scan, then the system stops.

The folder is set to Ignore Permissions, otherwise normal.

The system still has 10 GB of RAM available, and the CPU does not indicate anything is happening at all. Average usage is steady at 0%.

The logs do not show anything special. The UI looks kind of normal, except that the scanning does not show any indication that it’s actually working (e.g. “Scanning (5%)”).

Once in a while, the “you are not connected to Syncthing” message pops up for a second.

So it’s not a resource thing, then. If there’s no percentage when scanning, that means it’s still listing directories and not yet hashing files (or you have disabled scanning progress). This is entirely different from what happened to you before in the earlier topic, though, so I don’t know what’s going on. Are processes stuck in D state? Maybe your storage is bad.
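For reference, one way to check for that on FreeBSD (a sketch using standard ps(1) keywords):

    # list processes stuck in uninterruptible disk wait ("D" state)
    ps axo pid,state,comm | awk '$2 ~ /^D/'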

Edit: The system was working with the folders, but now it has halted after scanning 33%.

Scan Time Remaining: ~ > 1 Month

Global / Local state: 3.95 TB

TrueNAS has crashed and is not responding, but I have the dashboard frozen with the final state. RAM: Services are using 5.8 GB of 16 GB; the ZFS cache has the rest.

CPU: Around 15% on all threads.

Perhaps I’m running out of RAM?

Doesn’t really sound like it. I’d look at zpool iostat -v 5 and iostat -x 1 to see what the disks are doing. Are there reads? High qlen and %b in iostat?
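Roughly, as a sketch (these are stock FreeBSD/ZFS tools, nothing Syncthing-specific):

    # per-vdev ZFS I/O, refreshed every 5 seconds
    zpool iostat -v 5
    # per-device stats; a qlen that never drains and %b pinned
    # near 100 point at a saturated or failing disk
    iostat -x 1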

Everything works until it just doesn’t anymore.

I had to reboot because nothing was working; even the web UI of the server stopped working. The system works fine when Syncthing isn’t on.

Now we are back to «Scanning» on the folder. What is normal for the «reboot scan»? (With around 15 TB.)

What is a “reboot scan”?

I’m guessing the initial one on startup. It’s just a regular scan; what’s normal in terms of time depends on the system. If it didn’t complete scanning previously, it will resume roughly where it was when it was interrupted last time.

I still have no idea what the issue could be, sorry. I can only suggest you try to debug this like any other system issue.
