I have a Raspberry Pi 4 8GB collecting data at a remote site. Three times a minute it reads the sensors and creates a file for each data collection. These data files were synchronized to a local machine, entered into a database, and deleted locally, with the deletion synced back to the Raspberry, resulting in a deletion there as well… This used to work well. Even when the data was not processed for one or two weeks, we were able to restart the process and it resumed properly.
This summer the local machine had a hardware problem and had to be replaced. As the sensors collect heating data, we did not realize until now that Syncthing was not set up on the new local machine. Today there are 650k unsynced files on the Raspberry; however, when we do a ‘Rescan’ on the Raspberry, it consistently shows only 504,992 files for ‘Local State’ and ‘Global State’.
We tried to restart the synchronization anyway: we set up Syncthing locally and started synchronizing. One file was downloaded correctly, but after that single download it created only files with names like the following:
~syncthing~LaiPower 2023-06-14 09.36.20.dat.tmp
We paused the synchronisation after a few minutes, but this did not cure the problem: the files keep these temporary(?) names (actually we have 14.5k temporary files…).
I think the files would usually get renamed after a few seconds.
Might it be that 650k files on the Raspberry are simply too much for 8 GB of RAM, and that the synchronisation no longer works for this reason?
Versions:
Raspberry: v1.20.1, Linux (32-bit ARM)
local: v1.26.1, Windows (64-bit Intel/AMD)
No idea why Syncthing on the Raspberry does not get updated… But as the system is remote and several hours away, we try not to experiment and normally do not even restart it as long as the data gets collected…
I tried to lower resource usage on the Raspberry and rebooted it in the process.
Restarting the sync downloaded one more file, and all others are shown as:
“~syncthing~LaiPower 2023-06-14 09.36.20.dat.tmp”
Even if I press to interrupt the download…
I drove to the site and manually pulled all the files. I found the Raspi to be extremely slow and unresponsive: the file manager took forever to open and to change folders. After deleting the 650k files it was responsive again. So I think the Raspi is simply not made for this…
I uninstalled Syncthing in order to switch to the 64-bit build and reinstalled it following Installing Debian/Ubuntu Packages. However, this installed the 32-bit package again!
How would I install the 64-bit package?
P.S.
By the way: I’ll rewrite the data collection procedures in order to produce fewer files! 130k files per month is simply a pain to handle…
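The direction I have in mind is roughly this (just a sketch, not what runs on the Raspi today; `read_sensors` and the paths are placeholders for the actual acquisition setup): append each reading to a single file per day instead of writing a new file per reading.

```sh
# Hypothetical sketch: one file per day instead of one file per reading.
# "read_sensors" is a placeholder for the actual acquisition command.
OUTDIR=/home/pi/data                          # placeholder path
mkdir -p "$OUTDIR"
DAYFILE="$OUTDIR/LaiPower-$(date +%F).dat"    # one file per calendar day
read_sensors >> "$DAYFILE"                    # ~30 files per month instead of ~130k
```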
This indicates that your system’s architecture is < ARMv8-A (32-bit), when it should be arm64 (64-bit). You can check that via uname -m; it will most likely report either armv7l (32-bit) or aarch64 (64-bit).
If it reports 32-bit, you have installed a 32-bit operating system and will need to reinstall it. If it’s Raspbian (“Raspberry Pi OS”), choose the 64-bit version of the image. For other operating systems, you’re looking for arm64 or aarch64 builds.
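A quick way to check before reinstalling anything (assuming a Debian-based OS such as Raspberry Pi OS):

```sh
# Kernel architecture: armv7l = 32-bit, aarch64 = 64-bit kernel
uname -m

# Architecture apt installs packages for: armhf = 32-bit, arm64 = 64-bit
# (Raspberry Pi OS can run a 64-bit kernel on top of a 32-bit userland,
#  in which case apt will still pull the 32-bit Syncthing package.)
dpkg --print-architecture
```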
A file manager usually means a desktop environment, and browsing a directory with 650k files (stored on the microSD card?) is going to be a slog even with a faster CPU and all 8 GB of RAM.
Since the RPi is doing data collection, and it sounds like nobody is regularly at its console, is a desktop environment really necessary? Going CLI-only would free up a lot of system resources which could then be used by Syncthing instead.
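On Raspberry Pi OS that’s just a change of the default systemd target (raspi-config offers an equivalent boot option); a minimal sketch:

```sh
# Boot to a text console instead of the graphical desktop
sudo systemctl set-default multi-user.target
sudo reboot

# To return to the desktop later
sudo systemctl set-default graphical.target
```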
Well, it was never intended to have 650k files on the storage (an SSD of course; I suppose a microSD card would have died due to write cycles). When the system works as it should, the files get transferred almost instantly, growing to a few hundred when communication fails. As said above, I’m thinking about changing the way I collect the data to reduce that file count drastically.
Yes, that could make sense. Over the years, I’ve become so used to the GUI that I haven’t even considered any other solution… I’ll think about it.
For older SD cards, that’s very true, but older SSDs didn’t fare much better. One of my first SSDs was an 8 GB model from Kingston Technology that failed in under a year while being used as the root partition in a Linux system.
Fortunately, NAND technology has evolved a lot since then. Quite a few of the “endurance” category of SD cards now surpass all of the low-end, and many popular mid-range, SSDs.
A Samsung EVO micro SD card I use for one of my backup devices is still fine after a few years of daily use. I’ve seen test results for the Samsung Pro Endurance (~$20 for 256 GB) that logged >800 TBW before the card failed – that’s over 200 GB per day for 10 years.
So although a quality SSD still beats a comparable SD card in overall endurance, in the grand scheme of things it probably no longer really matters for many use cases.
Really astonishing! I did not know about this type of SD card. The Raspi uses either a 1 TB Samsung 860 EVO or an 870 EVO.
It does not make sense in my opinion to risk a system going dead only because storage fails, especially when it is a several-hour drive to fix it! Next time I’ll definitely take the Samsung Pro Endurance into evaluation; it would be great for the form factor!
By the way: apart from the Raspi being overloaded, there was a glitch in the local Syncthing as well (it was impossible to delete a synchronized folder)! I deleted and reinstalled it.
Ditto. I much prefer not having to trek over to another data center if it can be avoided.
Fortunately, it’s much less likely that a flash drive will fail abruptly compared to a mechanical drive with a spinning motor and platters. As long as the SD card or SSD has sufficient capacity for bad block mapping, it’ll just gradually lose capacity. A flash-friendly filesystem also helps extend the lifespan of the media.
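Even just mounting the data partition with noatime helps, since it avoids a metadata write on every file read. A sketch of an /etc/fstab entry (UUID, mount point and filesystem are placeholders):

```sh
# /etc/fstab entry (placeholders: adjust UUID, mount point and filesystem)
# noatime   - don't update access times on every read
# commit=60 - flush the ext4 journal every 60 s instead of the default 5 s,
#             batching small writes at the cost of losing up to a minute on power loss
UUID=xxxx-xxxx  /data  ext4  defaults,noatime,commit=60  0  2
```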
If the SSD is connected via a USB-to-SATA adapter, odds are greater that the adapter fails before the flash media in a quality SD card or SSD does.