Sync problems with a large number of files and very large files

It (Syncthing) isn’t just running out of memory, or the box swapping so much that no progress can be made, or something like that?

I think it is the memory. I didn’t see it because the web interface stated I was using no more than 4 GB of RAM, but I forgot I am using ZFS, which eats most of the RAM. I have 64 GB of RAM, and when I am not running Syncthing there are only 8 GB free. Once I start Syncthing, it goes down to pretty much 0:

             total       used       free     shared    buffers     cached
Mem:         64404      63619        784          0        216      18437
Low:         64404      63619        784

Thank you for the help. I knew this was not making any sense :wink:

I don’t know much about ZFS on Linux, but I would expect it to release memory when it’s needed by an application. So, as nice as it would be to have this as the explanation, I don’t think it is.

Well, after further testing, you are right: it is not the memory. I am now doing another test. With the ignore file list I am adding the files little by little, as follows:

  • Folders and small files first (about 16,000) --> this went OK.
  • Medium files next (ongoing).
  • Big files (200 to 300 GB each) last (pending).

I’ll see if the scan gets stuck at some point and send an update.


Logs from the last test with `STTRACE=events,main,scanner`. RAM is fine (10 GB free) and CPU is high while scanning, but drops to almost nothing once it gets stuck:

*[EFCRJ] 2015/11/19 10:27:37.293053 events.go:144: DEBUG: log 896 FolderScanProgress map[folder:dr_001 current:1075197581291 total:11547897156442]*

*[EFCRJ] 2015/11/19 10:27:37.293086 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:37.670647 gui.go:317: DEBUG: http: GET "/rest/events?since=896": status 200, 160 bytes in 0.30 ms*

*[EFCRJ] 2015/11/19 10:27:39.281759 walk.go:167: DEBUG: Walk /main/dr_001/ [] current progress 1076164368363/11547897156442 (9%)*

*[EFCRJ] 2015/11/19 10:27:39.281883 events.go:144: DEBUG: log 897 FolderScanProgress map[folder:dr_001 current:1076164368363 total:11547897156442]*

*[EFCRJ] 2015/11/19 10:27:39.281915 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:39.744498 gui.go:317: DEBUG: http: GET "/rest/events?since=897": status 200, 159 bytes in 0.60 ms*

*[EFCRJ] 2015/11/19 10:27:41.294261 walk.go:167: DEBUG: Walk /main/dr_001/ [] current progress 1077112936427/11547897156442 (9%)*

*[EFCRJ] 2015/11/19 10:27:41.294383 events.go:144: DEBUG: log 898 FolderScanProgress map[current:1077112936427 total:11547897156442 folder:dr_001]*

*[EFCRJ] 2015/11/19 10:27:41.294416 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:41.698777 gui.go:317: DEBUG: http: GET "/rest/events?since=898": status 200, 160 bytes in 0.35 ms*

*[EFCRJ] 2015/11/19 10:27:41.742967 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 324 bytes in 41.60 ms*

*[EFCRJ] 2015/11/19 10:27:41.743295 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.45 ms*

*[EFCRJ] 2015/11/19 10:27:41.745017 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.18 ms*

*[EFCRJ] 2015/11/19 10:27:43.281809 walk.go:167: DEBUG: Walk /main/dr_001/ [] current progress 1078060193771/11547897156442 (9%)*

*[EFCRJ] 2015/11/19 10:27:43.281918 events.go:144: DEBUG: log 899 FolderScanProgress map[folder:dr_001 current:1078060193771 total:11547897156442]*

*[EFCRJ] 2015/11/19 10:27:43.281952 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:43.282076 gui.go:317: DEBUG: http: GET "/rest/events?since=899": status 200, 160 bytes in 599.92 ms*

*[EFCRJ] 2015/11/19 10:27:45.285618 walk.go:167: DEBUG: Walk /main/dr_001/ [] current progress 1079020951531/11547897156442 (9%)*

*[EFCRJ] 2015/11/19 10:27:45.285700 events.go:144: DEBUG: log 900 FolderScanProgress map[current:1079020951531 total:11547897156442 folder:dr_001]*

*[EFCRJ] 2015/11/19 10:27:45.285727 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:45.690303 gui.go:317: DEBUG: http: GET "/rest/events?since=900": status 200, 160 bytes in 0.27 ms*

*[EFCRJ] 2015/11/19 10:27:47.298311 walk.go:167: DEBUG: Walk /main/dr_001/ [] current progress 1080223406059/11547897156442 (9%)*

*[EFCRJ] 2015/11/19 10:27:47.298374 events.go:144: DEBUG: log 901 FolderScanProgress map[folder:dr_001 current:1080223406059 total:11547897156442]*

*[EFCRJ] 2015/11/19 10:27:47.298431 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:47.673261 gui.go:317: DEBUG: http: GET "/rest/events?since=901": status 200, 160 bytes in 0.35 ms*

*[EFCRJ] 2015/11/19 10:27:49.281679 walk.go:167: DEBUG: Walk /main/dr_001/ [] current progress 1081352329195/11547897156442 (9%)*

*[EFCRJ] 2015/11/19 10:27:49.281756 events.go:144: DEBUG: log 902 FolderScanProgress map[folder:dr_001 current:1081352329195 total:11547897156442]*

*[EFCRJ] 2015/11/19 10:27:49.281787 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:49.682249 gui.go:317: DEBUG: http: GET "/rest/events?since=902": status 200, 160 bytes in 0.31 ms*

*[EFCRJ] 2015/11/19 10:27:51.300419 walk.go:167: DEBUG: Walk /main/dr_001/ [] current progress 1082577459179/11547897156442 (9%)*

*[EFCRJ] 2015/11/19 10:27:51.300487 events.go:144: DEBUG: log 903 FolderScanProgress map[folder:dr_001 current:1082577459179 total:11547897156442]*

Then it stops scanning:
----------

*[EFCRJ] 2015/11/19 10:27:51.300514 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:51.697849 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.24 ms*

*[EFCRJ] 2015/11/19 10:27:51.708471 gui.go:317: DEBUG: http: GET "/rest/events?since=903": status 200, 160 bytes in 1.76 ms*

*[EFCRJ] 2015/11/19 10:27:51.711947 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 324 bytes in 1.88 ms*

*[EFCRJ] 2015/11/19 10:27:51.713415 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 3.79 ms*

*[EFCRJ] 2015/11/19 10:27:52.427311 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:53.940301 events.go:144: DEBUG: log 904 Ping <nil>*

*[EFCRJ] 2015/11/19 10:27:53.940349 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:27:53.940442 gui.go:317: DEBUG: http: GET "/rest/events?since=904": status 200, 84 bytes in 1268.87 ms*

*[EFCRJ] 2015/11/19 10:28:03.678659 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 322 bytes in 2.49 ms*

*[EFCRJ] 2015/11/19 10:28:03.679317 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.42 ms*

*[EFCRJ] 2015/11/19 10:28:03.679557 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.13 ms*

*[EFCRJ] 2015/11/19 10:28:14.677741 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 324 bytes in 3.10 ms*

*[EFCRJ] 2015/11/19 10:28:14.678053 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.16 ms*

*[EFCRJ] 2015/11/19 10:28:14.678189 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.49 ms*

*[EFCRJ] 2015/11/19 10:28:25.679538 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 6.61 ms*

*[EFCRJ] 2015/11/19 10:28:25.680309 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.41 ms*

*[EFCRJ] 2015/11/19 10:28:25.680580 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.14 ms*

*[EFCRJ] 2015/11/19 10:28:36.682778 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 6.34 ms*

*[EFCRJ] 2015/11/19 10:28:36.683123 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.46 ms*

*[EFCRJ] 2015/11/19 10:28:36.683249 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.16 ms*

*[EFCRJ] 2015/11/19 10:28:47.673430 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 3.41 ms*

*[EFCRJ] 2015/11/19 10:28:47.673752 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.25 ms*

*[EFCRJ] 2015/11/19 10:28:47.673811 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 699 bytes in 0.44 ms*

*[EFCRJ] 2015/11/19 10:28:52.427523 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:28:53.940521 events.go:144: DEBUG: log 905 Ping <nil>*

*[EFCRJ] 2015/11/19 10:28:53.940649 events.go:197: DEBUG: poll 1m0s*

*[EFCRJ] 2015/11/19 10:28:53.940817 gui.go:317: DEBUG: http: GET "/rest/events?since=905": status 200, 84 bytes in 58272.50 ms*

*[EFCRJ] 2015/11/19 10:28:57.672219 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.58 ms*

*[EFCRJ] 2015/11/19 10:28:57.675418 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 3.79 ms*

*[EFCRJ] 2015/11/19 10:28:57.675920 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 3.46 ms*

*[EFCRJ] 2015/11/19 10:29:08.672161 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.25 ms*

*[EFCRJ] 2015/11/19 10:29:08.679494 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.45 ms*

*[EFCRJ] 2015/11/19 10:29:08.679718 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 8.24 ms*

*[EFCRJ] 2015/11/19 10:29:18.677947 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 699 bytes in 0.45 ms*

*[EFCRJ] 2015/11/19 10:29:18.678201 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 4.61 ms*

*[EFCRJ] 2015/11/19 10:29:18.678509 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.14 ms*

*[EFCRJ] 2015/11/19 10:29:29.675642 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 322 bytes in 3.88 ms*

*[EFCRJ] 2015/11/19 10:29:29.675864 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.31 ms*

*[EFCRJ] 2015/11/19 10:29:29.675955 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.48 ms*

*[EFCRJ] 2015/11/19 10:29:39.689687 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 4.28 ms*

*[EFCRJ] 2015/11/19 10:29:39.690901 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 0.43 ms*

*[EFCRJ] 2015/11/19 10:29:39.692914 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 0.14 ms*

*[EFCRJ] 2015/11/19 10:29:50.682122 gui.go:317: DEBUG: http: GET "/rest/system/status": status 200, 323 bytes in 3.73 ms*

*[EFCRJ] 2015/11/19 10:29:50.682226 gui.go:317: DEBUG: http: GET "/rest/system/connections": status 200, 700 bytes in 3.76 ms*

*[EFCRJ] 2015/11/19 10:29:50.682419 gui.go:317: DEBUG: http: GET "/rest/system/error": status 200, 16 bytes in 3.74 ms*

Okay. Can you get a stack dump of what’s going on when it’s in the locked state? Find the PID(s) of Syncthing with `pgrep syncthing` or `ps aux | grep syncthing`, pick the highest number (that is not the grep process), then run `kill -QUIT` on the PID you got. Syncthing should exit while dumping a lot of information about what’s going on. Drop that in a pastebin or similar? Or here, for that matter, although our spam filter gets a little trigger-happy over posts that look this automated. :wink:

This is what I got:

SIGQUIT: quit

PC=0x526742

goroutine 37 [syscall]:

syscall.Syscall(0x0, 0x6, 0xc2080b9000, 0x1000, 0x78, 0x1000, 0x0)
        /usr/local/go1.4.3/src/syscall/asm_linux_amd64.s:21 +0x5 fp=0xc208016b68 sp=0xc208016b60

syscall.read(0x6, 0xc2080b9000, 0x1000, 0x1000, 0x78, 0x0, 0x0)
        /usr/local/go1.4.3/src/syscall/zsyscall_linux_amd64.go:867 +0x6e fp=0xc208016bb0 sp=0xc208016b68

syscall.Read(0x6, 0xc2080b9000, 0x1000, 0x1000, 0x78, 0x0, 0x0)
        /usr/local/go1.4.3/src/syscall/syscall_unix.go:136 +0x58 fp=0xc208016bf0 sp=0xc208016bb0

os.(*File).read(0xc20802c0e0, 0xc2080b9000, 0x1000, 0x1000, 0x80, 0x0, 0x0)
        /usr/local/go1.4.3/src/os/file_unix.go:191 +0x5e fp=0xc208016c30 sp=0xc208016bf0

os.(*File).Read(0xc20802c0e0, 0xc2080b9000, 0x1000, 0x1000, 0x7fe22663a220, 0x0, 0x0)
        /usr/local/go1.4.3/src/os/file.go:95 +0x91 fp=0xc208016c88 sp=0xc208016c30

bufio.(*Reader).fill(0xc20805e780)
        /usr/local/go1.4.3/src/bufio/bufio.go:97 +0x1ce fp=0xc208016d30 sp=0xc208016c88

bufio.(*Reader).ReadSlice(0xc20805e780, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/bufio/bufio.go:295 +0x257 fp=0xc208016d78 sp=0xc208016d30

bufio.(*Reader).ReadBytes(0xc20805e780, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/bufio/bufio.go:374 +0xd2 fp=0xc208016e98 sp=0xc208016d78

bufio.(*Reader).ReadString(0xc20805e780, 0xc2080ba90a, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/bufio/bufio.go:414 +0x58 fp=0xc208016ef0 sp=0xc208016e98

main.copyStdout(0x7fe226645f48, 0xc20802c0e0, 0x7fe226645cf0, 0xc20802c008)
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:206 +0x5d fp=0xc208016f90 sp=0xc208016ef0

main.func·012()
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:115 +0x75 fp=0xc208016fe0 sp=0xc208016f90

runtime.goexit()
        /usr/local/go1.4.3/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc208016fe8 sp=0xc208016fe0

created by main.monitorMain
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:117 +0xfd2

goroutine 1 [select, 54 minutes]:

main.monitorMain()
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:126 +0x185c

main.main()
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/main.go:401 +0x2652

goroutine 5 [syscall, 2835 minutes]:

os/signal.loop()
        /usr/local/go1.4.3/src/os/signal/signal_unix.go:21 +0x1f

created by os/signal.init·1
        /usr/local/go1.4.3/src/os/signal/signal_unix.go:27 +0x35

goroutine 7 [chan receive]:

main.trackCPUUsage()
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/gui_unix.go:24 +0xec

created by main.init·3
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/gui_unix.go:17 +0x25

goroutine 38 [semacquire, 54 minutes]:

sync.(*WaitGroup).Wait(0xc20800ac00)
        /usr/local/go1.4.3/src/sync/waitgroup.go:132 +0x169

main.func·013()
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:122 +0x4b

created by main.monitorMain
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:124 +0x10ec

goroutine 36 [syscall, 54 minutes]:

syscall.Syscall(0x0, 0x4, 0xc2080b8000, 0x1000, 0x0, 0x1000, 0x0)
        /usr/local/go1.4.3/src/syscall/asm_linux_amd64.s:21 +0x5

syscall.read(0x4, 0xc2080b8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/syscall/zsyscall_linux_amd64.go:867 +0x6e

syscall.Read(0x4, 0xc2080b8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/syscall/syscall_unix.go:136 +0x58

os.(*File).read(0xc20802c0c0, 0xc2080b8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/os/file_unix.go:191 +0x5e

os.(*File).Read(0xc20802c0c0, 0xc2080b8000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/os/file.go:95 +0x91

bufio.(*Reader).fill(0xc20805e480)
        /usr/local/go1.4.3/src/bufio/bufio.go:97 +0x1ce

bufio.(*Reader).ReadSlice(0xc20805e480, 0x100a, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/bufio/bufio.go:295 +0x257

bufio.(*Reader).ReadBytes(0xc20805e480, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/bufio/bufio.go:374 +0xd2

bufio.(*Reader).ReadString(0xc20805e480, 0xc20802c00a, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go1.4.3/src/bufio/bufio.go:414 +0x58

main.copyStderr(0x7fe226645f48, 0xc20802c0c0, 0x7fe226645cf0, 0xc20802c008)
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:164 +0x69

main.func·011()
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:109 +0x75

created by main.monitorMain
        /home/jenkins/workspace/syncthing-release/src/github.com/syncthing/syncthing/cmd/syncthing/monitor.go:111 +0xed7

rax     0x0

rbx     0xc20801200c

rcx     0xffffffffffffffff

rdx     0x1000

rdi     0x6

rsi     0xc2080b9000

rbp     0x6

rsp     0xc208016b60

r8      0x0

r9      0x0

r10     0x0

r11     0x246

r12     0x0

r13     0x3

r14     0x17

r15     0x10

rip     0x526742

rflags  0x246

cs      0x33

fs      0x0

gs      0x0

That’s from the monitor process, which should have had the lower of the two PIDs. Can you try again? There should be much more output once you hit the right process. :smile:

Here is the log after killing the right process:

syncthing.log (48.2 KB)

Thanks. That shows Syncthing idling, no scanning going on, no apparent lockups. So it seems the scanner has exited. I’m not sure what path it could take to do that without either completing or at least logging an error at the debug level. I’m going to try to reproduce this, but there’s one more thing you could do to help narrow it down a little.

  1. In config.xml, set `<scanProgressIntervalS>-1</scanProgressIntervalS>` for the folder. This isn’t currently possible to do in the GUI, sorry.

  2. Run it as `STTRACE=scanner syncthing | tee somelogfile.txt` and paste that logfile. It’ll be a lot of data, but it’s necessary.

In the meantime I’m going to see if I can mock out the filesystem code so I can have the scanner experience files hundreds of gigs in size without actually having to create any of those…
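(For what it’s worth, one cheap alternative to mocking, on filesystems with sparse-file support, is to create a file and truncate it up to the target size: the scanner then sees, and has to hash, a file of that apparent size without any disk space being used. A minimal Go sketch, with an arbitrary path and size, not anything from Syncthing itself:)

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Create an apparently 300 GiB file that occupies almost no disk space.
	// Reading it back still yields 300 GiB of zeroes, so hashing it takes
	// real time, but no storage is consumed.
	f, err := os.Create("/tmp/huge-sparse-file") // hypothetical test path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := f.Truncate(300 << 30); err != nil { // 300 GiB
		log.Fatal(err)
	}
}
```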

I made the change, restarted Syncthing and this is all I got:

syncthing.log (85.5 KB)

Yeah, no, that doesn’t make us any wiser. I was thinking it could have been memory-related after all: there are a number of parallel hashing routines, by default as many as there are CPU cores in your box, and they each grab one file. Each routine preallocates RAM for the number of blocks it’s going to produce. If you were hitting lots of large files at the same time, that could possibly have exhausted something, but the files don’t look to be that big and there aren’t too many at a time. And again, no errors. I’m confused. :confused:
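For illustration, roughly the pattern being described: one hashing worker per CPU core, each grabbing one file at a time and preallocating the block list for it before hashing. This is a simplified sketch, not Syncthing’s actual scanner code; the 128 KiB block size matches Syncthing’s default of that era, everything else is assumed:

```go
// Illustration only; not Syncthing's actual scanner code.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"runtime"
	"sync"
)

const blockSize = 128 << 10 // 128 KiB, Syncthing's block size at the time

type block struct {
	offset int64
	size   int32
	hash   [sha256.Size]byte
}

// hashFile reads one file block by block. The block slice is preallocated,
// so a 300 GB file reserves room for ~2.3 million entries before hashing starts.
func hashFile(path string) ([]block, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return nil, err
	}
	blocks := make([]block, 0, fi.Size()/blockSize+1) // the up-front allocation

	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	buf := make([]byte, blockSize)
	var offset int64
	for {
		n, err := io.ReadFull(f, buf)
		if n > 0 {
			blocks = append(blocks, block{offset, int32(n), sha256.Sum256(buf[:n])})
			offset += int64(n)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return blocks, nil
		}
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	jobs := make(chan string)
	var wg sync.WaitGroup

	// One hashing worker per CPU core; each grabs one file at a time.
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for path := range jobs {
				blocks, err := hashFile(path)
				if err != nil {
					fmt.Fprintln(os.Stderr, path, err)
					continue
				}
				fmt.Printf("%s: %d blocks\n", path, len(blocks))
			}
		}()
	}

	for _, p := range os.Args[1:] {
		jobs <- p
	}
	close(jobs)
	wg.Wait()
}
```

The `make` call with a capacity of `fi.Size()/blockSize` is the preallocation in question: for a 300 GB file that is about 2.3 million entries reserved before a single byte is hashed, which is why several huge files hitting all workers at once was worth ruling out.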

Starting again. Same process as before for the file-size test:

  • First, ignore the big files (anything over about 1 GB in general).
  • Then add those back and check the scanning process.

I already know there is no problem at all with the small files (about 16,000 of them). There are not that many big files, but we’ll see how it goes.

It still gets stuck when I stop ignoring the big files. I am setting up a storage with only the big files, the same folder depth, but fewer folders. I’ll reply once I finish that test.

I ran a unit test to hash a 1 TiB file, and that worked as expected (but took about an hour and a half). The block list needs about 1/1000 of the file size in RAM, so 1 TiB of files to hash (in parallel) requires about 1 GiB of RAM, and probably about the same space in disk indexes, if not more. I don’t see how this would be a problem for you on a 64 GiB box though…
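To make that ratio concrete, a back-of-the-envelope sketch: with the 128 KiB block size of that era and an assumed in-memory cost of about 128 bytes per block entry (hash plus bookkeeping; a figure chosen to match the 1/1000 rule of thumb above), a 1 TiB file works out to about 1 GiB of block-list RAM:

```go
package main

import "fmt"

func main() {
	const (
		blockSize     = 128 << 10 // 128 KiB per block (Syncthing default at the time)
		bytesPerEntry = 128       // assumed RAM per block entry: 32-byte hash + bookkeeping
		fileSize      = 1 << 40   // 1 TiB
	)
	blocks := fileSize / blockSize
	ramBytes := blocks * bytesPerEntry
	fmt.Printf("%d blocks, ~%d MiB for the block list (~1/%d of the file size)\n",
		blocks, ramBytes>>20, blockSize/bytesPerEntry)
	// Prints: 8388608 blocks, ~1024 MiB for the block list (~1/1024 of the file size)
}
```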

Could it be the folder depth? I am now running just the big files (between 30 GB and 300 GB, out of a total of 14 TB) with a maximum folder depth of 14 subfolders, like:

/main/dr_001/2556/a24b/5b5a/4376/b6e4/9f26/0899/f0f1/102695113-2556a24b-5b5a-4376-b6e4-9f260899f0f1/data/objects/102695113/

I don’t know if this could create any issue, but it’s just a thought…

Well, same problem with fewer files/folders (900 folders and 256 files in total) and 9.7 TB. The biggest file is 321 GB.

Did my last suggestion not have any effect?

No effect at all.

So we can try to produce a more verbose build to see where it stops, or alternatively (not sure if it’s possible) get access to the environment where the issue is experienced.

Are you sure it gets stuck, rather than just taking a longer while?