Panic: leveldb/table: corruption on data-block

My best guess at this point is some sort of hardware or system problem, to be honest. Someone else was running funky software that caches writes to SSDs, which apparently caused problems. Perhaps an antivirus or some other tool does something similar. (The index I got in a PM was fine on my computer.)

No antivirus, clean install of Windows 8.1 on a new SSD. No services or background processes except Syncthing itself and Plex Media Server.

Can you run this test program? It's by the database author, and it exercises a lot of parallel reads and writes against the database. In practice this tests two things: the code for bugs, and the hardware for problems. I'm running it now on a few machines and haven't seen any issues so far. The database does checksum all its data, so memory problems will show up as the error in the topic. I'd be curious to see how it goes on your machine.

You should probably run it both as is and with -enableblockcache -enablebufferpool -enablecompression to better simulate what Syncthing does. In either case, let it run for a day or so, or until it exits with an error…

It is interesting that I am getting this error frequently on ARM Linux. This is probably the 4th time in a couple of months. I had to delete the database to start Syncthing properly. The worst part is that you don't know when this happens, so Syncthing fails in the background and you never find out about it.

Also, deleting the index is not a sound solution, for a couple of reasons. The biggest issue is that re-indexing is very CPU intensive, so the battery drain is not acceptable on any mobile device. The second is that deleting the index will confuse the sync on both ends.

Why can't Syncthing back this up properly, so it can use the backup when there is an error?

@kahun so is this a problem on the mobile or on your ARM Linux box (a Raspberry Pi?)?

Unfortunately this isn't 100% reproducible. At this point, all we can do is make an archive of the index folder as soon as possible after Syncthing throws the panic, but before restarting Syncthing, since restarting might repair or truncate the database. This archive can then, perhaps, be useful for analysis.
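For reference, archiving the index right after a panic can look like the following. The index location shown is an assumption (a Linux default); it varies by platform and Syncthing version, so check your own configuration directory first, and make sure Syncthing is not running:

```shell
# Archive the index directory after a panic, before restarting Syncthing.
# INDEX_DIR is an assumption: a typical Linux config location.
INDEX_DIR="${INDEX_DIR:-$HOME/.config/syncthing/index}"
if [ -d "$INDEX_DIR" ]; then
    tar czf "index-panic-$(date +%Y%m%d-%H%M%S).tar.gz" \
        -C "$(dirname "$INDEX_DIR")" "$(basename "$INDEX_DIR")"
fi
```

The `-C` flag makes the archive contain just the `index` directory rather than the full absolute path, which makes it easier to unpack elsewhere for analysis.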

It is ARM Linux on Android via chroot. I think I have had it on another, similarly set up device. I am not sure about the x64 devices; I only have two of those, but more Android devices.

Got some fresh info… There seem to be two new variables here: resume from sleep, and SyncTrayzor.

I had another panic just now; the system had gone to sleep. On wake, SyncTrayzor crashed. I tried restarting, and it crashed again.

This is the SyncTrayzor log after the 1st crash, when the host woke up:
This is the SyncTrayzor log after the 2nd crash:
This is the corresponding Syncthing log, including the panic:

After all of that, SyncTrayzor starts but crashes immediately. Syncthing (without SyncTrayzor) starts, chews a bit on some .ldb files, and panics time after time.

Edit: after this last crash, Syncthing can't repair the database. I'm reverting to an older copy of the index in order to avoid having to re-index 4 TB of files, which would take half a week :frowning:
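Reverting to an older copy like this is the restore half of the archiving step above. A sketch, with Syncthing stopped first; the paths and the archive name are assumptions for a Linux setup:

```shell
# Replace a corrupted index with a previously archived copy.
# CONFIG_DIR and BACKUP are assumptions; adjust for your own setup.
CONFIG_DIR="${CONFIG_DIR:-$HOME/.config/syncthing}"
BACKUP="${BACKUP:-index-panic-20150517.tar.gz}"   # hypothetical archive name
if [ -f "$BACKUP" ] && [ -d "$CONFIG_DIR" ]; then
    rm -rf "$CONFIG_DIR/index"
    tar xzf "$BACKUP" -C "$CONFIG_DIR"
fi
```

Note the restored index is older than the folders on disk, so Syncthing will still need to rescan to catch up, but that is much cheaper than hashing everything from scratch.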

Mind opening a SyncTrayzor issue with that log?

Edit: Actually I think I’ve already got an issue for that. From an initial read, it looks like Syncthing’s crashing out, and SyncTrayzor can’t handle it.

There’s not a lot else I can do there except be more graceful.

If SyncTrayzor is a factor in causing the corruption, I think it could only be because it occasionally tells Syncthing that a file has changed. Other than that, it doesn't have anything to do with Syncthing and its database.

Discussion is in

It looks like SyncTrayzor isn't at fault: it's giving Syncthing 10 minutes to start, and Syncthing isn't responding within that time.


Here is a panic log from a recent corruption incident:

Panic at 2015-05-17T17:46:00-05:00
panic: leveldb/table: corruption on data-block (pos=310114): checksum mismatch, want=0x748d7ef2 got=0x67140e7a [file=000107.ldb]

goroutine 1 [running]:, 0x10afe448, 0x4, 0x8, 0x1213e320, 0x20, 0x20, 0x0, 0x0, 0x0, ...)
	/go/src/ +0xf8c, 0x10afe448, 0x4, 0x8, 0x1213e320, 0x20, 0x20, 0x0, 0x0, 0x0, ...)
	/go/src/ +0x7c*FileSet).Replace(0x10dad300, 0x4b4f88eb, 0x74abdbbc, 0xeb63e937, 0x6917a53a, 0x775ba0d3, 0x40dae28d, 0x1370dd53, 0xd80b10b6, 0x0, ...)
	/go/src/ +0x364*Model).Index(0x10c853b0, 0x4b4f88eb, 0x74abdbbc, 0xeb63e937, 0x6917a53a, 0x775ba0d3, 0x40dae28d, 0x1370dd53, 0xd80b10b6, 0x10a0ae60, ...)
	/go/src/ +0xc90
	/go/src/ +0x1dd0
	/go/src/ +0x1c98

goroutine 5 [syscall]:
	/usr/local/go/src/os/signal/signal_unix.go:21 +0x1c
created by os/signal.init?1
	/usr/local/go/src/os/signal/signal_unix.go:27 +0x40

goroutine 7 [chan receive]:
	/go/src/ +0x120
created by main.init?2
	/go/src/ +0x34

goroutine 9 [select]:*Supervisor).Serve(0x10adc180)
	/go/src/ +0xd60
created by*Supervisor).ServeBackground
	/go/src/ +0x3c

goroutine 10 [select]:
	/go/src/ +0x248
	/go/src/ +0x114
created by*Supervisor).runService
	/go/src/ +0x140

goroutine 11 [chan receive]:*Wrapper).Serve(0x10a5c400)
	/go/src/ +0x54
created by
	/go/src/ +0x2ac

goroutine 12 [select]:*BufferPool).drain(0x10a0c2c0)
	/go/src/ +0x1d4
created by
	/go/src/ +0x23c

goroutine 13 [select]:*DB).compactionError(0x10ab4620)
	/go/src/ +0x254
created by
	/go/src/ +0x82c

goroutine 14 [select]:*DB).mpoolDrain(0x10ab4620)
	/go/src/ +0x124
created by
	/go/src/ +0x854

goroutine 20 [select]:*Matcher).clean(0x11194ba0, 0x61714000, 0x68c)
	/go/src/ +0x1e4
created by
	/go/src/ +0x1a0

goroutine 16 [select]:*DB).mCompaction(0x10ab4620)
	/go/src/ +0x248
created by
	/go/src/ +0x8d4

goroutine 17 [select]:*DB).jWriter(0x10ab4620)
	/go/src/ +0x17c
created by
	/go/src/ +0x8fc

goroutine 19 [select]:*ProgressEmitter).Serve(0x10dacd40)
	/go/src/ +0x73c
created by
	/go/src/ +0x8c0

goroutine 33 [IO wait]:
net.(*pollDesc).Wait(0x11a6cdf8, 0x72, 0x0, 0x0)
	/usr/local/go/src/net/fd_poll_runtime.go:84 +0x3c
net.(*pollDesc).WaitRead(0x11a6cdf8, 0x0, 0x0)
	/usr/local/go/src/net/fd_poll_runtime.go:89 +0x38
net.(*netFD).accept(0x11a6cdc0, 0x0, 0xb6f5dcb8, 0x10a0a61c)
	/usr/local/go/src/net/fd_unix.go:419 +0x390
net.(*TCPListener).AcceptTCP(0x10a1c8d0, 0x11a6e03c, 0x0, 0x0)
	/usr/local/go/src/net/tcpsock_posix.go:234 +0x50
net.(*TCPListener).Accept(0x10a1c8d0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/tcpsock_posix.go:244 +0x3c
main.(*DowngradingListener).Accept(0x10afe4b0, 0x0, 0x0, 0x0, 0x0)
	/go/src/ +0x64
net/http.(*Server).Serve(0x11a720c0, 0xb6f7c968, 0x10afe4b0, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:1728 +0x98
	/go/src/ +0x183c
	/go/src/ +0x114
created by*Supervisor).runService
	/go/src/ +0x140

goroutine 34 [select]:*Subscription).Poll(0x10a0a530, 0xf8475800, 0xd, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/ +0x2a8*BufferedSubscription).pollingLoop(0x11a7c000)
	/go/src/ +0x40
created by
	/go/src/ +0x2a8

goroutine 35 [select]:*Supervisor).Serve(0x10e11100)
	/go/src/ +0xd60
created by*Supervisor).ServeBackground
	/go/src/ +0x3c

goroutine 36 [select]:
	/go/src/ +0x510
	/go/src/ +0x24
	/go/src/ +0x24
	/go/src/ +0x114
created by*Supervisor).runService
	/go/src/ +0x140

goroutine 37 [select]:
	/go/src/ +0x290
	/go/src/ +0x24
	/go/src/ +0x24
	/go/src/ +0x114
created by*Supervisor).runService
	/go/src/ +0x140
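The "checksum mismatch, want=… got=…" line at the top of this trace is the database verifying a per-block checksum as it reads, which is how a single flipped bit on disk or in RAM turns into this panic. A minimal sketch of the idea using Go's standard library (goleveldb uses CRC32-C plus an extra masking step before storing the value, which is omitted here; `blockChecksum` is a hypothetical name):

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// blockChecksum computes a CRC32-C (Castagnoli) checksum over a data block,
// the same polynomial family LevelDB uses for its per-block checksums.
// (Hypothetical helper; real goleveldb also masks the CRC before storing it.)
func blockChecksum(block []byte) uint32 {
	return crc32.Checksum(block, crc32.MakeTable(crc32.Castagnoli))
}

func main() {
	block := []byte("some data block contents")
	want := blockChecksum(block) // stored alongside the block on disk

	// Simulate a single flipped bit, e.g. from bad RAM or a flaky disk.
	block[0] ^= 0x01
	got := blockChecksum(block)

	if got != want {
		// This is the condition behind the panic in the log above.
		fmt.Printf("corruption on data-block: checksum mismatch, want=0x%08x got=0x%08x\n", want, got)
	}
}
```

The point is that the panic is the checksum working as intended: the database detected that what it read back is not what it wrote, which is why bad memory or a misbehaving write cache shows up as exactly this error.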

I moved 20 posts to a new topic: Beta test of Syncthing build with alternate database backend (“boltdb”)