Sonarr endpoints hang after some amount of uptime

Sonarr version (exact version): 4.0.16.2944
Mono version (if Sonarr is not running on Windows): 6.0.13
OS: ArchLinux x86_64 (no docker)
Debug logs (will capture full set after next repro): momou! Paste
Description of issue: After some amount of uptime, the Sonarr UI gets stuck at the loading spinner (with the funny text). Nothing fixes it other than a restart.

There are no errors in the trace logs, but it became clear that for many endpoints a response was never sent or logged with an HTTP code. I paired requests with their responses (or lack thereof): the series, customFilter, tag, qualityprofile, importlist and system/status URLs never returned (full list: momou! Paste).
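In case anyone wants to repeat the check, here is a minimal sketch of the pairing. The `Req:`/`Res:` line format with a numeric request id is an assumption about the trace-log layout, so adjust the regexes to whatever your log lines actually look like:

```python
import re
import sys
from collections import OrderedDict

# Assumed (hypothetical) trace-log format, adjust to your actual lines:
#   ...|Trace|Http|Req: <id> [GET] /api/v3/series
#   ...|Trace|Http|Res: <id> [GET] /api/v3/series: 200.OK (12 ms)
REQ = re.compile(r"\|Req: (\d+) (\[\w+\] \S+)")
RES = re.compile(r"\|Res: (\d+) ")

pending = OrderedDict()  # request id -> "[METHOD] url"

with open(sys.argv[1], encoding="utf-8", errors="replace") as log:
    for line in log:
        m = REQ.search(line)
        if m:
            pending[m.group(1)] = m.group(2)
            continue
        m = RES.search(line)
        if m:
            pending.pop(m.group(1), None)  # response was logged, forget it

# Whatever is left never had a response logged
for req_id, url in pending.items():
    print(f"no response for request {req_id}: {url}")
```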

I’ve tried backing up my instance and restoring it onto a fresh install, but it still hangs after some time (it can be a day, three days, or sometimes just a few hours).

The thing is, this only started between 4.0.15.2941 and 4.0.16.2944; this same Sonarr instance has been running for years without issue. I will roll back at some point to confirm, but I wanted to leave it running so I can grab another set of trace logs.

Is there any way of inspecting what is happening internally, other than the trace logs? I’ll grab thread stacks with dotnet-stack next time too.
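For reference, a minimal sketch of how that capture could be scripted; it assumes dotnet-stack is installed as a global tool (`dotnet tool install --global dotnet-stack`) and takes the Sonarr PID as an argument:

```python
import subprocess
import sys

pid = sys.argv[1]  # PID of the running Sonarr process

# Capture a stack report for every managed thread
# (assumes `dotnet-stack` is on PATH as a global dotnet tool).
report = subprocess.run(
    ["dotnet-stack", "report", "--process-id", pid],
    capture_output=True, text=True, check=True,
).stdout

with open(f"sonarr-stacks-{pid}.txt", "w", encoding="utf-8") as out:
    out.write(report)

# Quick-and-dirty signal: how many frames mention SQLite?
sqlite_frames = [l for l in report.splitlines() if "sqlite" in l.lower()]
print(f"{len(sqlite_frames)} frames mention SQLite")
for line in sqlite_frames[:20]:
    print(line.strip())
```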

Downgrade SQLite; there is a known issue with SQLite 3.51. So far it has mostly been Arch users who run into it, though the hotio Docker images had a similar issue before a recent change was reverted.

Interesting, thank you. I will try it out with 3.50 and see if things stabilize.

Here are the call stacks for each thread when things stall, and there is indeed a lot of waiting on SQLite: momou! Paste

I grabbed 3.50 from the ALA (Arch Linux Archive) and added sqlite to my IgnorePkg list in pacman.conf; we’ll see how it goes.
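For anyone else on Arch wanting to pin it the same way, the hold itself is just an IgnorePkg entry in /etc/pacman.conf (the downgraded 3.50 package from the ALA has to be installed first, e.g. with `pacman -U <downloaded package file>`):

```
# /etc/pacman.conf
[options]
# keep pacman from upgrading sqlite back to 3.51.x
IgnorePkg = sqlite
```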

It looks completely fixed with 3.50.

I see SQLite 3.51.2 was released on 2026-01-09.

Two changes may be of note:

Improved resistance to database corruption caused by an application breaking Posix advisory locks using close().
Fix an obscure deadlock in the new broken-posix-lock detection logic from the item above.

I will update and observe.


Awesome, please let us know how it goes. I’ve mentioned it on Discord as well, since others there were seeing the same thing.

Early feedback is that it is rock solid. I’ll give it 2-3 days to be 100% sure.


I think we’re good. It’s been stable since the upgrade, and I rarely got more than 12 hours of uptime before.
