Sonarr version (exact version): 2.0.0.4916
Mono version (if Sonarr is not running on Windows): 5.0.1.1
OS: CentOS 7
Description of issue: Immediately after an update, Sonarr crashes and the web interface dies. If I stop the systemd unit (service), the web page immediately comes back with a 504 gateway timeout (I use nginx). A manual restart of the service is needed to get Sonarr going again. In the debug logs, this is the last entry before it pukes:
(Debug logs):
mono[31639]: Stacktrace:
mono[31639]: at <0xffffffff>
This is still happening, but I'm not finding any errors except the one occurrence below. The crashes do now appear to happen consistently with the scheduled Refresh Series events.
17-8-6 22:27:28.2|Trace|Owin|SQLite error (5): database is locked
The events continue past this error, but then they suddenly stop and Sonarr quits logging.
Hopefully that's the correct syntax and it will do something.
I'm using systemd as well, and journalctl has logs. Is this still something I should pursue? Running it manually, perhaps? The log file appears to contain the same output that journalctl captures.
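For reference, this is roughly what I've been using to pull the journal around a crash (assuming the unit is called sonarr; substitute whatever the service file is actually named):

```
# Everything the sonarr unit logged in the last hour, including
# anything mono printed to stdout/stderr at crash time
journalctl -u sonarr --since "1 hour ago" --no-pager

# Or follow it live while waiting for the next crash
journalctl -u sonarr -f
```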
Your other logging may already capture that crash, but that may depend on how the service handles the crash (I'm not sure it matters). You could run it outside of systemd (make sure you use the same user) and redirect the output, if that works.
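Something along these lines, for example; the install path and service user are just assumptions for a typical v2 setup (/opt/NzbDrone owned by a dedicated sonarr user), so substitute whatever your install actually uses:

```
# Stop the unit first so only the manual instance is running
sudo systemctl stop sonarr

# Run Sonarr v2 directly under mono as the service user and capture
# stdout/stderr to a console log we can correlate with the trace logs later
sudo -u sonarr mono /opt/NzbDrone/NzbDrone.exe > /tmp/sonarr-console.log 2>&1
```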
Finally… this appears to have crashed almost immediately after starting… I'll run it again as the user and verify the service isn't running as well, to make sure I'm not giving you false information here…
Just-in-time compiled frameworks such as Mono are far more susceptible to small memory corruption, and we've seen it happen in the past.
I'm not saying this is the cause; in fact, I think the chance is small. But it's easily tested by running an 8+ hour memtest (hence 'overnight'), and if we can exclude it from the possibilities, we save ourselves a wild goose chase.
One of the Mono libs, or libmediainfo, or any of a myriad of other libraries that get loaded. It could even be a bug in Mono itself. But first things first.
Also, at some point we do need to get a full trace log, so we know exactly what Sonarr/Mono was doing in the minutes before the crash.
Understand that a 'crash' log is a post-mortem and often insufficient: it'll only tell us how the application died (hit by a blunt object), not what the blunt object was or who swung it.
So we 'detectives' need as much information as possible.
Yup, just zip it, but it needs to go together with the console log file from the same time period because we have to try to correlate the two.
The Sonarr logs are generated by the 'managed' part, so they won't actually contain the native crash. That's the nasty part of these kinds of things.
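As a sketch, assuming the default v2 app-data location and the console log from the manual run above, bundling the two could look like this:

```
# Assumed Sonarr v2 log directory plus the console capture from the manual run;
# adjust both paths to match your install
zip -r sonarr-crash-logs.zip /home/sonarr/.config/NzbDrone/logs /tmp/sonarr-console.log
```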
I let the memory scan run for almost 10 hours; it passed 100%.
I moved the old trace logs from previous runs to a temp folder, and a new one is being written now. Is there anything I could search for in the old logs to assist?
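In the meantime I've been grepping them for the only markers I know of so far (the 'database is locked' error and the mono 'Stacktrace' banner); the path is just the temp folder I moved them to:

```
# Scan the old trace logs for the SQLite lock error and the mono crash banner,
# with a bit of context around each hit
grep -rn -B2 -A5 -e "database is locked" -e "Stacktrace" /tmp/sonarr-old-logs
```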