Fatal Lockup on Ubuntu - 100% CPU usage and "address already in use"

Sonarr version (exact version): 2.0.0.5163
Mono version (if Sonarr is not running on Windows): 5.4.1.6
OS: Ubuntu 16.04 (Xenial)
Debug logs: https://pastebin.com/Jujj9VGc
Description of issue: I don’t know why, but Sonarr locks up consuming 100% of the CPU, and trying to stop/restart it from the command line with systemctl generates fatal errors about the address/port already being in use. That shouldn’t be possible, because Sonarr runs in a Linux container on Proxmox and has the container (and its fixed IP) all to itself.

I even went so far as to completely wipe the Linux container and then restore the whole container from a backup, but within 15 minutes Sonarr was frozen again while trying to parse files, and stopping/restarting resulted in the same failure to start.

No recent changes to speak of, other than the Sonarr update 10 days ago.

Throw on trace logs and check whether mediainfo was recently called. If you’re running the official Ubuntu repo version of mediainfo, it could be getting stuck; best to upgrade to the latest mediainfo release from the mediainfo repo.

It could be something different as well, but trace logs will give the most information into what Sonarr was doing.

mediainfo is already the newest version (0.7.82-1)

Is there a CLI command to enable trace logging? I can’t even get into the GUI, because it locks up immediately after systemctl start sonarr.service.

You can set it in the config XML file.
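For reference, Sonarr v2 keeps its settings in a config.xml next to the database (the /root/.config/NzbDrone/ path appears in the migration log lines below). A minimal sketch of flipping the log level on disk, demonstrated on a throwaway copy so it is safe to run anywhere; the element name and path are assumptions to verify against your own install:

```shell
# Demo on a throwaway file; point CFG at the real file instead,
# e.g. /root/.config/NzbDrone/config.xml per the migration log paths.
CFG=$(mktemp)
printf '<Config>\n  <LogLevel>Info</LogLevel>\n  <Port>8989</Port>\n</Config>\n' > "$CFG"

# Switch the <LogLevel> element to Trace; the same sed works on the real file
# (stop Sonarr first so it does not overwrite the change on shutdown).
sed -i 's|<LogLevel>[^<]*</LogLevel>|<LogLevel>Trace</LogLevel>|' "$CFG"
grep -o '<LogLevel>[^<]*</LogLevel>' "$CFG"   # → <LogLevel>Trace</LogLevel>
```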


This was all that appeared in the trace.log:

18-3-29 00:45:45.2|Debug|ProcessProvider|Found 0 processes with the name: NzbDrone.Console
18-3-29 00:45:45.3|Debug|ProcessProvider|Found 1 processes with the name: NzbDrone
18-3-29 00:45:45.3|Debug|ProcessProvider| - [19208] NzbDrone
18-3-29 00:45:45.3|Info|Router|Application mode: Interactive
18-3-29 00:45:45.3|Debug|Router|Tray selected
18-3-29 00:45:45.5|Info|MigrationLogger|*** Migrating data source=/root/.config/NzbDrone/nzbdrone.db;cache size=-10485760;datetimekind=Utc;journal mode=Wal;pooling=True;version=3 ***
18-3-29 00:45:45.7|Debug|MigrationLogger|Took: 00:00:00.1943686
18-3-29 00:45:45.7|Info|MigrationLogger|*** Migrating data source=/root/.config/NzbDrone/logs.db;cache size=-10485760;datetimekind=Utc;journal mode=Wal;pooling=True;version=3 ***
18-3-29 00:45:45.7|Debug|MigrationLogger|Took: 00:00:00.0241304
18-3-29 00:45:45.7|Info|OwinHostController|Listening on the following URLs:
18-3-29 00:45:45.7|Info|OwinHostController|  http://*:8989/
18-3-29 00:45:45.8|Debug|OwinAppFactory|Attaching NzbDroneVersionMiddleWare to host
18-3-29 00:45:45.8|Debug|OwinAppFactory|Attaching SignalRMiddleWare to host
18-3-29 00:45:45.8|Debug|OwinAppFactory|Attaching NancyMiddleWare to host
18-3-29 00:45:45.9|Info|NancyBootstrapper|Starting Web Server
18-3-29 00:45:46.9|Fatal|ConsoleApp|Address already in use. This can happen if another instance of Sonarr is already running another application is using the same port (default: 8989) or the user has insufficient permissions

But like I said, it’s in its own container with a static IP. That’s the only thing running in that container.
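One way to check whether the port is genuinely held is with standard Ubuntu tools (this is a generic sketch, not a Sonarr-specific procedure): a hung NzbDrone process that systemctl failed to kill would still hold the socket, which would explain both the 100% CPU and the bind error.

```shell
# Is anything actually listening on 8989? (ss ships with iproute2)
ss -tln | grep ':8989' || echo "nothing listening on 8989"

# Did systemctl stop really kill the process? A spinning mono process
# that survived the stop would keep the socket open.
pgrep -a mono || echo "no mono processes running"
# If one survives: kill -9 <pid> before restarting the service.
```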

Anything?

I just deleted the entire Linux container, started from scratch with a new container (Ubuntu 16.04), re-installed mono, re-installed Sonarr, and still ended up with the exact same trace log and fatal error above.



This is what happens after I’ve already run systemctl stop sonarr, followed by trying to stop the Linux container that Sonarr is in.

OMG, I was thinking of doing exactly what you’ve already done, and I was 99% sure it would solve the problem! Thanks for sharing, and please come back with updates if you find a solution!

Nope, nothing yet. I’m pretty much at my wit’s end. I just bought a new RPi3 to bypass Proxmox altogether and see if I have any better luck with that.

Radarr has the exact same cloned environment that Sonarr had, and Radarr is running great.

Because my completed-downloads folder also lives on the Proxmox host, migrating Sonarr elsewhere is proving very difficult, since it needs access to that folder.

Hopefully some folks can come up with some other ideas. I see no good reason why this behavior would be occurring.

best to upgrade to the latest mediainfo release from the mediainfo repo.

I just realized I completely missed this part and was using apt-get like a dummy. I got mediainfo 18.03.1-1 installed and restarted Sonarr. The disk scan service and RSS sync were noticeably faster, but ultimately it still locked up and froze within minutes. Here are the last few lines of the trace log:

18-4-4 03:39:12.1|Info|RefreshEpisodeService|Finished episode refresh for series: [76177][Saturday Night Live].
18-4-4 03:39:12.1|Debug|RefreshSeriesService|Finished series refresh for Saturday Night Live
18-4-4 03:39:12.1|Trace|EventAggregator|Publishing SeriesUpdatedEvent
18-4-4 03:39:12.1|Trace|EventAggregator|SeriesUpdatedEvent -> DiskScanService
18-4-4 03:39:12.5|Info|DiskScanService|Scanning disk for Saturday Night Live
18-4-4 03:39:12.5|Trace|EventAggregator|Publishing CommandUpdatedEvent
18-4-4 03:39:12.5|Trace|EventAggregator|CommandUpdatedEvent -> CommandModule
18-4-4 03:39:12.5|Trace|EventAggregator|CommandUpdatedEvent <- CommandModule
18-4-4 03:39:12.5|Debug|DiskScanService|Scanning '/media/TV Series/Saturday Night Live' for video files
18-4-4 03:39:14.6|Trace|Scheduler|Pending Tasks: 1
18-4-4 03:39:14.6|Trace|CommandQueueManager|Publishing RefreshSeries
18-4-4 03:39:14.6|Trace|CommandQueueManager|Checking if command is queued or started: RefreshSeries
18-4-4 03:39:14.6|Trace|CommandQueueManager|Command is already in progress: RefreshSeries
18-4-4 03:39:17.2|Trace|DiskScanService|93 files were found in /media/TV Series/Saturday Night Live

Is anything additional logged to the console (standard output/error)? If mono or Sonarr is crashing it may not be able to write to the log file before it falls down, but the standard output usually catches something additional.

Unfortunately no, the above paste is actually from the console output (which I assumed was the same as the trace log, but I guess they could be different). But I couldn’t even navigate or Ctrl+C out of the console anyway.

Under normal circumstances they are, but this output looks different and doesn’t have timestamps.

That’s definitely not console log output; it looks more like the sonarr.trace.txt file.
You should really get the console log in this particular case.

Also, you could try this: Stop Sonarr and change Radarr’s port to 8989 to check if you get the same port in use error. Won’t fix anything but might be useful to know.

You’re using LXC, which I know little to nothing about. The network configuration and port sharing might be completely different.
Get that port-in-use error out of the way first.
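A direct way to separate “port in use” from everything else, independent of both Sonarr and Radarr, is to try binding 8989 by hand. A generic sketch (python3 ships with Ubuntu 16.04; this is not a Sonarr tool):

```shell
python3 - <<'EOF'
import socket

# Try to bind the port Sonarr wants. An OSError here means something in
# the container (or its network setup) really does hold the port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 8989))
    print("port 8989 is free")
except OSError as e:
    print("bind failed:", e)
finally:
    s.close()
EOF
```

Run it with Sonarr stopped; if it reports the port free and Sonarr still throws the bind error, the problem is likely inside mono rather than the network.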

Sonarr version: 2.0.0.5163
Mono version: 5.10.1.20
Mediainfo version: 18.03.1
OS: Ubuntu 16.04 LTS via LXC on Proxmox Host
Debug logs:

  1. Output of mono --trace=N:ConsoleDriver /opt/NzbDrone/NzbDrone.exe : here (I don’t feel like that worked correctly, please let me know a better command to run if you need different info)
  2. Last available contents of sonarr.trace.txt prior to freeze here

Description of issue:
I previously posted about this issue and wasn’t able to get a resolution. Since then (for reasons not exclusively related to sonarr) I have tried the following:

  • Replaced the physical drives with new enterprise-grade SSDs
  • Re-installed the entire underlying Proxmox OS
  • Re-installed mono, sonarr, and got the latest mediainfo package from their official repository
  • Deleted my nzbdrone.db file and let it rebuild, to rule out a database corruption issue
  • Created a full blown Ubuntu 16.04 VM to rule out LXC issues with proxmox
  • Downgraded from mono 5.10 to mono 5.8
  • Used a Debian 9 LXC instead of Ubuntu, along with Mono v5.12 instead of 5.10

And yet, here I am, still in the exact same situation, with all of the above efforts ending in the same lock-up behavior. It typically seems to happen while MediaInfo is parsing files, which is what led me to upgrade to the version from their repository in the first place.
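When the lock-up happens again, the kernel’s own view of the process can show whether it is spinning or blocked, without needing Sonarr’s logs at all. A sketch, demonstrated on the current shell’s PID so it is safe to copy; substitute a pgrep for the real hung process:

```shell
pid=$$   # demo on this shell; for the real case: pid=$(pgrep -of NzbDrone)

# R (running) on a hung process suggests a busy loop; S/D suggests it is
# blocked waiting on something (I/O, a lock, a child process).
grep -E '^(State|Threads)' /proc/$pid/status

# With ptrace allowed inside the container, strace can then show the
# syscall it is stuck in (futex loops are a common mono symptom):
#   strace -f -p "$pid" -e trace=futex
```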

Move to this thread, since it’s the same issue and hasn’t been closed due to inactivity.

  1. Capture the standard output/error to a file via redirection, something like so: https://askubuntu.com/a/625230/621585 with trace logging enabled

Thanks for the reply and pointing me in the right direction.

I’ve grown to learn/assume (correct me if I’m wrong) that “trace logging” can really mean two different things:

  1. trace logging of sonarr (accomplished in the .xml config)
  2. trace logging of mono (accomplished in CLI)

In this case, I’m assuming we’re interested in what mono itself is doing, so I ran this command:

mono --trace /opt/NzbDrone/NzbDrone.exe &> /var/log/sonarr_output.log

…but it generated a 1.8GB file in under 2 minutes. Is there a better filter command that would be more useful in whatever we’re hoping to find in those outputs?

Mono trace logging is very heavy-handed and rarely what we need (there are cases where it is, but we’d be explicit about it).

The mono --trace flag is not what we’re looking for here; we just need Sonarr’s trace logs, with the console output redirected to a file:

mono --debug /opt/NzbDrone/NzbDrone.exe &> /var/log/sonarr_output.log (with the --debug flag for extra information).

Fortunately/unfortunately, Sonarr ran for several hours this time without crashing, which was long enough to make the sonarr_output.log file unwieldy, and I eventually had to delete it, thinking a new one would be started in its place. Unfortunately it wasn’t, so this time I only got the trace log and not the console output.

18-4-11 21:28:34.1|Debug|DiskTransferService|Move [/opt/downloader/completed/tv/Call.the.Midwife.S05E07.1080p.WEB-DL.AAC2.0.H.264-ESQ/5pSFWJ0QywhDYwa06Xq2HoPp9AtMMmWW1D0icdcoUHDGj.mkv] > [/media/TV Series/Call the Midwife/Season 5/Call the Midwife - S05E07 - Episode 7 WEBDL-1080p.mkv]
18-4-11 21:28:34.2|Trace|SymbolicLinkResolver|Checking path /media/TV Series/Call the Midwife/Season 5/Call the Midwife - S05E07 - Episode 7 WEBDL-1080p.mkv for symlink returned error ENOENT, assuming it's not a symlink.
18-4-11 21:28:34.2|Trace|DiskTransferService|Attempting to move hardlinked backup.
18-4-11 21:29:02.8|Trace|Scheduler|Pending Tasks: 2
18-4-11 21:29:02.8|Trace|CommandQueueManager|Publishing CheckForFinishedDownload
18-4-11 21:29:02.8|Trace|CommandQueueManager|Checking if command is queued or started: CheckForFinishedDownload
18-4-11 21:29:02.8|Trace|CommandQueueManager|Command is already in progress: CheckForFinishedDownload
18-4-11 21:29:02.8|Trace|CommandQueueManager|Publishing DownloadedEpisodesScan
18-4-11 21:29:02.8|Trace|CommandQueueManager|Checking if command is queued or started: DownloadedEpisodesScan
18-4-11 21:29:02.8|Trace|CommandQueueManager|Inserting new command: DownloadedEpisodesScan
18-4-11 21:29:02.9|Trace|CommandExecutor|DownloadedEpisodesScanCommand -> DownloadedEpisodesCommandService
18-4-11 21:29:02.9|Trace|CommandQueueManager|Marking command as started: DownloadedEpisodesScan
18-4-11 21:29:02.9|Trace|ConfigService|Using default config value for 'downloadedepisodesfolder' defaultValue:''
18-4-11 21:29:02.9|Trace|DownloadedEpisodesCommandService|Drone Factory folder is not configured
18-4-11 21:29:02.9|Trace|CommandQueueManager|Updating command status
18-4-11 21:29:02.9|Trace|EventAggregator|Publishing CommandExecutedEvent
18-4-11 21:29:02.9|Trace|EventAggregator|CommandExecutedEvent -> TaskManager
18-4-11 21:29:02.9|Trace|TaskManager|Updating last run time for: NzbDrone.Core.MediaFiles.Commands.DownloadedEpisodesScanCommand
18-4-11 21:29:02.9|Trace|EventAggregator|CommandExecutedEvent <- TaskManager
18-4-11 21:29:02.9|Trace|EventAggregator|CommandExecutedEvent -> TaskModule
18-4-11 21:29:02.9|Trace|EventAggregator|CommandExecutedEvent <- TaskModule
18-4-11 21:29:02.9|Trace|CommandExecutor|DownloadedEpisodesScanCommand <- DownloadedEpisodesCommandService [00:00:00.0306390]

I’ll run it again to see if I can get the console output this time.

Is it okay if I split the output into separate stdout and stderr files to help cut down on file size? And is there an argument I can include in the command to generate new log files in a rotating fashion?
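For reference, splitting the streams is plain shell redirection, and a crude rotation can be had by piping through split. A sketch using a stand-in command so it is safe to run anywhere; swap demo for the real mono invocation:

```shell
# Stand-in for: mono --debug /opt/NzbDrone/NzbDrone.exe
demo() { echo "normal output"; echo "error output" >&2; }

# Separate files for stdout and stderr:
demo > stdout.log 2> stderr.log

# Or merge both streams and rotate into fixed-size chunks
# (creates sonarr_output.00, sonarr_output.01, ... as each fills):
demo 2>&1 | split -d -b 100M - sonarr_output.

cat stdout.log   # → normal output
cat stderr.log   # → error output
```

Note that split only starts a new chunk as data arrives, so a hang mid-chunk still leaves a partial file; a tool like rotatelogs (from apache2-utils) does time-based rotation if that matters.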