"Too many open files" error message, "Unable to open database file"

Sonarr version (exact version): 2.0.0.5344
Mono version (if Sonarr is not running on Windows): 6.8.0.105
OS: CentOS 7 (virtual machine)
Debug logs:

Hastebin's size limit forced me to cut these logs down to the relevant bits. If you'd like to see the full logs, I have them saved and can absolutely provide them via Google Drive/Dropbox/SFTP.

  • Normal info-level logs can be found here.
  • Trace-level logs during the beginning of the issue can be found here.

Description of issue: After 24-48 hours of use, Sonarr appears to hang/crash until the systemd service is restarted. After a restart, Sonarr continues to work without issue for another 24-48 hours.

Reviewing the logs, this appears to be an issue with Sonarr holding open too many file descriptors (FDs). Linux imposes a default soft limit of 1024 open FDs per process, which Sonarr is exceeding. The full output of lsof for the Sonarr PID confirms this; piping it through wc shows we are over the 1024-FD limit:

[root@media logs]# lsof -p 6946 | wc -l
1168
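For reference, you can confirm the limit the running process is actually subject to, and buy time by raising it with a systemd drop-in. This only delays the problem, since a leak will eventually exhaust any limit, and the sonarr unit name here is just what my install uses, so adjust as needed:

[root@media logs]# grep 'open files' /proc/6946/limits
Max open files            1024                 4096                 files
[root@media logs]# systemctl edit sonarr
(in the override that opens, add the two lines below, then save)
[Service]
LimitNOFILE=4096
[root@media logs]# systemctl restart sonarr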

After doing some research on these forums, I found a number of other threads on this issue:

There's also a GitHub issue that describes this as well:

The two prevailing solutions outlined in these threads appear to be as follows:

  1. Use a popular Docker image instead
  2. Downgrade the version of Mono installed (see the sketch after this list)
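If you go the downgrade route on CentOS 7, the rough shape is below. mono-complete is the package name in the mono-project.com repo, but the exact version you can land on depends on what that repo still carries, so treat this as a sketch rather than a recipe:

[root@media logs]# yum downgrade mono-complete
[root@media logs]# yum install yum-plugin-versionlock
[root@media logs]# yum versionlock 'mono-*'

The versionlock step keeps a later yum update from silently pulling the newer Mono back in.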

I’d like to avoid using Docker for this implementation if possible for unrelated reasons. My questions are as follows:

  1. Is downgrading the version of Mono installed the only known way to resolve this issue?
  2. As of the time of this writing, what version of Mono is recommended/stable?

Thank you!

I gave up trying to prevent this from happening, and instead I'm using Monit to restart Sonarr and Radarr when CPU utilization stays above 30% for 5 cycles. Both restart due to high CPU utilization every couple of days, usually within an hour of each other. This band-aid has been keeping things running pretty smoothly for me; the rule I use is sketched below.
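For anyone who wants to replicate this, my Monit rule looks roughly like the following. The "NzbDrone.exe" match pattern (Sonarr v2 runs as mono NzbDrone.exe) and the sonarr systemd unit name are specific to my setup, and Radarr gets an equivalent block:

check process sonarr matching "NzbDrone.exe"
  start program = "/usr/bin/systemctl start sonarr"
  stop program = "/usr/bin/systemctl stop sonarr"
  if cpu > 30% for 5 cycles then restart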

I worked with @Taloth on this issue over Discord, and we determined that the following unbound TCP sockets were causing the issue:

[root@media logs]# lsof -p 6946
COMMAND  PID   USER   FD   TYPE             DEVICE  SIZE/OFF       NODE NAME
<snip>
mono    6946 sonarr   30u  sock                0,7       0t0 3146524492 protocol: TCP
mono    6946 sonarr   31u  sock                0,7       0t0 3151235331 protocol: TCP
mono    6946 sonarr   32u  sock                0,7       0t0 3147332007 protocol: TCP
mono    6946 sonarr   33u  sock                0,7       0t0 3149435548 protocol: TCP
mono    6946 sonarr   34u  sock                0,7       0t0 3152225060 protocol: TCP
mono    6946 sonarr   35u  sock                0,7       0t0 3150426456 protocol: TCP
mono    6946 sonarr   36u  sock                0,7       0t0 3150909089 protocol: TCP
mono    6946 sonarr   37u  sock                0,7       0t0 3159226377 protocol: TCP
mono    6946 sonarr   38u  sock                0,7       0t0 3155326039 protocol: TCP
mono    6946 sonarr   39u  sock                0,7       0t0 3153358171 protocol: TCP
mono    6946 sonarr   40u  sock                0,7       0t0 3154337130 protocol: TCP
mono    6946 sonarr   41u  sock                0,7       0t0 3190987992 protocol: TCP
mono    6946 sonarr   42u  sock                0,7       0t0 3155803271 protocol: TCP
mono    6946 sonarr   43u  sock                0,7       0t0 3156782645 protocol: TCP
mono    6946 sonarr   44u  sock                0,7       0t0 3157745759 protocol: TCP
mono    6946 sonarr   45u  sock                0,7       0t0 3169462787 protocol: TCP
mono    6946 sonarr   46u  sock                0,7       0t0 3158718859 protocol: TCP
mono    6946 sonarr   47u  sock                0,7       0t0 3160053644 protocol: TCP
mono    6946 sonarr   48u  sock                0,7       0t0 3173865440 protocol: TCP
mono    6946 sonarr   49u  sock                0,7       0t0 3160692910 protocol: TCP
mono    6946 sonarr   50u  sock                0,7       0t0 3161980211 protocol: TCP
mono    6946 sonarr   51u  sock                0,7       0t0 3161662861 protocol: TCP
mono    6946 sonarr   52u  sock                0,7       0t0 3164104143 protocol: TCP
mono    6946 sonarr   53u  sock                0,7       0t0 3162621384 protocol: TCP
mono    6946 sonarr   54u  sock                0,7       0t0 3167992907 protocol: TCP

As the above output shows, these TCP sockets are open but not bound to a specific IP/port combination. Over time they accumulated until the 1024-FD limit was hit.
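If you want to watch the leak in progress, counting just the unbound-socket entries (the "protocol: TCP" lines above) makes the growth obvious; substitute your own Sonarr PID:

[root@media logs]# lsof -p 6946 | grep -c 'protocol: TCP'
[root@media logs]# watch -n 60 "lsof -p 6946 | grep -c 'protocol: TCP'"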

We determined that the root cause is having the Sonarr web UI open in a Chrome tab (other web browsers will likely trigger it as well). The session keep-alive mechanism Chrome uses causes these sockets to be opened, but they are never closed. This is not a bug in Sonarr itself, but in Mono, the open-source .NET runtime that Sonarr uses on Linux.

The workaround for this issue is to not keep a browser tab pointed at Sonarr open at all times. I was able to resolve the issue outright by upgrading from Sonarr version 2.0.0.5344 to v3 (specifically, Sonarr version 3.0.3.741); on that version I have not been able to reproduce the problem.

As best I can tell, this same issue affects Radarr as well. I have not yet implemented a fix for Radarr, although the workaround is in place there and appears to be working just fine.
