Database is locked, clogging up system

Sonarr version (exact version): 2.0.0.4689
Mono version (if Sonarr is not running on Windows): 4.2.3
OS: Ubuntu 16.04
Debug logs:
sonarr.debug: https://pastebin.com/J8CzGTsA
sonarr.trace: https://pastebin.com/nUU9zmR5
sonar.txt: https://pastebin.com/48v9umCY
Description of issue:
Something changed overnight and Sonarr now throws critical errors when accessing the database; for some reason it constantly reports that the database is locked. I haven’t changed anything in Sonarr’s settings, only manually added some media, which it always picked up after a refresh of the disk files without any trouble. Until today, when the database troubles arose.

I have no clue how to resolve this anymore. I’ve tried removing Sonarr, including all its config files in the ~/.config folder, checking my mono install, checking my mysql install, and reinstalling those services. I also tried reinstalling and adding shows one by one to see what causes the issue, but all to no avail. I’m really in the dark as to how such a thing could have happened. Sonarr always runs under its own user, program data is saved in its own folder, and I haven’t done anything out of the ordinary during installation.

The problem is that this error is causing 100% load on one of my CPU threads and it doesn’t shut down automatically. It sits at a continuous 100% I/O wait, taking up the entire core. Only manually stopping the service from the command line eventually frees up my CPU.

Any help would be greatly appreciated!

Sonarr uses SQLite, not MySQL. Which version of SQLite is installed? Was it updated recently?
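
(If you’re unsure, here’s a quick sketch that asks the system libsqlite3 directly from Python — it assumes mono resolves the same shared library, which is typical on Linux:)

```python
import ctypes, ctypes.util

# Load the system libsqlite3 -- typically the same shared library mono binds to
lib = ctypes.CDLL(ctypes.util.find_library("sqlite3"))
lib.sqlite3_libversion.restype = ctypes.c_char_p
print(lib.sqlite3_libversion().decode())  # e.g. "3.11.0" on Ubuntu 16.04
```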

The issues appear immediately?

It’s possible that the disk itself is having issues, which causes SQLite to treat the DB as locked.

If you redirect Sonarr’s configuration to another volume do you still have issues? (You’d need to use /data=/path/to/config)

Meant to say sqlite, my bad. It’s running sqlite 3.11.0

I’m using a mounted cloud drive, but I never experienced any issues until now. The mount has worked perfectly before and is set up in a way that Sonarr can correctly analyze it. So I’d be surprised if the drive were the issue; it has never failed with Sonarr before, and it would be strange for it to just fail without any reason. No changes have been made to the mount or to the way Sonarr uses the drive, just the same old settings that always worked like a charm.

EDIT: I’ve found a new error, an I/O write error, but I’m fairly sure that’s because the moment the ‘refresh disks’ step starts running, it takes up an entire CPU core.
Pastebin for error: https://pastebin.com/gQkZtYzX
EDIT2: After further research, it appears I can ignore the above-mentioned error.

EDIT: After further investigation, it seems that Sonarr really has an issue with importing series from a mounted drive. I’ve done numerous fresh installs, and as soon as I point Sonarr at a mounted drive to import already downloaded series, it immediately jumps to 100% I/O wait on one CPU core. Could it be that the sonarr user doesn’t have the correct permissions to access the cloud drive? But wouldn’t the logs reflect such an issue then? Perhaps I can run Sonarr as root to try importing my existing series and see if that helps. Adding non-existing shows to Sonarr works without issues; it’s the importing of existing ones that makes it go crazy.
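
A quick way to rule out the permissions theory, as a sketch with a placeholder mount point (run it as the sonarr user, e.g. sudo -u sonarr python3 probe.py):

```python
import os

path = "/mnt/cloud/tv"  # placeholder: wherever the FUSE mount lives

# os.access checks against the *current* user, so run this as the sonarr user
print("read: ", os.access(path, os.R_OK))
print("write:", os.access(path, os.W_OK))
print("exec: ", os.access(path, os.X_OK))  # needed to traverse directories

# Listing a few entries exercises the mount the way a scan would
for name in sorted(os.listdir(path))[:5]:
    print(" ", name)
```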

The “|Trace|Owin|SQLite error (5): database is locked” error is without a doubt related to my high I/O results.

ATTEMPT: So I tried running the Sonarr instance as root and saw a much cleaner trace log, although “database is locked” errors still persisted. My I/O wait has now gone from a continuous 100% to an interrupted stream of 100% I/O. I think in the end Sonarr was able to handle everything correctly, but it took about 7 minutes for the 13 episodes I just imported, and I’m not looking forward to the hundreds remaining. If this issue can be resolved another way, that’d be a better solution.
EDIT: The attempt didn’t solve the issues; they returned after I added another series. It seems that once the DB gets “locked”, it messes up any future requests and my system keeps hanging at 100% I/O on one CPU thread.

I guess this issue is created by having multiple threads open to the DB, where one could take significantly longer than another. The lock would then result from one process making a change while another is still selecting data from the database, making SQLite lock the database to prevent inconsistencies. I’m unsure whether this is the actual cause, but to narrow it down: it happens during the process of refreshing series info from the disk, while it’s scanning for files and analyzing them. I presume there are simultaneous reads and writes going on against the database that trip over each other. SQLite WAL might prevent this?
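
For what it’s worth, here’s a sketch of how to check which journal mode the database is actually in from Python (the path assumes a default v2 install; it’s opened read-only so the probe doesn’t add yet another writer):

```python
import sqlite3
from pathlib import Path

# Assumed default location of Sonarr v2's database on Linux
db = Path.home() / ".config" / "NzbDrone" / "nzbdrone.db"

# mode=ro: read-only, so this probe can't contribute to the locking itself
conn = sqlite3.connect(f"file:{db}?mode=ro", uri=True)
print(conn.execute("PRAGMA journal_mode;").fetchone())  # ('wal',) means WAL is active
conn.close()
```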

Is the Sonarr DB stored locally on the server or on the cloud drive?

“|Trace|Owin|SQLite error (5): database is locked”

Could be for nzbdrone.db or more commonly the logs.db which gets written quite frequently, especially when Sonarr is actively working on things.
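
To see which of the two is actually hitting the lock, you could try taking a write lock on each with no timeout — just a sketch; run it from the config directory while Sonarr is busy:

```python
import sqlite3

# Run from the NzbDrone config directory (sqlite3.connect would otherwise
# silently create empty files if the names don't resolve)
for name in ("nzbdrone.db", "logs.db"):
    conn = sqlite3.connect(name, timeout=0)  # timeout=0: fail fast, don't wait
    try:
        conn.execute("BEGIN IMMEDIATE;")     # attempt to take the write lock
        conn.rollback()
        print(name, "-> not locked right now")
    except sqlite3.OperationalError as e:
        print(name, "->", e)                 # e.g. "database is locked"
    finally:
        conn.close()
```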

That’s already in use; you can see the -wal and -shm files alongside the actual database when Sonarr is running.

If Sonarr is importing multiple series at the same time from a cloud drive, it’s going to take a while if it’s analyzing each file; disabling media analysis may help there (it skips media info checks for files already in the series folder).

It’s stored locally.

The issue also happens when importing only a few episodes. I imported 13 episodes from one series and got multiple “database is locked” errors, so it’s not necessarily because of a large amount of imports. I think something in the process of scanning the files makes SQLite go crazy, but I don’t know how to fix it… :confused: I tried turning off media analysis and that seems to have worked! Everything is as quick as before. But I do wonder, isn’t media analysis vital to the overall functionality of Sonarr?

It’s used at import time to weed out sample files, and in general if you want media info in the filenames (number of audio channels, bitrate, subtitle languages, etc). If not, it’s not critical, at least at this time (though I don’t have anything in mind that would make it critical either).

Well, for now I’ve got it working without media analysis. Some imported episodes do show up with the wrong resolution and such, but at least it’s tracking them now. There have been some issues with other functionality, like the automatic grabbing of new episodes and moving them afterwards. When checking the logs, it still shows database is locked errors, but if I persist in executing the functions a few times, they will eventually work.

I have no clue what’s causing these errors and no idea how to prevent them. Are there guidelines that could prevent such things, or any configuration advice that would minimize them? I’ve tried removing logs.db, to see if that’s the database that is affected, but without any result. I’ve rebooted a few times as well to clear the ‘locked’ status of whatever database is affected, but after a while the errors pop up again. So I’m a bit in the dark as to how to prevent this from happening, or how to improve my situation. As said before, I’ve also done numerous fresh installs (where I also removed the .config folder), but that didn’t resolve my issues either.

17-4-30 17:03:04.4|Debug|UpdateMediaInfoService|Updated MediaInfo for '/home/media/tv/Blindspot/Season 2/Blindspot.S02E07.720p.HDTV.X264-DIMENSION.mkv'
17-4-30 17:03:04.4|Debug|VideoFileInfoReader|Getting media info from /home/media/tv/Blindspot/Season 2/Blindspot.S02E08.720p.HDTV.X264-DIMENSION.mkv
...
17-4-30 17:03:05.7|Trace|MediaInfo|Read file offset 0-32768 (32768 bytes)
...
17-4-30 17:03:37.5|Trace|MediaInfo|Read file offset 1230242974-1230284236 (41262 bytes)
17-4-30 17:03:37.5|Trace|MediaInfo|Read a total of 74030 bytes (0.0%)

I suspect your ‘cloud’ fs is insanely slow at random access. As you can see, Sonarr (or rather MediaInfo) only needs 74030 bytes of the total file, some at the start and some at the end, yet it takes your filesystem over 30 seconds to get that last piece. (The read gets logged after it finishes.)
That’s the thing you’ll need to improve if you want better performance.

You can test it with the mediainfo cmdline utility; it does largely the same thing.

PS: 1.2 GB file in 30 sec, that’s 328 mbit/s. Either you have rather nice fiber internet, or it’s not actually streaming the whole file but rather waiting on some I/O on the cloud side. My guess, anyway.
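
If you want to measure that outside of mediainfo, here’s a rough sketch mimicking MediaInfo’s access pattern — one small read at the head, one at the tail; the path is just a placeholder on the cloud mount:

```python
import os, time

path = "/home/media/tv/SomeShow/episode.mkv"  # placeholder file on the FUSE mount

def timed_read(f, offset, size):
    t0 = time.monotonic()
    f.seek(offset)
    n = len(f.read(size))
    print(f"offset {offset}: {n} bytes in {time.monotonic() - t0:.1f}s")

total = os.path.getsize(path)
with open(path, "rb") as f:
    timed_read(f, 0, 32768)              # header read, like MediaInfo's first request
    timed_read(f, total - 65536, 65536)  # tail read: if this takes ~30s, the fs is
                                         # fetching far more than the bytes asked for
```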

The drive Sonarr is using to maintain its database correctly is a FUSE mount. It combines two directories into one; maybe that causes issues? The mount is a union of the directory where all my downloads end up before being pushed to my cloud drive, and my mounted network drive of the media library. It needs to be this union, because otherwise Sonarr would detect episodes as not downloaded while they are just waiting in line for the upload. This way Sonarr can build a valid representation of my entire library. Maybe it’s acting up because it’s a FUSE mount?

The cloud provider I’m using is Amazon. It could be that they don’t play nice with this kind of service. I’m not really an expert on this, but I’ve seen numerous other people succeed with this combination.

PS: Sonarr is running on a dedicated server with a 1gbps connection, so you could say it’s doing fine regarding speed :slight_smile:

The drive Sonarr is using to maintain its database correctly is a FUSE mount.

You should put it outside of unionfs and give SQLite clean access to proper filesystem capabilities; it needs those to ensure consistency and performance.

PS: Sonarr is running on a dedicated server with a 1gbps connection, so you could say it’s doing fine regarding speed

That means your fs quite possibly downloads the entire file just to access a tiny part of it, and that would definitely cause slowness in the mediainfo stuff. Even though Sonarr needs to get the mediainfo only once, not being able to download only tiny parts of the file has a significant impact on performance.
Based on a quick scan of the acd_cli docs it should be able to seek. You should be able to test it easily using the mediainfo cmdline utility. (Yes, I repeated that coz you haven’t mentioned you checked it…)
It could be the configured chunk size, or any number of other things.

Either way, it’s not my/our problem. Sry to be blunt, but they come by the dozen, the people who suddenly use cloud drives and expect all their software to behave the same even though the filesystem most certainly isn’t the same.

PS: It’s also possible that the constant db access hits acd instead of unionfs, and thus affects any other fs operation, so get that db out of the union & cloud fs… give it a true and proper local filesystem. You can always set a cronjob to copy a backup periodically.
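
For the backup part, a minimal sketch using Python’s sqlite3 online-backup API (Python 3.7+; placeholder paths — unlike a plain cp, this takes a consistent snapshot even while Sonarr is writing):

```python
import sqlite3

SRC = "/home/sonarr/.config/NzbDrone/nzbdrone.db"  # placeholder: local DB
DST = "/mnt/cloud/backups/nzbdrone.db"             # placeholder: copy on the cloud mount

src = sqlite3.connect(SRC)
dst = sqlite3.connect(DST)
src.backup(dst)   # sqlite's online backup API: consistent even mid-write
dst.close()
src.close()

# Schedule from cron, e.g. once a day.
```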

No problem. Just wanted to gather some information about this issue, could’ve been that this was more common than I thought. Thanks for all the help and suggestions, I’ve got it somewhat working now. +1 for the support on this forum, you guys are doing great providing this platform.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.