Questions about blacklist/redownload and understanding of Sonarr

Hi,

I’m new to Sonarr. I’ve been using SickBeard for some years now, but I saw the nice features Sonarr offers and made the switch. One of them is redownloading failed episodes and blacklisting non-working NZBs. Today one of my downloads failed, the 1080p WEB-DL version of a show. What I saw was that the release got blacklisted based on its name, but this blacklists all 1080p versions with that name, even when the same release is uploaded multiple times under the same name. Some of those uploads can be corrupt while others are fine, even though they have the exact same file name. I have configured about 26 indexers, which offer different NZBs but sometimes the same ones, so I expected Sonarr to try another 1080p WEB-DL upload from a different indexer, even when it has the same name, but it doesn’t, since the release is blacklisted by name. So in this case it will never download the 1080p WEB-DL version, even when one of the similar uploads works just fine.

My question is: am I correct that the release is blacklisted by name instead of by NZB/indexer, or does it work differently and will it download another 1080p upload later anyway? Maybe I’m missing something in my setup.

I’ve read the whole wiki, including github.com/Sonarr/Sonarr/wiki/Failed-Download-Handling, but it doesn’t really answer my question.

Another question: is it correct that when I’m using SABnzbd and enable “Completed Download Handling” I can just leave the Drone Factory field empty and not use the Drone Factory at all? What is recommended here?

In SickBeard I can prioritize my indexers, so I can set up my own Spotweb instance as more important than another indexer that has a daily API call limit. I don’t see this feature in Sonarr. I understand a backlog search isn’t available and isn’t needed, but is it possible to prioritize the indexers in any way?

I also saw github.com/Sonarr/Sonarr/wiki/FAQ, which explains how Sonarr finds episodes. It says it will do a search after Sonarr has been shut down for more than an hour. My NAS is powered off for about 8 hours during the night. Does this mean it will search for all episodes released during that night when it boots in the morning? Does that include just the episodes from last night, or also the episodes which haven’t met the cutoff yet?

Last question: I’m trying to set up HTTPS using this guide https://github.com/Sonarr/Sonarr/wiki/SSL, but I can’t find httpcfg on my QNAP. I’m using the Mono 3.10 QPKG from the QNAP forum. Does anyone have a clue where it is located, or an alternative way to add my HTTPS certificate?

A lot of questions, but I hope someone can explain a bit more. I’ve already googled across the web, read the whole wiki and a few threads in this forum, but couldn’t find clear answers to these questions.

Thank you in advance

PS: Sonarr runs on my QNAP (Intel based) with Mono 3.10 installed; SABnzbd is used as the download client.

Lots of questions! Glad you read the existing documentation; I’m afraid to think how many more you might have had otherwise. :smile:

Have you noticed a significant difference in the NZB selections? In my testing it’s very rare to see any differences among the top indexers. We recommend 2-3 indexers to keep processing times down and to cope with one going down for a bit. 26 means roughly 2600 releases (most of which are the same) are going to be processed every RSS sync (every 15 minutes by default).

It’s by name + published date (assuming the indexer is properly using the date it was published, not indexed), but right now it uses fuzzy date matching and assumes things posted within +/- 2 days of each other are the same (we realize this is far from ideal and have plans to improve it). We don’t use the NZB itself because we don’t have it (we’d have to download it from each indexer to see if it was similar enough, not an exact match, because every indexer generates it slightly differently); this may end up being part of the process we use eventually, though.
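To illustrate the idea, here is a simplified Python sketch of the matching described above (not Sonarr’s actual code, which is C#; the titles and dates are made up):

```python
from datetime import datetime, timedelta

# Simplified illustration of "blacklist by name + publish date" with
# fuzzy (+/- 2 day) date matching. Not Sonarr's real implementation.
FUZZ = timedelta(days=2)

def is_blacklisted(title, published, blacklist):
    """blacklist is a list of (title, published) tuples from failed grabs."""
    for blocked_title, blocked_date in blacklist:
        same_name = title.lower() == blocked_title.lower()
        close_date = abs(published - blocked_date) <= FUZZ
        if same_name and close_date:
            return True
    return False

blacklist = [("Some.Show.S01E01.1080p.WEB-DL.x264-GRP", datetime(2014, 11, 1))]

# Same name, posted a day later on another indexer -> treated as the same release, skipped.
print(is_blacklisted("Some.Show.S01E01.1080p.WEB-DL.x264-GRP",
                     datetime(2014, 11, 2), blacklist))  # True

# Same name, re-uploaded 3 days later -> outside the window, eligible to grab again.
print(is_blacklisted("Some.Show.S01E01.1080p.WEB-DL.x264-GRP",
                     datetime(2014, 11, 4), blacklist))  # False
```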

Yes, leave it blank. You’ll only need it to import something that is named oddly or that didn’t go through SAB.

No, all indexers are hit whenever an RSS sync is run or for a search. Prioritization would only come via preferring a grab from one indexer when the releases are otherwise equal: Trello

Only missing episodes will be searched for.

On Ubuntu I believe httpcfg comes from a development package (not one of the minimum packages required to run an application via Mono), so my assumption in this case is that the package either wasn’t created or wasn’t bundled with the QPKG. I’m not aware of any other way to enable HTTPS support directly in Sonarr; the only option would be a reverse proxy (Apache or nginx), which is also required if your cert uses an intermediate CA (most legit ones do), because Mono’s implementation doesn’t support them.

Let me know if you need clarification on anything.

Hehe, exactly :) The documentation is quite good, but sometimes it only explains what something does, not how exactly it works. Thank you for answering all my questions.

My Sonarr install has only downloaded a few releases so far, but when I was using SickBeard I just used all indexers, because sometimes some are down or have shut down entirely. Also, some are faster than others at indexing new releases, have more or different releases available, or index different groups. Another point was API call limits: I don’t have any VIP indexers (I have my own Spotweb instance, which is unlimited), but just using all 26 still leaves me with enough API calls for the whole day (the same indexers are used by CouchPotato and NZBMegaSearch). Anyway, I haven’t really noticed a problem with Sonarr so far; it seems to handle all 26 every 15 minutes just fine. Do you still recommend removing about 23 of these indexers, or should I leave it this way and just try it out?

I understand. So when a crappy release is uploaded now it will get blacklisted, but if the same release is re-uploaded 2 days later it will be tried again, even when it has the exact same name, right?

I noticed NZBMegaSearch is quite good at detecting duplicate releases, although I’m not sure how that works exactly.

Does this mean that if I use both Completed Download Handling and the Drone Factory, it will always handle everything from my SABnzbd completed download directory that matches any of the TV shows in my series list, and it will still work via the API because of Completed Download Handling? How does Completed Download Handling work exactly? Does Sonarr keep track of a download via the SABnzbd API, based on an ID it receives when it adds the download, or something like that? I also saw the SABnzbd status is shown in Sonarr (quite nice :)), so I expect it works something like that.

Okay, maybe that’s a good thing and I just have to get used to it. I’ve voted for this improvement on Trello anyway :)

Hmm, maybe you’re right and my package just lacks this feature, which would be a pity. I will ask the package creator or handle it through an Apache reverse proxy. Thank you for the hint.

Again, thank you for taking the time to answer my questions. I must say Sonarr is really good. Keep up the good work!

That might be an issue: checking RSS every 15 minutes means 96 calls a day per indexer. Free ones might not offer that much, but you could switch them to search-only.
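Just to show where that figure comes from (trivial, but for completeness):

```python
# Default RSS sync interval is 15 minutes, so per indexer:
rss_calls_per_day = (24 * 60) // 15
print(rss_calls_per_day)  # 96 calls a day, before counting any searches
```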

That’s our recommendation, but if it’s working for you now, then leave it.

Yes, it would allow that release.

Don’t point the Drone Factory at the same path SAB downloads to; that will disable CDH. The Drone Factory needs to be a different path, which you would use to manually import something.

Yes, exactly.
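For the curious, the rough shape of it against SABnzbd’s HTTP API looks something like this (a simplified Python sketch, not Sonarr’s actual code; the URL and API key are placeholders, and field names can differ slightly between SAB versions):

```python
import requests

SAB_API = "http://localhost:8080/sabnzbd/api"  # placeholder, adjust to your setup
API_KEY = "your-sab-api-key"                   # placeholder

def add_nzb(nzb_url):
    # Adding an NZB returns an nzo_id, which identifies that download
    # in SAB's queue and history from then on.
    r = requests.get(SAB_API, params={
        "mode": "addurl", "name": nzb_url,
        "apikey": API_KEY, "output": "json",
    })
    return r.json()["nzo_ids"][0]

def check_download(nzo_id):
    # Poll SAB's history and look the item up by its id to see whether it
    # completed or failed, and where it ended up on disk.
    r = requests.get(SAB_API, params={
        "mode": "history", "apikey": API_KEY, "output": "json",
    })
    for slot in r.json()["history"]["slots"]:
        if slot["nzo_id"] == nzo_id:
            return slot["status"], slot.get("storage")
    return None, None  # still queued or unknown
```

Completed items can then be imported from the path SAB reports, and failed ones feed into the blacklist-and-search-again handling discussed above.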

Thank you, I think everything is clear now :)

I have another question. I’ve set the following qualities: HDTV, WEB-DL, Bluray, with the cutoff at WEB-DL. So most of the time HDTV is downloaded initially and replaced with WEB-DL later. I also have autosub running, which downloads subtitles and creates files like filename.srt or filename.en.srt. When a version was replaced, SickBeard also removed the corresponding subtitles, since they’d be out of sync anyway, but Sonarr doesn’t remove them. Is there a way to have Sonarr remove them? Otherwise this will pollute my collection with subtitles that have no matching video file.

At this time Sonarr is unaware that those files exist (they are ignored during disk scans), support will likely come with: https://trello.com/c/IzNiSitU/230-move-additional-files

But this brings up an interesting problem: since the subs are created after the file is moved, they wouldn’t be detected by Sonarr until the next disk scan, which might be after the file has already been upgraded again, which would put you in the same boat.

Okay, nice :) I don’t think it’s a real problem. At the moment you know which files are being replaced, so you could simply add an option that removes all files sharing the same base name; that would fix the issue. So instead of “rm tv.show.hdtv.mkv” it would be something like “rm tv.show.hdtv.*”. Shouldn’t that be sufficient?
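Something like this is what I mean (a quick Python sketch just for illustration; the path and helper name are made up, this isn’t an existing Sonarr option):

```python
from pathlib import Path

def remove_leftovers(old_video):
    """Remove sidecar files (e.g. .srt, .en.srt) that share the base name
    of a video file that is about to be replaced by an upgrade."""
    old = Path(old_video)
    for sibling in old.parent.glob(old.stem + ".*"):
        if sibling != old:
            sibling.unlink()

# remove_leftovers("/tv/Some Show/Season 01/tv.show.s01e01.hdtv.mkv")
# would remove tv.show.s01e01.hdtv.srt and tv.show.s01e01.hdtv.en.srt
```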

Anyway, I will keep an eye on the feature request :)