RSS vs API call

Hi,

As I understand it, Sonarr’s philosophy is different from Sickbeard’s: Sonarr only watches the RSS feed of each indexer, whereas Sickbeard sends an API call for each episode on each indexer.
However, it is possible to run a process similar to Sickbeard’s by launching the “Search all missing” process from the “Wanted” tab.
=> Am I correct?

If so, knowing that there are sometimes troubles with indexer APIs (especially with DOGnzb), I think it would be nice to launch the “Search all missing” process automatically once a day, or at least once a week (ideally this should be configurable).

In addition, I think that when adding a new indexer it would be nice to launch the “Search all missing” process on that new indexer.

From my personal experience, I think this would be great. Currently I launch this process manually once a week, and every time it finds new episodes to download…

In the meantime, is there a Linux command with which I can launch the “Search all missing” process without going through the web GUI?
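(From what I can tell, Sonarr exposes a JSON command API that could be called from a script instead of the GUI. A minimal sketch in Python, assuming a default install on localhost:8989 and the missingEpisodeSearch command name; check the API docs for your version, and note the API key value below is a placeholder:)

```python
import json
import urllib.request

# Assumptions (not confirmed in this thread): Sonarr listening on
# localhost:8989, the "missingEpisodeSearch" command name, and an
# API key taken from Settings > General. The key below is a placeholder.
SONARR_COMMAND_URL = "http://localhost:8989/api/command"
API_KEY = "your-api-key-here"

payload = json.dumps({"name": "missingEpisodeSearch"}).encode("utf-8")
req = urllib.request.Request(
    SONARR_COMMAND_URL,
    data=payload,
    headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```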

PS: Thanks for this really good piece of software!

Yes, that’s the big difference between Sonarr and Sickbeard (and its forks), and it’s somewhat difficult to grasp for people coming from SB.

On the develop branch we actually have indexer backoff logic implemented that automatically ignores an indexer for a period of time when it errors out. At the same time I also implemented logic that fetches multiple pages of the RSS feed, trying to catch up by as much as 24 hours. However, DOGnzb doesn’t support multiple pages, sadly.
My recommendation would be not to rely on DOGnzb alone, and to accept that it won’t work 100% of the time.
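(To illustrate the kind of backoff described above; this is a sketch only, and the actual intervals and bookkeeping in Sonarr differ:)

```python
import time

# Illustrative sketch only: the escalation intervals below are made up,
# not taken from Sonarr's actual implementation.
BACKOFF_MINUTES = [5, 15, 30, 60, 180, 720, 1440]

class IndexerStatus:
    """Tracks consecutive failures and temporarily disables an indexer."""

    def __init__(self) -> None:
        self.failures = 0
        self.disabled_until = 0.0

    def record_failure(self) -> None:
        # Each consecutive failure pushes the re-enable time further out.
        step = min(self.failures, len(BACKOFF_MINUTES) - 1)
        self.disabled_until = time.time() + BACKOFF_MINUTES[step] * 60
        self.failures += 1

    def record_success(self) -> None:
        # A successful fetch resets the backoff entirely.
        self.failures = 0
        self.disabled_until = 0.0

    def is_enabled(self) -> bool:
        return time.time() >= self.disabled_until
```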

Doing periodic backlog searches is way too heavy for the indexer. It’s also pointless 99.99% of the time if your indexer operates properly.
This is not something we will ever implement, and we strongly encourage people not to write scripts to force it.

At the moment, with default settings, Sonarr consumes 98 API calls a day on the RSS sync, regardless of how many series you have. This can be reduced to 24 with the recent changes on develop, or maybe 12 (if your usenet provider doesn’t get DMCAed too quickly).
This stands in stark contrast with the thousand-plus API calls a backlog search for a dozen series would consume.
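(For the curious, the back-of-envelope arithmetic behind those figures, assuming one API call per indexer per RSS sync:)

```python
# Back-of-envelope: one API call per indexer per RSS sync.
def calls_per_day(interval_minutes: int) -> int:
    return 24 * 60 // interval_minutes

for interval in (15, 60, 120):
    print(interval, "min interval ->", calls_per_day(interval), "calls/day")
# 15 min -> 96/day (the ~98 quoted above is in the same ballpark),
# 60 min -> 24/day, 120 min -> 12/day.
```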

On Usenet you see on average maybe 6 pages of RSS data per day (around 25 releases per hour on average, which at 100 releases per page works out to about 6 pages).
So on average you would need only 6 API calls daily, if you weren’t affected by takedowns.
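(The same sort of estimate for the RSS volume itself; the 100-releases-per-page figure is an assumption, the common newznab default, not something stated above:)

```python
# Rough RSS volume: releases/hour -> pages/day.
RELEASES_PER_HOUR = 25   # average quoted above
PAGE_SIZE = 100          # assumption: typical newznab default page size

releases_per_day = RELEASES_PER_HOUR * 24            # 600
pages_per_day = -(-releases_per_day // PAGE_SIZE)    # ceiling division -> 6
print(releases_per_day, "releases ->", pages_per_day, "pages/day")
```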

To circle back to DOGnzb: why do you think its API server is having issues? All those thousands of apps hammering it with the same bloody query over and over again.

In any case, get a secondary indexer if you haven’t already.

Hi,
Thanks for your quick feedback, Talbott.
The goal of my message was not to criticize Sonarr’s philosophy at all. I agree with you; I think it is the right philosophy.
I took a look at my Sonarr history, and most of the time (I can’t say all the time, I didn’t look deeply enough) the new releases are coming from DOGnzb…
This seems to confirm that the Sonarr philosophy is the right one!

However, I think that when adding a new indexer it would be nice to have the option to launch the “Search all missing” process on that new indexer specifically.
Today I added the nzb.su indexer, then launched the “Search all missing” process, and it found and downloaded the full season 2 of Hawaii Five-0 (in French) on nzb.su.

Anyway, I’m just suggesting a new feature; it’s only a proposal…

Rest assured, I didn’t take it as criticism. I just wanted to make clear why we’re opposed to full backlog searches.

Searching after adding an indexer is possible; I just don’t think it’s worth the effort to implement it.
