Constant "All indexers are unavailable due to failures" after internet failure

Version
2.0.0.3953
Mono Version
4.2.1 (Stable 4.2.1.102/6dd2d0d Thu Nov 12 09:52:44 UTC 2015)

OS: ubuntu 14.04
Debug logs (posted to hastebin or similar):
Description of issue (if you think you’ve found a bug please include steps to reproduce):

My internet drops for a minute or two relatively frequently. I'm working with my ISP to resolve this, but I think they are going to need to rerun the cable. Due to these frequent dropouts, Sonarr thinks my indexers are failing. All I ever need to do is go in and re-test them and it's fine again, but it kind of defeats the purpose of automation if it's something I have to go in and do each time I want to check for a new episode.

Is there a way to extend the timeout period, or prevent it from “giving up” on an indexer if it fails too many times within that period?

It's not an immediate failure; it escalates the more times it fails:

5, 15, 30, 60 minutes then 3, 6, 12, 24 hours. If the request succeeds after a failure the escalation level is reduced.
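The schedule above can be sketched roughly as follows. This is an illustrative Python model only, not Sonarr's actual implementation (which is C# and stores the state in its database); the class and method names are invented for the example.

```python
from datetime import timedelta

# The escalation ladder described above: 5/15/30/60 minutes, then 3/6/12/24 hours.
BACKOFF_SCHEDULE = (
    [timedelta(minutes=m) for m in (5, 15, 30, 60)]
    + [timedelta(hours=h) for h in (3, 6, 12, 24)]
)

class IndexerStatus:
    """Hypothetical per-indexer backoff tracker (names are illustrative)."""

    def __init__(self):
        self.level = 0  # 0 = no backoff; otherwise index+1 into BACKOFF_SCHEDULE

    def record_failure(self):
        # Each failure escalates one step, capped at the 24-hour maximum.
        self.level = min(self.level + 1, len(BACKOFF_SCHEDULE))

    def record_success(self):
        # A success reduces the escalation level rather than resetting it to zero.
        self.level = max(self.level - 1, 0)

    def backoff(self):
        # Current wait time before the indexer is tried again.
        return timedelta(0) if self.level == 0 else BACKOFF_SCHEDULE[self.level - 1]
```

Because a success only steps the level down by one, an indexer that alternates success and failure hovers at a low backoff, while one that fails repeatedly climbs toward the 24-hour cap.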

There is no way to change this behavior, but restarting Sonarr will also clear it.

Actually, no: backoff is stored in the database, so a restart won't clear it.

@Ken_Kyger Since it de-escalates upon success, that means it's failing more than 50% of the time; that's some crappy ISP connection.

Either way, in version .3953 we already don't escalate if the failure is due to a DNS or connection failure. So I'm guessing it's some other failure, a timeout for example.
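That carve-out might look something like the sketch below. Again this is only an illustrative Python model, not Sonarr's code; the `should_escalate` function and the exact exception mapping are assumptions for the example.

```python
import socket

def should_escalate(exc: Exception) -> bool:
    """Illustrative only: decide whether a failure should escalate backoff.

    DNS and connection errors (likely a transient network drop) do not
    escalate; other failures, such as timeouts, do.
    """
    if isinstance(exc, (socket.gaierror, ConnectionError)):
        return False  # DNS resolution / connection failure: skip escalation
    return True  # e.g. a timeout or an error response from the indexer
```

Under this model, the poster's brief internet drops would surface as DNS or connection errors and leave the backoff level alone, which is why the replier suspects the actual failures are something else, such as timeouts.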

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.