Sonarr, a Seedbox, and no Drone Factory?

Hey everyone,

I have just updated to Sonarr v2.0.0.4855 and see that Drone Factory will be going away. I have been using it extensively because I use a seedbox. Basically, my download cycle is: Sonarr (on QNAP) searches for episode –> passes to Deluge on seedbox –> downloaded and moved to Complete folder on seedbox –> rsync script (on QNAP) downloads the episode from the seedbox over an SSH tunnel and puts it in the Drone Factory folder –> Sonarr imports the episode.
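For reference, my rsync step is just a sketch like this (the seedbox host and both paths here are made up, adjust to your own setup):

```shell
#!/bin/sh
# Sketch of the rsync pull step (seedbox host and paths are placeholders).
# -a preserves attributes, --partial lets interrupted transfers resume;
# pulling into the Drone Factory folder is what triggers the import.
REMOTE="user@seedbox.example.com"
SRC="/home/user/deluge/complete/"
DEST="/share/Download/DroneFactory/"

CMD="rsync -a --partial -e ssh $REMOTE:$SRC $DEST"
echo "$CMD"    # printed for review; replace the echo with the real call to run it
```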

This is just the way I figured out to accomplish what I wanted when I started with Sonarr a few years ago, so there may be better ways to do it, but it has been working perfectly for me. So, my question is: how can I get Sonarr to import the transferred files with no Drone Factory?

Thanks,
CC

+1

I have this exact same setup. I have Sonarr running at home and use a torrent or usenet blackhole which is then synced over to my seedbox. After the seedbox does the downloading, the completed downloads are synced back to my NAS and imported via Drone Factory. I realize that Drone Factory has quite a few issues, so I’ve added my own renaming scripts to clean up the downloads before Drone Factory processes them. Is it possible to set up Completed Download Handling for those of us in this situation?

EDIT: I just read this post and saw that the GUI for drone factory is going away but the same functionality will be there via API. A while back I had to write a script to trigger the manual import every 5 minutes because Drone Factory was bugged out so this will be no different. tl;dr It looks like this won’t be a big deal.
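For anyone curious, the 5-minute trigger was nothing fancy, just a cron entry along these lines (the API key and host are placeholders):

```shell
# crontab -e on the box running Sonarr (API key is a placeholder)
*/5 * * * * curl -s http://localhost:8989/api/command -X POST --header "X-Api-Key: MyKey" -d '{"name": "downloadedepisodesscan"}' >/dev/null
```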

Oh, yea ok so not actually an issue, I can just add the API call to my transfer script.

I guess I don’t really understand why the GUI is going away. It seems to just be hiding it to prevent people from using it wrong. I didn’t realise it was such a problem; I have imported 10 TB+ of files through it with very few issues. I did like having the Rescan Drone Factory Folder button, though, for those times when you had to do some manual clean-up. Oh, Taloth summed up the issues in this post pretty well.

Is Transfer Providers that he mentioned a thing yet? Or maybe that is in V3.

Exactly this, that way Drone Factory won’t start scanning mid-transfer and import half a file.

That is what Manual Import is for, which has the ability to trigger the same scan that Drone Factory does as a one-off, or the more in-depth, completely manual intervention.

It is not and won’t be in v3 (at release at least).

1. Makes sense. I had got around the half-a-file issue with flock when I set this all up 2 years ago, so I had forgotten that it could be an issue.

2. I will look at using Manual Import more; the Rescan Drone Factory button was just one click, which is why I always used it. But I can create a script to accomplish the same thing now.

3. OK, do you guys have a recommended solution for when you are using a seedbox or a download box at another location? Or basically what we have been doing, which has been working.

Creating the API call was easy except for one part that I am not sure on. I used this:

curl http://localhost:8989/api/command -X POST -d '{"name": "downloadedepisodesscan"}' --header "X-Api-Key: MyKey"

I disabled Drone Factory and added that to my transfer script, and I don’t notice a difference, which is great. My question is that I am not sure where it is pulling the path from in that case; I am assuming it is getting it from the Drone Factory options. I tried using the path variable and did this:

curl http://localhost:8989/api/command -X POST -d '{"name": "downloadedepisodesscan", "path": "\\Nas\Download\TV\"}' --header "X-Api-Key: MyKey"

Which runs and seems to do nothing, but gives me no errors. I tried various path types, from shares to full filesystem paths, but no change. I saw a post from Taloth saying the path variable shouldn’t be the root folder but rather an individual download folder. But in my case I only want to scan the root for whatever has shown up. Any advice?

Thanks,
C

With the last curl command you should point it to the individual download instead.

A folder specified by the path variable is assumed to be a single download (job) and the folder name should be the release name.

see https://github.com/Sonarr/Sonarr/wiki/Command#downloadedepisodesscan
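For example, a per-download call would look something like this (the release folder name here is hypothetical). Note that backslashes in a UNC path have to be doubled inside the JSON string:

```shell
#!/bin/sh
# Per-download scan: the path points at one release folder, not the root.
# Backslashes in the UNC path are doubled because the payload is JSON.
BODY='{"name": "downloadedepisodesscan", "path": "\\\\Nas\\Download\\TV\\Show.S01E01.720p-GRP"}'
echo "$BODY"
# To actually submit it:
# curl http://localhost:8989/api/command -X POST \
#   --header "X-Api-Key: MyKey" \
#   --header "Content-Type: application/json" \
#   -d "$BODY"
```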

OK, so in my case I don’t want to do it by job, but just scan the folder twice an hour. The first curl command does that, and from the right folder. But how does it know to use the Drone Factory folder? Or is it pulling that info from the DF settings within the Sonarr GUI?

I am just trying to move away from Drone Factory as suggested so wanted to make sure I wasn’t using it to set the folder unless that is correct as of now.

I would really love to see a basic best-practice guide on this, as I’m sure the devs or other users with a longer history using it have more experience than I do with making this work. I’m not super-happy with my SSHFS solution - mostly it works, but as others predicted it has periodic awful slowdowns - and would love to see some guidance on this, as I was just about to fall back on using the Drone Factory functionality.

AFAICT, putting aside differences in torrent clients, we’re dealing with 4 basic possible REMOTE<->LOCAL combinations (if we assume Mac and NAS users use a variety of Linux commands to get things done): Windows<->Windows, Linux<->Windows, Windows<->Linux, and Linux<->Linux. It’s only 4 combos; wouldn’t they all have been covered, well, by someone in the past?

I’m being a bit weak, I’m sure, but my head starts to hurt now that I’m considering rsync and curl scripts.

@CamCorp You have to do it per job, even if that means you simply take the top-level directories and run the API for each. All you have to do is guarantee that the transfer is finished.
The API call itself is asynchronous, btw: you get back the command ID and you can check /api/command/{id} to get its progress. I’ve been meaning to add a ‘waitForCompletion’ option to the POST, but that might cause timeouts.
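A minimal sketch of that loop, assuming a POSIX shell; the API URL, key, and download root are placeholders:

```shell
#!/bin/sh
# Build one downloadedepisodesscan command per completed download folder.

scan_body() {
    # JSON payload for a single download folder
    printf '{"name": "downloadedepisodesscan", "path": "%s"}' "$1"
}

scan_all() {
    root="$1"
    for dir in "$root"/*/; do
        [ -d "$dir" ] || continue
        body=$(scan_body "${dir%/}")
        echo "$body"
        # To actually submit:
        # curl -s http://localhost:8989/api/command -X POST \
        #      --header "X-Api-Key: MyKey" -d "$body"
        # The call is asynchronous; poll /api/command/{id} for progress.
    done
}

scan_all "/share/Download/TV"
```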

@ozoak

Mounted sshfs is the best approach in general because it means the filesystem is directly accessible. But what you need is to sit down and tweak the stack. ssh needs the hpn (high-performance networking) patches, which may be easier or harder depending on the distro (Arch Linux has a ready package for it). And you need to check your TCP stack parameters to allow for greater TCP windows/buffers to compensate for latency.
What it boils down to is that your TCP window needs to be throughput x one-way latency x 2 (i.e. throughput x round-trip time). If I have a 100 Mbit connection and a 50 ms ping, then it takes 25 ms for data to reach the destination and another 25 ms for the acknowledgement to reach me. So data is ‘in flight’ for 50 ms total, which amounts to about 0.6 megabytes for a 100 Mbit connection, or 6 megabytes for a gigabit connection.
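That rule of thumb in numbers, as a quick back-of-the-envelope check (it ignores protocol overhead, so treat the result as a rough lower bound):

```shell
#!/bin/sh
# Bandwidth-delay product: bytes 'in flight' = throughput * round-trip time.
bdp_mib() {
    # $1 = link speed in Mbit/s, $2 = round-trip time in ms; prints MiB
    awk -v mbit="$1" -v rtt="$2" \
        'BEGIN { printf "%.2f\n", (mbit * 1000000 / 8) * (rtt / 1000) / 1048576 }'
}

bdp_mib 100 50    # 100 Mbit at 50 ms ping -> ~0.6 MiB
bdp_mib 1000 50   # gigabit at 50 ms ping  -> ~6 MiB
```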
ssh also has a bunch of internal buffers that are fixed length, something the hpn patches address.
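On Linux, the window limits are sysctls. Something along these lines raises the ceilings (the values here are illustrative, sized for a ~16 MB maximum window; needs root):

```shell
# Raise the TCP buffer ceilings so autotuning can reach larger windows.
# Values are illustrative; persist them in /etc/sysctl.conf if they help.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```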

It might be worth checking if your seedbox provider has openssh with hpn preinstalled. And go from there.

Refs:
https://www.psc.edu/hpn-ssh
https://aur.archlinux.org/packages/openssh-hpn-git/

Umm… what the f**k is a transfer script? How are people who are not programmers supposed to get the downloaded files into the TV show folders, renamed and categorised (particularly when I often get an entire series or 4 at a time)? And how do I run an API?

We are talking about when you are using a seedbox to download everything and then transfer them to your own media server, so it’s only for that instance. The API is pretty simple (see above) and can even be called from a browser; if you check out some of the linked pages there is more info. You can use Manual Import as well.

So Taloth, what I would have to do is get all the folders under the root download folder, then run the command against each one individually. I could use flock to make sure it’s not run over top of itself. But basically, is that what I will have to do? I don’t want to go the SSHFS route, as it is not always an option, I have had issues with it in the past, and it’s more complexity.
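The flock guard I mean is just something like this (the lock path is arbitrary and the echo stands in for the real transfer + API calls):

```shell
#!/bin/sh
# Prevent overlapping runs of the transfer + scan with flock(1).
run_once() {
    lock="$1"
    (
        flock -n 9 || { echo "previous run still going"; exit 1; }
        echo "transfer + per-folder API calls run here"
    ) 9>"$lock"
}

run_once "/tmp/sonarr-import.lock"
```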

I have got to be honest and say I would much rather see the DF function remain. I know you have said it’s not up for discussion and that you have never seen a setup which requires DF, which I can understand. But when DF provided simple solutions that worked well, I don’t feel like creating a far more complex solution actually counts towards not needing DF. It seems like quite a nuclear option to fix the issues you outlined in previous posts.

Drone Factory was such a good automation tool: you could dump a whole show or multiple shows into it and watch Sonarr just go crazy, moving and renaming everything without you having to do anything. It was a thing of beauty. Manual Import works, but it is a lot more work when I prefer automation.
Since it seems your mind is made up about retiring DF, is there any way you could change the API to allow scanning the root folder? Maybe even just until the Transfer Providers you mentioned become a reality.

With DF, or an API like that, you have a full media organisation system that can get the files for you or process media from other sources, all automatically. This thing you guys have created is amazing, so I mean no disrespect, and I probably owe you 5+ hours a month in time saved. I just want to mention that because of DF you have created something that has been used in so many ways, even if that wasn’t the original purpose.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.