Alternative to hard linking and copying

Have something else do the sorting, maybe something custom given your requirements, which then tells Sonarr to update its library after the file is copied (so Sonarr knows the file is there and doesn't need to wait up to 12 hours to find out).

Whatever does the sorting would need to hardlink the file to the correct location (or, if you prefer, move the downloaded file and re-link it in the download client) and then tell Sonarr to rescan the series. Turning off CDH is important so Sonarr doesn't try to import it.
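For what it's worth, that rescan can be triggered with a small API call. A minimal sketch, assuming a Sonarr v2-style /api/command endpoint and placeholder host, API key, and series ID:

```python
# Minimal sketch: ask Sonarr to rescan a series after a file has been
# hardlinked into its folder. Host, API key and series ID are placeholders.
import json
import urllib.request

SONARR_URL = "http://localhost:8989/api/command"  # assumed Sonarr v2 endpoint
API_KEY = "your-api-key"                          # Settings -> General
SERIES_ID = 123                                   # look up via /api/series

payload = json.dumps({"name": "RescanSeries", "seriesId": SERIES_ID}).encode()
req = urllib.request.Request(
    SONARR_URL,
    data=payload,
    headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```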

Ah, ok, I think I get it. I was confusing CDH with the Completed Download Handling - Remove option. So just to make sure I'm understanding it correctly: if CDH is off, Sonarr will simply send the torrent to the torrent client and that's it, right? But it will also run the post-processing script?

If I understand that correctly, with a little work, I think that's the option I have been looking for. But when you say "script something that does the importing", are you talking about the actual script doing the importing, or about the script triggering Sonarr to import?

What I understand you to be saying is turn off CDH, use a custom script to find the drive the show resides on and then trigger CDH from the script on the correct location.

I hope I’m making sense and not over explaining myself. It’s Friday and my words have not been the best today.

Correct.

No, because nothing will be imported.

The script will need to do the importing; if it tells Sonarr to import the file (which is possible), Sonarr will decide where to put it.

No, the script will need to do the moving/copying/hardlinking of the file. The benefit of that script talking to Sonarr at all is to tell Sonarr the file was imported (instead of Sonarr seeing it as missing); that step is optional, but would be better overall. The approach is advanced and unique to your setup.

I'm curious about the share that you're using though: if the torrents were downloaded to the share, would hardlinking work? I have a similar limitation due to Drive Bender not supporting hardlinks; since I only rename, symlinks would be viable for me though.

Ok, if post processing doesn’t happen unless I import, how would my script even be triggered? Are you talking about a cronjob or something?

Ah, hadn't thought of that. My thinking was, since I know anything done in the web UI can be done through the API, I could have the script trigger a manual import and point it to the file on the actual disk behind the share, without realizing I wouldn't have control over the destination. There doesn't seem to be a way to dictate that, which is fine if that's how it is. Just trying to find the least invasive option. In that case, could you tell me where in the source code the import function is? That way I can mimic the process instead of having to start from scratch.

Actually it's the opposite. I'll try not to over-explain it. The software RAID system is called UnRAID, if you're curious. It provides parity and drive pooling like a RAID. I have 8 data disks and 2 parity disks. The data disks are under /mnt/diskX/, with X, of course, being the drive number. Those disks can be hard linked. Any folder on the root of those drives will create a virtual share (or I guess a drive pool is a better way to look at it) that will be mounted under /mnt/users/name_of_share.

So say you have the files /mnt/disk1/downloads/episode1.mkv and /mnt/disk2/downloads/episode2.mkv. Both of those files will show up in the downloads share mounted under /mnt/users/downloads/ even though they are on different drives. Anything under /mnt/users/ cannot be hard linked, for obvious reasons. However, any hard links made from the drives themselves will work just fine and still be valid under the user share, since the hard linking doesn't affect pooling.
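For what it's worth, figuring out which physical disk actually holds a file seen under the user share can be scripted by checking the same relative path on each disk. A rough sketch in Python; the mount points and number of data disks are assumptions taken from the description above:

```python
# Rough sketch: map a path under the user share (/mnt/users/...) back to the
# /mnt/diskX path that actually holds the file. Mount points and the number
# of data disks are taken from the post above and may differ on other setups.
import os

DATA_DISKS = [f"/mnt/disk{i}" for i in range(1, 9)]  # /mnt/disk1 .. /mnt/disk8
SHARE_ROOT = "/mnt/users"

def resolve_disk_path(share_path):
    """Return the disk-level path backing a user-share path, or None."""
    relative = os.path.relpath(share_path, SHARE_ROOT)
    for disk in DATA_DISKS:
        candidate = os.path.join(disk, relative)
        if os.path.exists(candidate):
            return candidate
    return None

print(resolve_disk_path("/mnt/users/downloads/episode1.mkv"))
```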

The way I have it set up now, I download directly to the share because I have one drive to deal with and I can see everything on all 8 drives. This won't allow me to hard link, however. I could hard link if I downloaded to a specific disk instead of the share, but this creates numerous management issues, such as that disk filling up much faster while the others aren't filled at all because everything is going to one disk instead of being spread out. Or if I had shows A-D go to disk 1, E-H to disk 2, etc., well, that creates different management issues, as does copying, that I would have to deal with when the whole point of my setup is to lessen management. The ideal solution would be to download directly to the user share but then have a process figure out which disk the file is actually on and hard link that.
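If something like the sketch above finds the disk-level path, the hardlink itself is straightforward: link it into the series folder on that same disk and the new link shows up under the user share too. A hedged sketch with made-up share and folder names:

```python
# Hedged sketch: hardlink a download (already resolved to its /mnt/diskX
# path) into a library folder on the same disk, so the link is also visible
# through the user share. The "tv" share and folder names are illustrative.
import os

def hardlink_into_library(disk_source_path, series_dir, filename):
    disk_root = "/".join(disk_source_path.split("/")[:3])   # e.g. /mnt/disk3
    dest_dir = os.path.join(disk_root, "tv", series_dir)
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, filename)
    os.link(disk_source_path, dest)   # hardlink; source and dest on one disk
    return dest

# Example with illustrative paths:
# hardlink_into_library("/mnt/disk3/downloads/episode1.mkv",
#                       "Some Show/Season 01",
#                       "Some Show - S01E01.mkv")
```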

Actually, this gives me another idea that might work. What does Sonarr use to hard link? I imagine it just uses whatever is baked into the OS, right? I'm on Linux, so would it just use ln? If I cannot change the values Sonarr uses to hard link, maybe I can use some kind of man-in-the-middle between Sonarr and ln. Does that sound feasible?

Yeah, something like a cronjob.

Which part of it? It's in several places depending on what it's doing, but I don't think any of it will be all that useful.

Interesting. I've used unRAID before and was looking at using it again, but wasn't sure if/how hardlinks worked; for my use they would need to work across the share, which won't work as you described.

Doesn't downloading straight to the share add a lot of overhead, since parity calculations are being performed as writes are happening (unless you're using a cache drive, of course)?

It's all done through mono; we don't explicitly call ln, so you can't MITM the calls to ln with your own app.

[quote=“markus101, post:13, topic:11069”]
Which part of it? It's in several places depending on what it's doing, but I don't think any of it will be all that useful.[/quote]

The part that does the regex match on the filename and then decides where to finally link/copy/move it.

Yeah, it makes the writing process slower, but so does writing it to the cache drive first and then copying it to the share. The cache drive is really meant to speed up the write process and delay writing to the array until a time when it’s more convenient. The download speed is slower than writing to the user share anyway so it doesn’t really affect anything for me.

Well, mono then. Going back to my question about the source code, where would that be? I think if I can find what command Sonarr sends to mono to hard link, I could MITM it by having a script that forwards all commands it receives from Sonarr to mono (in case Sonarr sends commands other than hard links, which I'm sure it does) but rewrites any hard link commands. Sonarr is in a docker container, so there are no worries about it messing with another program.

The parsing is done here: https://github.com/Sonarr/Sonarr/blob/develop/src/NzbDrone.Core/Parser/Parser.cs but there are other parts that determine which series and episode it is (ParsingService).

Mono is how Sonarr runs on non-Windows machines; it's not a command that is called. Here is where the hardlink is created: https://github.com/Sonarr/Sonarr/blob/develop/src/NzbDrone.Mono/DiskProvider.cs#L119

Oh, duh. I don't know what I was thinking, but I should have realized what you meant as soon as you said mono. I was just tired and didn't catch it. I know you don't call commands from mono/.NET like that.

Anyway, thanks for all the help/suggestions. I think I can come up with a workable solution. Also, I know my use case is rather unique, but I wonder if it might be worth a feature request to have the post-processing script run even if CDH is turned off. There aren't many reasons one would choose to turn CDH off, but I would imagine most, if not all, of those reasons would be because something else is handling the file. In that case, being able to trigger that process with a post-processing script would be useful, as opposed to a cron job or watch folder or something. I also wouldn't mind the ability for scripts to pass information back to Sonarr.

It runs when Sonarr imports files from a place other than the series folder: manual import, CDH, drone factory, or when told to import through the API. I don't see how an option would help here; if Sonarr isn't doing the importing, what would the script do and how would it know what parameters to use? At that point you might as well call the script yourself.

Not trying to beat a dead horse here, but I had another idea that I wanted to run by you, markus101. At what point does Sonarr know where the file is located? When the torrent client reports to it after the download is finished? My thought is I can run a script from rtorrent, assuming I can ever figure out rtorrent scripting, so that when the download completes it will find the correct disk and change the save location from the user share to the actual disk. Then Sonarr will be able to hard link. There are plenty of different events that I could potentially use to trigger it; I just need to make certain that it happens before the information gets sent to Sonarr. Does that sound like it could work?

It knows while it's downloading as well, but if it changed before Sonarr knew it was completed then it wouldn't matter; Sonarr would just see it at the new location.

Definitely important, but if it moved and Sonarr failed to find the file, it would try again a few minutes later; the issue would be partial files getting imported. If you could figure it out earlier (when the torrent is added), then Sonarr would never be able to import a partial file.

Seems reasonable; no clue how powerful rtorrent scripting is, but it seems plausible.

Ok, great, that's kind of what I thought. There are a lot of events rtorrent uses, so I don't think it will be an issue to find some kind of "on torrent add" trigger. The first issue is that rtorrent is essentially downloading two files: the .torrent file from the magnet link and the video files. If I'm not careful, the magnet link might cause problems. The other issue is that there is virtually no documentation on rtorrent scripting, and the forum (for ruTorrent, actually) is not accessible to new users as the activation emails never get sent, at least for the half dozen email addresses and providers I've tried anyway.

Anyway, thanks again for all the help. The level of support you consistently provide is above and beyond.

If Sonarr could be modified to work the way I have CouchPotato configured, it would solve this issue for me and the OP. I'm in a similar situation, not being able to use hardlinks, though my array is built on SnapRAID instead of unRAID.

The ideal solution for me (and what I have CouchPotato doing) is that when a file has completed downloading, first it is copied to the destination (optionally also renamed). Once that has completed, the original file is then replaced with a symlink to the copy. rTorrent has no issues with seeding from a symlink, and as an added bonus, with a somewhat obscure 'find' command from a shell I can search for files linking to X and thereby discover what the original filenames were before renaming, if I need to.
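For reference, that copy-then-symlink step is easy to script; a minimal sketch with illustrative paths (not CouchPotato's actual implementation):

```python
# Minimal sketch of the copy-then-symlink approach described above: copy
# (and optionally rename) the finished download into the library, then
# replace the original with a symlink so the torrent keeps seeding.
# All paths below are illustrative.
import os
import shutil

def copy_and_symlink(download_path, library_path):
    shutil.copy2(download_path, library_path)  # copy, preserving metadata
    os.remove(download_path)                   # remove the original...
    os.symlink(library_path, download_path)    # ...and seed from a symlink

copy_and_symlink("/downloads/Some.Show.S01E01.mkv",
                 "/media/tv/Some Show/Season 01/Some Show - S01E01.mkv")

# The original name can later be recovered with something like:
#   find /downloads -lname '/media/tv/Some Show/*'
```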

I just realized my proposed solution won't work. Even if I have rtorrent change the download location to the actual disk (the source in the hard link), when Sonarr tries to hard link it, the destination will be on the user share. Looks like my only option is to turn off CDH.

Unless Sonarr uses relative paths, which I doubt it does, and even then I don't think it would work, depending on where the root of the relative path is. So I guess I'll ask: when Sonarr does the linking/importing, does it use an absolute path, or can it be set up to use a relative path?

Also, what's a good solution to handling the sorting? Preferably something available on Linux. Flexget? Some Python script?

Would @bloktor's proposed symlink solution work for you? That should be easy in a custom post-processing script…

No, not really. I'm trying to hard link because I don't want to copy it over to begin with. Almost everything on my server comes from Sonarr, so writing all the data twice places a lot of unnecessary wear and tear on my drives, and with the volume of data I'm dealing with, it's a significant amount.

Mm, I’m forgetting that in the case of a seeding torrent Sonarr will only copy or hardlink.

It would seem that (if it was even a viable solution) move-and-symlink would have to be built into Sonarr as an option for torrents, or perhaps you could have seeding stop on completion and restart it via a custom script in Sonarr after the file is moved?

I’m not quite sure what you mean or how that would accomplish it.

Regardless, it appears my mother was wrong; if I keep whining, I just may get my way after all. The new release candidate of UnRAID supports hard linking in user shares, which is really the ideal solution. At least for my scenario anyway.

It would give you this:

But it does sound like the UnRAID changes are preferable.
