Frequently seeing errors like this: "File copy incomplete, data loss may have occured"

Hello,

first, let me thank you all for the effort you put into making this great piece of software. Unfortunately, I've recently been experiencing an issue with it. For about two weeks now, I've been having trouble when moving files from the download box (running Sonarr and NZBGet) to my NAS. The error message informs me that the file sizes didn't match and that the copy is therefore likely incomplete.

I guess this isn't the right place to ask for tips on figuring out why the NFS link between the two systems is unreliable, but there is one issue for which I think this is the right place. What is really unfortunate about the failed copies is that the source files are also deleted (by NZBGet). The only way to recover is to re-download and hope for the best.

I was therefore wondering if it might be possible to add some sort of retry strategy for cases where the file move went wrong? Maybe copy to a temporary file instead of moving, then rename on success, and only delete the source afterwards?
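The suggested strategy could be sketched roughly like this (a hypothetical Python sketch to make the idea concrete; `safe_move` and the `.partial` suffix are invented names, not anything Sonarr actually uses):

```python
import os
import shutil

def safe_move(src, dst):
    """Copy to a temporary file first, verify the size, rename into
    place, and only then delete the source. A failed copy leaves the
    original file untouched. Hypothetical sketch, not Sonarr code."""
    tmp = dst + ".partial"
    shutil.copyfile(src, tmp)
    if os.path.getsize(tmp) != os.path.getsize(src):
        os.remove(tmp)  # incomplete copy: the source stays intact
        raise IOError("file copy incomplete for %s" % src)
    os.replace(tmp, dst)  # atomic rename within the same filesystem
    os.remove(src)        # source deleted only after verification
```

The rename at the end is what makes the destination appear all-at-once instead of as a growing half-written file.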

As I said, apart from this one issue: thank you very much for this great piece of software, and keep up the good work.

Cheers,
Ralph

I have this problem too.

It only started happening recently, after I set up my NAS. I have a CIFS share mounted to a local folder on the Sonarr host.

Any help is appreciated.

Thanks!

Hi,

does no one else have a reply? Are we really the only two experiencing this problem?

The strange thing is that CouchPotato, for example, doesn't have this problem. I also tested the network by copying 100 GB in 10 GB chunks without a single error.

It would help a lot if Sonarr could at least stop deleting the source files when it notices that a file transfer went wrong. Having to re-download the file is quite annoying.
Can the devs maybe comment on whether a retry strategy, or a setting to avoid deletion of source files on errors, is planned?

Cheers,
Ralph

Sorry, guys.

Which version of Sonarr? Are you on master or develop?
If you're on master, consider switching to develop until we do a new master release; develop has some code to verify transfers.

To elaborate: the problem is a combination of mono and samba. We basically tell mono to 'move' a file, and somewhere inside that stack it goes wrong. It only happens with CIFS, btw. I recall some reports on the internet about SMB misbehaving during certain transfers, and even some configuration options, but that was over half a year ago.

I had to jump through quite a few hoops to make a backup without affecting the performance of local moves. The code has been in develop for a couple of master releases, but just a week before the previous master release one of our beta testers reported a problem, so I had to disable the feature prior to release.
Basically, we now do a combination of hardlinks and moves; the hardlink preserves the original content without wasting a copy action.
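As a rough illustration of that hardlink-plus-move idea (a hypothetical Python sketch, not Sonarr's actual C# code; `move_with_hardlink_backup` is an invented name): because a hardlink is just a second directory entry for the same data, it preserves the original bytes at essentially zero cost, so a truncated move can be rolled back.

```python
import os
import shutil

def move_with_hardlink_backup(src, dst):
    """Hardlink src to a backup name (no data is copied), perform the
    move, verify the result, and only then drop the backup. If the
    move truncated the file, the backup still holds the original."""
    size = os.path.getsize(src)
    backup = src + ".backup"
    os.link(src, backup)      # instant: both names share one inode
    try:
        shutil.move(src, dst)
        if os.path.getsize(dst) != size:
            raise IOError("verification failed: size mismatch")
    except Exception:
        if not os.path.exists(src):
            os.rename(backup, src)  # restore original from the backup
        else:
            os.remove(backup)
        raise
    os.remove(backup)         # verified: backup no longer needed
```

The hardlink only works when backup and source are on the same filesystem, which is exactly the local side of the transfer, so it doesn't slow down local moves.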

Anyway, try develop, and enable trace log level, so we can get proper details if it goes wrong.


Thanks for the help Taloth!

Here is some more info as you requested…

Sonarr Version
2.0.0.3357 (master)
Mono Version
3.10.0 (tarball Wed Nov 5 12:50:04 UTC 2014)

fstab entry
//tower/Media /home/kodi/Media cifs iocharset=utf8,noperm,nounix,credentials=/home/kodi/.smbcredentials,uid=1000,gid=1000 0 0

Log File (Error around line 1050)
http://pastebin.com/D41pR5Mt

I’ll also try the develop branch like you suggested.

Cheers!

When I disabled the verified file transfer logic prior to release, I kept the check itself; that's why you're actually seeing a Warn. In earlier Sonarr versions you would just silently end up with truncated files. Lovely, right?

Lemme know how it goes when you’re on develop.

I've only done a few tests on develop. So far so good, but I'll keep you posted!

Bug or no bug, this is still an awesome piece of coding. Loving Sonarr since day one.

Exactly the same here with a CIFS mount to a NAS drive: I see a lot of "File copy incomplete, data loss may have occured." errors in the log. Very nice app, thanks for all your work. Any news on whether this is definitely fixed in dev? I love the app, but this is a bit of a show stopper for me. I would love to jump in and fix it myself, but I'm slammed with work at the moment.

Hi,

I can't speak to the improvements on the dev branch, but what mitigated the issue for me was moving the download folder (destination for NZBGet, source folder for Sonarr) onto the NAS as well. No incidents since.

It's not perfect, but it will do until the devs sort out the issue for good.

@Devs: Have you thought about my proposal to copy instead of move the file, and only delete the source if the checksum succeeds? Doesn't sound like too big a change, does it?

Anyhow, thanks for your effort.

Cheers,
Ralph

We only copy if the hardlink-move combo fails (which is implemented on dev). Copying in all scenarios would be really bad when both source and target are on the same drive (something you cannot reliably determine).
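To illustrate why same-drive detection is unreliable (a hypothetical Python sketch; both function names are invented): comparing device IDs looks plausible but can be defeated by bind mounts or by the same network share being mounted twice, whereas simply attempting the hardlink and catching the cross-device error answers the real question directly.

```python
import errno
import os

def same_filesystem(a, b):
    """Naive check: compare st_dev of two paths. Bind mounts or two
    separate mounts of the same share can defeat this."""
    return os.stat(a).st_dev == os.stat(b).st_dev

def can_hardlink(src, probe):
    """More honest probe: try the link itself and clean up. EXDEV
    means 'cross-device', i.e. a copy fallback is required."""
    try:
        os.link(src, probe)
    except OSError as e:
        if e.errno == errno.EXDEV:
            return False
        raise
    os.remove(probe)  # probe link no longer needed
    return True
```

This mirrors the try-hardlink-first, copy-on-failure order described above.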

Please note that this is NOT our problem; all we did was implement a workaround. The problem is with mono+CIFS.

I probably have this same issue. I mount an NTFS share with CIFS on my Ubuntu 15; the NTFS filesystem is on a Windows Home Server (WHS) shared folder. One out of ten videos ends up corrupted.
If I go to my WHS => Manage Computer => Shared Folders => Open Files,
I see the entry "d:\shares\path to series\movie file.mp4.partial" there with WRITE status, and it stays there forever.
Needless to say, that file is unplayable.

Any ideas or help ?

Thanks!

Hi,

I don't know if it helps in your case, but I was able to mitigate this by moving the destination folder of my downloader (i.e. NZBGet) onto the same host. As there is no network copy anymore, I haven't had any issues since.

Whether this is still necessary with the current version, I don't know. AFAIK the devs addressed this issue in recent releases, so maybe your problem is different from what I experienced.

Cheers,
Ralph

Thanks @Ralph, I will try that.
I have the latest master/stable version installed on my Ubuntu; I don't know if this is the one @Taloth is talking about.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.