Idea: use 2 drives to store series: put the videos on the bigger drive and the nfo and jpg files on the smaller one


#1

I was wondering: if you had a large video library and split it so the videos went onto a drive with large clusters and the nfo and jpg files went onto a small partition with little clusters, would there be enough space gained to be worth it?
I was thinking a script could auto-move the small files after Sonarr downloads them to the main drive (roughly like the sketch below).
I believe this would also make using SnapRAID more efficient if the files were closer to the same size across the drives.
I'm trying to do this on the ext4 file system.

Then use a Linux drive-pooling tool to make the 2 drives look like one; they would need to maintain identical folder structures. I have searched around and cannot find much info on this.

drive 1 -> videos, drive 2 -> nfo/jpg == software-generated drive 3 -> drive 1 + drive 2
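
Something like this is what I had in mind for the mover script (just a rough sketch, and the mount points are made up):

```python
#!/usr/bin/env python3
# Rough sketch of the mover idea: after Sonarr imports into the big video
# drive, sweep the .nfo and .jpg files over to the small metadata drive,
# keeping the same folder layout on both. Mount points are made-up examples.
import shutil
from pathlib import Path

VIDEO_DRIVE = Path("/mnt/drive1/tv")   # big clusters, keeps the videos
META_DRIVE = Path("/mnt/drive2/tv")    # small clusters, gets the nfo/jpg files
SMALL_EXTS = {".nfo", ".jpg"}

def sweep_metadata():
    for src in VIDEO_DRIVE.rglob("*"):
        if src.is_file() and src.suffix.lower() in SMALL_EXTS:
            dest = META_DRIVE / src.relative_to(VIDEO_DRIVE)
            dest.parent.mkdir(parents=True, exist_ok=True)  # mirror the folder tree
            shutil.move(str(src), str(dest))

if __name__ == "__main__":
    sweep_metadata()
```

It could run from cron or as a post-import script, and the pooling layer would then present the two trees as one folder.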


#2

I don’t think there would be enough savings to justify the complexity of splitting the metadata into a path separate from the video files. The jpgs are big enough not to worry about; most are over 32KB and many are well over 64KB. So that leaves the nfo files, 1 per series and 1 per episode. Assume you were using 128K clusters, all your nfo files were 1K, and you had a library with 200 series and 5000 episodes. For just the nfo files, that might save about 600 MB out of roughly 7,226,881 MB (aka 6.9 TB) for that particular library. The jpgs would yield a lot less savings, so let’s say a best guess of around 1 GB for both, out of over 7,000 GB. If you think it’s worth your time to worry about that 1 GB when 8 TB hard drives are around $150, go for it.
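
Spelling the arithmetic out (these are just the assumptions above, nothing measured):

```python
# Slack-space estimate using the assumptions above.
cluster = 128 * 1024        # 128K clusters on the video drive
nfo_size = 1 * 1024         # every nfo assumed to be 1K
nfo_count = 200 + 5000      # one per series plus one per episode
wasted_mb = nfo_count * (cluster - nfo_size) / 1024**2
print(round(wasted_mb))     # ~645 MB of slack against a ~7,000,000 MB library
```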


#3

It was just an idea. Some people store metadata for many sources like Kodi, WDTV, and UPnP media servers, and these all create a lot of little thumbs and xml files; if you allow artist and director thumbs etc., it can get up near 50 little files per show. I just think there has to be a file system that handles small files well. I think NTFS will tuck them into the MFT record itself if they're very small, but I'm not sure how ext4 does it. I just tossed this out as an idea to see if anyone thought it was worth looking into.


#4

I just don’t think it’s an issue worth the extra effort in a world with 10+ TB drives. There’s just not that much slack space wasted in clusters on most file systems, so it doesn’t add up to much savings in the grand scheme of things.

My media is on ZFS with record sizes (basically clusters) of up to 1MB, but the record size is dynamic and ZFS will create records as small as 4KB I believe, so it’s definitely not worth bothering with there.

For NTFS, the default cluster size is 4KB for volumes up to 16TB, 8KB for volumes between 16 and 32TB, and so on.

For EXT4, I think the biggest worry is running out of inodes with tons of files in a file system, as its default block size is 4KB like NTFS.
https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout
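
If you want to see how much inode headroom a mounted file system actually has, something like this works (the mount point is just an example):

```python
# Check total vs. free inodes on a mounted file system (path is an example).
import os

st = os.statvfs("/mnt/drive1")
print(f"inodes total: {st.f_files}, free: {st.f_ffree}")
# Running out of free inodes long before running out of space is the thing
# to watch for when a volume holds millions of tiny nfo/jpg files.
```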

So even if you multiplied my math by 50 but were using the sane defaults I mentioned, you would still only save maybe 1 GB of disk space.
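
As a sanity check, here is the same sort of rough estimate with 50 small files per series/episode and 4KB clusters:

```python
# Same estimate scaled up: ~50 small metadata files per entry, 4KB clusters.
cluster = 4 * 1024
small_files = 50 * (200 + 5000)
avg_slack = cluster / 2                   # on average half a cluster wasted per file
print(small_files * avg_slack / 1024**3)  # ~0.5 GB, i.e. on the order of 1 GB
```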


#5

I avoided using ZFS. I have been using mhddfs, but only in a single-user environment, since it has poor security and doesn't seem to be developed any more. It does work well as long as you avoid moving files. I need to find a better file system now that I have nearly 25 drives attached; mhddfs will just stop working every few days and I have to restart it. I know mhddfs can auto-move folders when they would overflow a drive, and I thought that if someone made a small change to how that works, it could allocate files according to the cluster size of each drive. Then you could create several drives with varying cluster sizes, let it mirror the directory structure, and auto-move files to wherever they waste the least space.
I did write to a developer of one of these file systems, and he also did not think it would be effective at saving space.
A file system that could auto-compress the text files and leave the thumbs and videos uncompressed would be perfect for this, if it exists.
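
A quick way to at least measure which files would benefit is to sample them with zlib; the text files shrink a lot, while jpg and video basically don't (the path below is just an example):

```python
# Sample files and see how much a gzip-style compressor can shrink them.
import zlib
from pathlib import Path

for f in sorted(Path("/mnt/drive1/tv/Some Show/Season 01").iterdir()):
    if not f.is_file():
        continue
    data = f.open("rb").read(1 << 20)   # only sample the first 1 MB of each file
    if not data:
        continue
    ratio = len(zlib.compress(data, 6)) / len(data)
    print(f"{f.name}: compressed to {ratio:.0%} of original size")
```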


#6

It does, it’s called ZFS. If you enable compression, it will compress any files that are compressible, and leave the rest in their original form.


#7

ZFS looks nice for someone just getting started, but since I have like 18 drives using mhddfs it would be hard to switch. I looked into Btrfs; it's a copy-on-write file system that an existing ext4 drive can be converted to in place, and it offers compression and may have stable RAID in the future. I tried to convert a 5 TB drive to it and it took like half a day; then I changed it back to ext4. I need to know more about how it works before trying to convert all my drives to it.
I have decided to get rid of mhddfs and change to mergerfs, as mergerfs is still being developed and will work on my system. I will need to set up a small test system and learn how Btrfs works. Btrfs seems like it would want to join all the drives into a RAID 0, which is what I want to avoid. I'd rather keep every file whole on one drive than get the extra speed from RAID 0.
With videos, if you lose a drive in a striped RAID and the backups fail, it pretty much destroys the data across all the drives; at least on a non-RAID system I still have the remaining drives. I'm hoping I can get Btrfs compression working on individual drives and then just use mergerfs to join them, if there is no option in Btrfs to pool drives while keeping the files separated.
It's all for my personal library. Sadly, if I go Btrfs it would take a week or two to convert every drive.
I tested out FreeNAS, which uses ZFS, and if I were starting all over that looks like a good way to go.