It seems you've done more research into it than I have!
You have no idea how much time I've spent fiddling with this array.
Also, using ZFS, even on FUSE, is fantastic. It's very, very stable, and for a collector like me it gives so much peace of mind to know that ZFS will detect and fix
file errors on the fly, and that I can always run "zpool status" and see a list of any files that possibly couldn't be fixed even with double parity (I'm running RAIDZ2 on 7*2TB members, of which one member is made of 4*½TB drives and two are made of 2*1TB drives each, all assembled as "linear" (i.e. JBOD) mdadm arrays for the middle layer).
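For the curious, a pool like that gets assembled roughly like this - the device names here are completely made up, so adjust to your own setup:

  # Build the composite members from smaller disks ("linear" = plain concatenation, no striping)
  mdadm --create /dev/md0 --level=linear --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd   # 4 x ½TB -> ~2TB
  mdadm --create /dev/md1 --level=linear --raid-devices=2 /dev/sde /dev/sdf                      # 2 x 1TB -> ~2TB
  mdadm --create /dev/md2 --level=linear --raid-devices=2 /dev/sdg /dev/sdh                      # 2 x 1TB -> ~2TB
  # Hand ZFS seven equally-sized "drives": the three md arrays plus four real 2TB disks
  zpool create tank raidz2 /dev/md0 /dev/md1 /dev/md2 /dev/sdi /dev/sdj /dev/sdk /dev/sdl
  zpool status tank   # per-member health, plus a list of unrecoverable files after a scrub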
I was just looking at this enclosure and want a file system I can expand over time. Preferably something native to Windows now. I have a large Silverstone enclosure, so if possible I'd rather not even buy that Edgestor enclosure, as I can fit 7 drives in mine. I was really hoping ZFS would have taken off by now, but it just never happened. Is there anything else similar to ZRAID and native?
Not sure exactly what you mean by "ZRAID and native". Do you mean a solution where you don't use virtual disk files on the host but rather raw disks (or even a bare-metal OS, not virtualized), or a filesystem that behaves like ZFS? In the latter case, btrfs for Linux is coming along very nicely, and the versions in the newest 2.6.37/2.6.38 kernels should be very stable. I'm using it on my laptop just as a regular fs, but haven't tried it as a RAID provider yet (I'm gonna use it on my HTPC/fileserver for RAID-1'ing a couple of old ATA disks used as temporary download space).
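If you do want to poke at btrfs in that role, the basics are only a couple of commands - something along these lines, with placeholder device names and mount point:

  # Mirror both data and metadata across two disks (btrfs RAID-1)
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
  btrfs device scan                 # let the kernel find all devices belonging to the fs
  mount /dev/sdb /mnt/downloads     # mounting either device brings up the whole filesystem
  btrfs filesystem show             # lists the devices and the space used on each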
Have you already bought the enclosure? I'm only asking because I've done a bit of case modding over time to be able to fit all my disks. You can really stuff quite a few disks into a regular case with a bit of creativity, a hacksaw, some metal bars, screws and ladies' nylon stockings.
It'll even look good, you'll get better airflow, cooler operation, less noise and less dust in your case! I fitted 11 drives in a regular mid-tower case, and have a larger Lian Li case for my HTPC which holds 18 drives plus an SSD for the OS. You just have to say bye-bye to all the 5.25" bays at the front of your case
About the expandability, I'm gonna say this - with the disclaimer that I haven't followed best practices, but rather tried to find a way where I could use drives of different sizes and still be able to expand the array later on, even with the ZFS limitation that you cannot add members to an existing RAIDZ vdev:
Using a middle layer like mdadm or LVM is really nifty in that you can create ZFS member "drives" out of any combination of smaller physical drives. Some will argue that this makes an array member more likely to fail, but I don't really agree with that. Statistically, you can't say that a member made up of three drives is three times more likely to fail than one made of a single drive. Hard drives will fail eventually, that's a given, but each of the three disks should - statistically - last just as long as a single one, so it should work out the same on average. Of course, if we accept that any drive may be "bad" from the factory and fail quickly, the risk of getting one of those and having it affect your member would be three times as high. On the other hand, you could argue that each of the three disks is only carrying 1/3 of the load of that member (as long as you use linear/JBOD in the middle layer), so if anything they should last longer
than a single disk. Also, random seeks at the filesystem level would on average be quicker, as you have roughly three times the number of drive heads and platters poking at your data compared to a single disk.
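As an illustration of what such a middle layer looks like with LVM (just a sketch with made-up names, not my exact setup): three small disks get glued into one volume, and that volume is what ZFS sees as a single member.

  pvcreate /dev/sdb /dev/sdc /dev/sdd              # prepare the three small disks for LVM
  vgcreate member1 /dev/sdb /dev/sdc /dev/sdd      # one volume group spanning all of them
  lvcreate -l 100%FREE -n lv0 member1              # one big linear logical volume
  # /dev/member1/lv0 can now be handed to ZFS as if it were a single large drive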
Add to that the possibility of being able to expand your array. Case in point: your situation. Not sure if you're actually running ZFS yet, but let's say you currently have a 4*1TB ZFS RAIDZ array. That's 3TB of usable space. Since you cannot add new members to a ZFS vdev, buying four 2TB drives and following best practices, you'd have to create a new 4*2TB RAIDZ vdev and add it to the pool, adding 6TB of usable space. That's 9TB total.
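In ZFS terms, that best-practice route is basically a one-liner - roughly like this, with made-up pool and device names:

  # Add a second RAIDZ vdev built from the four new 2TB drives (~6TB usable on top of the existing 3TB)
  zpool add tank raidz /dev/sde /dev/sdf /dev/sdg /dev/sdh
  zfs list tank   # AVAIL should now reflect the extra space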
If, however, those initial 4*1TB members were actually sitting on a middle layer like md or LVM, you could choose to add the new 2TB drives to the existing middle-layer arrays instead of creating a new vdev. In that case, each member would grow to 3TB, and ZFS would automatically expand to use the new space, also ending up at 9TB total.
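The middle-layer route instead grows the existing members underneath ZFS - a rough sketch, again with made-up names, assuming linear md arrays:

  # Tack a new 2TB disk onto the end of one of the linear arrays (repeat per member)
  mdadm --grow /dev/md0 --add /dev/sdj
  # Let ZFS use the extra space on the grown members
  zpool set autoexpand=on tank      # if your ZFS version has the autoexpand property
  zpool online -e tank /dev/md0     # or simply export and re-import the pool on older versions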
In this exact setup you wouldn't gain any extra space by using the middle layer, but given a larger number of members and a higher level of redundancy, like RAIDZ2, the difference can be much bigger. Case in point, my setup, if done using best practices:
4*½TB physical disks in RAIDZ2 vdev: 1TB
4*1TB physical disks in RAIDZ2 vdev: 2TB
4*2TB physical disks in RAIDZ2 vdev: 4TB
Total, using best practices: 7TB effective.
Using my approach:
7*2TB members (a mix of physical 2TB disks and linear middle-layer arrays) in one RAIDZ2 vdev: 10TB
Total, using middle layers: 10TB effective.
That's a very noticeable difference, +3TB with the same disks!
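The quick way to sanity-check those numbers: a RAIDZ2 vdev gives you roughly (number of members - 2) x the size of the smallest member. So best practices come out at (4-2)*½ + (4-2)*1 + (4-2)*2 = 1 + 2 + 4 = 7TB, while one big vdev of seven 2TB-sized members comes out at (7-2)*2 = 10TB.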
Now, about when disks fail. I recently had 3 drives failing simultaneously (two with bad sectors, and one just going into complete convulsions). It was fixed pretty quickly by replacing each of the physical devices and copying the old faulty drive onto its replacement, re-adding the replacement to the middle-layer arrays, and running a scrub on ZFS. As most of the data was just fine, the scrub mostly consisted of reading everything and fixing the few errors caused by the bad sectors.
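A rough sketch of that kind of per-disk recovery, with made-up device names, using something like GNU ddrescue to do the copying (any tool that copies a disk while skipping bad sectors will do):

  zpool offline tank /dev/md0                             # take the affected member offline while shuffling disks under it
  ddrescue -f /dev/sd_old /dev/sd_new /root/rescue.log    # copy everything readable from the dying disk onto its replacement
  mdadm --stop /dev/md0                                   # stop the linear array...
  mdadm --assemble /dev/md0 /dev/sd_new /dev/sdb /dev/sdc /dev/sdd   # ...and re-assemble it with the new disk in place
  zpool online tank /dev/md0
  zpool scrub tank                                        # ZFS re-reads everything and repairs whatever landed on bad sectors
  zpool status -v tank                                    # lists any files that couldn't be repaired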
Hm. I'm rambling a bit here
But it's nice to be able to pass some of this info on to someone else. I didn't find much useful info when I started on my venture back then, so if you need any info or tips, ask away - I've done a lot of thinking and testing along the way.
Gotta go to the butcher and pick up some meat, and do some shopping for beer, but I'll check back later.