4 500g drives how to setup raid.....
Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee
I'm wondering what the best RAID setup would be for four 500 GB drives, for storage and file safety. This setup would be for sharing video, photos, etc. between the computers on my network...
Isn't RAID 10 more tolerant of multiple disk failures than RAID 5?
In RAID 5, if you lose a disk and commence recovery, loss of a second disk will result in the loss of all data. In RAID 10 (assuming 1+0, i.e. a stripe across two sets of mirrors), the array can withstand multiple disk losses as long as no complete mirror is lost. In theory, RAID 10 should be more robust for the same number of disks.
It's possible that when one disk in an array fails, others are nearing their end of life as well, so there is a high chance of secondary failure during an hours-long recovery process. Say you had four disks - in RAID 5, a failure of any of the remaining three during recovery will destroy your data. End of story. In RAID 10, there's a 1 in 3 chance that a failure of a second disk during recovery will destroy your array, versus a 2 in 3 chance that the failure of a second disk will be in the other mirror and not impact your data.
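The odds above are easy to check with a quick sketch. This is a toy model only, assuming four disks numbered 0-3, with RAID 10 laid out as two mirrored pairs {0,1} and {2,3}; it enumerates which second failure is fatal once disk 0 has already died:

```python
# Toy model of a second disk failure after disk 0 has already died.
# Assumes 4 disks; RAID 10 = stripe over mirrored pairs {0,1} and {2,3}.
MIRRORS = [{0, 1}, {2, 3}]

def raid10_survives(failed):
    # Data survives as long as no mirrored pair has lost both members.
    return all(not m.issubset(failed) for m in MIRRORS)

def raid5_survives(failed):
    # RAID 5 tolerates at most one failed disk.
    return len(failed) <= 1

fatal_r10 = sum(not raid10_survives({0, d}) for d in (1, 2, 3))
fatal_r5 = sum(not raid5_survives({0, d}) for d in (1, 2, 3))
print(f"RAID 10: {fatal_r10} of 3 possible second failures are fatal")
print(f"RAID 5:  {fatal_r5} of 3 possible second failures are fatal")
```

The enumeration confirms the 1-in-3 figure for RAID 10, versus certain loss for RAID 5.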
So the question is, are you after maximum storage space or maximum redundancy?
You've probably read it ad nauseam that RAID is NOT a backup, but in case you haven't: RAID is NOT a backup. It merely provides limited hardware fault tolerance.
To expand on what bgiddins said:
In a RAID10 set, you have multiple stripes, each of which is mirrored. As such, you can lose one drive in each stripe without losing data.
In RAID01, if one drive dies, that entire stripe is lost. As such, your fault tolerance is a single drive.
If I remember correctly, the performance of the two setups is roughly the same.
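The RAID 10 vs. RAID 01 difference can be made concrete by enumerating every two-disk failure. Another toy sketch, assuming four disks split into halves {0,1} and {2,3} — mirrored pairs under RAID 10, whole stripes mirrored against each other under RAID 01:

```python
from itertools import combinations

DISKS = range(4)
HALVES = [{0, 1}, {2, 3}]  # same physical grouping, different roles per level

def raid10_survives(failed):
    # RAID 1+0: each half is a mirrored pair; data is lost only
    # if both members of the same pair die.
    return all(not h.issubset(failed) for h in HALVES)

def raid01_survives(failed):
    # RAID 0+1: each half is a stripe; any failure inside a half
    # kills that whole stripe. The array survives only while at
    # least one stripe is fully intact.
    return any(not (h & failed) for h in HALVES)

pairs = [set(p) for p in combinations(DISKS, 2)]
ok10 = sum(raid10_survives(p) for p in pairs)
ok01 = sum(raid01_survives(p) for p in pairs)
print(f"RAID 10 survives {ok10} of {len(pairs)} two-disk failures")
print(f"RAID 01 survives {ok01} of {len(pairs)} two-disk failures")
```

RAID 10 survives 4 of the 6 possible two-disk failures, RAID 01 only 2, which is why the usual advice is to mirror first and stripe second.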
The big win here would involve using a ZFS pool of drives, but since I suspect you're not using FreeBSD or Sun or Apple hardware for home use, try RAID 1 + 0: Build a set of mirrored drives and stripe those sets of mirrors.
I would definitely avoid parity-based RAID solutions (e.g. RAID 5 and 6) for that setup: if you'd like a painfully detailed explanation of why, google BAARF or "RAID5 is evil". Also, aside from performance issues, RAID 5 has a size limit of around 1.5 terabytes, and rebuilding an array that size would probably take 3-7 days, depending on your hardware.
Intel's onboard ICHxR works just great until you have to rebuild or recover your array, and then... ugh. You get what you pay for, and you really can't spend less than $300; if you don't have that kind of money, then take your chances with onboard RAID.
As others have mentioned, 3ware makes decent boards, but I'd take a close look at Areca as well. For your setup, I'd recommend the 12x0 (PCIe x8). I've had good luck with both vendors over the years, but have had fewer issues with Areca's drivers in the recent past.
fri2219 wrote:
I would definitely avoid parity based (e.g. RAID 5 and 6) RAID solutions for that set-up: If you'd like a painfully detailed explanation of why, google for BAARF or "RAID5 is evil". Also, aside from performance issues, RAID5 has a size limit of around 1.5 Terabytes- and rebuilding an array that size would probably take 3-7 days, depending on your hardware.

Performance issues are a somewhat moot point. In many cases, a single parity-based solution is perfectly acceptable. Rebuilds on my 8x1TB array take roughly 16 hours - not 3 to 7 days. Hard drives and controllers are becoming faster (my setup is capable of 350MB/s linear reads and 200MB/s linear writes), which makes rebuilding failed arrays far less of an issue.
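Those rebuild times are easy to sanity-check: a rebuild has to rewrite roughly one member disk's worth of data, so the time is about member capacity divided by sustained rebuild throughput. A back-of-envelope sketch (the rates below are illustrative assumptions, not measurements from any particular controller):

```python
# Rough rebuild-time estimate: bytes to reconstruct / sustained rate.
MEMBER_BYTES = 1e12  # one 1 TB member disk

def rebuild_hours(rate_mb_s):
    # rate_mb_s: assumed sustained rebuild throughput in MB/s
    return MEMBER_BYTES / (rate_mb_s * 1e6) / 3600

for rate in (20, 60, 120):  # hypothetical sustained rates
    print(f"{rate:3d} MB/s sustained -> {rebuild_hours(rate):5.1f} h per 1 TB member")
```

At a sustained ~17-20 MB/s you land near the 14-16 hour figure quoted above; multi-day rebuilds only appear when the effective rate drops to a few MB/s, as can happen with software RAID on a loaded machine.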
Just out of curiosity... how would ZFS be better than setting up RAID10? I'm not too familiar with how it's set up, but would the end result be more or less the same (except you're limited to using that filesystem)? In one case the rebuild is taken care of by the controller; in the other, by some monitoring system in the OS.
Nick Geraedts wrote:
Performance issues are somewhat moot point. In many cases, having a single parity-based solution is perfectly acceptable. Rebuild sizes on my 8x1TB array take roughly 16 hours - not 3 to 7 days. Hard drives and controllers are becoming faster (my setup is capable of 350MB/s linear reads, 200MB/s linear writes), and it's making rebuilding failed arrays far less of an issue.

I thought along the same lines after reading through the BAARF article. I don't think any user runs 200 discs in RAID arrays at home. I'm wondering whether all those bad-parity and flaky-drive restrictions still apply, and whether RAID 6 is an appropriate solution for the "big guys".
*nixcraft article
Here are some better illustrations of the points I was trying to relay, along with performance numbers: http://www.cyberciti.biz/tips/raid5-vs- ... mance.html
Also, about the "3-7 days" point I was trying to make... I mangled an edit. That was meant in the context of using Intel's software RAID driver, not a decent controller.
Nick Geraedts wrote:
Just out of curiosity... how would ZFS be better than setting up RAID10? I'm not too familiar with how it's setup, but would the end result be the more or less the same (only you're limited to using that filesystem)? In one case, the rebuild is taken care of by the controller - the other, it's taken care of by some monitoring system in the OS.

There's quite a long list of reasons why, but here's a good demonstration of how convenient it is to use: http://opensolaris.org/os/community/zfs/demos/basics/
Here's a comparison chart showing feature differences: http://www.unixconsult.org/zfs_vs_lvm.html