4 x 500GB drives - how to set up RAID?

Silencing hard drives, optical drives and other storage devices

kojack
Posts: 18
Joined: Thu May 22, 2008 4:37 am
Location: Grand Falls, NL, Canada

4 x 500GB drives - how to set up RAID?

Post by kojack » Wed Oct 15, 2008 4:06 pm

I'm wondering what the best RAID setup is for 4 x 500GB drives, for storage and file safety. This setup would be for sharing video, photos, etc. between the computers on my network...

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Wed Oct 15, 2008 4:45 pm

If you're going to run this on an onboard chipset, then I'd suggest you go with RAID10. You'll only get 1TB of usable space, but the data security is about as good as it gets, and there's little to no computational overhead involved with running the volume.
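
As a rough sketch of how the usable space works out for four 500GB drives under the levels discussed in this thread (plain Python, the numbers are just arithmetic, not tied to any particular controller):

```python
# Usable capacity for 4 x 500 GB drives under common RAID levels.
DRIVE_GB = 500
N = 4

raid10_usable = (N // 2) * DRIVE_GB   # half the drives hold mirror copies -> 1000 GB
raid5_usable = (N - 1) * DRIVE_GB     # one drive's worth of capacity goes to parity -> 1500 GB
raid0_usable = N * DRIVE_GB           # no redundancy at all -> 2000 GB

print(f"RAID 10: {raid10_usable} GB usable, tolerates one failure per mirror pair")
print(f"RAID 5:  {raid5_usable} GB usable, tolerates exactly one failure")
print(f"RAID 0:  {raid0_usable} GB usable, tolerates no failures")
```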

JazzJackRabbit
Posts: 1386
Joined: Fri Jun 18, 2004 6:53 pm

Post by JazzJackRabbit » Wed Oct 15, 2008 6:58 pm

Best would be RAID5, but you'll either need a card that supports hardware RAID5, or a pretty beefy CPU (at least dual-core if you are going to be working on the same machine) along with an OS that supports software RAID5.

Otherwise RAID10 is your next best choice.
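
To illustrate where that CPU cost comes from: RAID5 keeps a parity block for each stripe, conceptually an XOR across the data blocks, and has to recompute it on every write. A minimal sketch of the idea (illustrative only; real implementations work on fixed-size stripes and rotate the parity across the disks):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks on three disks; the fourth disk holds their parity.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# If one data disk is lost, its block can be rebuilt from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```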

bgiddins
Posts: 175
Joined: Sun Sep 14, 2008 1:04 am
Location: Australia

Post by bgiddins » Wed Oct 15, 2008 7:09 pm

Isn't RAID 10 more tolerant of multiple disk failures than RAID 5?

In RAID 5, if you lose a disk and commence recovery, loss of a second disk will result in the loss of all data. In RAID 10 (assuming 1+0, which is a striped set across two sets of mirrors), the array can withstand multiple disk losses as long as no complete mirror is lost. In theory, RAID 10 should be more robust for the same number of disks.

It's possible that when one disk in an array fails, others are nearing their end of life as well, so there is a high chance of secondary failure during an hours-long recovery process. Say you had four disks - in RAID 5, a failure of any of the remaining three during recovery will destroy your data. End of story. In RAID 10, there's a 1 in 3 chance that a failure of a second disk during recovery will destroy your array, versus a 2 in 3 chance that the failure of a second disk will be in the other mirror and not impact your data.
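
To make those odds concrete, here's a small sketch that simply enumerates every possible second failure for a four-drive array (purely illustrative; it just counts the cases described above):

```python
from itertools import product

# RAID 10 layout: mirror pair A = (A1, A2), mirror pair B = (B1, B2).
drives = ["A1", "A2", "B1", "B2"]
mirror_of = {"A1": "A2", "A2": "A1", "B1": "B2", "B2": "B1"}

def raid10_survives(first, second):
    # Data is lost only if the second failure takes out the first drive's mirror partner.
    return mirror_of[first] != second

def raid5_survives(first, second):
    # With single parity, any second failure during the rebuild is fatal.
    return False

for name, survives in (("RAID 10", raid10_survives), ("RAID 5", raid5_survives)):
    cases = [(f, s) for f, s in product(drives, drives) if f != s]
    ok = sum(survives(f, s) for f, s in cases)
    # RAID 10 survives 2 of the 3 possible second failures; RAID 5 survives none.
    print(f"{name}: survives {ok} of {len(cases)} possible second failures")
```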

So the question is, are you after maximum storage space or maximum redundancy?

You've probably read it ad nauseam that RAID is NOT a backup, but in case you haven't: RAID is NOT a backup. It merely provides limited hardware fault tolerance.

kojack
Posts: 18
Joined: Thu May 22, 2008 4:37 am
Location: Grand Falls, NL, Canada

Post by kojack » Thu Oct 16, 2008 6:40 am

I know RAID is not a fail-safe; however, it's better than a single drive with your data on it. Therefore, RAID 1+0 it is. Yes, my main system is going to house these drives...

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Thu Oct 16, 2008 12:01 pm

Just make sure it's RAID 10 and not RAID 0+1. There's a subtle, but significant, difference between the two RAID levels.

kojack
Posts: 18
Joined: Thu May 22, 2008 4:37 am
Location: Grand Falls, NL, Canada

Post by kojack » Thu Oct 16, 2008 3:38 pm

And what is the difference?

bgiddins
Posts: 175
Joined: Sun Sep 14, 2008 1:04 am
Location: Australia

Post by bgiddins » Thu Oct 16, 2008 4:50 pm

RAID 1+0 is a striped set across mirrors (i.e. two RAID 1s in RAID 0)
RAID 0+1 is a mirror of striped sets (i.e. two RAID 0s in RAID 1)

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Thu Oct 16, 2008 7:28 pm

To expand on what bgiddins said:

In a RAID 10 set, the drives are grouped into mirrored pairs and data is striped across those pairs. As such, you can lose one drive from each mirrored pair without losing data.

In RAID 0+1, if one drive dies, the entire striped set containing it is lost, leaving only the other stripe. As such, your guaranteed fault tolerance is a single drive.

If I remember correctly, the performance of the two setups is roughly the same.
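
If it helps to see that spelled out, here's a toy enumeration (illustrative only) of which double-drive failures each layout survives with four drives:

```python
from itertools import combinations

drives = ["a", "b", "c", "d"]

def raid10_ok(failed):
    # RAID 10: stripe across mirror pairs (a,b) and (c,d).
    # Each pair must keep at least one surviving member.
    return bool({"a", "b"} - failed) and bool({"c", "d"} - failed)

def raid01_ok(failed):
    # RAID 0+1: mirror of stripes (a,b) and (c,d).
    # At least one stripe must remain completely intact.
    return not ({"a", "b"} & failed) or not ({"c", "d"} & failed)

for name, ok in (("RAID 10", raid10_ok), ("RAID 0+1", raid01_ok)):
    survived = [pair for pair in combinations(drives, 2) if ok(set(pair))]
    # Both layouts tolerate any single failure, but RAID 10 survives more double failures.
    print(f"{name} survives these double failures: {survived}")
```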

kojack
Posts: 18
Joined: Thu May 22, 2008 4:37 am
Location: Grand Falls, NL, Canada

Post by kojack » Fri Oct 17, 2008 4:49 am

Perfect, so 1+0 it is!

Luminair
Posts: 223
Joined: Wed Mar 21, 2007 10:45 am

Post by Luminair » Sun Oct 19, 2008 3:53 am

I've done this before. If you're not using a 3ware RAID card, you're doing it wrong :)

fri2219
Posts: 222
Joined: Mon Feb 05, 2007 4:14 pm
Location: Forkbomb, New South Wales

Post by fri2219 » Mon Oct 27, 2008 9:53 am

The big win here would involve using a ZFS pool of drives, but since I suspect you're not using FreeBSD, Sun, or Apple hardware for home use, try RAID 1+0: build sets of mirrored drives and stripe across those mirrors.

I would definitely avoid parity-based RAID solutions (e.g. RAID 5 and 6) for that set-up: if you'd like a painfully detailed explanation of why, google for BAARF or "RAID5 is evil". Also, aside from performance issues, RAID5 has a size limit of around 1.5 terabytes, and rebuilding an array that size would probably take 3-7 days, depending on your hardware.

Intel's onboard ICHxR works just great until you have to rebuild or recover your array, and then... ugh. You get what you pay for, and you really can't spend less than $300; if you don't have that kind of money, then take your chances with onboard RAID.

As others have mentioned, 3ware makes decent boards, but I'd take a close look at Areca as well; in your setup, I'd recommend the 12x0 series (PCIe x8). I've had good luck with both vendors over the years, but have had fewer issues with Areca's drivers in the recent past.

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Mon Oct 27, 2008 11:21 am

fri2219 wrote:I would definitely avoid parity-based RAID solutions (e.g. RAID 5 and 6) for that set-up: if you'd like a painfully detailed explanation of why, google for BAARF or "RAID5 is evil". Also, aside from performance issues, RAID5 has a size limit of around 1.5 terabytes, and rebuilding an array that size would probably take 3-7 days, depending on your hardware.
Performance issues are somewhat of a moot point. In many cases, a single parity-based solution is perfectly acceptable. Rebuilds on my 8x1TB array take roughly 16 hours - not 3 to 7 days. Hard drives and controllers are becoming faster (my setup is capable of 350MB/s linear reads and 200MB/s linear writes), which makes rebuilding failed arrays far less of an issue.
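
As a rough sanity check on those figures, rebuild time is roughly the capacity of the failed member divided by the effective rebuild rate, which is usually well below the drives' raw linear speed once parity verification and live I/O are factored in. A quick sketch (the rates here are illustrative guesses, not measurements):

```python
def rebuild_hours(member_tb, effective_mb_per_s):
    """Hours needed to rewrite one failed member at a given effective rebuild rate."""
    bytes_to_write = member_tb * 1000**4                  # decimal TB, as drives are sold
    return bytes_to_write / (effective_mb_per_s * 1000**2) / 3600

# A 1 TB member disk at a few plausible effective rates (illustrative only).
for rate in (10, 20, 50, 100):
    print(f"{rate:>3} MB/s effective -> {rebuild_hours(1, rate):5.1f} hours")
```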

Just out of curiosity... how would ZFS be better than setting up RAID10? I'm not too familiar with how it's set up, but would the end result be more or less the same (only you're limited to using that filesystem)? In one case, the rebuild is taken care of by the controller; in the other, it's taken care of by a monitoring system in the OS.

Cistron
Posts: 618
Joined: Fri Mar 14, 2008 5:18 am
Location: London, UK

Post by Cistron » Mon Oct 27, 2008 2:59 pm

Nick Geraedts wrote:
fri2219 wrote:I would definitely avoid parity-based RAID solutions (e.g. RAID 5 and 6) for that set-up: if you'd like a painfully detailed explanation of why, google for BAARF or "RAID5 is evil". Also, aside from performance issues, RAID5 has a size limit of around 1.5 terabytes, and rebuilding an array that size would probably take 3-7 days, depending on your hardware.
Performance issues are somewhat of a moot point. In many cases, a single parity-based solution is perfectly acceptable. Rebuilds on my 8x1TB array take roughly 16 hours - not 3 to 7 days. Hard drives and controllers are becoming faster (my setup is capable of 350MB/s linear reads and 200MB/s linear writes), which makes rebuilding failed arrays far less of an issue.
I was just thinking along the same lines after reading through the BAARF article. I don't think any home user runs 200 disks in RAID arrays. I'm wondering whether all those bad-parity and flaky-drive concerns actually still apply, and whether RAID 6 is an appropriate solution for the "big guys".

fri2219
Posts: 222
Joined: Mon Feb 05, 2007 4:14 pm
Location: Forkbomb, New South Wales

*nixcraft article

Post by fri2219 » Mon Oct 27, 2008 9:05 pm

Here are some better illustrations of the points I was trying to relay, along with performance numbers: http://www.cyberciti.biz/tips/raid5-vs- ... mance.html

Also, about the "3-7 days" point I was trying to make: I mangled an edit. That was meant in the context of using Intel's software RAID driver, not a decent controller.

fri2219
Posts: 222
Joined: Mon Feb 05, 2007 4:14 pm
Location: Forkbomb, New South Wales

Post by fri2219 » Mon Oct 27, 2008 9:13 pm

Nick Geraedts wrote:Just out of curiosity... how would ZFS be better than setting up RAID10? I'm not too familiar with how it's set up, but would the end result be more or less the same (only you're limited to using that filesystem)? In one case, the rebuild is taken care of by the controller; in the other, it's taken care of by a monitoring system in the OS.
There's quite a long list of reasons why, but here's a good demonstration of how convenient it is to use: http://opensolaris.org/os/community/zfs/demos/basics/

Here's a comparison chart showing feature differences: http://www.unixconsult.org/zfs_vs_lvm.html

protellect
Posts: 312
Joined: Tue Jul 24, 2007 3:57 pm
Location: Minnesota

Post by protellect » Tue Oct 28, 2008 1:17 pm

I think RAID is more for performance than for storage.

If you just need storage, I wouldn't even use RAID.

Put two in your file server, and put two in a computer that you turn on once a week to run backups.
