Quiet RAID 5 home server

Got a shopping cart of parts that you want opinions on? Get advice from members on your planned or existing system (or upgrade).

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Post Reply
gronx
Posts: 7
Joined: Wed Mar 14, 2007 3:41 pm

Quiet RAID 5 home server

Post by gronx » Sun Feb 08, 2009 2:16 am

I want to build a home server that supports RAID 5. I’m at the brainstorming stage and I’d like to get some input. Here’s what I am thinking as far as requirements:

-QUIET (Doesn’t need to be completely silent)
-At least 3TB Raid 5 array (4 SATA 1TB Disks + separate system disk)
-I would need to stream HD video
-24/7/365 operation (low power a big plus)
-Trying to keep the budget as low as possible ($900 at most on hardware)

What sort of setups do other folks run at home? What OS? Any recommendations on parts (HDs, MBs, cases)?

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Sun Feb 08, 2009 12:03 pm

Don't do RAID 5.

Either stick with two, four, or six disks in RAID 1 or go four or six disks in RAID 10 (not counting the boot drive).

See viewtopic.php?p=388987 for much more on RAID levels

I'd also want to stay away from 1.0TB and 1.5TB drives right now, as Seagate and Western Digital are both having firmware problems on very large drives.

I'd probably go for 640GB Green Power drives for an array, but it'd take 8 of those in RAID 10 to get you 2.33TB. (640GB drives format to about 597GB of usable space.)

Next step up is 750GB Green Power drives, where 4 disks in RAID 10 gets you about 1.4TB of usable space. Do 6 750GB disks in RAID 10 and you get about 2.1TB of usable space.

Given the array size you requested, we're pretty much forced to look at larger drives.

Next step up is 1TB Green Power drives, where 4 disks in RAID 10 gets you about 1.9TB of usable space. Do 6 1TB disks in RAID 10 and you get just about 2.8TB of usable space.

Next step up is 1.5TB Green Power drives, where 4 disks in RAID 10 gets you about 2.8TB of usable space. Do 6 1.5TB disks in RAID 10 and you get just under 4.2TB of usable space.

Next step up is 2TB Green Power drives, where 4 disks in RAID 10 gets you about 3.7TB of usable space. Do 6 2TB disks in RAID 10 and you get about 5.6TB of usable space.
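Those figures follow from two facts: RAID 10 mirrors everything, so half the raw capacity is usable, and drive makers quote decimal gigabytes (10^9 bytes) while the OS reports binary units. A quick sketch in Python (my own helper function, ignoring filesystem overhead, so real numbers land a touch lower):

```python
def raid10_usable_tib(drive_gb: int, n_drives: int) -> float:
    """Usable space of a RAID 10 array in binary TiB.

    Mirroring halves the raw capacity; drives are sold in
    decimal GB (10^9 bytes) but the OS reports TiB (2^40 bytes).
    """
    if n_drives % 2 != 0:
        raise ValueError("RAID 10 needs an even number of drives")
    raw_bytes = drive_gb * 10**9 * n_drives
    return (raw_bytes / 2) / 2**40

# Four 1TB drives: a bit over 1.8 TiB usable
print(f"{raid10_usable_tib(1000, 4):.2f} TiB")
```

Plugging in the drive sizes above reproduces the quoted numbers to within rounding.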

If RAID 1/RAID 10 seems expensive to you, consider that to do RAID 5 right you need a hardware controller (not motherboard-based RAID), and once you get into the TB+ range you should really be doing RAID 6 instead of RAID 5, which requires an even more expensive RAID controller. Even after all of that, RAID 5/6 rebuild times can get out of hand with multi-TB arrays.

RAID 5/6 are not something you want to dive into unless you have VERY SPECIFIC needs and are very sure you don't want RAID 1 or RAID 10.

Remember that partitions larger than 2TB are going to be more of an issue on Windows, so you may choose to partition the array anyway.

You might as well go with a 640GB Green Power for the boot drive if the system will be idle a lot.

Don't forget your backup software/media. If you have a multi-TB array and lose a couple of drives, it might not matter whether you had RAID 5 or 10. If you'd built it as 3 separate RAID 1 arrays you'd only lose 1/3 of the total data, but losing 2 or more drives in RAID 5 or 10 likely means losing all the data on the entire array.

And if all 6 drives have bad firmware you may just lose access to all 6 drives at once. Don't assume any type of RAID will keep you from losing data.

FartingBob
Patron of SPCR
Posts: 744
Joined: Tue Mar 04, 2008 4:05 am
Location: London
Contact:

Post by FartingBob » Sun Feb 08, 2009 3:09 pm

dhanson865 wrote: Remember that partitions larger than 2TB are going to be more of an issue on Windows, so you may choose to partition the array anyway.
I thought that was just an issue with XP, or 32-bit OSes (one or the other). Vista doesn't have that problem as far as I know.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Tue Feb 10, 2009 6:35 am

FartingBob wrote:
dhanson865 wrote: Remember that partitions larger than 2TB are going to be more of an issue on Windows, so you may choose to partition the array anyway.
I thought that was just an issue with XP, or 32-bit OSes (one or the other). Vista doesn't have that problem as far as I know.
I've never dealt with larger than 2TB partitions but I think this post sums it up with some RAID 5 examples.
http://www.carltonbale.com/2007/05/how-to-break-the-2tb-2-terabyte-file-system-limit/comment-page-1/ wrote:Breaking 2TB Option 1 - Use Windows with NTFS and GUID Partition Table (GPT) partitions. It is possible for Windows to use NTFS partitions larger than 2TB as long as they are configured properly. Windows requires that GUID Partition Tables be used in place of the standard Master Boot Record (MBR) partition tables. You will need Windows XP x64 Edition, Windows Server 2003 Service Pack 1, Windows Vista, or later for GPT support. (It is possible to mount and read existing GPT partitions under Windows XP and 2000 using GPT Mounter from Mediafour; however, their MacDrive product does not support GPT partitions.) There are a couple of stipulations for GPT disks. First, the system drive on which Windows is installed can't be a GPT disk because it is not possible to boot to a GPT partition. Second, an existing MBR partition can't be converted to GPT unless it is completely empty; you must either delete everything and convert, or create the partition as GPT. Read this Microsoft TechNet article for more details on GPT.

* To summarize: 1 RAID array of five 1TB Drives -> 1 RAID level 5 Volume Set that is 4TB -> 1 NTFS GUID Partition Table Windows partition that is 4TB.

Breaking 2TB Option 2 - Use Linux with CONFIG_LBD enabled. Most Linux file systems are capable of partitions larger than 2 TB, as long as the Linux kernel itself is. (See this comparison of Linux file systems.) Most Linux distributions now have kernels compiled with CONFIG_LBD enabled (Ubuntu 6.10 does, for example.) As long as the kernel is configured/compiled properly, it is straight-forward to create a single 4TB EXT3 (or similar) partition.

* To summarize: 1 RAID array of five 1TB Drives -> 1 RAID level 5 Volume Set that is 4TB -> 1 EXT3 (or similar) Linux partition that is 4TB.

Breaking 2TB Option 3 - Use Standard Partitions and Create Multiple Volume Sets within a RAID array. A RAID array itself can be larger than 2 TB without presenting a volume set larger than 2 TB to the operating system. This way, you can use older file systems (that support only 2TB) and still have RAID 5 protection and more than 2 TB of total storage. To do this, put all 5 drives into a RAID set and create a 2 TB RAID Level 5 volume set; this will leave 2TB of the RAID set unused. Then create a second 2 TB RAID Level 5 volume set. Boot into your operating system, create a partition on each of the 2TB virtual drives, and format each of the two 2TB virtual drives. The disadvantage is that there is not one single, large 4TB partition. The advantages are 1) backwards compatibility for the file system and partitions, and 2) both volume sets are part of a RAID 5 array, protected from single-drive failures, with only one drive's worth of storage sacrificed for RAID parity data.

* To summarize: 1 RAID array of five 1TB Drives -> 2 RAID level 5 Volume Sets that are 2TB each -> 2 standard NTFS (or any other) partitions that are 2TB.
I'd just modify that with the don't use RAID 5 disclaimer. :wink:
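For what it's worth, the 2TB cutoff the article keeps running into isn't arbitrary: MBR partition tables store sector counts in 32-bit fields, and at the traditional 512 bytes per sector that works out to exactly 2 TiB. GPT widens the fields to 64 bits. The arithmetic:

```python
# Why MBR tops out at 2TB: the partition table stores sector
# counts in 32-bit fields, and sectors are traditionally 512 bytes.
SECTOR_BYTES = 512
max_mbr_bytes = (2**32) * SECTOR_BYTES  # 32-bit LBA field
print(max_mbr_bytes / 2**40, "TiB")     # exactly 2.0 TiB

# GPT uses 64-bit LBAs, so the same arithmetic gives 8 ZiB:
max_gpt_bytes = (2**64) * SECTOR_BYTES
print(max_gpt_bytes / 2**70, "ZiB")     # 8.0 ZiB
```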

protellect
Posts: 312
Joined: Tue Jul 24, 2007 3:57 pm
Location: Minnesota

Post by protellect » Tue Feb 10, 2009 8:23 am

dhanson865 wrote:Don't do RAID 5.
I would modify that statement to say "If you're going to do RAID 5, do it right with a proper hardware RAID card."

For a file server I would always recommend cheap, reliable hard disks in RAID 1 or RAID 10 for the OS, and RAID 5/6/1/10 for data. Be sure to have hot spares.

I would also always remind you that RAID is never a backup tool; it's an uptime tool.

I haven't had any problems addressing partitions larger than 2TB with server 2003 x64. I've got a feeling linux can deal with them in similar fashions.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Tue Feb 10, 2009 3:58 pm

protellect wrote:
dhanson865 wrote:Don't do RAID 5.
I would modify that statement to say "If you're going to do raid5, do it right with a proper hardware raid card"
Have you read "The high cost of RAID 5"?

The high cost of RAID 5
http://www.networkworld.com/newsletters ... stor1.html


And the other quotes about RAID 5 from other articles:
http://storageadvisors.adaptec.com/2006 ... -x86-cpus/
Another point is that RAID-5 is becoming “yesterday’s technology” …

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Thu Feb 12, 2009 3:36 am

dhanson865 - I can understand the argument against RAID 5 in enterprise situations, but at the same time, their calculated values for MTTDL are on the order of hundreds of years. In a home situation, if you're looking to maximize storage space while still keeping basic redundancy, RAID 5 will typically do a good enough job. As always, RAID is not a backup - it's a way of making the most of your money while (sort of) covering your bases.

I'd also recommend using a proper hardware RAID card, but if that's not available and the storage server is solely for file storage, there shouldn't be any noticeable performance problems with a modern CPU backing the parity calculations.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Thu Feb 12, 2009 5:19 am

Someone has been drinking from the FUD fountain.

RAID 5 done properly isn't a problem.

Get Samsung F1 drives - they're rated for one unrecoverable read error per 10^15 bits, so you should be safe. It's also worth mentioning that most newer RAID cards handle UREs without a hitch... so there.

I'd consider Linux and software RAID 6, just to be on the safe side, but do get a UPS.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Thu Feb 12, 2009 6:55 am

Nick Geraedts wrote:In a home situation
In a home situation, I'm too lazy/cheap to do all the work it takes to make Windows RAID 5/6 work like it should.

If you aren't using Windows for your RAID 5 setup, life is simpler.

Looking at http://support.microsoft.com/kb/929491, it doesn't mention Vista, so maybe for users of Vista and Windows 7 my dislike of RAID 5 will be less of an issue from a performance standpoint. I know that with RAID 1 or 10 I don't have to worry about partition alignment.

Even then I don't like the extra write penalty for RAID 5 vs RAID 1 or 10. Redundancy is nice. Redundancy that robs performance more than necessary is not something I tend to recommend.

I suppose it's up to each user to decide where the balance between space and performance lies, but to me, with all the configuration and reliability issues I've seen with RAID 5, it just isn't worth using. The extra couple of percent of disk space lost to the extra redundancy in RAID 6, 1, or 10 is not a factor to me, but price, performance, and reliability are.

RAID 1 can be done cheaply in Windows with no controller concerns, and even the cheapest controllers do RAID 1 right. It's the closest thing there is to fire-and-forget RAID.

RAID 10 takes a little more effort and is still friendly to cheap hardware. If the RAID controller does 10, it's fire and forget. If not, you can do part or all of the RAID in software and be OK.

RAID 5 requires a good hardware controller to avoid being hampered by several issues, and good controllers are more expensive. Once you get one that does RAID 5 well, it will most likely do RAID 6 well too.

RAID 6 requires a good hardware controller. Once you have one that can do 6, why do 5?
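The write penalty mentioned above can be put in rough numbers. For a single small random write, the classic per-level I/O counts mean RAID 5 sustains about half the small-write rate of RAID 1/10 on the same spindles. A back-of-envelope sketch in Python (the function and the 80-IOPS figure are illustrative, not from any benchmark):

```python
# Back-of-envelope I/O amplification for a single small random write.
# These are the textbook counts; real controllers with write-back
# cache can do better.
WRITE_IOS = {
    "RAID 1/10": 2,  # write the block to both mirrors
    "RAID 5": 4,     # read old data + old parity, write new data + new parity
    "RAID 6": 6,     # as RAID 5, but with two parity blocks to update
}

def effective_write_iops(raw_iops_per_disk: int, n_disks: int, level: str) -> float:
    """Rough sustained small-write IOPS for the whole array."""
    return raw_iops_per_disk * n_disks / WRITE_IOS[level]

# Four ~80-IOPS 7200rpm disks: RAID 10 sustains twice the small
# writes of RAID 5 on the same hardware.
print(effective_write_iops(80, 4, "RAID 1/10"))  # 160.0
print(effective_write_iops(80, 4, "RAID 5"))     # 80.0
```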
Last edited by dhanson865 on Thu Feb 12, 2009 7:16 am, edited 2 times in total.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Thu Feb 12, 2009 7:01 am

Wibla wrote:RAID5 done properly isn't a problem.

I'd consider linux and software raid6, just to be on the safe side, but do get an UPS.
I totally agree with your post. I don't even mind you suggesting that I'm spreading FUD.

My concern is that many people don't know what they're getting into when they think "simple home RAID 5", and I'm willing to raise the warning.

I take a Windows-centric view on this, so as soon as you say Linux I assume you aren't going to run into the issues I see on the Windows side, and I say more power to you.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Thu Feb 12, 2009 8:24 am

I'm not saying you're spreading FUD, only that you've read too much of it :)

Software RAID 5 in Windows is a dead end; I don't understand why people even consider it. On Linux it's a no-brainer: easy to set up, stable, and fast.
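For anyone curious what "easy to set up" looks like, here's a minimal mdadm RAID 6 sketch on Linux. Device names, the mount point, and the config path are placeholders (the config lives at /etc/mdadm/mdadm.conf on Debian-family distros), and it all needs root:

```shell
# Build a 6-disk RAID 6 array out of whole drives
# (substitute your own device names).
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

# Put a filesystem on it and mount it.
mkfs.ext3 /dev/md0
mkdir -p /srv/storage
mount /dev/md0 /srv/storage

# Persist the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```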

I'm considering testing Windows 2k3/2k8 server for my old fileserver, but I have a proper HW raid setup (look at the 3.7TB box in my sig).

Post Reply