2TB storage in RAID 5 mode...

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

2TB storage in RAID 5 mode...

Post by sailorman » Fri Nov 16, 2007 12:45 pm

Hi there,

I'm building a completely new HTPC and I want it to be as quiet as I can get it.

It will be based on an ASUS P5E3 Deluxe WiFi motherboard, which only supports 6 SATA drives (other parts will be: E6850 Core 2 Duo CPU, MSI RX2600XT 512MB silent VGA card, 2GB Corsair DHX DDR3 1333MHz RAM, Origen S21T case, Antec Phantom 500 PSU).

Furthermore, I want at least 2TB of available storage in RAID 5 (for safety reasons), while the OS will run from an independent 250GB SATA HDD.

The question is: which HDDs is it better to use? Four 750GB drives or three 1TB drives? Using five 500GB drives would be more efficient (from a cost and manageability point of view), but I want to keep one SATA port on the mobo available for future upgrades (e.g. a BD drive, another HDD, etc.).

I'd also like an idea of which specific HDD models are best for this purpose, i.e. from which company and which models. I usually use Western Digital and have been very satisfied with them so far, but I have never used such big HDDs before.

Any ideas or opinions would be very useful to me.

Thank you all.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Fri Nov 16, 2007 1:09 pm

4x1TB with hotspare :lol:

murtoz
Posts: 122
Joined: Tue Mar 13, 2007 12:24 pm
Location: Wiltshire, UK

Post by murtoz » Fri Nov 16, 2007 3:03 pm

On the principle that in a striped RAID like RAID 5 all drives can be accessed at the same time, I'd go with as many drives as possible. This would theoretically give the best performance, especially for large sequential files like TV recordings and movies (provided there are no other bottlenecks).
On top of that, I think 500GB drives give you the most storage for your money (although I have no evidence to back that up), and you would only be giving up 500GB of capacity to redundancy.
Also, you could use the excellent Samsung HD501LJ (I have two here, suspended, and cannot hear them unless it's the middle of the night and I hold my breath :-D). So, go with 500GB if you can.
If you really need the last SATA port free, go with 750GB. Not sure which make to go with in that case, sorry.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Fri Nov 16, 2007 3:24 pm

And you can get a cheap PCI SATA card for $10-20 these days; they work just fine.

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Fri Nov 16, 2007 3:25 pm

murtoz wrote:If you really need the last SATA port free, go with 750GB. Not sure which make to go with in that case, sorry.
Thanks for your ideas.

Actually, I don't need the last SATA port free right now, because I chose not to use a SATA DVD±RW but an IDE one (Plextor PX-800A). But I think it's better to have one free for future upgrades, as I said before (e.g. one more HDD later, or an HD DVD / Blu-ray drive, which absolutely needs a SATA port).

In fact, in the first design I was thinking of using five WD5000ABYS drives (they are perfect for RAID). But that solution has many limitations, as I said. Imagine if I wanted to add one more drive and didn't have a spare port!! Silly me...!

santacruzbob
Posts: 25
Joined: Thu Apr 05, 2007 4:23 pm

Post by santacruzbob » Fri Nov 16, 2007 3:38 pm

No offense, but that really seems like a stupid concern to me. It's already been said that PCI SATA controllers are very cheap and easy to find and use.
You're going to be spending a considerable amount of money building your RAID array ($500+), and I'm sure you want it to perform well for a long time, so I think you should focus on that rather than on the fact that you might want to install another SATA drive later on. Depending on usage, you might want to reconsider your options. RAID 5 has horrible write speeds, especially on a 3-disk array. If you really need reliability and speed, your best option is RAID 10 with four 1TB drives.

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Fri Nov 16, 2007 3:44 pm

santacruzbob wrote:RAID 5 has horrible write speeds, especially on a 3-disk array. If you really need reliability and speed, your best option is RAID 10 with four 1TB drives.
RAID5 has horrible write speeds when using any onboard controller. If you're using a dedicated controller, performance is decent.

For his setup, I'd also suggest using RAID10. Speed and reliability - you can't beat that. ;)

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Fri Nov 16, 2007 4:23 pm

What does RAID 10 offer over RAID 5???

Supposing I use 4x1TB HDDs (leaving aside that drives of that size still have reliability issues and that the cost will be extremely high, more than 1000 euro, around 278 euro each), what will the final storage capacity and the access speed be?

sheninat0r
Posts: 76
Joined: Mon Oct 29, 2007 2:02 pm

Post by sheninat0r » Fri Nov 16, 2007 4:43 pm

RAID 10 is twice as big as RAID 5 :lol:

RAID 5 uses parity calculations for redundancy, which takes up a LOT of processor overhead if done in software on the CPU, and that can GREATLY slow read/write times. Good RAID cards run from $300 to $1000, too.

RAID 10, on the other hand, uses mirroring and striping in combination - two drives are mirrored [RAID 1] and then two other drives are mirrored as well, and these two RAID 1 units are put into a large RAID 0 array, giving the speed of RAID 0 and the redundancy of RAID 1 at the same time - with no parity calculations either.
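If you're wondering what the parity calculation actually is: it's just an XOR across the stripe. A toy Python sketch (purely illustrative, not how any particular controller implements it):

[code]
# Parity is the XOR of the data blocks in a stripe; any single lost block
# can be rebuilt by XORing all the surviving blocks together.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks striped across three drives, parity on a fourth:
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# Simulate losing the drive that held d1, then rebuild it from the rest:
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
[/code]

The XOR itself is cheap per block; the real overhead for small writes comes from the read-modify-write needed to keep the parity up to date.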

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Fri Nov 16, 2007 4:56 pm

sheninat0r wrote:two drives are mirrored [RAID 1] and then two other drives are mirrored as well, and these two RAID 1 units are put into a large RAID 0 array, giving the speed of RAID 0 and the redundancy of RAID 1 at the same time
Better speed, but worse storage efficiency... considering the cost, it doesn't seem like a good idea...

In fact, I intend to use the motherboard's integrated RAID controller...

Which are the best HDDs for RAID in 750GB and 1TB capacities, from a silence/reliability/speed point of view?

beoba
Posts: 36
Joined: Fri Mar 16, 2007 9:54 pm
Location: Boston

Post by beoba » Fri Nov 16, 2007 5:37 pm

I'm looking into converting an existing desktop into a fileserver/TV box, and have been looking into getting a SATA PCI expander card since the motherboard lacks SATA plugs. What kind of throughput can you get through a single card? Would the PCI bus become a bottleneck in a 4-drive software RAID 5 array?

Probably just gonna get this (or similar tier), as the chip (SIL 3114) looks to have good Linux support: http://www.newegg.com/Product/Product.a ... 6815280003

murtoz
Posts: 122
Joined: Tue Mar 13, 2007 12:24 pm
Location: Wiltshire, UK

Post by murtoz » Fri Nov 16, 2007 6:11 pm

sheninat0r wrote:RAID 5 uses parity calculations for redundancy, which takes up a LOT of processor overhead if done in software on the CPU, and that can GREATLY slow read/write times. Good RAID cards run from $300 to $1000, too.

RAID 10, on the other hand, uses mirroring and striping in combination - two drives are mirrored [RAID 1] and then two other drives are mirrored as well, and these two RAID 1 units are put into a large RAID 0 array, giving the speed of RAID 0 and the redundancy of RAID 1 at the same time - with no parity calculations either.
As a result, the minimum number of drives required for RAID 5 is 3, and for RAID 10 it's 4. Total available space for RAID 5 is the total capacity of all disks minus the capacity of one disk; for RAID 10, it is half the total capacity of all disks.
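To make that concrete, a quick worked example (Python; assumes equal-sized disks and ignores filesystem overhead):

[code]
# Usable capacity for arrays of equal-sized disks (back-of-the-envelope).

def raid5_usable(num_disks: int, disk_tb: float) -> float:
    return (num_disks - 1) * disk_tb    # one disk's worth goes to parity

def raid10_usable(num_disks: int, disk_tb: float) -> float:
    return (num_disks / 2) * disk_tb    # half the disks are mirror copies

print(raid5_usable(3, 1.0))     # 3 x 1TB   RAID 5  -> 2.0 TB usable
print(raid5_usable(4, 0.75))    # 4 x 750GB RAID 5  -> 2.25 TB usable
print(raid10_usable(4, 1.0))    # 4 x 1TB   RAID 10 -> 2.0 TB usable
[/code]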
beoba wrote:I'm looking into converting an existing desktop into a fileserver/TV box, and have been looking into getting a SATA PCI expander card since the motherboard lacks SATA plugs. What kind of throughput can you get through a single card? Would the PCI bus become a bottleneck in a 4-drive software RAID 5 array?
On a normal PCI bus (32-bit / 33MHz), the available bandwidth is 132MByte/s (32 bits x 33MHz / 8 bits). However, this is shared with all other PCI devices on the same bus.
The maximum sustained throughput of a single disk varies widely but should be somewhere between 50 and 80MByte/s.
A 4-drive RAID 5 would effectively use 3 drives. 3 x 50MByte/s is 150MByte/s, which is already more than the total bus bandwidth. With onboard SATA controllers this is often not a problem, as these are integrated into the southbridge (ICH), which connects to the rest of the system at much higher speeds.

However, if you are connecting to this fileserver via ethernet, that is what you should be worrying about... for 100Mbit ethernet, the available bandwidth is (100/8 =) 12.5MByte/s. If you have gigabit ethernet, you almost match the PCI bandwidth.
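Rough numbers in Python (the 50MByte/s per-disk figure is just the conservative end of my estimate above):

[code]
# Back-of-the-envelope check of where the bottleneck sits.

PCI_BUS_MBPS     = 32 * 33 / 8     # 32-bit @ 33MHz  -> 132 MB/s, shared
DISK_STREAM_MBPS = 50              # conservative per-disk sequential rate
GIGE_MBPS        = 1000 / 8        # gigabit ethernet -> 125 MB/s

array_mbps = 3 * DISK_STREAM_MBPS  # a 4-drive RAID 5 streams ~3 disks' worth
print(array_mbps > PCI_BUS_MBPS)                 # True: the PCI bus limits the array
print(min(array_mbps, PCI_BUS_MBPS, GIGE_MBPS))  # ~125 MB/s end-to-end ceiling
[/code]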

beoba
Posts: 36
Joined: Fri Mar 16, 2007 9:54 pm
Location: Boston

Post by beoba » Fri Nov 16, 2007 7:09 pm

In this case it'll be GigE, which, haha, is also a PCI card. Oh well, I'm not all that worried about throughput on this thing, as long as transfers and streams are reasonable. And if it doesn't work out, I'd only be out $25 for the cheap SATA card.

I'm replacing my current gaming machine (built in 2002) with a new machine, and converting the current machine into the fileserver (which is in turn replacing a 600mhz P3 system which sometimes has difficulty playing larger video). I figure in a couple years I'll be doing another "shift", so I'm fine with a non-optimal solution for this round.

Something else I've been curious about: do IRQs have any effect on this stuff? More specifically, would ensuring that two PCI cards stay on separate IRQs help performance? I vaguely remember the BIOS supporting some form of IRQ assignment to different slots, or something along those lines.

FireFoxx74
Posts: 23
Joined: Thu May 11, 2006 6:43 am

Post by FireFoxx74 » Sat Nov 24, 2007 3:25 am

I was thinking about this recently as well. It's a toss-up between an M2N WS Pro (6x SATA) and a HighPoint 4/8-port SATA card.

~El~Jefe~
Friend of SPCR
Posts: 2887
Joined: Mon Feb 28, 2005 4:21 pm
Location: New York City zzzz
Contact:

Post by ~El~Jefe~ » Mon Nov 26, 2007 11:06 pm

lol!

Reading on here about 10 RAIDed drives ... lol ....

It just made me laugh, actually. Can't imagine silencing that!

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Tue Nov 27, 2007 1:54 am

I have 9 drives in RAID 5; it's just a tad noisy :P

simeli
Posts: 61
Joined: Wed Jun 28, 2006 5:31 am
Location: Zurich Switzerland
Contact:

Post by simeli » Tue Nov 27, 2007 4:54 am

Performance-wise, as stated before, the more disks you add, the better the performance. Theoretically.
But the more disks you have in the array, the more likely it is that one will fail at some point, so using fewer disks may make sense too. On top of that, cooling and noise are easier to cope with if you use fewer disks.
As for the PCI controller, in your case I'd get a PCI Express version; it will not have the bandwidth problem that PCI controllers impose. Real hardware controllers from LSI Logic, 3ware or Areca don't come cheap, but they offer you the ability to expand the RAID or migrate RAID levels while online, e.g. going from a 500GB RAID 1 to a 1TB RAID 5 by adding a third drive, and to 1.5TB by adding another drive. They do not replace a backup, though; they merely increase the availability of the data. If the data gets corrupted, it's gone.

As for the disks, the special RAID editions offered by various manufacturers have a feature named TLER (time-limited error recovery) that is necessary with the RAID controllers mentioned above. Usually the disks handle error recovery themselves, but in a RAID setup the controller takes care of that, and if a drive tries to fix an error and takes too much time, the controller will kick it from the array. So you should only use those drives with a proper controller, or turn TLER off in the disk's firmware if you want to use them in a software RAID.

Today's processors have enough horsepower to cope with parity calculations in a SOHO fileserver environment; I would not worry about that too much. But the ICHxR RAID 5 implementation *is* slow nonetheless. Much better to use Linux software RAID: it is robust, well tested and scales well. Otherwise you'll need to get a decent controller.

What OS are you planning on using, Windows or Linux?

quorton
Posts: 5
Joined: Thu Nov 01, 2007 2:23 am

Post by quorton » Tue Nov 27, 2007 5:17 am

Why use an independent HDD for the OS?
If you put the OS on the RAID 5 drives, you can already add one extra drive.
And RAID is a lot faster than using one drive.
I see a lot of comments about RAID speed, CPU usage, etc. I'm using RAID 5 (4x 500GB) and it is a LOT faster at reading and writing than single drives, and the CPU usage is negligible; maybe it sometimes peaks at 5% (Q6600). I have a P5K-E WiFi motherboard, which has an ICH9R southbridge, which is considered very good.

So I don't see a downside to just using RAID 5. Sure, it can always go faster, but then it'll cost a lot more money. And for an HTPC, does it really need to be that fast?

gb115b
*Lifetime Patron*
Posts: 289
Joined: Tue Jun 13, 2006 12:47 am
Location: London

Post by gb115b » Tue Nov 27, 2007 5:43 am

How can RAID 5 be quicker to write than a single drive??... It has to write to a parity drive as well... plus an XOR calculation...

quorton
Posts: 5
Joined: Thu Nov 01, 2007 2:23 am

Post by quorton » Tue Nov 27, 2007 6:58 am

Because a large file (or multiple files) is split into chunks that are written to 4/5/6... drives at the same time (depending on the number of drives). That is why the more drives you use in a RAID 5, the faster it becomes.
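Strictly it's chunks rather than individual bytes, but the idea is the same. A toy sketch (the chunk size is made up; real arrays use larger chunks and rotate the parity block around the drives):

[code]
# Deal a "file" out round-robin across the data disks of a RAID 5.

def stripe_chunks(data: bytes, chunk_size: int, num_data_disks: int):
    disks = [[] for _ in range(num_data_disks)]
    for i in range(0, len(data), chunk_size):
        disks[(i // chunk_size) % num_data_disks].append(data[i:i + chunk_size])
    return disks

for n, chunks in enumerate(stripe_chunks(b"ABCDEFGHIJKL", 2, 3)):
    print(f"disk {n}: {chunks}")
# disk 0: [b'AB', b'GH']
# disk 1: [b'CD', b'IJ']
# disk 2: [b'EF', b'KL']
[/code]

Each drive only has to write a fraction of the data, and they all write at the same time.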

jackylman
Posts: 784
Joined: Sun May 22, 2005 8:13 am
Location: Pennsylvania, USA

Post by jackylman » Tue Nov 27, 2007 9:32 am

I'd run the OS on the RAID for the read speed benefit.

I really question the choice of RAID 5. Do you really need to read that much data per second at a sustained rate in an HTPC? I second shen's recommendation for RAID 10 or RAID 0+1.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Tue Nov 27, 2007 11:02 am

RAID 5 for the most storage + decent redundancy.

I would NOT run the boot drive off the RAID array; it's guaranteed horror if anything happens.

drees
Posts: 157
Joined: Thu Jul 13, 2006 10:59 pm

Post by drees » Tue Nov 27, 2007 11:35 am

BTW, if you're using more than 4-5 drives in a RAID 5 setup, you should seriously consider at least having a hot spare or, even better, going to a RAID 6 setup.

It is very common for another drive to go bad when rebuilding the array.

Terje
Posts: 86
Joined: Wed Sep 08, 2004 4:50 am

Post by Terje » Sun Dec 02, 2007 11:53 am

gb115b wrote:How can RAID 5 be quicker to write than a single drive??... It has to write to a parity drive as well... plus an XOR calculation...
That's valid for old HW RAID and software RAID.

Modern HW RAID 5 with writeback cache gives significantly faster writes than a single drive. It's no problem getting 150-170 MB/second writes these days if you have a good modern RAID controller with cache. The problem is just finding a good-quality RAID controller.

If you enable the writeback cache, make sure you also get the battery backup module so you do not lose 200MB of data from your writeback cache when the power fails :)

Terje
Posts: 86
Joined: Wed Sep 08, 2004 4:50 am

Re: 2TB storage in RAID 5 mode...

Post by Terje » Sun Dec 02, 2007 12:15 pm

Personally, on my desktop, I have an ARC-1220 from Areca in an Antec P180B cabinet and believe me, it screams, and there are several new cards that are even faster.

I have a 3-disk RAID 5 and a 2-disk RAID 1, all using WD500KS drives (based on recommendations from SPCR).

The reason I have split things up like this is that while I have seen HW RAID controllers mess up RAID 5 arrays, I have never seen a RAID 1 I have not managed to recover, as long as the HW RAID does not use any proprietary disk layout (well, I have a fair bit of real-life experience recovering disk failures, so I am probably a bit better off here than most; the point is, it's easier in any case :)).

That is, most controllers do not add any magic to a RAID 1 drive that stops you from taking it out, hooking it up to a normal, standard SATA controller, and accessing it. This is in fact one of the first things I verify whenever I encounter a new HW device I want to run a RAID 1 on. Luckily, I have not seen a device that does something this stupid for many, many years, but then again, I generally do not evaluate many standard PC components; my work is on servers.

This is especially important for personal users, for several reasons. One is that it gives you very important options in case of disaster recovery; another is if the RAID card itself fails after 2-3 years. If you are a corporate user, you generally keep important HW on maintenance contracts: if something fails, the vendor has to get you something that can be plugged in as a replacement.

As a personal user, you run the risk of not being able to find the same RAID card 2-3 years later, and then you are not able to recover your data even though the disks are still fine.

In most cases, RAID cards from a given vendor are backwards compatible in terms of RAID layout, but there is no guarantee, and the vendor could also go out of business completely.

This gets even more important if you use motherboard-embedded RAID. There are lots of strange solutions there, motherboards generally have a pretty short life cycle these days, and PC motherboard vendors are not very concerned with drop-in replacements that are backwards compatible either.

I have been working in large-scale computer facilities for 15 years now and have seen a lot of things happen. I have seen RAID 5 systems seriously mess themselves up in some cases, but a plain standard RAID 1 can take a lot of beating and you will still manage to recover most of your data just by connecting a disk to a standard controller.

I would never keep very critical stuff on a RAID 5 without having a really good tape backup (obviously, viruses and human mistakes can still kill any storage you have, so you should be careful anyway, but I think most people get my point here; private persons rarely have a nice cool tape jukebox in the closet doing nightly backups, with a wife who rotates the tapes offsite and all that stuff :)).

So, I keep my most important data on the RAID 1, and slightly less important stuff, as well as software and the OS, on the RAID 5. I want my programs to start quickly, and at times I need quick access to gigabytes of temp data for processing.

A normal RAID 1 like this is more than fast enough in most cases to browse pictures, edit Word files and store email.

The main challenge I encountered stuffing 5 drives like this into the Antec P180 was vibration. I have added a lot of extra damping in my P180, but still, 5 spinning drives cause a lot of harmonics, and I had to do a fair bit of extra work to avoid humming in the case.

Two days ago, one of the 500KS drives in the RAID 1 failed, and I decided I could actually use the remaining drive in a different computer and got two new 1TB WD10EACS drives instead.

During the replacement work, I accidentally stopped the fan in the lower section of the P180. This made the WD500KS drives trigger the alarm beeper on the Areca (actually, I have the included software send an email to my mobile phone as well). Out of curiosity, since I had a safe copy of my most critical data, I waited a while to see where this was going.

The 500KSes got up to 60-62°C before they stabilized. The WD10EACS stabilized at 52°C.

A very impressive temperature difference, which could be critical for data safety if you have a fan failure.

I also notice slightly reduced vibration in the case with the WD10EACS vs. the 500KS.

With the performance of modern hard disks, I would not worry too much about whether they perform well enough for an HTPC. Any modern 3.5" drive will do just fine for that purpose (including the slow-spinning WD10EACS). I would look carefully at the RAID controller though: there are a lot of crappy RAID cards, but some that are really good as well.

If you want RAID 5 with good performance, I would get a high-end SATA/SAS controller with writeback cache and battery backup. The writeback cache also lets the RAID controller delay writes until a suitable moment, which helps ensure a stable feed of data for your video playback.

If you want RAID 1 or 10, cheaper stuff will do just fine, but beware of RAID 10 on a motherboard-embedded controller: if your motherboard fails after a couple of years, you are unlikely to find a replacement and your data might be lost.

From what I have seen of the WD10EACS so far, I have no problem recommending it for HTPC use, despite it being slower than many smaller 7200 RPM drives. In general, the fewer disks you have, the less noise you get, and statistically, the fewer parts, the lower the chance that one of them fails.

As several other people have pointed out, performance is not likely to be a big issue on an HTPC anyway.

On my HTPC, I run a 2-disk RAID 1 for my music (I spent too much time ripping all my CDs to lose that :)). I just use a set of single drives for recordings; I don't really care if I lose a disk there. My HTPC is actually my Linux machine as well, and I never get any complaints from my gf when she uses the HTPC while I work on it remotely.

I generally keep work on a separate drive from the recordings though, so I do not affect I/O much. That's one advantage of separate drives vs. a RAID.

Given my experience with the WD10EACS so far, I am however considering replacing the single drives with new WD10EACS drives, mainly to reduce heat buildup and power consumption (I have got the setup quite silent anyway).

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Post by andyb » Sun Dec 02, 2007 4:23 pm

Nick Geraedts wrote:RAID5 has horrible write speeds when using any onboard controller. If you're using a dedicated controller, performance is decent.
Horrible? I wouldn't say so. I get 26+ MB/s using software RAID (pure software; all the SATA ports on the mobo are set up as IDE).

40 seconds to write 1GB is not much time to wait, and it doesn't cause any problems with my video viewing while writing either. It's dirt cheap, and if my mobo dies I can use anything at all as a replacement, because it's not hardware RAID.

All in all, I am happy.

As far as your setup is concerned, the decision is yours, but personally I wouldn't want to trust that amount of data to a day-to-day PC/gaming rig/workstation that gets software installs/uninstalls and possibly re-installs on a regular basis. The answer to that is a server that does nothing but look after your data. That's what I have, and it never gets meddled with; it only gets turned on, turned off, and checked to make sure my RAID 5 array is "healthy".


Andy

Raptus
Posts: 8
Joined: Wed Dec 05, 2007 8:35 am
Location: Germany

Post by Raptus » Wed Dec 05, 2007 8:53 am

jackylman wrote:I'd run the OS on the RAID for the read speed benefit.
Don't. As Wibla already pointed out, it is a time bomb.
jackylman wrote:I really question the choice of RAID 5. Do you really need to read that much data per second at a sustained rate in an HTPC? I second shen's recommendation for RAID 10 or RAID 0+1.
It seems you've got it backwards.
As the OP doesn't need the highest STR, RAID 5 would do just fine, even as software RAID. It has the benefit of being cheaper (capacity of N-1 drives compared to N/2), and because of the lower number of drives, the chance of a drive failure is also lower.
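To put a rough number on the failure point (assuming some per-drive annual failure rate and independent failures; real-world AFRs vary a lot by model and environment):

[code]
# Chance that at least one drive in the array fails within a year.

AFR = 0.03  # assumed annual failure rate per drive (illustrative only)

def p_any_failure(num_drives: int, afr: float = AFR) -> float:
    return 1 - (1 - afr) ** num_drives

print(f"{p_any_failure(3):.1%}")   # 3 drives -> ~8.7%
print(f"{p_any_failure(4):.1%}")   # 4 drives -> ~11.5%
[/code]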

Compared to cheap RAID controller cards, software RAID performs just as well; those cheap controllers effectively do most of the calculations in the driver anyway. Only cards in the $800+ region really start to outperform good software implementations (on Linux).
And software RAID has the added advantage of being hardware-independent, so you're not screwed if your controller breaks.

I suggest software RAID 5 with 3x1TB. It also gives you the lowest noise and power draw.

LAThierry
Posts: 95
Joined: Tue Nov 14, 2006 4:15 pm
Location: Los Angeles, California

Post by LAThierry » Wed Dec 05, 2007 10:14 am

Apples and oranges.

If I had asked the forum to recommend a 6-cylinder car, would everyone tell me about their better ideas of 8 or 12 cylinder engines with quadruple turbos?

The original poster, sailorman, asks about a 2TB+ RAID 5. Why then do people feel compelled to bring in their RAID 10 or RAID 6 solutions? Of course RAID 10 is faster. Of course RAID 6 has more redundancy.

But you're proposing solutions that require more drives, because they use more space for mirroring or parity, hence more money, more SATA ports, and maybe a larger case. Why assume the original poster has any of those to spare? Maybe he's on a budget; maybe his motherboard or case only has room for X drives.

What is it with RAID that makes people compare apples and oranges?

drees
Posts: 157
Joined: Thu Jul 13, 2006 10:59 pm

Post by drees » Wed Dec 05, 2007 11:58 am

LAThierry wrote:What is it with RAID that makes people compare apples and oranges?
Because many people do not know the difference between apples and oranges, and a discussion of the differences can be very helpful.

But in this case, the OP was very clear about his criteria:

6 drive enclosure, 2 reserved for the OS, 4 for the RAID, minimum 2TB available space.

Given these limitations, he's not going RAID 10 unless he gets four 1TB drives, which would be very expensive given their price, with half the space going to redundancy.

RAID 6 is only useful when you have 5 or more drives in the array, so that is out.

RAID 5 is the only usable option here. As for whether to go with three 1TB drives or four 750GB drives, I personally prefer to use fewer disks if possible, as that generates less heat and there are fewer parts to break. I/O performance isn't going to be significantly different between the two arrays.

750GB drives range between $180-$200, or $720-$800 for four.
1TB drives range between $270-$290, or $810-$870 for three.

So the 4-disk 750GB setup gives you more bang for your buck.
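Running the price per usable terabyte from the ranges above (quick sketch):

[code]
# Price per usable TB for a RAID 5 array of equal-sized disks.

def cost_per_usable_tb(num, disk_tb, price_low, price_high):
    usable = (num - 1) * disk_tb            # one disk's worth lost to parity
    return price_low * num / usable, price_high * num / usable

print(cost_per_usable_tb(4, 0.75, 180, 200))  # ~(320, 356) $/TB for 4 x 750GB
print(cost_per_usable_tb(3, 1.00, 270, 290))  # ~(405, 435) $/TB for 3 x 1TB
[/code]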

But a 4-disk setup doesn't leave you room to expand, whereas 1TB drives give you the option to grow the array by another TB down the road (most RAID controllers and software RAID setups let you do this, but check first). So I think it's worth the slight additional expense up front for the 1TB drives so that you have room to expand in the future.

As far as drive brand goes, my primary criterion when selecting drives is to pick only drives that carry a 5-year warranty. A 3-year warranty is acceptable if the price is very good, but stay away from 1-year warranty drives. Seagate is the only manufacturer that gives you a 5-year warranty on SATA drives (let me know if there are others!).

jackylman
Posts: 784
Joined: Sun May 22, 2005 8:13 am
Location: Pennsylvania, USA

Post by jackylman » Wed Dec 05, 2007 1:39 pm

drees wrote:Seagate is the only manufacturer that gives you a 5-year warranty on SATA drives (let me know if there are others!).
WD has a 5-year warranty on its RAID Edition (RE) products.

EDIT: Also Hitachi on its Ultrastar series.
