News: Sandybridge, Bulldozer and UEFI
hmm
Monkeh16 wrote:Or, alternatively, they're using a large hardware RAID array which makes an SSD a pointless expense. ;)
Afaik you would need something like a 6+ disk RAID1 array to match the random access times of a good SSD. Talk about a pointless expense :D The SSD is probably the less expensive option if all you care about is boot/load times. (again, afaik)
Oh, and linux software raid ftw!
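The disk-count claim above is easy to sanity-check with back-of-the-envelope numbers. The IOPS figures below are illustrative assumptions, not measurements from the thread:

```python
import math

# Rough random-read figures (illustrative assumptions, not measurements):
HDD_IOPS = 75    # ~7200 RPM disk, limited by seek + rotational latency
SSD_IOPS = 5000  # a good consumer SSD of the era

def raid1_random_read_iops(n_disks):
    # RAID1 mirrors can serve independent reads in parallel, so aggregate
    # read throughput scales with disk count; per-request latency does not.
    return n_disks * HDD_IOPS

disks_to_match_ssd = math.ceil(SSD_IOPS / HDD_IOPS)
print(disks_to_match_ssd)  # 67 with these assumed figures
```

With these (assumed) figures the crossover sits far above six disks, and since RAID1 improves aggregate throughput rather than single-request latency, the comparison depends heavily on the workload.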
Re: hmm
andymcca wrote:Afaik you would need something like a 6+ disk RAID1 array to match the random access times of a good SSD. Talk about a pointless expense The SSD is probably the less expensive option if all you care about is boot/load times. (again, afaik)
Oh, and linux software raid ftw!
Also, from a system administration point of view, I would think you'd generally want to keep a RAID like that as a separate resource that can be taken offline for maintenance, etc., rather than adding the complication of hosting the bootable OS partition on it.
Re: hmm
andymcca wrote:Afaik you would need something like a 6+ disk RAID1 array to match the random access times of a good SSD. Talk about a pointless expense The SSD is probably the less expensive option if all you care about is boot/load times. (again, afaik)
Monkeh16 wrote:Or, alternatively, they're using a large hardware RAID array which makes an SSD a pointless expense.
Oh, and linux software raid ftw!
I know people with 6+ disk RAID5 arrays: hardware controllers with battery backup and RAM caches for performance. They make SSDs look like cheap toys, and they store terabytes, not gigabytes.
And yeah, software RAID is great, except it doesn't scale all that effectively to large arrays. A hardware controller is a much more effective option for arrays containing more than a handful of drives.
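For what it's worth, part of what a hardware controller offloads is the parity math that a software RAID5 layer otherwise does on the CPU for every stripe written. The parity itself is just XOR; a toy sketch, with made-up block contents:

```python
# Toy RAID5 parity sketch: parity is the XOR of the data blocks in a
# stripe. A software RAID layer computes this on the host CPU for every
# write; a hardware controller does it on the card. Blocks are made up.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"disk0...", b"disk1...", b"disk2..."]  # one stripe, 3 data disks
parity = xor_blocks(data)

# If disk1 dies, its block is recoverable from the survivors plus parity:
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

Real implementations (e.g. Linux md) work on much larger chunks and use vectorized XOR, but the principle is the same.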