Olaf van der Spek wrote:
Well, if you went with software RAID.
Why would you not go with Linux software RAID?
Especially considering this product almost certainly uses Linux software RAID anyway...
I believe their CPU has hardware RAID, although I'm not sure.
On a little box using an XScale processor, hardware RAID would be faster. But, believe it or not, on a PC with at least a semi-modern processor, Linux software RAID is generally faster. In fact, it can be very much faster.
Keep in mind what you're dealing with here.
Real hardware RAID has a dedicated processor to perform the calculations needed to stripe data across multiple devices. These aren't specialized parts; they're generally just small embedded-style processors running at a few hundred MHz at most. With a PC you can get a 2.5-3 GHz processor for dirt cheap, like in a low-end Dell server for 400 bucks. That will easily outperform any processor you'd find on affordable real hardware RAID (like cards you can get for around 400 dollars).
On top of that, the way data is managed and the algorithms used in Linux MD RAID are very smart and efficient. And since the CPU sits in I/O wait most of the time, waiting for the hard drives to find the information programs need, spending those spare CPU cycles on software RAID costs you essentially nothing.
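If you want to play with it, creating an MD array takes only a couple of commands. This is just a sketch; the device names are made-up examples, so double-check which disks are which before running anything:

    # Build a RAID-5 array from three whole disks (example device names).
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    # Watch the initial sync and keep an eye on array health.
    cat /proc/mdstat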
Keep in mind also that 90% of what people think is 'hardware RAID' isn't. If you have a 'RAID' chip on your motherboard, for instance, or on a low-end card (say costing 200 dollars or less), the vast majority of these are best described as 'fake RAID'. They work in a similar fashion to winmodems, where all the real work is done in software by the drivers. Usually this is pretty low-class stuff; Linux MD (software) RAID is usually better than what you'd get from the card manufacturers' drivers, although the Linux DM (device-mapper) side is supporting more and more of these products, since some actually do have a rudimentary ability to offload certain calculations.
The limitations you run into with software RAID on Linux mostly have to do with PC-class hardware. The PCI bus is limited by its bandwidth: after about 4 or 5 hard drives you're going to run into bandwidth limitations (see the back-of-the-envelope numbers below). Hardware RAID cards don't typically push all that data over the PCI bus and thus can scale higher. Linux MD doesn't support hot-swapping yet, although SATA should technically be hot-swappable out of the box, so you need hardware RAID if you want to replace a failed hard drive without shutting down the box. Good hardware RAID provides CPU offload, which can help performance on heavily loaded servers. Hardware RAID will also often provide advanced data-protection features that you're not going to find in Linux software RAID. For instance, it's not impossible that when a hard drive is dying on a Linux system, corrupted data gets striped across the RAID array before the OS notices the drive is failing and remounts everything read-only or crashes.
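To put rough numbers on that PCI bottleneck (assuming classic 32-bit/33 MHz PCI and drives that can stream somewhere around 50 MB/s each; typical figures of the era, not measurements):

    Shared PCI bus:  32 bits x 33 MHz = ~133 MB/s theoretical peak
    Five drives:     5 x ~50 MB/s     = ~250 MB/s of potential throughput

So the bus saturates well before the drives do, and that's before counting a network card sitting on the same bus.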
So hardware RAID is very nice, but if you're looking for the best performance without breaking the bank, something like a low-end dedicated file server running a Pentium D (or AMD dual core) and Linux software RAID will be fastest. Especially if you can get all the I/O onto the PCI Express bus to get around older PCs' nasty PCI bandwidth limitations.
I guess the reason is the NetBIOS protocol used by Windows shared directories. Frankly, its performance is quite abysmal.
Ah, I must be blaming the wrong thing then. Are there any alternatives to NetBIOS?
Not for Windows. It's your best choice.
For Windows Server they have DFS, but that's mostly enterprise stuff. I don't know much about it.
It's not really NetBIOS anymore. The protocol, in its latest incarnation, is called SMB, for "Server Message Block". Or if you want to get really fancy, it's called CIFS, for "Common Internet File System" (which it's not! It's not safe enough for Internet use; only use it on protected networks).
CIFS stems from IBM and friends trying to make a standardized protocol. They tried to clean it up and such.
It isn't that bad, though; it's reasonably fast. On my system I tried out Samba (Windows file and print services for Linux and basically anything non-Windows) and it would take about 2 minutes and 15 seconds to copy a 700+ MB file off of the server.
The reason it takes that long is simply that SMB has a lot of overhead. It's built up a lot of cruft over the years, and it wasn't that good to begin with.
In Linux/Unix land people tend to use NFS (Network File System), which is an older protocol, but it's fast. The downside is that security is pretty much non-existent: anybody who can get physical access to your network can spoof your client's IP address or DNS name (depending on how you have it set up) and download all the files on your server very easily. (Which is why I keep an encrypted directory on it for important stuff.) But it's faster than SMB.
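A minimal NFS setup is just one export line on the server and a mount on the client. The paths and addresses below are invented for the example:

    # /etc/exports on the server: share /srv/files with one trusted subnet.
    # Note the trust model: access control is by client address only.
    /srv/files  192.168.1.0/24(rw,sync)

    # On the client:
    mount -t nfs server:/srv/files /mnt/files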
The same size file that took over two minutes over SMB took slightly less than a minute over NFS. With just a bit of overhead from TCP/IP, the bottleneck was the hard drives.
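Put in throughput terms (rounding the NFS copy to about a minute):

    SMB: ~700 MB / 135 s = ~5 MB/s
    NFS: ~700 MB /  60 s = ~12 MB/s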
NFS and SMB are pretty much what people use. SMB mostly due to the fact that it's what Windows uses, and everybody needs to stay compatible with Windows.
The other network file system I use is a smaller one called SSHFS, which uses Linux FUSE to export a file system over SSH (Secure Shell). Everything is encrypted and the authentication is strong, so it's safe to use over the Internet, and I use it on my laptop when I'm out and about. It's pretty fast (still faster than SMB), but the encryption has a lot of CPU overhead.
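Mounting with SSHFS is a one-liner once FUSE and the sshfs package are installed. The host and paths below are made-up examples:

    # Mount a remote directory locally over SSH.
    sshfs user@server:/home/user /mnt/remote
    # Unmount when you're done.
    fusermount -u /mnt/remote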
The nice thing about NFS is that it supports all the special Linux file types, such as named pipes and sockets. So you can run with a remote root and easily boot your entire operating system off of a server. The Samba guys are aiming for this level of Linux compatibility, but it's not going to happen any time soon. I don't know about Windows.
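If you're curious, an NFS root boot mostly comes down to kernel parameters along these lines (the server address and path are invented, and the kernel needs NFS-root support compiled in):

    # Example kernel command line for a diskless client.
    root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot ip=dhcp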
Other, more exotic file systems include OCFS2, Lustre, GFS, OpenAFS, and so on and so forth, but for home users SMB or NFS will work best.
Probably what you can do for Windows is have a system with a lot of RAM and install only your operating system on it. Then you can install all your games and other software on the server. That way you can hopefully tweak your system not to hit the swap file on the local disk too much and keep your drive in sleep mode as much as possible.
Otherwise, you can have pretty long SATA cables. External SATA enclosures aren't too expensive, and you can get ones that hold several drives, so you can probably end up with pretty massive storage in its own box away from you, or in a closet or something like that.
Like the ones shown here:
With eSATA you can have cables up to two meters long. So there are lots of possibilities there. I expect that it would be just as fast as local storage, since that's pretty much what it would be.