Real-World file server CPU requirements

Our "pub" where you can post about things completely Off Topic or about non-silent PC issues.

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Real-World file server CPU requirements

Post by andyb » Thu Apr 10, 2014 10:38 am

I have been keeping a close eye on the upcoming, and now released, AMD Kabini desktop CPU + motherboard combo as a possible replacement for my current, very old setup. I have read a few Kabini reviews and skimmed a few more. Sadly, from my perspective, none of the reviews go beyond general desktop benchmarks, which brings me to the big question.

What CPU / system is actually required to max out Gigabit Ethernet or an HDD (whichever is reached first)? Generally speaking, Gigabit has a "realistic" max throughput of 100-110 MB/s, and there are many HDDs that are also capable of this.

My current server has an ancient AMD Socket 939 single-core 4000+ 2.4GHz CPU, 2GB of DDR1 400MHz RAM, a PCI-E Gigabit NIC (the on-board NIC is PCI and was a very poor performer), a PCI graphics card dating from 1995, and an 8-port hardware RAID card with 4x 2TB HDDs in RAID-5 plus a pair of single drives.

Before I installed the PCI-E NIC I was averaging around 45-50 MB/s transfer speed in either direction. With the new NIC I am getting about 75 MB/s. The CPU usage is a long way from 100%, but the CPU "Kernel" usage is nearly 100%, which is why my question is about CPU requirements, as I believe this "could" be a bottleneck.

I really am not certain what is causing my unexpected lack of performance. I have been wondering for some time whether my server is CPU-bound, or whether it is something to do with the motherboard's ability to move data around efficiently, and if it is either of these, what I would need in the way of a CPU and motherboard. I already know that the requirements are a long way short of the 3.2GHz Intel i5 that's in my desktop, but I am really not certain what the bottom-of-the-range option looks like.

I have benchmarked my RAID card on my server (read-only benchmark) and it makes Gigabit look slow. I have also benchmarked my single drives, just in case the write speed of my RAID-5 array is significantly slower than its read speed. At the other end I have also tested against the SSD in my desktop, with no improvement over a standard spinning disk. This of course leaves the elephant in the room: the networking equipment itself.

I have good-quality brand-name (Belkin) Cat 5e cabling throughout, and both machines are attached to the same D-Link DGS-1008G 8-port Gigabit switch, which was a lot faster than the 5-port Netgear switch it replaced (I needed more ports).

I have trawled through several searches on Google to find out what the actual CPU requirements are "to reach max Gigabit throughput" and have drawn a blank. Does anyone else reading this have a personal file server on Gigabit and get 100-110 MB/s throughput, and if so, how does your server and networking equipment differ from mine?


Kind regards, Andy.

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Thu Apr 10, 2014 1:07 pm

You can use iperf / jperf to check your network equipment.
https://code.google.com/p/xjperf/

Old how-to:
http://www.smallnetbuilder.com/lanwan/l ... ance-jperf

Assuming all's well, you should see 900Mb/s or better NIC to NIC.
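If you'd rather skip the Java GUI, the underlying iperf (version 2) command line is roughly this - adjust the address and window size for your setup (10.0.0.10 is just a placeholder):

# on the server
iperf -s -w 64k

# on the client, 30-second run with 5-second interval reports
iperf -c 10.0.0.10 -w 64k -t 30 -i 5

The default TCP window can report misleadingly low numbers on Windows, so it's worth trying a larger one like 64k.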

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Thu Apr 10, 2014 1:45 pm

I assume the OP is talking about SMB performance, which has issues you won't spot by benchmarking raw network throughput. Do benchmark it just in case, but you can get slow SMB on a fast network.

I don't know what's required to get 110MB/s since I don't care too much about the last 20% or so of sequential performance when random access performance is terrible anyway.
But with decent operating systems and fast enough storage, I've typically been able to get something like 90MB/s effortlessly these last few years, even with slower (but newer) CPUs. Just look at how slow NAS CPUs are. Kabini would be seriously overkill for a basic file server. Any reasonably modern x86 CPU shouldn't be a bottleneck when serving files.
Slow hardware isn't the only cause of bad SMB performance, so I wouldn't simply assume it's a motherboard issue either.

Try running a fresh/live install of a decent server operating system on your hardware and see if performance improves and what kind of CPU load you get. If your "nearly 100%" CPU load simply disappears or is revealed to be of the iowait variety or something, you'll know it's not the CPU that's the problem.
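As a concrete example (assuming a Linux live environment with the usual tools), even something as basic as

vmstat 5

run during a large transfer shows the CPU split into us/sy/id/wa columns; a large "wa" (iowait) figure with low us/sy would point at the disks rather than the CPU.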

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Thu Apr 10, 2014 1:55 pm

I mentioned iperf because of Andy's comment
This of course leaves the elephant in the room: the networking equipment itself.
It's probably not cabling/switch/NICs, but you never know.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Thu Apr 10, 2014 2:19 pm

Sure, if I'm not mistaken there's one NIC we haven't heard about (who knows how bad it is?) and I guess I'm a bit of a switch snob because I don't associate D-Link with quality. I'm sure some of their switches are fast but it's certainly worth testing.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Thu Apr 10, 2014 4:01 pm

The CPU is actually not the most important part for getting good speeds out of SMB/CIFS. The important requirements are:

1. You must be using SMB 2.0 or higher on BOTH the server and client. This means Windows Vista+ or Samba 3.6+. If you are using Samba, SMB 2.0 must also be enabled; it is often turned off by default (smb.conf snippet below this list). This is probably the #1 reason why home NAS appliances have such terrible speeds: they are using SMB 1.0. It is possible to tweak SMB 1.0 into decent performance, but by default it's not going to max out gigabit no matter what CPU you have.

2. Use a PCIe NIC. More than once I have seen people replace an onboard PCIe Realtek with a PCI Intel and send their speeds plummeting. It's a file server, not a DB server; you want throughput, not lower CPU usage.

3. Make sure your disk(s) can keep up. Badly configured RAID can often kill speeds, especially writes. What kind of write speeds do you get locally on your RAID5?
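For the Samba side of point 1, it comes down to one line in smb.conf plus a restart (typical Debian/Ubuntu paths and service names shown - yours may differ):

# /etc/samba/smb.conf, in the [global] section
[global]
    max protocol = SMB2

# then restart Samba, e.g. on Ubuntu:
sudo service smbd restart

Windows clients need nothing done; Vista and later negotiate SMB 2.0 automatically once the server offers it.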

As to CPU, I've maxed out gigabit just fine with dual-core Atoms (D510) if the above conditions are met. I'd suspect your Athlon 4000+ is using SMB 1.0 and could get better speeds using SMB 2.0. A Kabini should have no problem, but personally I would look at a Baytrail for the lower power usage.

Least important: networking equipment. We are talking about <10 machines (probably fewer than 5) talking at once, and with relatively big packets - not 100s all trying to send 64-byte packets at line speed. I've used all manner of crappy switches for this purpose and they have all worked just fine unless they were obviously broken. Just make sure your cables are up to spec.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Fri Apr 11, 2014 3:21 am

washu wrote:If you are using Samba, SMB 2.0 must also be enabled; it is often turned off by default. This is probably the #1 reason why home NAS appliances have such terrible speeds: they are using SMB 1.0.
I do get decent (better than the OP in any case) throughput without enabling SMB2. And the smb.conf man pages of the two distros I checked do say it's experimental, so I doubt it's enabled by default in those versions.
Maybe I should try SMB2 anyway. I don't particularly care for maximum throughput optimizations but does SMB2 also improve performance in other respects (responsiveness, small files)?

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Fri Apr 11, 2014 3:36 am

I built a D510MO music server about 4 years ago. It used a 5400 RPM WD notebook drive. It ran whatever Ubuntu Server version was current at the time; I am unsure which SMB implementation was used. My build thread had jperf and iozone test results, but I see today that imageshack broke the photos. The iozone test & htop screenshots are attached for reference. Personally, 39-47 MB/s is plenty for my needs (serving FLACs to Squeezeboxes around the house), so I never sought to improve on it. Note smbd eating 70+% of one CPU core. This suggests the HDD is the bottleneck rather than the CPU, which is what I'd expect.
D510MO_iozone.jpg
htop_iozone_test.jpg
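I don't have the exact command any more, but an iozone run along these lines (flags from memory, file path just an example) should reproduce that kind of sequential test, with htop open in another terminal to watch smbd:

# sequential write (-i 0) and read (-i 1) tests, 1MB records, 2GB test file
iozone -i 0 -i 1 -r 1m -s 2g -f /path/to/share/iozone.tmp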

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Fri Apr 11, 2014 3:47 am

washu wrote:You must be using SMB 2.0 or higher on BOTH the server and client. This means Windows Vista+ or Samba 3.6+. If you are using Samba, SMB 2.0 must also be enabled; it is often turned off by default.
Today my D510MO server runs Ubuntu 12.04 Server with Samba version 3.6.3. I dug through the Samba config file but didn't see where to set the protocol version. smbstatus just confirms version 3.6.3. Where do I check this?

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Fri Apr 11, 2014 3:58 am

The option's called max protocol or something. Search the manpage. Ubuntu might have enabled it by default for all I know.
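If you want to see what the running config actually resolves to, testparm ships with Samba and will print the effective value, whether it comes from smb.conf or from the built-in default:

testparm -sv | grep -i "max protocol"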

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Fri Apr 11, 2014 5:08 am

HFat wrote: Maybe I should try SMB2 anyway. I don't particularly care for maximum throughput optimizations but does SMB2 also improve performance in other respects (responsiveness, small files)?
SMB2 helps for small files as it can overlap commands and batch multiple commands together. You still are unlikely to get full gig rate if the files are really small, but I've seen decent improvements over SMB1.

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Fri Apr 11, 2014 5:37 am

HFat wrote:The option's called max protocol or something. Search the manpage. Ubuntu might have enabled it by default for all I know.
Thank you. Per:
http://www.samba.org/samba/history/samba-3.6.0.html
SMB2 support
------------

SMB2 support in 3.6.0 is fully functional (with one omission),
and can be enabled by setting:

max protocol = SMB2

in the [global] section of your smb.conf and re-starting
Samba. All features should work over SMB2 except the modification
of user quotas using the Windows quota management tools.

As this is the first release containing what we consider
to be a fully featured SMB2 protocol, we are not enabling
this by default, but encourage users to enable SMB2 and
test it. Once we have enough confirmation from Samba
users and OEMs that SMB2 support is stable in wide user
testing we will enable SMB2 by default in a future Samba
release.
I'll look for this after work.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Fri Apr 11, 2014 9:11 am

Thanks very much for all of the replies.

I have installed and run Jperf. With the default settings I got 50-55 MB/s; I then (as suggested) ran it again with the "TCP Window Size" set to 64 KB, and got a respectable 108 MB/s, which is exactly what I was hoping for.

Next thing to look at is write speed for my HDDs.

For this I will test the write speed (on my server) from one of my single drives to the other. I selected 10.1 GB of large files and copied them in one go to a 2TB Samsung HD203WI, which measured a very disappointing 68 MB/s :( Both of these drives are attached to my Highpoint RocketRAID 2320.

For my next test I copied 10.1GB (a different file set) from my RAID-5 array (because it is 99%+ full) to one of my single drives and got 75 MB/s. Also very disappointing.

I have re-read (skimmed) some reviews of my RAID card to see what kind of performance other people got. For single drives doing simple "large" file copy tests they exceeded 100 MB/s; sadly, the rest of the tests were far less clear about write performance, especially in RAID-5.

Another test result for you: when I copy data from the SSD in my desktop to a single drive on my server I get 68 MB/s, which rules out the RAID card's internal bandwidth being a limiting factor.

Another test involved me torturing HDDs. I copied the same 10.1GB data set from a single drive to my RAID-5 array, which had just 13GB of spare space (2.91GB at the end); the result is only surprising because of how fast it was: 61.5 MB/s.

I have just one last test to try: the boot drive. This is an ancient 400GB Samsung HD400LJ, and as this drive is also 99% full I am copying data from that drive on my server onto the SSD in my desktop; for this I got MB/s.

So the results, as I see it, point towards my RAID card's write speed, regardless of whether it's a single drive or my RAID-5 array.

A couple more things to note:

I am running Server 2003, and was planning on doing a re-install with a desktop version of Ubuntu (the exact Linux version may vary; driver support needs to be factored in and I want a GUI) when I add 4x WD Red drives to my RAID card as a RAID-5 array.

HD Tach gives an average (across the entire drive) read speed of 90.6 MB/s (high of 125 MB/s, low of 45 MB/s), whilst the RAID array gets 264.5 MB/s (high 310 MB/s, low 190 MB/s). An important thing to remember is that a RAID-5 array will by its nature always have a much faster read speed than write speed, but a single drive's read and write speeds are very similar (providing there is a decent amount of spare space - 560GB in this instance).

I have also noticed a trend: every single data transfer starts off with a reported speed of 100 MB/s or more; I can only assume this is a result of caching.

The one thing that I have learned from all of this is that my CPU is not an issue.

Andy

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Fri Apr 11, 2014 9:53 am

andyb wrote:Highpoint RocketRAID 2320.
Not going to sugar-coat it: your RAID card sucks.

Outside of Highpoint being solidly in the lower tier of RAID controllers, the card simply does not have the required hardware to get good RAID 5 speeds. Good RAID 5 or 6 on a hardware RAID card almost always needs cache, which the 2320 lacks. It would probably be fine for RAID 0, 1 or 10, but it is not suitable for RAID 5 with decent write speeds.

RAID 5 writes are most certainly not limited to the speed of a single drive. A good hardware RAID controller or a good software implementation can do very close to the combined speed of the data volumes (n-1 for RAID 5). As a point of comparison, my LSI 9260 gets ~300 MB/sec write to a RAID 5 of 4 2TB WD Greens. It also gets ~850 MB/sec write on a RAID 6 of 8 3TB Seagate MD001s. The 4-port version (9260-4i) is not much more than your Highpoint card.

As another point of comparison, I get ~220 MB/sec write on the onboard Intel RAID with 4 2TB Seagate MD001s in RAID 5. Intel is the only "fake RAID" that performs well. If you want to upgrade your system anyway, it's a cheap method to get decent performance if 6 drives max is enough.

Server 2003 is not helping because it is SMB 1.0, but you could tweak it a bit. If you want to stay with Windows then Server 2008 R2 or even 2012 would probably run fine on your current system. Just don't use Windows software RAID 5. If you want good write speeds in RAID 5 on Windows you have to get a good RAID card or a motherboard with Intel RAID.

If you want to run Linux then you should use mdadm instead of your Highpoint. If you can set your card to just be an HBA then you could still use it for that. FreeBSD or FreeNAS with ZFS is another option, but you would need way more RAM than you currently have. FreeNAS has a nice web interface.
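To give an idea of how simple the mdadm route is, creating a 4-drive RAID 5 is essentially the following (device names are placeholders - double check them, as this is destructive to whatever is on those disks):

# build the array from four whole disks
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# put a filesystem on it and keep an eye on the initial build
sudo mkfs.ext4 /dev/md0
cat /proc/mdstat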

TL;DR: Ditch your Highpoint if you want good write speeds.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Fri Apr 11, 2014 10:23 am

When I bought that card it was one of a handful on the market that were PCI-E; PCI-E was a must as PCI is obviously deficient and PCI-X motherboards cost insane amounts of money. This RAID card was significantly cheaper than the competition at the time; I lusted after other cards, but they were so much more expensive that they simply were not an option. Likewise now, I am more interested in getting myself 4x 4TB HDDs than a better RAID card - I just looked up that card, and it costs slightly more than 3x 4TB WD Red drives @ £410.

Also your explanation of my RAID card being rubbish for RAID-5 does not explain why my write speeds for single drives are so poor.

More RAM is one of the reasons why I was looking at a new CPU, mobo, RAM combo, as it's simply not worth upgrading the DDR1 RAM in my server - it's seriously expensive and that money would be better spent on new kit.

As for the networking performance, Server 2003 looks like it could be a serious bottleneck, and I'm happy to take advice about its replacement, with my only requirements being driver compatibility and a GUI.

What drive testing software did you use to test your write speeds? I will give that a go and see what numbers I get, as that will help a great deal.


Andy

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Fri Apr 11, 2014 10:52 am

As I understand it, Server 2003 is based on XP. Aside from SMB1, it uses XP's file copy engine. Vista SP1 brought many improvements:

http://blogs.technet.com/b/markrussinov ... 26167.aspx

@washu - you are more knowledgeable re: Linux than I ... assuming it supported Andy's RAID card, could a live CD/USB distro be used to adequately test network file copy speeds?

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Fri Apr 11, 2014 11:01 am

washu wrote:If you want to run Linux then you should use mdadm instead of your Highpoint. If you can set your card to just be an HBA then you could still use it for that. FreeBSD or FreeNAS with ZFS is another option, but you would need way more RAM than you currently have. FreeNAS has a nice web interface.
Coming right back to CPU requirements: if I turned my RAID card into a simple host adapter and used Linux to run a pair of RAID-5 arrays, what kind of CPU horsepower would I need? Also, what kind of throughput would I get (if we assume that my RAID card has infinite bandwidth)?


Andy

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Fri Apr 11, 2014 1:31 pm

OK back to testing with a popular HDD test tool that also does "write" tests.

RAID-5 array scores a write speed of 125 MB/s, whilst a single drive scores a write speed of 80 MB/s.

So the two things that I never anticipated being the problem appear to be the write speed on my 2 single drives, and the OS.

The next thing that I need to do is buy 4x WD Red HDDs and set them up in a RAID-5 array alongside my existing RAID-5 array of 2TB HDDs, and then benchmark the new RAID array before going any further. If the new RAID array performs as well as or better than the current array then I have no need to change my RAID card or go for software RAID. The next step will of course be to back up all of the non-important data on my boot drive, wipe it and install a modern Linux OS, and then re-test my server from a network perspective. After all of that I can decide whether to replace the CPU, RAM and motherboard or just wait for something to fail.

My end goal is pretty straightforward: a 12TB RAID-5 array, with the rest of my existing 2TB HDDs (6 of them) used as individual backup drives for the array. Excluding the possibility of HDD failure, that would prove to be a pretty nice setup for me that won't need a further upgrade for some years to come.

Do you SPCR members agree with my reasoning, conclusions and my end goal?


Andy

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Fri Apr 11, 2014 8:04 pm

andyb wrote:Also your explanation of my RAID card being rubbish for RAID-5 does not explain why my write speeds for single drives are so poor.
It does explain it, I was just trying to be polite. Highpoint cards are garbage no matter what you are doing with them. Try your drives on an Intel controller or modern AMD one. They will likely perform better. Your old chipset might also be part of the problem, but most of it is the Highpoint card.
andyb wrote:More RAM is one of the reasons why I was looking at a new CPU, mobo, RAM combo, as it's simply not worth upgrading the DDR1 RAM in my server - it's seriously expensive and that money would be better spent on new kit.
I agree that spending more on DDR1 RAM is not worth it. However, unless you are using ZFS, more RAM will not help performance in your use case, just mask it a bit more. With more RAM you might be able to write more before performance tanks, but it will still tank once your write cache is full.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Fri Apr 11, 2014 8:14 pm

andyb wrote: Coming right back to CPU requirements: if I turned my RAID card into a simple host adapter and used Linux to run a pair of RAID-5 arrays, what kind of CPU horsepower would I need? Also, what kind of throughput would I get (if we assume that my RAID card has infinite bandwidth)?
Really, you don't need much of a CPU at all to do software RAID. It is a very common misconception that you need a powerful CPU for software RAID. You don't. What you need is a good software implementation with intelligent caching. mdadm, ZFS and GEOM do this. Windows RAID and most fake RAIDs don't.
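If you're curious what the parity maths actually costs, the Linux kernel benchmarks its checksumming routines when the md RAID modules load and logs the results; on anything remotely modern they come out at thousands of MB/s, far beyond what gigabit Ethernet needs. Assuming the modules have been loaded and the messages are still in the buffer:

dmesg | grep -iE 'xor|raid6'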

What a good hardware RAID card offers is decent drivers and cache. The CPU on them is usually really slow compared to your main processor. My LSI card has a PPC chip that any Atom would put to shame. It is the cache that helps write performance.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Sat Apr 12, 2014 2:34 am

oops!
Last edited by HFat on Sat Apr 12, 2014 2:46 am, edited 1 time in total.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Sat Apr 12, 2014 2:45 am

Since washu didn't answer that...
You can in principle do whatever you want with a live distro and you certainly don't need to overwrite your current boot drive to install Linux for testing.
While it's not strictly necessary to use a regular drive (or to use any kind of boot drive at all, if you set a little room aside on your data drives for a boot/system partition), the simplest and most flexible thing to do for testing might be a regular install to any old drive you have lying around. Drive performance doesn't matter.

And it should be possible to non-destructively test the actual performance of a drive which is normally connected to a RAID card by connecting it directly to any non-ancient mobo.
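For instance, on Linux with the drive showing up as /dev/sdX, read-only tests that won't touch the data:

# quick buffered sequential read figure
sudo hdparm -t /dev/sdX

# or a longer sequential read of the first 8GB
sudo dd if=/dev/sdX of=/dev/null bs=1M count=8192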

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Sat Apr 12, 2014 5:07 am

andyb wrote: RAID-5 array scores a write speed of 125 MB/s, whilst a single drive scores a write speed of 80 MB/s.
While 125 MB/sec isn't horrible given that we are concerned about Gig networking, it is still really bad compared to how much that card goes for. For less money you could have had a new MB+CPU+RAM with a better RAID controller "free" on the MB. Also, that is only the ideal sequential speed; you want some overhead so that even medium-sized files don't slow you down below that.
So the two things that I never anticipated being the problem appear to be the write speed on my 2 single drives, and the OS.
Don't rule out the Highpoint or your old system causing the slow single drives until you have tested on a good controller.
Do you SPCR members agree with my reasoning, conclusions and my end goal?
It's a good goal, but I would still strongly recommend not continuing to use your Highpoint, at least for RAID. While you may get lucky, crappy RAID cards have bitten many people in the ass before. If you are going to upgrade anyway, why not build a new system with your new RAID using software/Intel/real HW RAID and then copy the data over? You could keep your old system as a fully separate backup. If it supports WOL then you can leave it off 99% of the time.

At the absolute minimum, make sure none of your backup drives are attached to the Highpoint.
Last edited by washu on Sat Apr 12, 2014 5:11 am, edited 1 time in total.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Sat Apr 12, 2014 5:10 am

HFat wrote: And it should be possible to non-destructively test the actual performance of a drive which is normally connected to a RAID card by connecting it directly to any non-ancient mobo.
You couldn't fully test the drive this way, as you would want to test write speeds as well, which would be destructive.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Sat Apr 12, 2014 5:24 am

If you wanted the full picture (in this case, I'd be content with a read test), you could write exactly what was on the drive in the first place. The risk isn't zero. Doing anything is risky. But that risk ought to be very small if you're careful.
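A minimal sketch of what I mean, limited to the first 4GB of the drive (sdX and the paths are placeholders, and the middle step destroys that region until the restore runs, so be very sure of the device name):

# save the first 4GB somewhere with room for it
sudo dd if=/dev/sdX of=/mnt/scratch/sdX-head.img bs=1M count=4096

# destructive write test over that same region
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=4096 oflag=direct

# put the original contents back
sudo dd if=/mnt/scratch/sdX-head.img of=/dev/sdX bs=1M count=4096 oflag=direct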
washu wrote:crappy RAID cards have bitten many people in the ass before.
As have non-crappy ones once in a while.
I'm biased against both RAID5 and hardware RAID at home (or in organizations with a small IT budget).
Last edited by HFat on Sat Apr 12, 2014 5:29 am, edited 1 time in total.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Sat Apr 12, 2014 5:26 am

Jay_S wrote:@washu - you are more knowledgeable re: Linux than I ... assuming it supported Andy's RAID card, could a live CD/USB distro be used to adequately test network file copy speeds?
In principle yes, but Linux might "bypass" the card, making the test invalid compared to a newer Windows version. While a Highpoint 2320 is technically a hardware RAID controller, it is such a poor one that Linux might treat it as a fake RAID controller and use dmraid on it. That would make it use Linux's own software RAID (same as mdadm) and it would perform much better than it normally would.
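If you do try a live session, it's easy to tell which path Linux has taken (dmraid and the md tools are on most live images):

# drives that dmraid recognises as members of a RAID set, if any
sudo dmraid -r
ls /dev/mapper

# kernel md arrays, if any
cat /proc/mdstat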

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Sat Apr 12, 2014 7:35 am

I again thank you for your help.

Washu, you missed giving a critique of my "write" tests on both my RAID-5 array and single drives.

"RAID-5 array scores a write speed of 125 MB/s, whilst a single drive scores a write speed of 80 MB/s."

After having looked up a review of my now rather old HDDs (in single-drive mode), this is exactly in line with the drive's actual performance, so I don't really see the RAID card as a bottleneck, especially as I am simply after maxing out Gigabit Ethernet.

As for small and medium files, I don't really have much in the way of them, and I therefore don't care whether it takes 1 minute to copy them across the LAN or 5 minutes, because there are so few of them and they hardly ever get touched.

As for not using my RAID card in the future for single drives - that's not very realistic, and I simply don't see the point of replacing it if an individual drive on the RAID card is being held back by the performance of the drive and not the card (which appears to be the case). As I will end up with 10 or 11 HDDs in my server and most motherboards only come with 6 ports, if I don't use my RAID card as a simple HDD controller then I will have to replace it with another one - a cheap one, which I doubt very much will make the slightest bit of difference. I have experienced cheap drive controllers on motherboards before; they all appear to suck or have serious compatibility issues, so why would a cheap add-in controller be any better? This is an honest question, as I simply don't know what the situation is like at the moment in that regard; ever since Intel and AMD moved up from 4 ports to 6 ports as standard (on their better controllers), very few motherboards add additional controllers, and as such I don't see the reviews.


Andy

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Sat Apr 12, 2014 9:41 am

andyb wrote:As for not using my RAID card in the future for single drives - that's not very realistic
It is quite realistic if you value your data. You can believe me or not, I'm just some name on the internet. I have been dealing with storage systems for over 20 years now. Everything from old MFM drives to high end EMC and HP SANs. I have seen many crappy RAID cards lose or corrupt data, including several Highpoints. Far more than good cards or simple controllers. You might get lucky, but you are risking your data with a Highpoint card. Even if it was a good card I would still not suggest using it for your main array and backup drives at the same time. My main arrays are on my LSI card, but none of my backup drives are. Not putting all your eggs in one basket and all that.

I agree that many cheap SATA controllers suck. ASMedia/JMicron-based cards should be avoided. Newer Marvell-based cards are fine if they are simple AHCI controllers and not fake RAIDs. The 4-port ones with PCIe x2 connectors are not bad if your MB supports them. The irony is that the actual drive controller on the 2320 is a Marvell and would be fine if it didn't have all the Highpoint crap around it and was used as intended in a PCI-X system. That is not a typo: the controller on your card is PCI-X, and Highpoint shoehorned it into a PCI-e card with a bridge chip.
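If you ever boot a Linux live CD on that box you can actually see the arrangement - the bridge shows up as its own PCI device with the SATA controller sitting behind it (exact device names will vary):

# tree view of the PCI topology, bridge chip with the controller behind it
lspci -tv

# or just list the relevant devices with their IDs
lspci -nn | grep -iE 'marvell|highpoint|sata'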

If you want to go better than a basic add-in card, look for an LSI HBA. There are several variants of the LSI 9211-8i which can often be found reasonably priced on eBay.

The best and safest option would be to just keep your old system for your backups. Then you don't need a controller with lots of ports to try and get all your drives in one system.

It's your data, you decide how much risk you are willing to take. Personally I would never trust a Highpoint card. I have removed many of them from systems I inherited administration of and refused support if removing them was not possible.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Sat Apr 12, 2014 9:51 am

washu wrote:You can believe me or not, I'm just some name on the internet.
You are the forum's resident expert. Anyone who's watched the forum for more than a few weeks must have figured that out.
washu wrote:It's your data, you decide how much risk you are willing to take. Personally I would never trust a Highpoint card.
In addition, I would never trust a RAID card if I couldn't afford either a spare or a LOT of downtime.
Also, I wouldn't trust myself to admin more than a basic RAID array for valuable data without putting in some serious reading and especially testing time beforehand.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Sat Apr 12, 2014 10:19 am

Thanks again for your help.

I had a quick look around for that LSI card and it was either 2nd hand on eBay (I really don't like 2nd-hand stuff) or way more than I am willing to pay.

I have however found this (a possible little gem of a product).

http://uk.startech.com/Cards-Adapters/H ... PEXSAT34RH

It uses a Marvell controller, and I have used a few StarTech products in the past; they are cheap but appear to be well-made, decent products with vanilla drivers, and none of them have failed.

I can and will buy it for £56 if it has your blessing. I can then get a Socket 1150 motherboard and use the "fake" Intel RAID-5, with the 2 spare SATA ports and the 4 on this card for my backup drives - problem solved.

At current prices that would make my entire server re-build £710.

£56 for that controller card
£52 for motherboard
£28 for 4GB DDR3
£32 for a 2.7GHz Celeron
£540 for 4x 4TB WD Red HDD's

And I can then eBay my current RAID card to bring the overall cost down a bit.


Andy
