Speed of Gigabit LAN - any differences onboard vs PCI(e) ?

Offloading HDDs and other functions to remote NAS or servers is increasingly popular
ACook
Posts: 282
Joined: Sat Apr 21, 2007 5:35 pm
Location: In the Palace

Speed of Gigabit LAN - any differences onboard vs PCI(e) ?

Post by ACook » Sun Nov 02, 2008 6:56 pm

I've had my 10/100 switch for quite a few years, and when transferring files between two PCs connected to it I get a solid 9 MB/s, which I think is perfectly acceptable and normal.

While moving a lot of big files from one PC to another, I thought I'd try a gigabit switch.

With a theoretical max of 125 MB/s, and a more practical max of around 90 MB/s, I thought it'd be able to max out the hard drives easily. I'd even be happy with 50 MB/s, which is faster than what I get to the USB 2.0 IDE enclosure.

Imagine my disappointment when it maxes out at around 23 MB/s.

Is this normal? Is the switch to blame, are the onboard gigabit LAN ports to blame, or should I check my cables?


HDs: 1 TB Samsungs in both machines.
CPU1: E2160 @ 2250 on Asus P5E-VM
CPU2: 4850e @ stock on Asus M3A-H/HDMI

Switch: Sitecom LN116

(I wanted a cheap switch right away, and after six stores in the area turned up nothing cheap in stock, I got this one at a large electronics store with a no-questions-asked return policy. Even if it had worked fine it was 20 euros more expensive than online, so I'd return it anyway.)

I figured a switch isn't all that complicated, and even a cheap one should at least manage a simple PC-to-PC transfer at reasonable speed...

CPU usage also goes up big time when transferring files; can that impact things? When just streaming video it's not as bad.

Any thoughts?

Are there perhaps any settings I should force for the onboard LAN?
I've got no clue what all the options do there.

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Sun Nov 02, 2008 8:32 pm

You should probably check the cables: the only twisted pair cables that support gigabit Ethernet are CAT5E and CAT6.

josephclemente
Posts: 580
Joined: Sun Aug 11, 2002 3:26 pm
Location: USA (Phoenix, AZ)

Post by josephclemente » Sun Nov 02, 2008 8:50 pm

The Asus M3A-H/HDMI's gigabit seems to run on the PCI Express bus, which is good - is the Asus P5E-VM wired the same way?

ACook
Posts: 282
Joined: Sat Apr 21, 2007 5:35 pm
Location: In the Palace

Post by ACook » Sun Nov 02, 2008 9:06 pm

I've got a bunch of cables from different eras.

e'quip UTP Enhanced Cat.5 Patch ISO/IEC 11801 & EN 50173 & TIA/EIA 568A 3P Verified - 24AWGx4P Type CM (UL) C(UL) CMG E164469

That's one I had loose and could bring near enough to the keyboard to type up.

One of the other cables is similar to that one, and one cable says "gigabit verified", so I'm assuming that one's good. But for the other one, is the code above any indication?

austinbike
Posts: 192
Joined: Sun Apr 08, 2007 5:09 pm

Post by austinbike » Mon Nov 03, 2008 3:37 am

Yes, check your cables, but also check whether you have jumbo frames enabled. Your switch will need to support jumbo frames, and if it does, enabling them on your NICs will make a big difference.

If your switch doesn't support jumbo frames, don't turn them on; that just creates more issues.

theycallmebruce
Posts: 292
Joined: Sat Jul 14, 2007 10:11 am
Location: Perth, Western Australia

Post by theycallmebruce » Mon Nov 03, 2008 7:38 am

+1 for jumbo frames. This should definitely improve your throughput and get you closer to wire speed.

Do try with a crossover cable to see whether the switch is having any effect on the throughput. A decent gigabit switch should have a fat enough backbone to transmit and receive at wire speed on all ports simultaneously, but the cheaper ones might not. As pointed out, it also needs to support jumbo frames.

A quick explanation of jumbo frames:

By default, most Ethernet devices have an MTU (Maximum Transmission Unit) of 1500 bytes. For gigabit Ethernet that translates to over 80,000 frames per second at wire speed (a gigabit is 1,000,000,000 bits per second, and a full-sized frame occupies about 1538 bytes on the wire once headers, preamble and the inter-frame gap are counted). That's a lot of interrupts for your CPU to be servicing.

Enabling jumbo frames lets the network stack send frames of up to 9000 bytes. With full-sized packets that effectively divides the number of frames, and therefore interrupts, by six, so your CPU shouldn't thrash so badly.

High end Ethernet cards also include features like large buffers for interrupt mitigation, and TCP offload in hardware to allow high throughput with little CPU load.

ACook
Posts: 282
Joined: Sat Apr 21, 2007 5:35 pm
Location: In the Palace

Post by ACook » Mon Nov 03, 2008 12:43 pm

So are jumbo frames something to be set in the properties of the NIC?

Currently, in the advanced properties of this onboard NIC (the machine in my sig), it says Maximum Frame Size: 1514.

BTW, I manually set both MediaType properties from Auto to 1000 Mbps Full Duplex, which resulted in an increase from 23 MB/s to 32 MB/s.

Nice, but not enough yet.

Unfortunately there's hardly any detail available on what exactly this switch can do, so I don't see any mention of jumbo frames.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Mon Nov 03, 2008 4:13 pm

Generally speaking you shouldn't set both sides to a forced speed. You should always let the network card auto-negotiate, as some drivers disable features when a speed is forced.

ACook
Posts: 282
Joined: Sat Apr 21, 2007 5:35 pm
Location: In the Palace

Post by ACook » Mon Nov 03, 2008 5:38 pm

When I get a certified gigabit cable I'll try it on auto again.

theycallmebruce
Posts: 292
Joined: Sat Jul 14, 2007 10:11 am
Location: Perth, Western Australia

Post by theycallmebruce » Mon Nov 03, 2008 5:39 pm

It is odd that auto-detection should make any difference to the speed. Was it auto-negotiating to 1000 Mbps half duplex?

As for jumbo frames: yes, it will be in the driver properties tab if you are running Windows. Do you have a range of options in a dropdown for Maximum Frame Size, or do you have to type a value? Try checking the user documentation for your devices. Be sure to increase the value at both ends.

To find out if your switch supports jumbo frames, just try it out. If your packets don't make it through, try with a crossover cable.
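
A quick sanity check, once jumbo frames are enabled on both NICs (the address below is just a placeholder for the other machine's IP), is to send an oversized ping with the don't-fragment flag from a Windows command prompt:

ping -f -l 8972 192.168.0.2

8972 bytes of payload plus 28 bytes of IP/ICMP headers makes a 9000-byte packet. If you get replies, jumbo frames are getting through end to end; if you see "Packet needs to be fragmented but DF set" or timeouts, either a NIC setting or the switch is refusing them.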

josephclemente
Posts: 580
Joined: Sun Aug 11, 2002 3:26 pm
Location: USA (Phoenix, AZ)

Post by josephclemente » Mon Nov 03, 2008 6:16 pm

Jumbo frames shouldn't be required for reasonable network performance. I don't use jumbo frames and can do 100 megabytes per second and higher on my home network.

Back when I was experimenting with jumbo frames, I found they did make a huge difference with my old PCI Gigabit cards.

The last time I had gigabit performance issues was when my file server/HTPC was running a motherboard with an AMD RS482 chipset. Even using Gigabit network cards (tried both PCI Express and PCI) failed to improve things. I switched the motherboard to one with an Intel G33 chipset and my network speed issue was solved. So I'm curious if the P5E-VM's Gigabit port uses the PCI Express bus or PCI...

ACook
Posts: 282
Joined: Sat Apr 21, 2007 5:35 pm
Location: In the Palace

Post by ACook » Mon Nov 03, 2008 10:15 pm

No clue.

If, working on PC-A, I push a file from PC-A to PC-B, and at the same time, working on PC-B, I pull a file from PC-A to PC-B, I get 50 MB/s on average.

As soon as one transfer finishes, it's back to 20-30 MB/s.

This is all using shared folders on XP SP3.

theycallmebruce
Posts: 292
Joined: Sat Jul 14, 2007 10:11 am
Location: Perth, Western Australia

Post by theycallmebruce » Mon Nov 03, 2008 11:04 pm

joseph, it's interesting that you've had no problems. For me, MTU was definitely the limiting factor with a pair of HP dual-port gigabit NICs. I suppose there are a lot of other variables like operating system, CPU and so on.

ACook, it's also quite possible that your disks are the bottleneck. Not all hard drives keep up with gigabit Ethernet, especially in random access workloads. Run disk benchmarks on each of the machines and see if you get similar results. Also try transferring data from RAM to RAM and see what kind of throughput you get.
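
One way to do the RAM-to-RAM test is with netcat (shown here in its Linux form; Windows builds of netcat and dd exist, but option names and device paths vary, and the address below is just a placeholder):

On the receiving machine: nc -l -p 5001 > /dev/null
On the sending machine: dd if=/dev/zero bs=1M count=1000 | nc 192.168.0.2 5001

dd reports the rate it achieved when it finishes (you may need Ctrl+C to close netcat afterwards, depending on the variant). Since nothing touches a disk on either end, the figure reflects the NICs, the switch and the CPUs rather than the drives.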

As for PCI vs PCI Express: yes, it matters. You will not get full duplex gigabit throughput on a standard 32-bit/33 MHz PCI bus, because the bus itself becomes a bottleneck - it tops out at roughly 133 MB/s, shared with every other device on it, while full duplex gigabit can demand up to 250 MB/s. You might be able to find out which bus the onboard NIC is connected to by looking at the motherboard manual; sometimes there is a block diagram showing how everything is connected together.

Finally, you might want to benchmark with something better than Windows file sharing. Personally, I used ATA over Ethernet, because the protocol has very little overhead and gives you results quite close to the true throughput.

theycallmebruce
Posts: 292
Joined: Sat Jul 14, 2007 10:11 am
Location: Perth, Western Australia

Post by theycallmebruce » Mon Nov 03, 2008 11:34 pm

Just thought of something else: you might be able to find out which bus the NIC is on in Device Manager. Select "View Devices by Connection" and see if it is under PCI or PCI Express. I'm not sure if Windows will differentiate PCI and PCI Express though; the PC I'm on now doesn't have PCIe so I can't check.
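
Failing that, a Linux live CD shows the bus topology directly; running

lspci -tv

prints the device tree, so you can see whether the Ethernet controller sits behind a PCI Express root port or hangs off the plain PCI bridge. (That's assuming you don't mind booting a live CD just for this; the block diagram in the motherboard manual, if there is one, is quicker.)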

ACook
Posts: 282
Joined: Sat Apr 21, 2007 5:35 pm
Location: In the Palace

Post by ACook » Mon Nov 03, 2008 11:37 pm

Using different protocols may give me a true max, but since I'm only going to use XP with shared folders, there's not much use in that.

I'm switching the machines around today, and with that the need for big, fast file transfers is over for now.

Thanks for the pointers so far; I'll keep them in mind for when I've got a bit more time to look into the issue.

lm
Friend of SPCR
Posts: 1251
Joined: Wed Dec 17, 2003 6:14 am
Location: Finland

Post by lm » Tue Nov 04, 2008 7:46 am

Windows shares only give you a small part of your theoretical transfer speed capacity.

For example, with my server and workstation connected through a switch, all running at 100 Mbps, all I ever got for Windows share transfers was around 4 MB/s, while FTP transfers were over 10 MB/s (i.e. very near the theoretical max).

My colleagues tell me the same problems happen with today's faster computers and gigabit networks too. Windows shares just aren't up to speed. I don't have personal experience of the current state of affairs, since I have migrated completely to Linux.

There are some possible differences between onboard and separate NICs.

1st is the bus. Obviously a separate NIC in a PCI-E slot uses the PCI-E bus, but which bus does the integrated NIC use? On some motherboards the integrated NIC is connected to the PCI-E bus, giving it the same performance potential as a separate PCI-E card, while on others it hangs off the PCI bus, which really does become a severe bottleneck with gigabit Ethernet. This is documented in motherboard specifications, and if you need gigabit speeds I recommend avoiding any board whose integrated NIC uses PCI instead of PCI-E.

2nd is offloading. The TCP/IP stack requires some processing for each packet, such as checksum calculation. More or less of this can be offloaded to the NIC silicon; whatever isn't offloaded gets done by the CPU in software. You can see the effect by running a low-level network transfer like netcat at full speed on an otherwise idle computer and measuring both the CPU usage and the achieved transfer speed: the higher the CPU usage relative to the transfer speed, the less offloading the NIC is doing. Separate cards are not clear winners here; it's a case-by-case thing, and you might be perfectly well off with an integrated NIC (see the quick checks at the end of this post).

3rd is drivers. Lousy drivers can mean worse latency, lower transfer speed, higher CPU usage, more packet loss and so on. Again it's a case-by-case thing, but some manufacturers seem to have a better track record for driver quality than others.

If you want to know for sure, use low-level programs to test your true maximum speed and compare it to a known good result.
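
A couple of concrete ways to check (the interface name below is just an example): on Linux,

ethtool -k eth0

lists which offloads the driver currently has enabled (checksumming, scatter-gather, TCP segmentation offload and so on). On Windows the equivalent switches live in the NIC's Advanced properties tab, under names like "Checksum Offload" and "Large Send Offload", depending on the driver. For the low-level transfer itself, netcat (as sketched earlier in the thread) or netio will give you a raw speed figure to compare against a known good gigabit result.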

AnaLogiC
*Lifetime Patron*
Posts: 3
Joined: Mon May 12, 2008 5:09 am
Location: Vienna, Austria

Post by AnaLogiC » Tue Nov 04, 2008 1:47 pm

I'm surprised nobody has recommended Cisco Catalyst switches and a pair of 4-port Intel NICs yet - that's what usually happens when networking is the thread topic...

I too was searching for definite answers on the "how do I get my files quickly from A to B over a network" question around a year ago; unfortunately I didn't find anything useful.
Actually I came across that 20 MB/s figure for a gigabit connection quite often.

OK, here's what I would do based on what I've learned so far:

1) Find out what NICs you have - if they're PCIe Realtek 8168/8111s you're fine; if they're Marvell, that's not so fine, so definitely try several drivers in that case, and if nothing helps there's hopefully a free PCIe x1 slot for a cheap Realtek.
Don't waste money on something like Intel Pro/1000 PTs - I did (60 euros for two NICs) and got ~5% worse throughput with them, no matter whether offloading was enabled or disabled (CPU load didn't change either).

2) Check that there are no unnecessary bindings or filter drivers on your NIC - it should also be using default settings; if you're not sure, reinstall the driver (removing it and reinstalling via "scan for hardware changes" restores the defaults).
The bindings you'll need are TCP/IP and Client for Microsoft Networks / File and Printer Sharing; Network Monitor is safe, and QoS shouldn't matter if you have it installed (I always uninstall it, no need for it).

3) Take the switch out of the equation - connect the PCs with a plain network cable (gigabit NICs must support auto MDI-X, so a crossover cable is no longer needed).
You can add the switch back later to find out whether it's the problem - if it works properly you absolutely shouldn't see any difference (that's the case with my ~30 euro D-Link DGS-1005D, bought in March 2007, just for reference).

4) Offloading was a nice idea back when CPUs were slow, and it might make a minimal difference if both your connection and your CPU are near 100% most of the time - for your application (a single stream) it's completely unnecessary.
That's probably why a simple (and cheap) design like the Realteks can offer more throughput at the same CPU load than a more complex Intel design (which might offer an advantage with several hundred streams running in parallel - though I personally wouldn't pay 30 euros per NIC just to find out whether that's really the case).

5) The same goes for jumbo frames - they were quite helpful back when interrupt moderation and MSI signalling (PCIe) didn't exist, but now that both are supported by the cheap Realteks their advantage is pretty much gone - if you get them working at all.
They didn't make a noticeable difference with the Intel NICs, I couldn't really get them working under Linux, and I think I actually caused a BSOD trying them once with an old Realtek Windows driver...
Feel free to try, but don't expect much.

6) OK, back to testing - now that you've ensured no filter drivers, firewalls (the Windows firewall is safe, others may not be), scanners or switches are in the way, you're ready to test raw TCP/IP throughput.

I personally prefer netio for this: it's easy to use and available as a Windows/Linux binary and as easy-to-compile source.
(URL in next post)

Run on the server: netio -t -s
and on the client: netio -t 1.2.3.4
(where 1.2.3.4 is the server's IP address, of course)

You should reach >110 MB/s pretty much independent of packet size; I'd expect CPU load to be around 20-30%, most of it kernel load.

Things look quite different when you want to transfer files - actually Linux's Samba/CIFS implementation by now tends to be better than Microsoft's own.

After I was done with all my research, the German magazine c't did some benchmarking: while they got near wire speed (>90 MB/s) with the Samba client in Vista, they only reached >60 MB/s with the client in XP SP2 - which is also the performance I got. They did their measurements with a RAM disk, and so did I.
Copying files from/to my Linux 2.6 storage I average around 40-50 MB/s; I think that should be possible with your hardware.
I actually once tested Win2003 as a server; I think it was minimally slower, though not by more than 10%.

Regarding those 4 MB/s over 100 Mbit:
Back in 1998, at my first employer, we had a Linux 2.0 development server and NT4 clients with 3Com NICs; the usual throughput was ~6 MB/s with the 3Com cards.
Since those had issues with the already-mentioned expensive Catalyst switch we had (it took over 2 minutes after power-on until the NICs got a link), I one day replaced my 3Com with a cheap Realtek - after that I reached 7-8 MB/s.

Of course that was 10 years ago - the only device I have left with just a 100 Mbit port is my 5-year-old notebook with a Pentium M 1.3 GHz and a pretty slow 80 GB disk (>200 euros at the time).
I copy files with a constant 99% utilisation (according to the network monitor in Task Manager) and a throughput of >10 MB/s - both PCs running XP SP2.
I haven't done any optimisations on the TCP stack; I tried some but didn't see a difference. Those 4 MB/s sound like a firewall/netfilter issue or a really slow disk - I've never experienced performance that slow.

Hope those reference numbers and notes help - unfortunately there aren't many tweaks I can offer, simply because none of them did anything for me. Gigabit performance is hard to reach with a single stream (i.e. copying files with one client), and after trying a lot of things (iSCSI, AoE, FTP with vsftpd) I settled on plain Windows networking - most of the time my WD6400AACS limits the transfer speed anyway.

Good luck on your quest anyway!

AnaLogiC
*Lifetime Patron*
Posts: 3
Joined: Mon May 12, 2008 5:09 am
Location: Vienna, Austria

Post by AnaLogiC » Tue Nov 04, 2008 1:47 pm


Magsy
Posts: 12
Joined: Mon Jun 11, 2007 6:54 am
Location: Wales, UK

Post by Magsy » Wed Nov 05, 2008 1:43 am

It is probably just the OS; even with server-class hardware and a Cisco 3750 switch I never used to see much over 40-50 MB/s on Win2k3/XP.

Vista and Win2k8 support SMB 2 for file sharing, which is much, much faster.
With Server 2008 on the server and Vista on the client I'm now seeing 100 MB/s+ with a PCI Intel card.
