Any advantage to connecting gigabit ethernet to a PCIe x16 slot?

tehfire
Posts: 530
Joined: Mon Jan 01, 2007 9:57 am
Location: US

Any advantage to connecting gigabit ethernet to a PCIe x16 slot?

Post by tehfire » Sun Jan 31, 2010 2:42 pm

I understand that from a peak theoretical bandwidth standpoint they will be the same, so that's not what I'm asking...

Code:

CPU-----G31 Chipset-----ICH7-----HDDs
             |            |
             |            |
             |            |
             |            |
             |            |
          PCIe x16      PCIe x1
This is for a Windows Home Server. The PCIe lanes on my motherboard are connected as shown above. What I'm wondering is: if I connect the network card to the PCIe x1 slot, its traffic would pass through the ICH7 southbridge, then the G31 chipset, then on to the CPU. Would putting it in the PCIe x16 slot benefit me in any way, in terms of latency or reduced power consumption?
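
As a rough sanity check, here's the back-of-the-envelope math in Python. The numbers are nominal spec figures (PCIe 1.x lane rate, raw gigabit line rate), not anything measured on this particular board, and the DMI comment is just my understanding of the G31/ICH7 layout:

Code:

# Nominal, spec-sheet bandwidth figures -- not measurements.
GIGABIT_ETHERNET_RAW = 1_000_000_000 / 8 / 1e6   # 125 MB/s on the wire
PCIE_X1_GEN1         = 250.0                     # 2.5 GT/s with 8b/10b -> ~250 MB/s per direction
PCIE_X16_GEN1        = 16 * PCIE_X1_GEN1         # ~4000 MB/s per direction

for name, mb_s in [("Gigabit Ethernet (raw)", GIGABIT_ETHERNET_RAW),
                   ("PCIe 1.x x1 slot", PCIE_X1_GEN1),
                   ("PCIe 1.x x16 slot", PCIE_X16_GEN1)]:
    print("%-24s %8.1f MB/s" % (name, mb_s))

# Even the x1 slot has roughly twice the bandwidth the NIC can ever use,
# and the G31-to-ICH7 link (DMI) is far wider than gigabit as well, so the
# extra hop from the x1 slot should only cost latency on the order of
# microseconds, if that.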

One more question: suppose I'm dumping a bunch of data from a client computer onto the hard drives. Does the data pass from the PCIe slot to the CPU and then back from the CPU to the hard drives, or does it go straight from the PCIe slot to the hard drives? I know very little about DMA and such, so I'm not sure exactly how the data passes through.
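
For a rough picture of the software half of the answer: the NIC DMAs incoming packets into RAM, the CPU's TCP stack reassembles them, and then the file-server code writes the bytes back out, at which point the SATA controller DMAs them from RAM to the disks. So the data stages through main memory rather than hopping device-to-device. A minimal sketch of that receive-then-write loop (not how WHS actually implements its shares; the port number and file name are made up):

Code:

import socket

CHUNK = 64 * 1024

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 5001))              # hypothetical port
srv.listen(1)
conn, _ = srv.accept()

with open("received.bin", "wb") as out:  # hypothetical destination file
    while True:
        buf = conn.recv(CHUNK)           # payload is already sitting in RAM here
        if not buf:
            break
        out.write(buf)                   # handed to the OS; the SATA controller
                                         # DMAs it back out of RAM to the disk
conn.close()
srv.close()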

And I realize that at the end of the day these configuration changes probably won't make any noticeable difference in the server's performance. I just thought I'd ask :)

Fëanor
Posts: 46
Joined: Sat Jun 02, 2007 12:01 pm

Post by Fëanor » Sat Jun 26, 2010 6:38 am

IMO it won't benefit you at all versus the integrated ethernet. The motherboard architecture isn't the bottleneck; Windows is.

Vista/7/Server 2008 use a newer TCP/IP stack that significantly improved network throughput compared to XP, but it's still not capable of using the hardware's full potential. I don't know if Windows Home Server uses this newer TCP/IP stack.
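
A rough illustration of what the stack can limit: TCP can't have more than one receive window of unacknowledged data in flight, so throughput is capped at window / round-trip time. The 64KB window below is just an assumed no-autotuning figure, not a measured default for any of these OSes:

Code:

WINDOW = 64 * 1024                      # assumed fixed receive window, ~64 KiB

for rtt_ms in (0.2, 0.5, 1.0, 5.0):
    limit = WINDOW / (rtt_ms / 1000.0) / 1e6    # MB/s
    print("RTT %4.1f ms -> at most %6.1f MB/s" % (rtt_ms, limit))

# On a sub-millisecond LAN the window is rarely the ceiling; the bigger,
# self-tuning windows in the newer stack mostly pay off as RTT grows, and
# on a LAN the file-sharing protocol on top tends to matter more.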

m1st
Posts: 132
Joined: Sun Jan 31, 2010 6:43 pm
Location: US

Post by m1st » Sat Jun 26, 2010 7:09 am

Haha I forgot about this thread! Thanks for reviving it :)

I ended up just connecting it to a standard PCIe x1 slot in my computer. I'm able to transfer at about 110MB/s peak, and I think the hard drives on the server are the limiting factor here.
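
If anyone wants to check whether the disks or the network are the limit, one crude way is to time the same large copy to the server's share and to a second local disk and compare the rates. The paths below are made up, and the test file needs to be well over the server's RAM size so caching doesn't flatter the numbers:

Code:

import os, shutil, time

def copy_rate(src, dst):
    size = os.path.getsize(src)
    start = time.time()
    shutil.copyfile(src, dst)
    return size / (time.time() - start) / 1e6         # MB/s

BIG_FILE = r"C:\temp\testfile.bin"                     # hypothetical multi-GB file
print("to server share:", copy_rate(BIG_FILE, r"\\server\share\testfile.bin"), "MB/s")
print("to local disk  :", copy_rate(BIG_FILE, r"D:\testfile.bin"), "MB/s")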

As far as I know, WHS uses Server 2003 as its codebase, so I'd imagine it has a networking stack similar to XP's. It's working well enough for now, anyway...

fwki
Patron of SPCR
Posts: 120
Joined: Mon Dec 17, 2007 7:06 am
Location: Houston, Texas U.S.A.

Post by fwki » Sat Jun 26, 2010 8:00 am

Here's a good article on network throughput limiting factors: http://www.smallnetbuilder.com/nas/nas- ... ault&page=
An onboard NIC (or a card in a PCIe x1 slot) is plenty fast enough to reach the theoretical peak of 113MB/s, whereas the old PCI bus had a ceiling around 65MB/s. The next limiting factor (once file sizes exceed what RAM caching can cover) is HDD performance, assuming the PC has enough power to reach the limit.

Newer disk drives are catching up with RAID in being faster than the gigabit limit, and at that point (or before) the OS's SMB protocol and record-size limits take over. Vista/Server 2008 use SMB2 and a large record size. WHS is based on Server 2003, so the peaks >100MB/s you're seeing are probably down to RAM caching. However, with Vail out in the wild (beta testing is free and fun!), you can now max out WHS gigabit LAN performance. Once it's released, NAS systems with WHS Vail should be at the top of the speed charts.
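
For anyone wondering where that 113 figure comes from, the usual derivation is just the 1Gbit/s line rate minus per-packet framing overhead at the standard 1500-byte MTU; that lands just under 119 decimal MB/s, which matches the ~113MB/s figure if you count in binary megabytes (what Windows transfer dialogs show). A quick check in Python, assuming plain TCP/IPv4 headers with no options:

Code:

WIRE_BITS_PER_SEC = 1_000_000_000

payload   = 1460                 # TCP payload per 1500-byte MTU frame
headers   = 14 + 20 + 20 + 4     # Ethernet + IPv4 + TCP headers + FCS
line_cost = 8 + 12               # preamble/SFD + inter-frame gap
frame     = payload + headers + line_cost             # 1538 bytes on the wire

efficiency = payload / frame                           # ~0.949
mb_per_s   = WIRE_BITS_PER_SEC / 8 * efficiency / 1e6      # decimal MB/s
mib_per_s  = WIRE_BITS_PER_SEC / 8 * efficiency / 2 ** 20  # binary MiB/s

print("efficiency: %.3f" % efficiency)
print("~%.0f MB/s = ~%.0f MiB/s" % (mb_per_s, mib_per_s))  # ~119 MB/s, ~113 MiB/s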
