But this post is about my experiences with the Gigabyte 6600GT Silentpipe (nx66t128vp) that I used for this build. Note that what I describe below is only valid for this particular card. Other models with passive cooling from Gigabyte may use a different setup and a different sensor chip.
I don't have access to a digital camera at the moment, but there are nice pictures of the card (front and back) in the review at THG.
So after putting everything together and installing WinXP, drivers and some applications, it was time to take a closer look at temperatures. Normally I use Speedfan to monitor temperatures, so I started there.
Everything looked ok,
Sensor chip: National LM99
GPU core temp: 42C
GPU board temp: 50C
For comparison I checked the temperature monitoring in the Nvidia display utility. There was no GPU board temperature displayed there but the GPU core temperature was 75C! And this is at idle... Hmm, I needed a third opinion so I started Everest Home Edition. And I surely got a third opinion, 66C. And Everest detected the sensor chip as National LM89. But on the positive side it displayed the same GPU board temperature as Speedfan, 50C.
I decided to see what would happen during load, so I started RTHDRIBL (windowed mode, 800x600) and left it running until temperatures were stable, roughly 25 min.

Ahh, a badly ventilated case, I can hear you think. Well, there are two 120mm Nexus fans running at 12V: one intake in the front and one exhaust in the back. The AMD64 3200+ Venice is passively cooled by a Thermaltake Sonic Tower and landed on 46C during this test (37C at idle). So the ventilation is ok.
So I went back to the reviews of this card to see if anyone else had seen very high temperatures. A few of them stated that the card got very hot, but only here did they mention actual temperatures and which utility they came from. Using the readings from the Nvidia driver they saw values >100C during load. In the same review the temperature of the heatsink was measured with an IR thermometer to be 70C.
I started looking closer at the heatsink assembly on the card and it seemed wobbly. There is a thermal pad between the GPU and the copper plate attached to the heat pipe. If this got bumped during installation, or while the card was lying loose in the retail box, it might dislodge and no longer make good thermal contact. So I decided to take the heatsink off and clean the surfaces with isopropanol before applying some Zalman thermal grease. The whole thing was put back together, and this lowered temperatures by ~10C.
A note on the heatsink assembly. The copper plate making contact with the GPU has the heat pipe attached to it. But the heatsink on that side of the card does NOT come in contact with the heat pipe. It only makes two small contacts (approx. 20x8 mm) with the copper plate, one on each side of the heat pipe. I will take pictures of this when I get hold of a camera again.
A step forward, but I was still not comfortable with the temperatures. Searching for the reason behind the differences in displayed temperatures, I read on the Everest forum and in the Rivatuner documentation that the temperatures displayed by the driver are compensated, and that the level of compensation may even change between driver versions. Why this is, I have no idea.
There seems to be very little difference between the two sensor chips detected, LM89 and LM99. One can note though that the absolute maximum working temperature for the diode in the LM89 is 125C, which is interesting given the 145C throttling temperature in the driver. But it seems the chip on the Gigabyte card is the LM99: there is an 8-pin chip of the right size marked T17C, indicating this (according to the link above). The datasheet also points to the origin of the low values reported by Speedfan. The LM99 has a built-in offset of 16C that Speedfan does not seem to compensate for. That is,
Tdiode = Treported + 16
This can also be checked with Rivatuner, which can read the temperature both from the Nvidia driver and directly from the sensor chip (the latter value is identical to the one reported in Speedfan). Everest, on the other hand, also claims to read the sensor but always displays a value about 10C below the Nvidia driver utility, so it seems to compensate for the sensor chip offset. The difference between the sensor chip output and the value reported by Everest is somewhat larger than 16C though.
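To make the offset concrete, here is a quick sanity check of the formula above as a small Python sketch. The 16C figure is the built-in LM99 remote-channel offset, and the readings are the idle values quoted earlier in this post:

```python
# The LM99 reports the remote diode temperature minus a built-in 16C
# offset, so the true diode temperature is the raw reading plus 16C.
# Speedfan appears to show the raw, uncompensated value.
LM99_OFFSET_C = 16

def diode_temp(reported_c: float) -> float:
    """Convert a raw LM99 remote reading to the actual diode temperature."""
    return reported_c + LM99_OFFSET_C

# Idle readings quoted above:
speedfan = 42  # raw sensor value
everest = 66   # Everest's (offset-compensated) value
nvidia = 75    # Nvidia driver utility (driver-compensated)

print(diode_temp(speedfan))   # 58C after correcting Speedfan
print(everest - speedfan)     # 24C gap, i.e. more than the 16C offset alone
```

Note how the corrected Speedfan value (58C) still sits below Everest's 66C, which matches the observation that Everest's difference from the raw sensor output is somewhat larger than 16C.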
So where does this leave me? I bought a Zalman VF700-Cu and replaced the passive solution. Below is a comparison of the two cooling solutions at different settings.

All numbers are idle/load. Load is from running RTHDRIBL (windowed mode, 800x600) until temperatures were stable (*the load test for the passive solution was stopped after 6 min; guess you can't run it passively under load).

You can judge for yourself whether you think the passive solution from Gigabyte is adequate. Even though I had no graphical artifacts or stability problems at a core temperature above 100C, I do know that I don't want a >70C heatsink or a >100C core. Actually, seeing the board hit almost 70C made me more uncomfortable than the core at 110C. Even though the GPU seems to be made to withstand this, there are a lot of standard components around it that are not. But this is my personal view.
And if you have a Gigabyte 6600GT Silentpipe, could you please reply with your temperatures and the monitoring program they were obtained from.
/Jaer
Don't know why my posts always end up so long...
