Aerocase Condor: A Massive, Passive VGA Cooler

WR304
Posts: 412
Joined: Tue Sep 20, 2005 1:21 pm
Location: UK

Post by WR304 » Fri Aug 18, 2006 1:11 pm

The eVGA 7900GT KO isn't the same as a standard 7900GT.

This article mentions that along with the higher clock speeds, it also has a higher vcore, resulting in a hotter card:

http://www.pureoverclock.com/article635-8.html

It looks like a good deal price/performance-wise though. :)

rpsgc
Friend of SPCR
Posts: 1630
Joined: Tue Oct 05, 2004 1:59 am
Location: Portugal

Post by rpsgc » Fri Aug 18, 2006 1:31 pm

WR304 wrote:The eVGA 7900GT KO isn't the same as a standard 7900GT.

This article mentions that along with the higher clock speeds, it also has a higher vcore, resulting in a hotter card:

http://www.pureoverclock.com/article635-8.html

It looks like a good deal price/performance-wise though. :)
If that's true, then the best choice would probably be a regular 7900GT, because (according to Xbitlabs):

6800GT = ~55W
7900GT KO = ~57W

Perhaps...

EDIT: It seems that the regular 7900GT consumes much less, at just 48W (http://www.xbitlabs.com/articles/video/ ... ise_4.html)
Last edited by rpsgc on Fri Aug 18, 2006 1:52 pm, edited 1 time in total.

WR304
Posts: 412
Joined: Tue Sep 20, 2005 1:21 pm
Location: UK

Post by WR304 » Fri Aug 18, 2006 1:45 pm

Is there an Xbit Labs figure for an eVGA 7900GT KO?

I could find a review for an eVGA 7900GT CO but that's a different card again. :(

It does raise one point for articles reviewing graphics card coolers though.

If you use a non-standard, overclocked card it will just confuse everybody. :?

rpsgc
Friend of SPCR
Posts: 1630
Joined: Tue Oct 05, 2004 1:59 am
Location: Portugal

Post by rpsgc » Fri Aug 18, 2006 1:51 pm

WR304 wrote:Is there an Xbit Labs figure for an eVGA 7900GT KO?
Yes.
http://www.xbitlabs.com/articles/video/ ... 0gt_4.html

WR304
Posts: 412
Joined: Tue Sep 20, 2005 1:21 pm
Location: UK

Post by WR304 » Fri Aug 18, 2006 4:01 pm

That's the 7900GT CO review. :)

The 7900GT KO as mentioned by cmthomson is a different card again.

eVGA currently have 10 different versions of the 7900GT card. :shock:

http://www.evga.com/products/

rpsgc
Friend of SPCR
Posts: 1630
Joined: Tue Oct 05, 2004 1:59 am
Location: Portugal

Post by rpsgc » Sat Aug 19, 2006 1:20 am

WR304 wrote:That's the 7900GT CO review. :)

The 7900GT KO as mentioned by cmthomson is a different card again.

eVGA currently have 10 different versions of the 7900GT card. :shock:

http://www.evga.com/products/
Oops... my bad. Didn't pay enough attention.

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Mon Aug 21, 2006 1:21 pm

This review brings up a number of issues for me.

1. Are two slower graphic cards (SLI or Crossfire) easier to cool silently than one top end card?

People often jump at the mention of dual GPUs as if it’s inherently a bad thing. Since microprocessors built on the same fabrication process typically offer less performance per watt at the top end of their frequency range, it’s surely worth investigating how the two setups compare in power efficiency, value for money and, especially, ease of cooling. The extra power consumption of the motherboard chipsets supporting dual VGA cards will also have an impact.
The overhead of running two discrete VGA cards may well prohibit these setups from beating a single VGA card in overall power efficiency, but it must surely be easier to cool them silently, especially on motherboards which have a wide spacing between the two x16 slots.

2. How reliable are the power consumption figures for GPUs from sites such as Xbitlabs?

It has recently become more apparent to me that measuring the power consumption of CPUs in a meaningful way isn’t as easy as running a single utility such as CPUBurn. While I respect the scientific rigour of SPCR very highly, I’m not nearly so confident about other sites whose benchmark figures are widely touted on this site by SPCR members, including myself.
In one sense the power figures are less relevant than tracking the maximum GPU temperature in relation to a real-world maximum safe temperature for that GPU architecture. These temperature thresholds will typically vary between ATI & NVidia and also across different product ranges from the same company. Therefore, looking at the power consumption and temperature figures in isolation doesn’t tell the whole story.
I’m not sure that the GPU chip designers release enough information about their products to allow a more informed decision to be made, or maybe it’s a matter of wading through myriad PDF files like a latter-day Sherlock I.T. Holmes.

3. What actually is a safe temp for a GPU?

I have a passive 6200TC (very low end) in one system and it still idles in the 70s or 80s centigrade. The Nvidia software by default has set the ‘core slowdown threshold’ to 145C! It would likely be impossible to reach this temp with such a low end card, but it’s something to keep an eye on if you do have some big GPU iron under the hood. Hopefully, modern GPUs will protect themselves in the same way that modern CPUs do, by lowering their clock speed or even shutting down completely. Is this the case? If so, then the temps aren’t such an issue.
The temperature issue can lead to paranoia for people who are uncomfortable with such high temperatures, so investing in a graphics card that comes with a lifetime warranty does make sense if you are pushing the envelope.

jaganath
Posts: 5085
Joined: Tue Sep 20, 2005 6:55 am
Location: UK

Post by jaganath » Tue Aug 22, 2006 1:27 am

1. Are two slower graphic cards (SLI or Crossfire) easier to cool silently than one top end card?
I don't quite follow the logic here; OK, so you have two cards, so the surface area for dissipating the heat is potentially much greater, but you also get an increase in heat output, so most likely a lot of that gain is lost. Also, 2 low-performance cards ≠ one high-end card generally; it will need to be more like 2 medium-performance cards.
2. How reliable are the power consumption figures for GPUs from sites such as Xbitlabs? ...While I respect the scientific rigour of SPCR very highly, I’m not nearly so confident about other sites whose benchmark figures are widely touted on this site
To be brutally honest, if we are talking about scientific rigour/credentials, the Xbitlabs team win hands down on this one. Most if not all of them have an IT/scientific degree from technical institutes/universities, including Oleg Artamonov, a Moscow State University graduate who posts here occasionally and knows more about PSUs than anyone could ever want to know! :lol:
3. What actually is a safe temp for a GPU?
I'm not sure this is as relevant a question for SPCRers as "What kind of heat load does it dump into the case (and consequently has to be exhausted)?"

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Tue Aug 22, 2006 6:07 am

jaganath wrote:I don't quite follow the logic here...
If you install, for example, an X1900XTX in your system and, even using the most optimized low-noise cooling that you can devise, the GPU temp is hitting 120C, you have to ask yourself the question – what is the safe maximum temperature for this GPU? All the other system temps might be comfortable, so the issue then hinges on the max safe GPU temp. If you don’t feel it’s safe at that temp and you don’t want to compromise on noise levels, then this card isn’t going to work for you.

So, two 48W GPUs that are separated by a distance of 4 to 6 inches are going to be easier to cool than one 121W GPU. This is comparing two 7900GTs with an X1900XTX using the power figures supplied by XBitlabs. It also stacks up very well in a performance comparison, although your idle power consumption will rise and the cost will be greater.

This can make the difference between silent case fans and going above your noise threshold. How this impacts on overall system noise levels will depend on the other components used. It’s possible to use an E6600 in this setup and still keep the total AC power draw to 200W. Since it’s possible to run this sort of system with a fanless power supply, the SLI setup will have no effect on overall system noise and you can hit a lower overall noise level.
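
As a rough sanity check on that 200W figure, here's the sort of back-of-envelope budget I have in mind, sketched in Python. Every number except the Xbitlabs 48W per-card figure is my own assumption:

Code:

# Rough AC power budget for an E6600 + 7900GT SLI gaming rig.
# Only the 48W per-card figure comes from Xbitlabs; the rest are guesses.
components_dc = {
    "E6600 CPU (gaming load)":   40,  # well under the 65W TDP
    "2 x 7900GT (3D load)":      96,  # 2 * 48W
    "motherboard + SLI chipset": 25,
    "RAM, drive, fans":          15,
}
dc_total = sum(components_dc.values())   # 176W DC
psu_efficiency = 0.85                    # assumed at this load
ac_draw = dc_total / psu_efficiency
print(f"DC load ~{dc_total}W -> AC draw ~{ac_draw:.0f}W")   # ~207W

Close enough to 200W that a fanless power supply remains plausible.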

I’ll come clean and say that whilst you are playing games the extra noise will not be noticed, so I’m very open to being termed anally retentive here. I’ll put my hand up to that, but it doesn’t take away from the basic premise.

I wasn’t intending to be disparaging specifically towards Xbitlabs, but just used them as an example of a site whose data is posted here; I find it an interesting site.

As for the issue of people having a degree and therefore somehow automatically being kosher, I put that in the same class as someone having a recording contract. In both situations the only thing they have is a piece of paper, a degree certificate or a recording contract; it doesn’t really mean very much at all. There are a lot of mediocre people with degrees and not particularly talented people with recording contracts. Go figure.
To quote Frank Zappa, ‘If I smoke this, is it the same as if I had a High School Diploma?’. His answer, ‘Yes, they both contain absolutely nothing’. Lol.

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Tue Aug 22, 2006 6:45 am

I just noticed these benchmarks which show a 7600GT SLI setup performing very similarly to an X1900XTX at 1600x1200 with anti-aliasing disabled. With anti-aliasing enabled, the ATI card does pull away and leads by ~20% on average.
The 7900GT SLI setup that I outlined above is over 20% faster than the ATI card with anti-aliasing enabled. There are so many permutations and variables when benchmarking games that I don’t pretend to know how typical these are. As a ballpark figure it serves its purpose.

The two 7600GT cards are rated by Xbit Labs at idle/3D load as 15W/36W, versus 30W/121W for the ATI. They are matched at idle purely at the VGA card power consumption level (15W * 2 = 30W v 30W), but at load the SLI cards offer a saving of 49W (2 * 36W = 72W v 121W).
This doesn’t take into account any extra power that an SLI setup introduces at the card level or motherboard level, of course. But purely from a cooling perspective, this has to be a large gain.
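
Spelling that card-level arithmetic out (Xbitlabs figures only; the SLI overhead excluded, as noted):

Code:

# Card-level draw from the Xbitlabs figures, in watts.
sli_7600gt = {"idle": 2 * 15, "load": 2 * 36}   # two cards
x1900xtx   = {"idle": 30,     "load": 121}
for state in ("idle", "load"):
    saving = x1900xtx[state] - sli_7600gt[state]
    print(f"{state}: SLI {sli_7600gt[state]}W v XTX {x1900xtx[state]}W, saving {saving}W")
# idle: 30W v 30W, saving 0W
# load: 72W v 121W, saving 49W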
You can buy two passively cooled Gigabyte 7600GTs for less than the cost of a fanned X1900XTX, without the extra cost and hassle of modding it.

jaganath
Posts: 5085
Joined: Tue Sep 20, 2005 6:55 am
Location: UK

Post by jaganath » Tue Aug 22, 2006 7:05 am

I suspect that comparison is flattered by the use of the X1900XTX, one of the most power-sucking cards on the market. Surely it makes more sense to compare two 7600GTs with one 7900GTX?

Also, I was under the impression that the 7600GT used 51W at 3D load?

VRZone NVIDIA 7600GT, 7900GT & 7900GTX Reviews

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Tue Aug 22, 2006 8:01 am

jaganath wrote:I suspect that comparison is flattered by the use of the X1900XTX, one of the most power-sucking cards on the market. Surely it makes more sense to compare two 7600GT's with one 7900GTX?
I found the X1900XTX a useful reference point as it was used in the SPCR review, and the fact that it’s a power hog makes it useful for looking at worst-case scenarios. Who knows how hot DX10 cards will be, so looking at silent VGA cooling from another perspective could become more or less of an issue when they are released.

In this context I don’t see the point of comparing with a 7900GTX, as it seems to consume little enough power that cooling it silently shouldn’t be an issue.
jaganath wrote:Also, I was under the impression that the 7600GT used 51W at 3D load?
The VRZone figures vary quite a lot across the board from the Xbitlabs review, which throws my whole supposition up in the air. I guess I’ll just have to buy a couple of GTs and give ‘em a go. :shock: Not that I have the equipment to measure discrete voltages. What do you actually need to do that? On second thoughts, don't encourage me. :roll:

st_o_p
Posts: 6
Joined: Tue Aug 22, 2006 8:10 am

X1900XT can be cooled passively by Condor

Post by st_o_p » Tue Aug 22, 2006 8:22 am

I just wanted to say that I have a totally passive system:
Conroe E6700 cooled by Ninja (no fan)
Radeon X1900XT cooled by Condor
Silentmaxx Passive power supply 400W
in a P180 case with all fans removed

The trick is I keep the side panel of the case open (cheating, I know, but it's silent). I have a passive MB (Asus P5W DH) and two 150GB Raptors in enclosures (Scythe) in the upper 5.25" bays of the case.

Basically my temps usually max out at 65C for the CPU and 85C for the video card - after several hours of gaming. I've tried Oblivion on heavy settings as well as Far Cry and Doom 3 with video settings maxed out. The GPU never reached 90C (ambient room temps have been around 78-82F / ~26-28C lately).

I didn't even undervolt anything - I tried playing with ATI Tool but found that my temps don't go down at lower voltage, so I just left the card at default (managed by Catalyst).

How about that?

peteamer
*Lifetime Patron*
Posts: 1740
Joined: Sun Dec 21, 2003 11:24 am
Location: 'Sunny' Cornwall U.K.

Re: X1900XT can be cooled passively by Condor

Post by peteamer » Tue Aug 22, 2006 8:55 am

st_o_p wrote:How about that?
8)

Devonavar
SPCR Reviewer
Posts: 1850
Joined: Sun Sep 21, 2003 11:23 am
Location: Vancouver, BC, Canada

Post by Devonavar » Tue Aug 22, 2006 1:17 pm

smilingcrow wrote:2. How reliable are the power consumption figures for GPUs from sites such as Xbitlabs?
This is a question that has been on my mind a lot recently. It applies to CPU figures as well as GPU figures. I think smilingcrow has it right: They're not terribly reliable, and they are certainly not representative of typical usage.

I think this issue has already been dealt with, at least where cooling is concerned. SPCR has known for years that CPUBurn is one of the heaviest loads that you can use to stress a CPU; realistic, non-benchmark loads do not eat up as much power or generate as much heat. The purpose of using CPUBurn is to test heatsinks in a worst-case scenario. If a heatsink can handle a processor running CPUBurn, it will be fine for other strenuous activities. In addition, it is such a simple load that it makes testing easily repeatable — not something you can say about "typical" usage scenarios.

Knowing the maximum power draw has a similar use in the context of GPUs — it helps single out worst-case scenarios for testing. It was Xbitlabs' testing that helped me recognize that the stress-test that we were using (ATI Tool) wasn't stressing the X1900XTX enough to trust it as a benchmark.

However, when it comes to comparing different GPUs (or CPUs for that matter) with each other, I don't know that I can think of any feasible test method that I would trust. Using the same utility to test all processors (as we have been doing for some time) is less than ideal because different chips have different characteristics — the most stressful load is different for every chip. If we're serious about comparing a maximum power figure (vs. a typical figure), we need to use a customized utility for every processor.

Let me talk about CPUs for a bit, since it will allow me to use CPUBurn as an example. One of the nice things about CPUBurn is that it includes different utilities for P6 (P-III?) vs K6 architectures. The fact that Intel and AMD's later chips were built on these earlier architectures has helped CPUBurn remain a useful tool, out of date as it is. Subsequent testing of the utility, which showed that no greater load could be found, confirmed its usefulness.

However, with the release of Conroe, which is a significantly different architecture, things have changed. 2xCPUBurn no longer even approaches the heaviest load it is possible to put on the Conroe. Our testing showed it to dissipate about 35W — unbelievable on a chip with a TDP of 65W. To hit 65W, I would guess that we need a utility that simultaneously loads all of the chip's SSE units, plus occupies all 4MB of cache. As far as I know, this utility does not exist. Real-world encoding applications may come close, but ultimately they're not easily repeatable and are unlikely to be optimized to use 4MB of cache.
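
To illustrate the shape of the load I have in mind (a sketch of the idea only, not the missing utility itself; whether something like this actually approaches maximum power is exactly the open question):

Code:

# Sketch only: one vectorised FP loop per core over a cache-sized
# working set. numpy's array ops are compiled with SSE, and splitting
# the 4MB shared L2 between two workers keeps the data on-die.
# Not a validated maximum-power load like CPUBurn.
import numpy as np
from multiprocessing import Process

def burn(working_set_bytes):
    a = np.random.rand(working_set_bytes // 8)   # float64 elements
    while True:                                  # run until killed
        a *= 0.999999                            # in-place SSE multiply
        a += 1e-6                                # keeps values bounded near 1.0

if __name__ == "__main__":
    cores, l2_bytes = 2, 4 * 1024 * 1024         # Conroe: 2 cores, 4MB shared L2
    workers = [Process(target=burn, args=(l2_bytes // cores,)) for _ in range(cores)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()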

So, not only do we not have a utility that we can trust to give us maximum power dissipation on Conroe, but the usefulness of such a utility is less than it has ever been. Why should maximum load matter now when
a) typical usage is so far below it, much lower than it has been with Intel's previous processors if CPUBurn is anything to judge by, and
b) the heatsink market is filled with products designed to cool twice the TDP of the Conroe.

It seems to me that the maximum power approach is really only useful when judging what is needed to cool a processor, and, given the two points I just made above, it is less relevant now than ever before. When comparing heatsinks, it is really only necessary to maintain the same chip/load combination for all heatsinks tested; maximum power is not a necessity in that case.
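
To put that concretely: with a fixed chip/load combination, a simple degrees-per-watt comparison ranks the heatsinks, and the absolute power level drops out of the comparison. A toy illustration with made-up numbers:

Code:

# Toy illustration: ranking heatsinks by thermal rise per watt (C/W).
# All numbers are made up; only the fixed chip/load matters.
load_watts = 35.0          # whatever the fixed test load dissipates
readings = {               # heatsink -> (CPU temp C, ambient temp C)
    "heatsink A": (48.0, 22.0),
    "heatsink B": (55.0, 23.0),
}
for name, (t_cpu, t_amb) in readings.items():
    print(f"{name}: {(t_cpu - t_amb) / load_watts:.2f} C/W")
# To a first approximation the ranking holds at any load,
# as long as the load is identical across tests.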

If your interest is power itself (i.e. energy efficiency, not cooling), then maximum power is of little interest, since no modern chips run at full load all the time (I have a GeForce 3 that seemed to ... but I haven't seen any more recent examples).

For efficiency, it is far more useful to come up with a ballpark "typical" figure; for most people this is probably very close to the power consumption at idle. However, trying to test for typical usage is an exercise in futility. Nobody actually has "typical" usage (no family has 2.3 kids), and discovering and trying to characterize how different usage patterns vary from "typical" would require a wide-scale study. Even if such a study were done, the practical use of the information seems very low to me — making use of the information would require a close study of one's personal computing habits... and how many people do you know who could/would do that?

It seems to me that GPUs are very much in the same position as CPUs. nVidia's and ATI's chips have different power profiles, and, even if a steady state "maximum power" load could be found for each of their various processors, gaming is such a dynamic activity that such a load would be useful only for testing coolers, not judging differences in power consumption.

What does all this mean? I suppose part of me is defending my article; the results of our test are valid, because the same GPU/load combination was used for several tests, providing a wealth of valid comparison points that show that the Condor is potentially very good, but is heavily dependent on airflow. I don't think anyone here will dispute that.

I think some of you have been complaining because I didn't give you a yes-or-no answer to the question of whether the Condor can handle an X1900XTX. I say: Tough. There is no simple answer to this question. There are too many variables to say for sure, not just in the airflow setup but in how the card is used. If you don't use SM3.0 in any of your games, then chances are the Condor (and the X1900XTX) is overkill; the card almost doubled its power consumption when SM3.0 was in use.

Heatsink reviews are about comparing heatsinks, not graphics cards. If I had set out to review the X1900XTX, then I would have needed to give a firmer answer... and then my answer would have been no, it can't be cooled passively. There are too many possible scenarios where something can go wrong. However, for this review, I didn't need to do that. What I needed was to create a comparison of the Condor with its competition, and I did that, with many permutations to show how variable its performance can be.

The debate about power consumption, while very interesting and very relevant, is meaningless in a review of a heatsink. Ditto smilingcrow's comment about safe temperatures for a GPU. What matters is not how the Condor did with the specific cards that we tested, but how it did compared to other heatsinks.

I realize that most people use our reviews to make decisions about what is needed to cool a specific product. However, no matter how thorough our review is, unless we test the card in exactly the same system as yours, we can't eliminate the gamble that you take when buying a product. Use common sense and you should probably do OK, but there will always be circumstances where, no matter how good a product is, your gamble fails: It's not the right product for you. We can't anticipate or answer every question about the Condor. What we can do is provide you with some information to make a decision for yourself. If our message is mixed, take that as a warning, but make your own decision. We can't do that for you.

Rusty075
SPCR Reviewer
Posts: 4000
Joined: Sun Aug 11, 2002 3:26 pm
Location: Phoenix, AZ

Post by Rusty075 » Tue Aug 22, 2006 2:51 pm

I think Devon raises some very important points that should be discussed more fully. To facilitate that, without filling this thread with OT posts (with respect to the Condor), I started a new thread CPU/GPU/HSF Thermal Testing

Let's continue all non-Condor-specific discussion over there.

roadie
Posts: 166
Joined: Fri May 12, 2006 2:07 am
Location: Liverpool, UK

Post by roadie » Wed Aug 23, 2006 2:11 am

I greatly enjoyed the Condor review. However, if you were to cool an X1900 passively, I think you would need to build around it from the outset. Using the Condor in a different orientation would have been one of the things needed.

It is possible to cool the X1900 well with a very low level of noise, and I am proud of my setup, although not of the cost at which it was achieved.

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Wed Aug 23, 2006 5:09 am

Devonavar wrote:The debate about power consumption, while very interesting and very relevant, is meaningless in a review of a heatsink. Ditto smilingcrow's comment about safe temperatures for a GPU. What matters is not how the Condor did with the specific cards that we tested, but how it did compared to other heatsinks.
If you had purely been testing the heatsink on a simulated test bed, then I would agree with you.
But since you were testing a heatsink on a GPU installed in a closed system and reporting power consumption and temperature levels as part of the review, surely these are relevant. Otherwise, why include this data in the review in the first place?

From a silence perspective, the amount of heat a GPU outputs, the power it consumes and the temperature range that it’s safe to run at can have enormous ramifications for cooling it silently. E.g.

It can take only a small amount of extra system power consumption to push a power supply across a threshold so that its fan speed ramps, at which point it is no longer silent enough for the user.
The difference between a GPU that is safe at 110C and one that is safe at 130C can lead to the difference between quiet case cooling and case cooling that is louder than ideal.

This isn’t meant to be a criticism, as I thought the review was fine. Putting together a complete test that looks at all these parameters with even one heatsink would be exhausting. You would need to look at multiple GPUs from both ATI & NVidia, and even dual VGA solutions if you wanted to be absolutely thorough.
Including multiple heatsinks would add yet more time. Although, once you’ve run the tests with the initial heatsink and determined the nature of each GPU, you could then run tests on future heatsinks using maybe only 2 to 4 GPUs, to show how they perform when installed on both a mid-high end card and a top end card.

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Wed Aug 23, 2006 9:10 am

jaganath wrote:I was under the impression that the 7600GT used 51W at 3D load?

VRZone NVIDIA 7600GT, 7900GT & 7900GTX Reviews
It’s a shame that neither VRZone nor XBitlabs showed system power figures alongside their figures for GPU wattage, as that would possibly allow us to determine which of them had produced the more accurate wattage figures.
Unless they both produced accurate VGA wattage figures under very different conditions, which does seem unlikely.
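
If they had, a rough cross-check from the wall figures would be possible, something like this (all numbers made up for illustration):

Code:

# Hypothetical cross-check: back out a card's load-minus-idle DC draw
# from wall (AC) readings. Every number here is invented.
ac_gaming      = 230.0   # wall draw while gaming (W)
ac_desktop     = 150.0   # wall draw at the desktop (W)
psu_efficiency = 0.80    # assumed; varies with load in reality
card_delta_dc = (ac_gaming - ac_desktop) * psu_efficiency
print(f"Load-minus-idle draw attributable to 3D: ~{card_delta_dc:.0f}W DC")
# ~64W here. Caveat: the CPU also works harder in 3D, so this
# overstates the card's own share of the increase.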

Longwalker
Posts: 53
Joined: Sun Aug 06, 2006 2:35 pm

Post by Longwalker » Thu Aug 24, 2006 8:55 am

Did they really anodize the heatsink? Anodization turns the surface into a thermal insulator--really not the kind of thing one wants in a heatsink.

jaganath
Posts: 5085
Joined: Tue Sep 20, 2005 6:55 am
Location: UK

Post by jaganath » Thu Aug 24, 2006 10:01 am

Longwalker wrote:Did they really anodize the heatsink? Anodization turns the surface into a thermal insulator--really not the kind of thing one wants in a heatsink.
This has been discussed before: anodization also increases emissivity. Whether anodization improves overall cooling depends on how much the emissivity is increased and how thin the insulating layer is.
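
A back-of-envelope radiation estimate shows why it can still be a net win; the area, temperatures and emissivities below are all assumed values:

Code:

# Stefan-Boltzmann estimate of radiated power from a heatsink surface.
# Area, temperatures and emissivities are assumptions for illustration.
SIGMA = 5.67e-8                            # W/(m^2 K^4)
area  = 0.05                               # assumed effective radiating area, m^2
t_sink, t_amb = 273.15 + 70, 273.15 + 35   # 70C surface, 35C surroundings

def radiated_watts(emissivity):
    return emissivity * SIGMA * area * (t_sink**4 - t_amb**4)

for finish, e in (("bare aluminium", 0.05), ("black anodised", 0.80)):
    print(f"{finish} (e={e}): {radiated_watts(e):.1f}W")
# ~0.7W bare v ~11W anodised with these assumptions; the emissivity
# gain dwarfs the extra conduction resistance of the thin oxide layer.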

stigmata
Posts: 32
Joined: Sun Apr 18, 2004 5:30 am
Location: Merate - ITALY

Post by stigmata » Wed Jan 31, 2007 1:47 am

Hi all
Very interesting article and discussion here.

I'm thinking about using this cooler for my X1900GT video card, with a nearby 120mm Nexus fan @ 5V (@12V during gameplay) blowing at it. Would this be OK? If not, what could I try? My VF900 @ 5V is too noisy for me...

According to Xbitlabs, my video card should be 75W at peak:
From here: http://www.xbitlabs.com/articles/video/ ... ise_4.html

Thank you all! :)

cmthomson
Posts: 1266
Joined: Sun Oct 09, 2005 8:35 am
Location: Pleasanton, CA

Post by cmthomson » Wed Jan 31, 2007 8:21 am

stigmata wrote: I'm thinking about using this cooler for my X1900GT video card, with a nearby 120mm Nexus fan @ 5V (@12V during gameplay) blowing at it. Would this be OK? If not, what could I try? My VF900 @ 5V is too noisy for me...

According to Xbitlabs, my video card should be 75W at peak
You should be fine. My overvolted, overclocked (95W) 7900 card is adequately cooled by a Condor and a not-very-nearby Nexus at 600 RPM (about 6V).

s_xero
Posts: 154
Joined: Sun Sep 10, 2006 2:56 pm

Post by s_xero » Wed Jan 31, 2007 8:55 am

Mike,
I've got a 6800GT :D

I'm only thinking about the 6800XT...
though of course the XT is quite a bit worse in frame-rate performance.

What is the difference between the two in wattage?

Maybe I'm interested, if the 6800XT has a better performance/watt ratio. My 6800GT has no stock cooler anymore... but I guess you've got so many aftermarket coolers there that it's no problem :P

It's also possible that I'll buy a new graphics card, for better performance on my 22-inch monitor... you need GPU power to run at high res.

srogers
Posts: 1
Joined: Sun Sep 23, 2007 4:47 pm

Condor Cooler - My experience after ~10 months of use

Post by srogers » Sun Sep 23, 2007 6:47 pm

Hi,

I am newly registered to SPCR. I have lurked here for some time in my quest to build a silent PC, and I would like to first thank all the SPCR contributors and forum participants. Without the information I learned here, I might never have built a quiet computer.

I would like to post my experience with the Condor cooler after ~ 10 months of use.

System summary:

CPU - Core Duo x6400 - OC to 3.4 GHz
Memory - 2 GB - OC to 425 MHz FSB
Sound Card - Creative Audigy 2 ZS
Power Supply - Seasonic SS550HT
Graphics - Sapphire ATI x1900GT - Not OC, as this card doesn't oc very well
Case - Antec P180
Cooling - 3 120mm fans at ~5-7 volts (depends on case temp)


Note that my system was based on the Core Duo DIY system that Chris Thomson wrote about. It is compartmentalized in a very similar manner, and uses one fan to blow the CPU heat out, one for general exhaust, and one fan to cool the video/sound card compartment. I did not use a fanless power supply, so the loudest component is... you guessed it, the power supply.

About 1 month ago, I started to experience random freezes while gaming. At first it only occurred every few days, but about two weeks ago it started happening every few hours, and about 1 week ago it became quite frequent - sometimes as many as 3-4 times an hour. After tearing down the system quite a few times over the last few days, running benchmarks trying to duplicate the issue, reconfiguring the cooling, reconfiguring the sound, stopping the overclocking, etc., I finally traced it to the graphics card overheating!!!!

Now, it's not the Condor that's the issue. The Condor is a wonderful cooler that works as well as the stock cooler, but without a loud fan. I can't call it fanless, as I have a Nexus 120mm blowing air across its massive fins, but I can't hear the Nexus. The Condor gives me temps in the mid 50s while gaming, which is what the stock cooler would do with its fan at 100%. In other words, you could hear the computer in the next room with the stock fan.

So, the GPU isn't overheating, and the graphics chips have heatsinks applied (probably not needed, actually), so they shouldn't be the issue. And they are not.

The problem is with 3 small chips, approx. 5mm square, that are located at the back of the board. Two are close together at the top of the board and the other is near the PCIe connector side. I don't know their function (switching power converters?) as I didn't record/look up the designators when I was messing with the board, but these suckers get HOT. Really HOT. Did I mention they get HOT?! Apparently over the last 10 months they have finally been damaged enough by not being sufficiently cooled by the airflow that they are causing the video card to stop functioning when they warm up.

I went and got the original heatsink out and looked at it, and behold, there are actually small thermal interfaces for these chips that I had overlooked. By not looking at the original heatsink carefully, I missed that these are probably critical components that need to be cooled. I reinstalled the original heatsink and the problem has not recurred. Of course, I can't hear myself think with the stock cooler in place.

I am getting additional heatsinks for these chips. Of course, the 3 small chips are close to some taller components, so I have to figure out how to fit the heatsinks, but it's not rocket science. Once these heatsinks are ready, I will re-install the Condor and return to my quiet computer configuration.

I just thought I would share this with y'all so no one else makes the same mistake I did. I can only hope that the card continues to function for some time, as I don't want to (and can't afford to) replace the graphics card at this time.

Thanks again for all the help I got at SPCR.

cmthomson
Posts: 1266
Joined: Sun Oct 09, 2005 8:35 am
Location: Pleasanton, CA

Re: Condor Cooler - My experience after ~10 months of use

Post by cmthomson » Mon Sep 24, 2007 11:22 am

srogers wrote:The problem is with 3 small chips, approx. 5mm square, that are located at the back of the board. Two are close together at the top of the board and the other is near the PCIe connector side. I don't know their function (switching power converters?) as I didn't record/look up the designators when I was messing with the board, but these suckers get HOT. Really HOT. Did I mention they get HOT?!
Those are indeed MOSFETs for the onboard VRM that outputs the GPU core voltage. On many video cards they get very hot. I've burnt out two TV cards because their VRMs cooked either themselves or nearby components, even though I had stuck RAMsinks on them.
Thanks again for all the help I got at SPCR.
Welcome.
