2. How reliable are the power consumption figures for GPUs from sites such as Xbitlabs?
This is a question that has been on my mind a lot recently. It applies to CPU figures as well as GPU figures. I think smilingcrow has it right: They're not terribly reliable, and they are certainly
not representative of typical usage.
I think this issue has already been dealt with, at least where cooling is concerned. SPCR has known for years that CPUBurn is one of the heaviest loads that you can use to stress a CPU; realistic, non-benchmark loads do not eat up as much power or generate as much heat. The purpose of using CPUBurn is to test heatsinks in a worst-case scenario. If a heatsink can handle a processor running CPUBurn, it will be fine for other strenuous activities. In addition, it is such a simple load that it makes testing easily repeatable, which is not something you can say about "typical" usage scenarios.
Knowing the maximum power draw has a similar use in the context of GPUs: it helps single out worst-case scenarios for testing. It was Xbitlabs' testing that helped me recognize that the stress test we were using (ATI Tool) wasn't stressing the X1900XTX enough to trust it as a benchmark.
However, when it comes to comparing different GPUs (or CPUs, for that matter) with each other, I can't think of any feasible test method that I would trust. Using the same utility to test all processors (as we have been doing for some time) is less than ideal because different chips have different characteristics; the most stressful load is different for every chip. If we're serious about comparing maximum power figures (as opposed to typical figures), we need a customized utility for every processor.
Let me talk about CPUs for a bit, since it will allow me to use CPUBurn as an example. One of the nice things about CPUBurn is that it includes separate utilities for the P6 (P-III?) and K6 architectures. The fact that Intel's and AMD's later chips were built on these earlier architectures has helped CPUBurn remain a useful tool, out of date as it is. Subsequent testing, which never turned up a heavier load, confirmed its usefulness.
However, with the release of Conroe, which is a significantly different architecture, things have changed. 2xCPUBurn no longer comes close to the heaviest load it is possible to put on a Conroe. Our testing showed it to dissipate only about 35W, unbelievably low for a chip with a TDP of 65W. To hit 65W, I would guess that we need a utility that simultaneously loads all of the chip's SSE units and occupies all 4MB of cache. As far as I know, no such utility exists. Real-world encoding applications may come close, but ultimately they're not easily repeatable and are unlikely to be optimized to use 4MB of cache.
So, not only do we lack a utility that we can trust to give us maximum power dissipation on Conroe, but the usefulness of such a utility is lower than it has ever been. Why should maximum load matter now when
a) typical usage falls so far below it (much lower, if CPUBurn is anything to judge by, than with Intel's previous processors), and
b) the heatsink market is filled with products designed to cool twice Conroe's TDP?
It seems to me that the maximum power approach is really only useful when judging what is needed to cool a processor, and, given the two points I just made above, it is less relevant now than ever before. When comparing heatsinks, it is really only necessary to maintain the same chip/load combination for all heatsinks tested; maximum power is not a necessity in that case.
If your interest is power itself (i.e. energy efficiency, not cooling), then maximum power is of little interest, since no modern chips run at full load all the time (I have a GeForce 3 that seemed to ... but I haven't seen any more recent examples).
For efficiency, it is far more useful to come up with a ballpark "typical" figure; for most people this is probably very close to the power consumption at idle. However, trying to test
for typical usage is an exercise in futility. Nobody actually has "typical" usage (no family has 2.3 kids), and discovering and characterizing how different usage patterns vary from "typical" would require a wide-scale study. Even if such a study were done, the practical use of the information seems very low to me; making use of it would require a close study of one's personal computing habits... and how many people do you know who could or would do that?
It seems to me that GPUs are very much in the same position as CPUs. nVidia's and ATI's chips have different power profiles, and, even if a steady state "maximum power" load could be found for each of their various processors, gaming is such a dynamic activity that such a load would be useful only for testing coolers, not judging differences in power consumption.
What does all this mean? I suppose part of me is defending my article; the results of our test are valid because the same GPU/load combination was used across several tests, providing a wealth of valid comparison points. They show that the Condor is potentially very good, but heavily dependent on airflow. I don't think anyone here will dispute that.
I think some of you have been complaining because I didn't give you a yes-or-no answer to the question of whether the Condor can handle an X1900XTX. I say: Tough. There is
no simple answer to this question. There are too many variables to say for sure, not just in the airflow setup but in how the card is used. If you don't use SM3.0 in any of your games, then chances are the Condor (and the X1900XTX) is overkill; the card almost doubled its power consumption when SM3.0 was in use.
Heatsink reviews are about comparing heatsinks, not graphics cards. If I had set out to review the X1900XTX, then
I would have needed to give a firmer answer... and then my answer would have been no, it can't be cooled passively. There are too many possible scenarios where something can go wrong. However, for this review, I didn't need to do that. What I needed was to compare the Condor with its competition, and I did that, with many different permutations to show how variable its performance can be.
The debate about power consumption, while very interesting and very relevant, is meaningless in a review of a heatsink. Ditto smilingcrow's comment about safe temperatures for a GPU. What matters is not how the Condor did with the specific cards that we tested, but how it did compared to other heatsinks.
I realize that most people use our reviews to make decisions about what is needed to cool a specific product. However, no matter how thorough our review is, unless we test the card in exactly the same system as yours, we can't eliminate the gamble that you take when buying a product. Use common sense and you should probably do OK, but there will always be circumstances where, no matter how good a product is, your gamble fails: it's not the right product for you.
We can't anticipate or answer every question about the Condor. What we can do is provide you with some information to make a decision for yourself. If our message is mixed, take that as a warning, but make your own decision. We can't do that for you.