Utilities like that are useful if you are testing a CPU, but CPU errors are meaningless if you are testing a heatsink. For a heatsink the only things that matter are °C/W and dBA. Now if it was a CPU review where you were testing overclocking, underclocking, undervolting abilities, etc., then error checking would be an important part of the methodology. But to a heatsink, the only thing that matters is its input wattage. Think about it this way... suppose in our testing our CPU had Prime95 errors while running with a Brand X heatsink. Is that result meaningful? No, because CPUs are individually unique... just because ours had errors doesn't mean yours will, and the heatsink probably had nothing to do with it.
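For anyone who wants the metric spelled out: °C/W is just temperature rise above ambient divided by heat load, so computing it is a one-liner. A minimal sketch in Python (the example numbers are made up):

# Thermal resistance of a heatsink: temperature rise per watt (lower is better)
def c_per_w(cpu_temp_c: float, ambient_temp_c: float, heat_load_w: float) -> float:
    return (cpu_temp_c - ambient_temp_c) / heat_load_w

# Made-up example: 48 °C core, 23 °C ambient, 100 W heat load
print(c_per_w(48.0, 23.0, 100.0))  # -> 0.25 °C/W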
Agreed, I didn't intend to imply otherwise; I was simply stating that Prime95 is a useful tool for testing stability when undervolting.
The idea of using "real" apps to get typical wattage information seems like the right track, but even that is full of pitfalls. Those apps will typically run as fast as the CPU will let them, so it doesn't really help us come up with a single "typical" wattage to use for testing. Do you test with a wattage that equals the fastest CPU out there, or the median? I think an easier way would be to pick one arbitrary wattage to test heatsinks at, and then allow the community to compile a "typical wattage" list for actual CPUs. Then a user could look at what their individual wattage is in comparison to the "testing" wattage and extrapolate what the temps would be in their system if they bought that heatsink.
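To make that extrapolation concrete, here's a rough sketch of what a reader could do with a published °C/W figure; the wattages, ambient, and °C/W value below are all hypothetical:

# Extrapolate from a heatsink's tested °C/W rating to your own CPU's wattage.
# All of these numbers are hypothetical placeholders.
TESTED_C_PER_W = 0.25  # from a review run at the arbitrary test wattage
MY_CPU_WATTS = 65.0    # your CPU's "typical wattage" from the community list
MY_AMBIENT_C = 25.0    # your room/case ambient

expected_temp_c = MY_AMBIENT_C + TESTED_C_PER_W * MY_CPU_WATTS
print(f"Expected CPU temp: {expected_temp_c:.1f} °C")  # -> 41.2 °C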
I like the idea of testing at two CPU speeds: one at a midrange speed for a particular CPU range and the other at the top end, preferably beyond it via overclocking.
I think there may well be a need for this because at the higher CPU frequencies the extra VCore needed means heat output no longer scales linearly with clock speed, so extrapolating from a single data point becomes too inaccurate to be meaningful.
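The underlying reason: dynamic CPU power goes roughly as frequency times voltage squared (P ≈ C·V²·f), so a VCore bump hurts far more than the clock increase alone would suggest. A back-of-the-envelope sketch with illustrative numbers:

# Dynamic power scales roughly as P ~ C * V^2 * f, so an overclock that
# needs extra VCore raises heat output faster than frequency alone would.
def relative_power(f_new: float, f_old: float, v_new: float, v_old: float) -> float:
    return (f_new / f_old) * (v_new / v_old) ** 2

# Illustrative only: 2.13 GHz @ 1.2 V pushed to 3.2 GHz @ 1.4 V
print(relative_power(3.2, 2.13, 1.4, 1.2))  # ~2.05x the heat, vs 1.5x from clocks alone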
A remaining question is: do you undervolt when testing the mid-range CPU? Or you could run the test twice, once at stock VCore and once undervolted.
It's more work, but at least you don't have to run the audio tests for each variation as the fan speeds are constant between them.
And you only need to determine the lowest stable VCore for the particular CPU & motherboard combination once, and that information can be carried forward for other tests.
For the current C2D range, I would test at 2.13 GHz and ~3.2 GHz.
GPU testing is equivalent to CPU testing, really; you just swap "app" for "game". I think you're on the right track of using gameplay to come up with a wattage. Do the testing with a few different cards and a pattern may emerge, something to the effect of "Doom3 averages a GPU wattage that is 75% of the GPU's theoretical max, 3DMark is 95%, etc." That sort of testing you'd only have to do once, not every time you tested a new heatsink.
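Once those per-game load fractions are characterized, a test wattage for any new card falls straight out of its theoretical max. A sketch of the idea; the fractions below are invented purely for illustration:

# One-time job: measure what fraction of a GPU's theoretical max each game
# draws. The fractions here are invented purely for illustration.
GAME_LOAD_FRACTION = {"Doom3": 0.75, "3DMark": 0.95}

def test_wattage(gpu_max_watts: float, game: str) -> float:
    return gpu_max_watts * GAME_LOAD_FRACTION[game]

print(test_wattage(120.0, "Doom3"))   # -> 90.0 W
print(test_wattage(120.0, "3DMark"))  # -> 114.0 W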
There's a complication here in that ATI & NVidia use very different methods for achieving the same ends, due to their architectures being quite different. I suggest this is much more pronounced than the differences between AMD & Intel CPU designs. I say this because whereas C2D has pretty much got K8 beat across the board and by a significant margin, in the ATI versus NVidia battle they each win certain battles (games) and lose others.
I agree that this has nothing to do per se with GPU heatsink efficiency, but if you intend to test a heatsink on a VGA card installed in a case, then it should influence how you approach this test.
There's a good chance that game A is more power efficient on an ATI chip, whereas game B is more power efficient on an NVidia chip. This means that the reported GPU temps when used with a particular combination of heatsink and game may not be representative for most scenarios.
The way I see around this is to determine which game stresses a particular architecture the most and use that when testing a heatsink on a GPU built with that architecture.
The downside is that if you really want to perform real-world testing of GPU heatsinks, you need to test them on both GPU platforms using the most demanding game for each platform.
The kick in the ass for this sort of testing is that GPU technology is evolving much faster than CPU technology, and games are also evolving more quickly than typical desktop applications. So keeping on top of this would be fairly demanding.
Then there's the really murky area of how all these CPUs, GPUs & heatsinks interact within the confines of a myriad of differing PC setups.
Can of worms... can of worms indeed.
It's starting to mutate into a great big bucket of writhing worms in front of my very eyes. Aaaaaaaaaaaaaaagh.