Dual core Intel chips...

Cooling Processors quietly


Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Dual core Intel chips...

Post by Beyonder » Tue Oct 26, 2004 2:32 pm

Interesting tidbit from an Anandtech article:

http://www.anandtech.com/cpuchipsets/sh ... spx?i=2252

"As far as architecture goes, the x-series of dual core CPUs from Intel are built on the little talked-about Smithfield core. While many have speculated that Smithfield may be Banias or Dothan based, it's now clear that Smithfield is little more than two 90nm Prescott cores built on the same die. There is a requirement for a very small amount of arbitration logic that will balance bus transactions between the two CPUs, but for the most part Smithfield is basically two Prescotts.

But doesn't Prescott run too hot already? How could Intel possibly build their first dual core chip out of the 90nm beast that is Prescott? The issue with Prescott hitting higher clock speeds ends up being thermal density - too many transistors, generating too much heat, in too small of a space. Intel's automated layout tools do help reduce this burden a bit, but what's important is that the thermal density of Smithfield is no worse than Prescott. If you take two Prescotts and place them side by side, the areas of the die with the greatest thermal density will still be the same, there will simply be twice as many of them. So overall power consumption will obviously be increased by a factor of two and there will be much more heat dissipated, but the thermal density of Smithfield will remain the same as Prescott.

In order to deal with the fact that Smithfield needs to be able to run with conventional cooling, Intel dropped the clock speed of Smithfield down to the 2.8 - 3.2GHz range, from the fastest 3.8GHz Prescott that will be out at the time. The reduction in clock speed will make sure that temperatures and power consumption are more reasonable for Smithfield."



Smithfield, regardless of the clock speed drop, is going to run hot. Current Prescott cores running at 2.8 GHz put out roughly 100W max. Perhaps Intel can tweak 20% more efficiency out of them, but that still puts a dual core at 160W? :roll:

That doesn't sound "reasonable" to me--it sounds like sheer lunacy.

I honestly think Intel is shooting themselves in the foot by continuing on with the P4 architecture, which is clearly getting out of control from a thermal design standpoint. Give us the "M" already, damn it! :D

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Tue Oct 26, 2004 2:44 pm

Lastly, I'd like to add that these concluding comments in the article:

"The best way to evaluate the impact of dual core CPUs on the desktop is to look at the impact by moving to a multiprocessor setup on the desktop. The vast majority of applications on the desktop are still single threaded, thus garnering no real performance benefit from moving to dual core. The areas that we saw improvements in thanks to Hyper Threading will see further performance improvements due to dual core on both AMD and Intel platforms, but in most cases buying a single processor running at a higher clock speed will end up yielding higher overall performance. "

...are mostly rubbish. ALL games these days are multithreaded in some sense (so much so that I bet the average game has no fewer than ten threads running at once), and SMP is *not* hyperthreading. Most intensive applications are threaded to some degree, if not heavily.

As a developer, I can say the DirectX application I'm working on has 22 threads at program start. ;)
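If anyone wants to check their own apps, here's a rough, untested Win32 sketch of counting the threads in the current process (the names are just for illustration):

#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>

int main() {
    // Snapshot every thread on the system, then count the ones
    // that belong to this process.
    DWORD pid = GetCurrentProcessId();
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    THREADENTRY32 te;
    te.dwSize = sizeof(te);
    int count = 0;
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid) ++count;
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);

    printf("Threads in this process: %d\n", count);
    return 0;
}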

pony-tail
Posts: 488
Joined: Sat Aug 23, 2003 4:39 pm
Location: Brisbane AU

Post by pony-tail » Tue Oct 26, 2004 3:31 pm

Let Intel bring it on
- After one PressHot (which was very quickly removed and replaced with a Northy), my next rig will be an NForce4 Athlon 64.
I do not require less performance for MUCH more heat. I still have a couple of Athlon XPs around, so I have no preference as to which brand I buy, just their suitability for the task I set them up for.

ActionAttackJohn
Posts: 35
Joined: Fri Oct 15, 2004 8:56 pm
Location: NY

Post by ActionAttackJohn » Tue Oct 26, 2004 8:11 pm

Having used both a Northwood core Pentium 4 2.4C HT 800 and an AMD Athlon XP 2500+ on my current setup, I will tell you that HyperThreading works much better in everyday tasks than it does on the charts and graphs.

Going between two very similar CPUs, with the same amount of RAM and the same everything else, the difference is amazing.

For example, I'm currently running the following applications:

1) AIM
2) Mozilla
3) iTunes
4) SmartFTP
5) Bit Torrent #1
6) Bit Torrent #2
7) SoulSeek
8) Adobe Premiere

With my AMD setup, multitasking was very cumbersome. With the P4, I can run this number of apps and many more, no problem. Dual core CPUs are the wave of the future.

sthayashi
*Lifetime Patron*
Posts: 3214
Joined: Wed Nov 12, 2003 10:06 am
Location: Pittsburgh, PA

Post by sthayashi » Tue Oct 26, 2004 8:53 pm

Beyonder wrote:"The best way to evaluate the impact of dual core CPUs on the desktop is to look at the impact by moving to a multiprocessor setup on the desktop. The vast majority of applications on the desktop are still single threaded, thus garnering no real performance benefit from moving to dual core. The areas that we saw improvements in thanks to Hyper Threading will see further performance improvements due to dual core on both AMD and Intel platforms, but in most cases buying a single processor running at a higher clock speed will end up yielding higher overall performance. "
See, my take on this is.... about damn time.

I hate taking a performance hit if an application hangs or if I'm doing a lot of stuff. And worse, I hate having to pay a major premium for an SMP setup. The Athlon MP/XP is/was the last budget-level SMP option. Those wanting to upgrade have to decide between Xeon and Opteron. (Really, it was the XP that was budget, since they could be hacked into SMP configuration, but my complaint still stands.) P4 w/ HT was a step in the right direction. Dual core will probably be even better.

Putz
Posts: 368
Joined: Thu Aug 21, 2003 1:25 am
Location: Ottawa, Canada

Post by Putz » Tue Oct 26, 2004 9:48 pm

So, since Prescotts have HyperThreading, does that mean one could run four instances of Folding@Home? :lol:

EDIT: Just read the article, and it says that HT is disabled in the dual-core implementation. I find myself asking why... for more room to grow (and charge more) later on?

tragus
*Lifetime Patron*
Posts: 356
Joined: Fri Apr 18, 2003 11:19 am
Location: Baltimore, MD

Post by tragus » Wed Oct 27, 2004 7:18 am

ActionAttackJohn wrote:Having used both a Northwood core Pentium 4 2.4C HT 800 and an AMD Athlon XP 2500+ on my current setup, I will tell you that HyperThreading works much better in everyday tasks than it does on the charts and graphs.
See also my own real-world comparison between reasonably high-end AMD and Intel HT systems in this post and a few following. I would still like to hear the results from anyone who runs my challenge test on an AMD system.

Laurent
Posts: 25
Joined: Tue Jun 15, 2004 7:14 pm

Post by Laurent » Wed Oct 27, 2004 9:50 am

I have been using Windows systems for a while, and it has always been the case that when there is a CPU-intensive task in the background, the GUI grinds to a halt. I remember the days of NT 3.51 servers that were serving files and other stuff just fine even under massive load, but had a frozen GUI, with the mouse cursor moving only every few seconds.

On the desktop, with the OS tuned for better foreground app responsiveness, it is not that bad, but it is still the case that when there is a CPU-intensive task in the background, the GUI slows down a lot.

I have always bought dual-processor machines just to fix this particular software issue. It has not been easy recently with MP boxes limited to the high-end. So when it was time to upgrade my hardware this year, I went for a P4 2.8C with HT, just for the multi-threading part, even though the K8 looked like a better power/performance solution.

I'll be glad when they sell dual cores. With the usual chicken-and-egg issue, it will be the final incentive for software makers to think about their threading strategy. MP machines and MP-capable Windows versions have been around forever, but lots of software still can't use MP efficiently. Hopefully, having manufacturers go to mandatory MP will fix that once and for all.

Laurent

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Wed Oct 27, 2004 4:13 pm

I agree with most of what everyone here is saying--but the point of this wasn't really to bring up threading issues. Assuming Intel somehow manages to optimize the Prescott core by 20% in terms of thermal output, a dual core chip at 2.8 GHz is going to dump out 160W at full load. If they can't milk out a 20% increase in efficiency, then the total output for a dual core at 2.8 GHz is going to be a hair over 200W.

At outputs like that, it's going to be quite difficult for people to have a quiet machine without very good heatsinks, compartmentalized cases, and elaborate cooling plans.




Hyperthreading on the dual cores is probably going to be disabled, since XP doesn't support more than two cores. If you had two HT-enabled cores, that's four logical processors, and for that you need a server version of Windows. Rumor has it that Intel is negotiating with MSFT, but... it's just a rumor.
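For reference, this is how the OS counts it -- a quick, untested Win32 sketch that prints the number of logical processors Windows thinks it has:

#include <windows.h>
#include <cstdio>

int main() {
    // dwNumberOfProcessors counts *logical* processors, so a dual
    // core chip with HT enabled would report 4 here.
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("Logical processors visible to Windows: %lu\n",
           (unsigned long)si.dwNumberOfProcessors);
    return 0;
}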

As for multitasking on AMD versus Intel systems, I don't think it's likely people will see a difference between a regular processor and a hyperthreaded one. A lot of it depends on what applications are being run, as hyperthreading can result in overall performance decreases depending on the programs in question. One thing should be clear: hyperthreading is *not* SMP, and it's too bad Anandtech seems to be confusing the two a bit. Hyperthreading benchmarks are not going to be any sort of indication of how a dual core system will perform.

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Wed Oct 27, 2004 4:24 pm

tragus wrote:
ActionAttackJohn wrote:Having used both a Northwood core Pentium 4 2.4C HT 800 and an AMD Athlon XP 2500+ on my current setup, I will tell you that HyperThreading works much better in everyday tasks than it does on the charts and graphs.
See also my own real-world comparison between reasonably high-end AMD and Intel HT systems in this post and a few following. I would still like to hear the results from anyone who runs my challenge test on an AMD system.
Tragus, I read your account. The fact that you would manually have to tweak an application's priority means the people who made it didn't really think about the implications of having a huge CPU gobbling thread running on a system. In both C++ and .NET environments, it's really easy to tweak an application's priority. You can also schedule individual threads with a given priority. It just sounds like poor design to me. Folding@home is sort of like this: they give the application a very low priority so that other threads coming in trump the F@H core's priority.
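On the Win32 side it's literally a couple of calls -- rough, untested sketch (the .NET equivalent would be something like Process.PriorityClass):

#include <windows.h>

int main() {
    // Drop the whole process below normal-priority apps...
    SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);

    // ...and push this thread (pretend it's the number cruncher) down
    // to idle, so it only runs when nothing else wants the CPU --
    // more or less the effect F@H goes for.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_IDLE);

    // ... heavy work loop would go here ...
    return 0;
}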

HT helped in this instance, but it very well could have been the opposite. Because resources in HT are shared, it's possible that the CPU-heavy thread has too large a footprint in one of the CPU's caches--in that case, the cache eviction needed for another thread to come in and run results in a performance hit.

But, in this particular app, it sounds like it paid off.

silvervarg
Posts: 1283
Joined: Wed Sep 03, 2003 1:35 am
Location: Sweden, Linkoping

Post by silvervarg » Thu Oct 28, 2004 7:28 am

The fact that you would manually have to tweak an application's priority means the people who made it didn't really think about the implications of having a huge CPU gobbling thread running on a system.
Well, it is not an application issue as I see it. When running Linux on a single processor machine with a CPU-intensive application, I do not experience the GUI becoming sluggish. So Linux works fine with just a single CPU.
Windows, on the other hand, does not cope well with a high-CPU-load application running. So if there is a priority that is wrongly set in Windows, it is not the applications' priority; it is the GUI itself that should have higher priority.

HyperThreading or dual cores or both are nice, but this is really a software problem in Windows that we need a hardware fix for. Absurd!
A software problem should be fixed with a software fix.

Now back to the original issue of Intel's announcement.
Originally Intel announced that they would not release any dual-core CPUs during 2005. AMD announced that they would release their first dual-core CPU by the middle of 2005.
That would put Intel at least 6 months, probably 9 months, behind AMD in putting out a dual-core CPU. As a business, this would have been a disaster for Intel.
So, what could Intel do to fix this?
My guess is that a dual-core version of the Prescott was what Intel could produce fastest, so that is what they are announcing now. I think this is just a necessary intermediate step to gain about 6 months before they can release dual-core CPUs based on other cores.

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Thu Oct 28, 2004 8:31 am

silvervarg wrote:
The fact that you would manually have to tweak an application's priority means the people who made it didn't really think about the implications of having a huge CPU gobbling thread running on a system.
Well, it is not an application issue as I see it. When running Linux on a single processor machine with a CPU-intensive application, I do not experience the GUI becoming sluggish. So Linux works fine with just a single CPU.
Windows, on the other hand, does not cope well with a high-CPU-load application running. So if there is a priority that is wrongly set in Windows, it is not the applications' priority; it is the GUI itself that should have higher priority.
The scheduling on both operating systems is very similar. More likely is that on Linux, the people who wrote your programs got their priorities right in the first place; both systems allow programmers to put their applications in a real-time priority class and to muck with the priorities of their applications.

As for Windows not coping with high CPU loads--what about something like Folding? That's an example of someone properly using thread priorities to make full use of spare cycles without bogging down the UI.
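A Folding-style setup looks roughly like this (untested sketch; the worker function is just a made-up stand-in, not F@H's actual code):

#include <windows.h>

// Hypothetical stand-in for a folding-style work unit.
static DWORD WINAPI CrunchWorker(LPVOID) {
    volatile double x = 0.0;
    for (;;) { x += 1.0; }   // burn every spare cycle, forever
    return 0;                // never reached
}

int main() {
    // Create the worker suspended so its priority can be lowered
    // before it ever runs.
    HANDLE worker = CreateThread(NULL, 0, CrunchWorker, NULL,
                                 CREATE_SUSPENDED, NULL);
    if (worker) {
        // Idle priority: the worker only gets the CPU time the GUI
        // and everything else leave unused.
        SetThreadPriority(worker, THREAD_PRIORITY_IDLE);
        ResumeThread(worker);
    }

    // The main (think: UI) thread stays at normal priority and
    // remains responsive.
    Sleep(10000);
    return 0;
}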

silvervarg
Posts: 1283
Joined: Wed Sep 03, 2003 1:35 am
Location: Sweden, Linkoping

Post by silvervarg » Fri Oct 29, 2004 1:51 am

The major difference is that Linux application developers do not have to do anything. The default priority settings for applications work fine in combination with the operating system and GUI. This is not the case in Windows.

So, it is possible that Windows can be tweaked to behave nicely, but this is not a practical solution for most users today. Buying a CPU with hyperthreading is a working solution for Windows today.
If someone can come up with a general software fix for Windows (not application-specific), I would be all ears.

tragus
*Lifetime Patron*
Posts: 356
Joined: Fri Apr 18, 2003 11:19 am
Location: Baltimore, MD

Post by tragus » Fri Oct 29, 2004 5:44 am

Silvervarg essentially restates my concerns: I have neither the expertise nor the time to write a proper MS Windows-compatible operating system with proper scheduling algorithms, so I make my choices from the software I can buy and the hardware that's available to build a system I can use today.

My initial tests came from the AMD discussion in which a number of people anecdotally described their AMD systems as "feeling faster" in multi-tasking situations and overall just "snappier". My observation is that multi-tasking with at least one CPU-intensive process in a Windows XP environment makes user interactions (context switching, GUI actions, etc.) extremely sluggish on high-end AMD systems but only slightly slower on high-end Intel HT systems. My conclusion is that Intel's HT is preferred in these circumstances. My secondary conclusion is the confirmation that the plural of "anecdote" is not "data".

Back to the original point in this thread: I am very much looking forward to dual-core CPUs. If AMD's past is any predictor, we'll be seeing quite a fast and relatively cool slip of silicon. The existing Windows XP will be able to take advantage of the "multiple processors" at a gross level, which will be a major gain for users like me. I suspect that Intel is quite aware of the heat issue, but may punt on the first set of releases in favor of timeliness. While the current generation of dual-CPU systems is functional, they are a small, specialized (and expensive) market requiring double heat sinks (and double fans) for CPU cooling. Despite the higher wattage of a dual-core, using a high-end heat sink like the recently reviewed Thermalright 120 with a single fan may produce comparable performance with less overall noise (i.e., the point of SPCR).

P.S. BTW, I have received one PM replicating my results of an AMD system slowing to a crawl when a high-CPU load competes with "normal"-priority application interactions.

P.P.S. I am neither an AMD nor Intel fan-person. I just want to share my real-world experiences to provide context to hardware recommendations. Thus, I have an AMD 3400+ at home, but Intel 3.2 (with HT) for my lab's data acquisition machines.

[[ Edited only for two embarrassing typos that I caught on second reading ]]
Last edited by tragus on Fri Oct 29, 2004 9:51 am, edited 1 time in total.

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Fri Oct 29, 2004 8:49 am

Silvervarg,

I guess, being a programmer, I expect other programmers to be proficient enough to deal with priority issues, which is probably why I hold the position I do. I strongly feel that it's up to the programmer to set the priority in their application--OS programmers cannot know the correct priority for applications other people write, so they have to guess a bit with the default priority. An errant programmer is still going to be able to break that to some degree if they're writing very CPU-intensive applications. In the end, I strongly feel it has to be the responsibility of the application writer.



Tragus,

I know the XP-120 could most definitely cool a dual core chip, but if we're talking two Prescott cores, that's ~200W being pumped into the average case. Add hard drives, video card, PSU heat, and other odds and ends, and I think hitting the 300W mark isn't out of the question. Can this be cooled quietly? Sure. Is it going to be easy? I doubt it.

I'm also pretty excited about dual cores, but if Intel is using a Prescott, I think I'll wait for the dual-core Athlons, which will probably have much more reasonable thermal dissipation.


As for the fanboy stuff, we agree. I think it's for the birds. I really like HT processors, but it's also hard not to like AMD's bang for the buck.
