 Post subject: Datacenters: Gas-Guzzling Jalopies of the Technology World
PostPosted: Wed Aug 08, 2007 11:41 am 

Joined: Tue Sep 20, 2005 6:55 am
Posts: 5085
Location: UK
http://www.physorg.com/news105800312.html

Quote:
To understand the scope of the problem, it helps to grasp why data centers are so power hungry.

Depending on the configuration and the equipment involved, as little as 30 to 40 percent of the juice flowing into a data center is used to run computers. Most of the rest goes to keeping the hardware cool.

This is why big data centers can devour several megawatts of power, enough for a small city.

For example, almost all the energy that goes into the air conditioning systems is used to run giant chillers that make the air pumped through the rooms' raised floors a brisk 55 degrees or so, sometimes as low as the 40s. Such extremely cold air is blasted in to guarantee that no single server's temperature gets much above the optimum level, which is around 70 degrees.

One commonly talked-about effort [to save energy] involves virtualization, which lets one computer handle the functions of multiple machines at once. Rather than having dozens of servers operating at far less than their maximum level of utilization, data centers can use virtualization to consolidate those same machines' functions on just a few computers.

The result can be striking - in its solar-powered center, AISO uses virtualization to mimic the functions of 120 servers on just four machines - and clearly it saves electricity.


Air cooling for such massive conglomerations of computers is clearly very inefficient. You would think some kind of water- or Freon-based system would be better.
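
A rough back-of-the-envelope sketch in Python of what those quoted figures imply (the 1 MW IT load is just an assumed example):

Code:
# If only 30-40 percent of incoming power reaches the IT equipment, the
# facility draws roughly 2.5-3.3 W at the meter for every watt of compute.

def facility_draw_kw(it_load_kw, it_fraction):
    """Total facility power for a given IT load and IT power fraction."""
    return it_load_kw / it_fraction

it_load_kw = 1000  # assumed example: 1 MW of servers
for fraction in (0.30, 0.40):
    total = facility_draw_kw(it_load_kw, fraction)
    overhead = total - it_load_kw
    print(f"IT fraction {fraction:.0%}: {total:.0f} kW total, "
          f"{overhead:.0f} kW of cooling and other overhead")

# The AISO example quoted above: 120 server workloads consolidated onto 4 hosts.
print(f"Consolidation ratio: {120 // 4}:1")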


 Post subject: Re: Datacenters: Gas-Guzzling Jalopies of the Technology World
PostPosted: Wed Aug 08, 2007 1:00 pm 

Joined: Wed Mar 08, 2006 8:29 am
Posts: 83
Location: Luleå, Sweden
jaganath wrote:
Air cooling for such massive conglomerations of computers is clearly very inefficient. You would think some kind of water- or Freon-based system would be better.


Well, the air conditioning is Freon-based! :D

As I see it, one big reason for air cooling is reliability. If a fan breaks in one of the servers, no problem: the fans are quite often placed two in series for redundancy, an alarm is triggered, and someone can hot-swap the failed fan without any downtime.

Say a hose breaks, be it water or some other cooling medium, and you are in trouble. The machine might have enough time to shut down before any damage, but you still have downtime: changing the hose, refilling the system, cleaning up if any coolant gets on the floor, and so on.

Also, the cool climate in the server room lets the other components do their work in a comfortably chilly environment: hard drives, switches, media converters and so on, all the things you would never bother to cool in some elaborate way with water or phase change.


That being said, I agree they are resource hogs.


 Post subject:
PostPosted: Wed Aug 08, 2007 1:03 pm 

Joined: Wed Aug 23, 2006 7:09 pm
Posts: 536
Location: Nova Scotia, Canada
Quote:
Air cooling for such massive conglomerations of computers is clearly very inefficient. You would think some kind of water- or Freon-based system would be better.


You would still have to cool the water, and water cooling on such a scale would likely end up more costly, and potentially damaging if you spring a leak. Refrigerants such as Freon are banned in many or most places, and the possibility of a leak would be a serious concern.

Those objections aside, I'm not sure how you could implement either on a practical level to cool rack after rack of servers of different configurations and designs, not to mention shelved systems in tower cases.

HVAC is always a trade-off between cost and efficiency. Some of the best solutions I've seen are the new ducted cooling cabinet systems from APC. Not cheap, though.

_________________
Obsolescence is just a lack of imagination!


 Post subject:
PostPosted: Thu Aug 09, 2007 2:32 pm 

Joined: Sun Oct 09, 2005 8:35 am
Posts: 1253
Location: Pleasanton, CA
The form of cooling used in data centers is traditional and dates back to at least the 1960s. Nearly all mainframe machine rooms built since the early '60s (or even late '50s) have used raised flooring to serve both as a positive-pressure cool-air plenum and as a cable run. The cool air is provided by large (typically 20-ton) evaporator-blowers, and the heat is rejected through roof-mounted condenser-blowers.
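
For a rough sense of the scale involved, a minimal sizing sketch in Python (the 2 MW load and the spare-unit policy are assumptions for illustration):

Code:
# One ton of refrigeration is about 3.517 kW of heat removal, so a 20-ton
# unit handles roughly 70 kW. Sizing for the raised-floor model above:

TON_KW = 3.517

def crac_units_needed(it_load_kw, unit_tons=20, spares=1):
    """Computer-room A/C units needed to absorb a given IT heat load."""
    unit_kw = unit_tons * TON_KW        # ~70 kW per 20-ton unit
    needed = -(-it_load_kw // unit_kw)  # ceiling division
    return int(needed) + spares         # keep spare capacity for maintenance

# Assumed example: a 2 MW room of racked servers.
print(crac_units_needed(2000))  # -> 30 units, including one spare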

This form of cooling has several advantages: reliability (these A/C units have really long lifetimes), redundancy (lots of A/C units), scalability (add more units as needed without rebuilding anything), reconfigurability (add/replace computer equipment as needed by moving around or replacing tiles in the floor without changing the cooling system), tidiness (all cables are under the floor, which is therefore easy to clean), continuous operation (pulling a few tiles during maintenance does not compromise cooling), etc.

[I spent many years in such rooms, doing all of the above. One thing they are not is quiet! The typical noise level is 85-90 dB.]

More recent data centers, built from racked servers instead of big mainframe boxes, simply adopted the existing model, for all the reasons above, or even more simply because they were originally mainframe machine rooms.

_________________
i7 2600K CPU@4.4 GHz, Asrock Z68, 8GB Corsair Vengeance 1866 CL9, Intel 335 240GB SSD + Samsung HD502HI 500GB, Internal i7 graphics, Antec P180 case, Seasonic X-400 fanless PS, Megahalems CPU HS, Nexus 3-pin & AC PWM fans ~ 600 RPM, AcoustiPack foam, homemade ducts.


 Post subject:
PostPosted: Sat Nov 10, 2007 9:19 am 

Joined: Sun Oct 09, 2005 8:35 pm
Posts: 270
Location: CA
I work in a campus datacenter.

I have to say: if people want less power used in a data center, STOP MAKING INSANE DATA DEMANDS!

People want their 4-gigabyte mailbox and their campus iTunes server and their this and that... all in a redundant, load-balanced configuration, because GOD FORBID anything should ever go down at 2 AM, heads would roll in my department. You also need a development system of the same type as the production systems to test software on extensively before production rollout, and it is usually powered up all the time.

There are consequences to these expectations. Many people think that by pushing their data needs off to a Google center somewhere they are being clever. They should look at the power consumption of a Google setup, with all its redundancy and its throwaway-PC thinking. They leave even dead systems powered up in the racks because removing them is too much trouble.

_________________
My "quiet PC" build: E7500@stock, Ninja RevB (passive), Gigabyte DS3, Corsair 520HX, Antec Solo, Sapphire 5750 Vapor-X , WD Velociraptor, 4Gb G.Skill DDR2-800.


 Post subject:
PostPosted: Sat Nov 10, 2007 9:48 am 
Friend of SPCR

Joined: Sun Jun 03, 2007 12:03 am
Posts: 777
Location: Norway
The problem isn't always insane data demands, but the morons speccing new systems with 20 x 146 GB 15k rpm SCSI drives instead of using lower-cost, higher-capacity SATA drives.

20 x 146 GB = 2.9 TB of raw storage
8 x 500 GB = 4 TB of raw storage...

A little googling shows the 146 GB drives using the same amount of power as a Samsung 500 GB, with less than a third of the storage... Obviously the SCSI drive is much faster (lower latency and so on), but is that really needed for, say, a mail server with fewer than 10k users? Nah.

An in-house-built fileserver using high-end commodity hardware, 500 GB SATA drives and Solaris/ZFS for storage will use drastically less power than a SCSI-based setup with hardware RAID, for a very small difference in real-life performance. Obviously SCSI drives are rated for higher temperatures and longer MTBF, but with RAID 5/6 (RAID-Z1/2 equivalent) it's not much of a problem.
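
A quick sketch of that comparison in Python, assuming roughly equal per-drive power (the 10 W figure is a placeholder, not a measurement):

Code:
# Raw capacity and watts per terabyte for the two configurations above.

WATTS_PER_DRIVE = 10.0  # assumed placeholder; the point is per-drive parity

configs = {
    "20 x 146 GB 15k SCSI": (20, 146),
    "8 x 500 GB SATA": (8, 500),
}

for name, (drives, gb_each) in configs.items():
    raw_tb = drives * gb_each / 1000
    watts = drives * WATTS_PER_DRIVE
    print(f"{name}: {raw_tb:.1f} TB raw, ~{watts:.0f} W, "
          f"~{watts / raw_tb:.0f} W per TB")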

_________________
Workstation | HTPC | 9.1TB | 19.1TB


 Post subject:
PostPosted: Sat Nov 10, 2007 10:23 am 

Joined: Mon Sep 10, 2007 1:05 pm
Posts: 759
Location: Colorado, USA
Hah, my friend does this stuff. They use a massive number of 15k RPM SCSI drives over Fibre Channel, and the power usage in the data center he works in is measured in kilowatts.

_________________
Gaming HTPC: Antec NSK-2480/ Antec EW430 Bronze/ i5-2400/ MSI H67/ Ninja-Mini/ 4GB DDR3/ 500GB WD Sata 3.0/ XFX HD6850/ Windows 7 x64/ Toshiba 46" 1080p LED/LCD TV


 Post subject:
PostPosted: Sat Nov 10, 2007 11:11 am 
Friend of SPCR

Joined: Sun Jun 03, 2007 12:03 am
Posts: 777
Location: Norway
Haha, I bet.

I've got 3.2 TB of storage in RAID 5 now, probably using less than 400 W, and that's with Folding running on the 3.2 GHz Celeron (Prescott)...

_________________
Workstation | HTPC | 9.1TB | 19.1TB


 Post subject:
PostPosted: Sat Nov 10, 2007 1:37 pm 

Joined: Tue Jun 07, 2005 11:00 am
Posts: 471
Location: Puget Sound, WA
Wibla wrote:
The problem isn't always insane data demands, but the morons speccing new systems with 20 x 146 GB 15k rpm SCSI drives instead of using lower-cost, higher-capacity SATA drives.

20 x 146 GB = 2.9 TB of raw storage
8 x 500 GB = 4 TB of raw storage...

A little googling shows the 146 GB drives using the same amount of power as a Samsung 500 GB, with less than a third of the storage... Obviously the SCSI drive is much faster (lower latency and so on), but is that really needed for, say, a mail server with fewer than 10k users? Nah.

An in-house-built fileserver using high-end commodity hardware, 500 GB SATA drives and Solaris/ZFS for storage will use drastically less power than a SCSI-based setup with hardware RAID, for a very small difference in real-life performance. Obviously SCSI drives are rated for higher temperatures and longer MTBF, but with RAID 5/6 (RAID-Z1/2 equivalent) it's not much of a problem.


Speaking as someone who has 8+ years of experience with large-capacity datacenters and server farms, the problems with ATA drives (parallel or serial) in the past have been:

1) Reliability
2) Speed
3) Throughput

SAS drives in a 2.5" form factor are revolutionizing data storage, but the biggest problem is that in order to have a performant disk array, you need a lot of spindles, preferably in a RAID 0+1 or RAID 1+0 configuration. RAID 5/6 just doesn't have the I/O throughput to keep up with real-world load (especially with regard to web traffic and highly transactional SQL DBs). If anything, the biggest problem (at least for our servers) is that for SQL servers there is too much disk space: drive sizes have increased, but we still need a high number of spindles for I/O performance, so there are situations where highly transactional DBs that are relatively small (e.g. under 100 GB) are sitting on 1/2 TB arrays. Unfortunately, all that space goes to waste. Until disk I/O performance improves to the point where you can get good DB performance from a relatively small number of spindles (e.g. 4-6), you'll be stuck with these large, wasteful datacenters.
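
A rough sketch in Python of the spindle-count math behind that (the per-disk IOPS, RAID write penalties and workload numbers are generic rule-of-thumb assumptions, not measurements from our environment):

Code:
# Spindle count is driven by IOPS, not capacity, which is why a small,
# busy database ends up on a much larger array than it needs.

IOPS_PER_15K_DISK = 180                                # assumed rule of thumb
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}  # backend I/Os per write

def spindles_needed(read_iops, write_iops, level, per_disk=IOPS_PER_15K_DISK):
    """Disks required to sustain the workload at a given RAID level."""
    backend = read_iops + write_iops * WRITE_PENALTY[level]
    return -(-backend // per_disk)  # ceiling division

# Assumed OLTP workload: 2000 reads/s, 1000 writes/s, ~100 GB of data.
for level in ("RAID10", "RAID5"):
    n = spindles_needed(2000, 1000, level)
    print(f"{level}: {n} x 146 GB spindles = {n * 146} GB raw "
          f"for a ~100 GB database")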

On the other hand, some server companies (Rackable and Verari Systems, for example) are building plenum-style integrated rack solutions that are positively coupled to the (raised) floor and (suspended) ceiling, which allows just the servers to be cooled rather than the surrounding workspace. The servers themselves have fans, but also rely quite a bit on natural convection and the 'chimney effect'. Unfortunately, HP, Dell and IBM have not gotten on the bandwagon yet.

-D

_________________
Desktop: Antec P150|Gigabyte GA-EP45-UD3R|Intel Xeon E3110|2x2048MB PC8500|HR-01|eVGA 8800GT SC|HR-03 GT|Scythe PWM 120mm|Scythe PWM 92mm|WD5000AAKS|Seasonic SS400-HT
HTPC: OrigenAE X11|Intel DG45ID|Intel E8400|2x2048MB PC6400|Scythe Big Shuriken|ATI HD4550|2xATI DCT|80mm Nexus|2TB WD 3.5" SATA + 100GB Seagate 2.5" SATA|NeoHE 430


 Post subject:
PostPosted: Sun Nov 11, 2007 12:03 am 

Joined: Sat Apr 08, 2006 10:31 pm
Posts: 82
Location: Northern California
derekva wrote:
If anything, the biggest problem (at least for our servers) is that for SQL servers there is too much disk space: drive sizes have increased, but we still need a high number of spindles for I/O performance, so there are situations where highly transactional DBs that are relatively small (e.g. under 100 GB) are sitting on 1/2 TB arrays. Unfortunately, all that space goes to waste. Until disk I/O performance improves to the point where you can get good DB performance from a relatively small number of spindles (e.g. 4-6), you'll be stuck with these large, wasteful datacenters.

Maybe it's time to revive fixed-head disk or drum memory.

SDS Sigma Rapid Access Data

Cost per GB would be horrendous, but it sure would be fast. 8)


 Post subject:
PostPosted: Fri Nov 30, 2007 8:41 pm 

Joined: Fri Nov 30, 2007 8:16 pm
Posts: 28
Good find.

