Datacenters: Gas-Guzzling Jalopies of the Technology World

Ecological issues around computing. This is an experimental forum.

Moderators: Ralf Hutter, Lawrence Lee

Post Reply
jaganath
Posts: 5085
Joined: Tue Sep 20, 2005 6:55 am
Location: UK

Datacenters: Gas-Guzzling Jalopies of the Technology World

Post by jaganath » Wed Aug 08, 2007 11:41 am

http://www.physorg.com/news105800312.html
To understand the scope of the problem, it helps to grasp why data centers are so power hungry.

Depending on the configuration and the equipment involved, as little as 30 to 40 percent of the juice flowing into a data center is used to run computers. Most of the rest goes to keeping the hardware cool.

This is why big data centers can devour several megawatts of power, enough for a small city.

For example, almost all the energy that goes into the air conditioning systems is used to run giant chillers that make the air pumped through the rooms' raised floors a brisk 55 degrees or so, sometimes as low as the 40s. Such extremely cold air is blasted in to guarantee that no single server's temperature gets much above the optimum level, which is around 70 degrees.

One commonly talked-about effort [to save energy] involves virtualization, which lets one computer handle the functions of multiple machines at once. Rather than having dozens of servers operating at far less than their maximum level of utilization, data centers can use virtualization to consolidate those same machines' functions on just a few computers.

The result can be striking - in its solar-powered center, AISO uses virtualization to mimic the functions of 120 servers on just four machines - and clearly it saves electricity.
Air cooling for such massive conglomerations of computers is clearly very inefficient. You would think some kind of water- or Freon-based system would be better.
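
To put rough numbers on those claims, here is a quick back-of-envelope sketch in Python (the 300 W per server, the 35% IT fraction and the 5 MW facility size are just my assumed example figures, not numbers from the article):

Code:

# Back-of-envelope numbers for the article's claims.
# Assumptions (mine, not the article's): ~300 W per physical server,
# and only ~35% of facility power actually reaches the IT gear.

it_fraction = 0.35          # 30-40% of facility power runs the computers
server_watts = 300.0        # assumed average draw per physical server

# Facility power needed per server once cooling/overhead is included
facility_watts_per_server = server_watts / it_fraction
print(f"~{facility_watts_per_server:.0f} W of facility power per {server_watts:.0f} W server")

# A 5 MW facility (assumed size) at that ratio supports roughly this many servers
facility_mw = 5.0
servers = facility_mw * 1e6 / facility_watts_per_server
print(f"A {facility_mw:.0f} MW facility runs roughly {servers:,.0f} such servers")

# AISO's example: 120 server workloads consolidated onto 4 hosts.
# Even assuming the consolidated hosts draw more (say 500 W each), the saving is big.
before_w = 120 * server_watts
after_w = 4 * 500.0
saved_at_meter_kw = (before_w - after_w) / it_fraction / 1000
print(f"Virtualization: {before_w/1000:.1f} kW -> {after_w/1000:.1f} kW of IT load, "
      f"~{saved_at_meter_kw:.0f} kW saved at the meter")

Even with generous assumptions for the consolidated hosts, the saving works out to roughly a hundred kilowatts at the meter once the cooling overhead is counted.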

Trekmeister
Posts: 83
Joined: Wed Mar 08, 2006 8:29 am
Location: Luleå, Sweden
Contact:

Re: Datacenters: Gas-Guzzling Jalopies of the Technology World

Post by Trekmeister » Wed Aug 08, 2007 1:00 pm

jaganath wrote: Air cooling for such massive conglomerations of computers is clearly very inefficient. You would think some kind of water- or Freon-based system would be better.
Well, the air conditioning is Freon-based! :D

As I see it, one big reason for air cooling is reliability. If a fan breaks in one of the servers, no problem: the fans are quite often placed two in series for redundancy, an alarm is triggered, and someone can hot-swap the fan without any downtime at all.

Say a hose breaks, whether it carries water or some other coolant: now you are in trouble. The machine might have time to shut down before any damage is done, but you still have downtime, plus changing the hose, refilling the system, cleaning up any coolant that got on the floor, and so on.

Also, the cool climate in the server room lets the other components do their work in a comfortably chilly environment: hard drives, switches, media converters and so on. All the things you would never bother to cool in some elaborate way with water or phase change.


That being said, I agree they are resource hogs.

NyteOwl
Posts: 536
Joined: Wed Aug 23, 2006 7:09 pm
Location: Nova Scotia, Canada

Post by NyteOwl » Wed Aug 08, 2007 1:03 pm

Air cooling for such massive conglomerations of computers is clearly very inefficient. You would think some kind of water- or Freon-based system would be better.
You would still have to cool the water, and water cooling on such a scale would likely end up more costly overall, and potentially damaging if you spring a leak. Refrigerants such as Freon are banned in many (if not most) places, and the possibility of a leak would be a serious concern.

Those objections aside, I'm not sure how you could implement either on a practical level to cool rack after rack of servers of different configurations and designs, not to mention shelved systems in tower cases.

HVAC is always a tradeoff between cost and efficiency. Some of the best solutions I've seen are the new ducted cooling cabinet systems from APC. Not cheap, though.

cmthomson
Posts: 1266
Joined: Sun Oct 09, 2005 8:35 am
Location: Pleasanton, CA

Post by cmthomson » Thu Aug 09, 2007 2:32 pm

The form of cooling used in data centers is traditional and dates back to (at least) the '60s. Nearly all mainframe machine rooms built since the early '60s (or even the late '50s) have used raised flooring to serve both as a positive-pressure cool-air plenum and for cabling. The cool air is provided by large (typically 20-ton) evaporator-blowers, and the heat is sent to roof-mounted condenser-blowers.

This form of cooling has several advantages: reliability (these A/C units have really long lifetimes), redundancy (lots of A/C units), scalability (add more units as needed without rebuilding anything), reconfigurability (add/replace computer equipment as needed by moving around or replacing tiles in the floor without changing the cooling system), tidiness (all cables are under the floor, which is therefore easy to clean), continuous operation (pulling a few tiles during maintenance does not compromise cooling), etc.

[I spent many years in such rooms, doing all of the above. One thing they are not is quiet! The typical noise level is 85-90 dB.]

More recent data centers, built from racked servers instead of big mainframe boxes, simply adopted the existing model, for all the reasons above, or even more simply because they were originally mainframe machine rooms.
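
For anyone who hasn't worked around machine-room HVAC, here's a rough Python sketch of what those 20-ton units add up to (the 500 kW IT load is just an assumed example figure; the 3.517 kW-per-ton conversion is the standard definition of a ton of refrigeration):

Code:

import math

# 1 ton of refrigeration = 12,000 BTU/h, which is ~3.517 kW of heat removal
KW_PER_TON = 3.517

unit_tons = 20                       # the typical CRAC unit size mentioned above
unit_kw = unit_tons * KW_PER_TON     # ~70 kW of heat removal per unit

it_load_kw = 500.0                   # assumed IT heat load for an example room
units_needed = math.ceil(it_load_kw / unit_kw)
units_n_plus_1 = units_needed + 1    # simple N+1 redundancy

print(f"Each 20-ton unit removes ~{unit_kw:.0f} kW of heat")
print(f"A {it_load_kw:.0f} kW room needs {units_needed} units ({units_n_plus_1} with N+1)")

That scalability is exactly why the model stuck: when the room grows, you just park another unit on the floor and pull a few more tiles.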

vincentfox
Posts: 271
Joined: Sun Oct 09, 2005 8:35 pm
Location: CA

Post by vincentfox » Sat Nov 10, 2007 9:19 am

I work in a campus datacenter.

I have to say, if people want less power used in a data center, STOP MAKING INSANE DATA DEMANDS!

People want their 4-gigabyte mailbox and their campus iTunes server and their this and that... all in a redundant, load-balanced configuration, because GOD FORBID anything should ever go down at 2 AM; heads would roll in my department. You also need a development system of the same type as the production systems, to test software on extensively before production rollout, and it is usually powered up all the time.

There are consequences to these expectations. Many people think that by putting their data needs off in a Google center somewhere they are being clever. They should look at the power consumption of a Google setup, with all its redundancy and its throwaway-PC thinking. They even leave dead systems powered up in the racks because pulling them is too much trouble.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Sat Nov 10, 2007 9:48 am

The problem isn't always insane data demands, but the morons speccing new systems with 20 x 146 GB 15k RPM SCSI drives instead of lower-cost/higher-capacity SATA drives.

20 x 146 GB is 2.9 TB of raw storage;
8 x 500 GB is 4 TB of raw storage...

A little googling shows the 146 GB drives using about the same amount of power as a Samsung 500 GB, with less than 1/3 of the storage... Obviously the SCSI drive is much faster (lower latency and so on), but is that really needed for, say, a mail server with fewer than 10k users? Nah.

An in-house-built fileserver using high-end commodity hardware, 500 GB SATA drives and Solaris/ZFS for storage will use drastically less power than a SCSI-based setup with hardware RAID, for a very small difference in real-life performance. Obviously SCSI drives are rated for higher temperatures and longer MTBF, but with RAID 5/6 (RAID-Z1/2 equivalent) that's not much of a problem.
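
Here's a rough Python sketch of the comparison (the ~12 W per drive is an assumed round figure for illustration; real numbers vary by model and load):

Code:

# Comparing the two drive configurations above.
# Assumption: ~12 W per drive for both types, purely for illustration.

def usable_tb(drives, size_gb, parity_drives):
    """Raw capacity minus parity drives (RAID 5 = 1, RAID 6 = 2), in TB."""
    return (drives - parity_drives) * size_gb / 1000.0

configs = {
    "20 x 146 GB 15k SCSI, RAID 5": (20, 146, 1, 12.0),
    "8 x 500 GB SATA, RAID 5":      (8,  500, 1, 12.0),
}

for name, (drives, size_gb, parity, watts_each) in configs.items():
    raw_tb = drives * size_gb / 1000.0
    use_tb = usable_tb(drives, size_gb, parity)
    total_w = drives * watts_each
    print(f"{name}: {raw_tb:.1f} TB raw, {use_tb:.1f} TB usable, "
          f"{total_w:.0f} W, {total_w / use_tb:.0f} W per usable TB")

On those assumed figures the SATA box delivers more usable space at roughly a third of the watts per terabyte, before you even count the extra controller and cooling overhead of the SCSI setup.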

djkest
Posts: 766
Joined: Mon Sep 10, 2007 1:05 pm
Location: Colorado, USA

Post by djkest » Sat Nov 10, 2007 10:23 am

Hah, my friend does this stuff. They use massive numbers of 15k RPM SCSI drives off Fibre Channel, and the power usage in the data center he works in is measured in kilowatts.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Sat Nov 10, 2007 11:11 am

Haha, I bet.

I've got 3.2 TB of storage in RAID 5 now, probably using less than 400 W, and that's with Folding running on the 3.2 GHz Celeron (Prescott)...

derekva
Posts: 477
Joined: Tue Jun 07, 2005 11:00 am
Location: Puget Sound, WA
Contact:

Post by derekva » Sat Nov 10, 2007 1:37 pm

Wibla wrote: The problem isn't always insane data demands, but the morons speccing new systems with 20 x 146 GB 15k RPM SCSI drives instead of lower-cost/higher-capacity SATA drives.

20 x 146 GB is 2.9 TB of raw storage;
8 x 500 GB is 4 TB of raw storage...

A little googling shows the 146 GB drives using about the same amount of power as a Samsung 500 GB, with less than 1/3 of the storage... Obviously the SCSI drive is much faster (lower latency and so on), but is that really needed for, say, a mail server with fewer than 10k users? Nah.

An in-house-built fileserver using high-end commodity hardware, 500 GB SATA drives and Solaris/ZFS for storage will use drastically less power than a SCSI-based setup with hardware RAID, for a very small difference in real-life performance. Obviously SCSI drives are rated for higher temperatures and longer MTBF, but with RAID 5/6 (RAID-Z1/2 equivalent) that's not much of a problem.
Speaking as someone who has 8+ years of experience with large-capacity datacenters and server farms, the problem with ATA drives (parallel or serial) has historically been:

1) Reliability
2) Speed
3) Throughput

SAS drives in a 2.5" format are revolutionizing data storage, but the biggest problem is that in order to have a performant disk array you need a lot of spindles, preferably in a RAID 0+1 or RAID 1+0 configuration. RAID 5/6 just doesn't have the I/O throughput to keep up with real-world load (especially web traffic and highly transactional SQL DBs). If anything, the biggest problem (at least for our servers) is that for SQL servers there is now too much disk space: drive sizes have increased, but we still need a high number of spindles for I/O performance, so there are situations where highly transactional DBs that are relatively small (e.g. under 100 GB) are sitting on 1/2 TB arrays. Unfortunately, all that space goes to waste. Until disk I/O performance improves to the point where you can get good DB performance from a relatively small number of spindles (e.g. 4-6), you'll be stuck with these large, wasteful datacenters.
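
To illustrate why spindle count, not capacity, ends up sizing the array, here's a rough sizing sketch in Python (the 5,000 IOPS target, the 70/30 read/write mix and the ~180 IOPS per 15k drive are assumptions for illustration; the RAID write penalties are the usual rules of thumb):

Code:

import math

# Rough spindle-count sizing for a transactional DB array.
# Assumed workload figures; RAID write penalties: RAID 1+0 = 2, RAID 5 = 4.

def spindles_needed(target_iops, read_fraction, write_penalty, disk_iops):
    # Each logical write costs `write_penalty` physical I/Os on the array
    physical_iops = target_iops * (read_fraction + (1 - read_fraction) * write_penalty)
    return math.ceil(physical_iops / disk_iops)

target_iops = 5000          # assumed DB workload
read_fraction = 0.7         # assumed 70% reads / 30% writes
disk_iops_15k = 180         # rough figure for one 15k RPM spindle

for name, penalty in [("RAID 1+0", 2), ("RAID 5", 4)]:
    n = spindles_needed(target_iops, read_fraction, penalty, disk_iops_15k)
    print(f"{name}: ~{n} x 15k spindles to sustain {target_iops} IOPS at 70/30 R/W")

Dozens of spindles times even a modest drive size lands you at a fraction-of-a-terabyte-or-larger array holding a sub-100 GB database, which is exactly the waste described above.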

On the other hand, some server companies (Rackable and Verari Systems, for example) are building plenum-style integrated rack solutions, positively coupled to the (raised) floor and (suspended) ceiling, that allow just the servers to be cooled rather than the surrounding workspace. The servers themselves have fans, but also rely quite a bit on natural convection and the 'chimney effect'. Unfortunately, HP, Dell and IBM have not gotten on the bandwagon yet.

-D

truckman
Posts: 82
Joined: Sat Apr 08, 2006 10:31 pm
Location: Northern California

Post by truckman » Sun Nov 11, 2007 12:03 am

derekva wrote: If anything, the biggest problem (at least for our servers) is that for SQL servers there is now too much disk space: drive sizes have increased, but we still need a high number of spindles for I/O performance, so there are situations where highly transactional DBs that are relatively small (e.g. under 100 GB) are sitting on 1/2 TB arrays. Unfortunately, all that space goes to waste. Until disk I/O performance improves to the point where you can get good DB performance from a relatively small number of spindles (e.g. 4-6), you'll be stuck with these large, wasteful datacenters.
Maybe it's time to revive fixed-head disk or drum memory.

SDS Sigma Rapid Access Data

Cost per GB would be horrendous, but it sure would be fast. 8)

hikeskool
Posts: 32
Joined: Fri Nov 30, 2007 8:16 pm

Post by hikeskool » Fri Nov 30, 2007 8:41 pm

Good find.

Post Reply