[AusNOG] Data Centre Design was Internode goes Carbon Neutral
technical at halenet.com.au
Thu Nov 19 13:38:00 EST 2009
Thanks for everyone's input. There have been some good points raised in this
discussion.
Like some others, I am one of those investing in technologies
that reduce the amount of power I use. For those in that camp there seems
to be a lot of focus on designing data centres to suit existing building
designs. Correct me if I am wrong, but many data centres are located in
existing buildings in capital cities. These are generally the larger of the
data centres, and as such these are where the major data centre vendors are
focusing their attention and solutions.
Some have mentioned good data centre design, which raises the question, what
is good data centre design?
If we were to build a new data centre from scratch and we wanted to build it
to be environmentally friendly and power efficient, then how would we do it?
Leaving aside issues such as locating the data centre as near as possible to 2
power grids and with good access to carrier fibre, and looking at it
purely from an efficient power and cooling design point of view:
Do data centres need to be located in capital cities?
Do we need physical access to the data centres, or do we just need some
capable hands and legs?
When it comes to building design which material is best for maintaining low
ingress or egress of heat and cold?
Is 240 volts the most efficient voltage to power such data centre equipment?
Perhaps 48 volts DC may address some heat issues.
Do we need raised floors?
etc etc
Personally I think there may be a case to do away with raised floors. There
may also be a case for extracting heat from the racks and reusing it for
alternative purposes. It may be possible to locate a data centre outside of a
metro area and provide redundant paths back to metro areas. There are many
possibilities, and most of them in one way or another relate to the 4 or 5
questions above.
While I commend Internode for taking the carbon neutral approach, I would
like to see some discussion regarding what is "needed" to enable a data
centre to maintain uptime and avoid equipment failure. Many of the vendors
are now saying their equipment's working temperature range is 18 to 28
degrees C or even higher. Perhaps 22 degrees is no longer what needs to be
achieved.
My analysis shows that equipment will survive at higher temps, and I am
always puzzled why we are using 240 volts instead of 48 Vdc. The excess
heat generated is substantial. I did an install of some equipment in 2004,
which had a 240 volt power supply that did PoE at 48 volts to the equipment.
Due to the location we wanted to solar power it. We installed it on solar
power and it worked fine using the manufacturer's power supply and an
inverter. The issue was that we needed to buy 16 solar panels and the
associated battery capacity to make it reliable. We then decided to ask
"why". We found the equipment itself used 0.133 amps at 48 volts, yet it
used 2.2 amps at 48 V when using the manufacturer's power supply, most of
which was dissipated as heat. We built a new power supply, did away with the
inverter and the manufacturer's power supply, used 4 solar panels and a much
smaller battery bank, and 5 years later it is still working as it was in
2004 (with higher uptime than mains power), which is why I raised the
questions above. Each of us on our own is not a big enough purchaser of
equipment to influence vendors, however collectively over time we may be
able to drag the vendors along. We just need to work out what in fact is
actually needed as opposed to the status quo.
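For anyone who wants to check the arithmetic behind those measurements, here is
a minimal sketch (the current and voltage figures are the ones quoted above;
the percentages it derives are the only additions):

```python
# Power comparison from the 2004 solar install figures above.
# Both currents were measured at the 48 V side, as described.

V = 48.0               # volts DC
i_equipment = 0.133    # amps drawn by the equipment itself
i_with_psu = 2.2       # amps drawn via inverter + manufacturer's PSU

p_equipment = V * i_equipment   # watts the equipment actually needs
p_with_psu = V * i_with_psu     # watts drawn end to end

overhead = p_with_psu - p_equipment              # lost, mostly as heat
efficiency = p_equipment / p_with_psu * 100.0    # percent delivered

print(f"equipment load: {p_equipment:.1f} W")
print(f"total draw:     {p_with_psu:.1f} W")
print(f"overhead:       {overhead:.1f} W ({100 - efficiency:.0f}% wasted)")
```

Roughly 6.4 W of useful load against 105.6 W drawn, i.e. only about 6%
of the input power reached the equipment, which is why dropping the
inverter/PSU chain cut the panel count from 16 to 4.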
Regards
Tim McCullagh
HaleNET
More information about the AusNOG
mailing list