[AusNOG] Data Centre Design was Internode goes Carbon Neutral

Rick Jones RJones at enterprisedata.com.au
Thu Nov 19 14:42:13 EST 2009


Hi All,

The main point is going to come down to whether you are refurbishing an existing building or building from scratch.  There are many different ways that power efficiencies can be built into a good data centre.  Our second facility, planned for Norwest, has many of these inherent in its design and achieves figures better than you can with hot aisle containment alone.

You don't have to move to Tasmania: any thermal difference between outside and inside air temperatures is an opportunity for free cooling somewhere.  We have a great HVAC engineer who works with us on our designs.

Having said that, as an industry we need to educate our end users that modern IT equipment can survive temperatures above 22 degrees and that we can reduce the cooling component appropriately.  Since we pass power costs on to our customers, they have a financial inducement to consider it.  However, with 5-year and 10-year contracts, these changes take time.


Best Regards,

Rick Jones
Chief Technology Officer, Enterprise Data Corporation


-----Original Message-----
From: ausnog-bounces at lists.ausnog.net [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Bill Walker
Sent: Thursday, 19 November 2009 2:30 PM
To: Daniel Hooper; ausnog at ausnog.net
Subject: Re: [AusNOG] Data Centre Design was Internode goes Carbon Neutral

Obviously the biggest cost is power. Emerson have done some research on energy use in data centres which showed that for every watt your equipment used, the support services used a further 1.84 watts.  The general rule of thumb, though, is that one watt of consumed power requires one watt of cooling; on that basis, 50% of your power bill is cooling.  So you would need to do the numbers. If you are building for 1MW of equipment load, you would need an additional 1MW for cooling. But if you could use free-air cooling, variable-capacity cooling, cold aisle containment, more efficient PSUs, etc., you could significantly reduce that figure.
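
To put those two figures side by side, here is a rough back-of-envelope
sketch in Python (the 1.84 W figure and the 1:1 rule are from above; the
1 MW example load and the PUE framing are illustrative assumptions):

    # Back-of-envelope comparison of the two overhead figures quoted
    # above, for a hypothetical 1 MW equipment load.
    IT_LOAD_KW = 1000.0  # assumed 1 MW of IT equipment load

    # Emerson research: 1.84 W of support services per IT watt.
    emerson_total_kw = IT_LOAD_KW * (1 + 1.84)  # 2840 kW

    # Rule of thumb: 1 W of cooling per IT watt.
    thumb_total_kw = IT_LOAD_KW * (1 + 1.0)     # 2000 kW

    print(f"Emerson estimate: {emerson_total_kw:.0f} kW total "
          f"(PUE {emerson_total_kw / IT_LOAD_KW:.2f})")
    print(f"Rule of thumb:    {thumb_total_kw:.0f} kW total "
          f"(PUE {thumb_total_kw / IT_LOAD_KW:.2f})")
    print(f"Cooling share of the bill (rule of thumb): "
          f"{IT_LOAD_KW / thumb_total_kw:.0%}")

Anything that trims the overhead term (free-air cooling, containment,
more efficient PSUs) attacks the larger of those two numbers directly.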

The key I think is to find a heating/cooling design engineer who knows his stuff.

-----Original Message-----
From: ausnog-bounces at lists.ausnog.net [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Daniel Hooper
Sent: Thursday, 19 November 2009 3:21 p.m.
To: ausnog at ausnog.net
Subject: Re: [AusNOG] Data Centre Design was Internode goes Carbon Neutral

I'd like to see ideas on the ideal place in .AU to build a green DC.

I'm still pondering whether the location should be chosen for cheap cooling or cheap power (forgetting about current carrier & power infrastructure).

For instance, you could build somewhere in SA and take advantage of geothermal power, or you could build in Tasmania where (correct me if I'm wrong) the average summer temperature is 22 degrees.

-Dan

-----Original Message-----
From: ausnog-bounces at lists.ausnog.net [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of lists
Sent: Thursday, 19 November 2009 10:38 AM
To: Matt Carter; 'Curtis Bayne'; Mark Prior
Cc: ausnog at ausnog.net
Subject: Re: [AusNOG] Data Centre Design was Internode goes Carbon Neutral

Thanks for everyone's input.  There have been some good points raised in
this discussion.

Like some others here, I am investing in technologies that reduce the
amount of power I use.  For those in that camp there seems to be a lot
of focus on designing data centres to suit existing building designs.
Correct me if I am wrong, but many data centres are located in existing
buildings in capital cities.  These are generally the larger data
centres, and as such they are where the major data centre vendors are
focusing their attention and solutions.

Some have mentioned good data centre design, which raises the question:
what is good data centre design?

If we were to build a new data centre from scratch, and we wanted it to
be environmentally friendly and power efficient, how would we do it?

Leaving aside issues like locating the data centre as near as possible
to two power grids with good access to carrier fibre, and looking at it
purely from the point of view of efficient power and cooling design:

Do data centres need to be located in capital cities?
Do we need physical access to the data centres, or just some capable
hands and legs on site?
When it comes to building design, which material is best for minimising
ingress and egress of heat and cold?
Is 240 volts the most efficient voltage for powering data centre
equipment? Perhaps 48 volts DC would address some heat issues.
Do we need raised floors?
And so on.

Personally I think there may be a case for doing away with raised
floors; there may also be a case for extracting heat from the racks and
reusing it elsewhere.  It may be possible to locate a data centre
outside a metro area and provide redundant paths back to the metro
areas.  There are many possibilities, and most of them relate in one
way or another to the four or five questions above.

While I commend Internode for taking the carbon-neutral approach, I
would like to see some discussion of what is "needed" to enable a data
centre to maintain uptime and avoid equipment failure.  Many vendors
are now saying their equipment's working temperature range is 18 to 28
degrees C, or even higher.  Perhaps 22 degrees is no longer what needs
to be achieved.

My analysis shows that equipment will survive at higher temperatures,
and I am always puzzled why we are using 240 volts AC instead of 48
volts DC; the excess heat generated is substantial.  In 2004 I
installed some equipment that had a 240 volt power supply delivering
PoE at 48 volts to the equipment.  Due to the location we wanted to
solar power it, and it worked fine on solar using the manufacturer's
power supply and an inverter.  The issue was that we needed to buy 16
solar panels and the associated battery capacity to make it reliable.
We then decided to ask "why", and found that the equipment itself used
0.133 amps at 48 volts, yet the system drew 2.2 amps at 48 volts when
running through the inverter and the manufacturer's power supply, most
of which was dissipated as heat.  We built a new power supply, did away
with the inverter and the manufacturer's supply, and used 4 solar
panels and a much smaller battery bank.  Five years later it is still
working as it did in 2004 (with higher uptime than mains power), which
is why I raised the questions above.  None of us on our own is a big
enough purchaser of equipment to influence vendors, but collectively,
over time, we may be able to drag them along.  We just need to work out
what is actually needed, as opposed to the status quo.
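
In rough numbers (the voltages and currents are those measured above;
the rest is simple arithmetic):

    # The equipment's own draw versus the draw of the whole
    # inverter + 240 V PSU chain, both at the 48 V battery bank.
    BUS_VOLTAGE = 48.0      # volts, DC battery bank
    EQUIPMENT_AMPS = 0.133  # amps the equipment itself needs at 48 V
    CHAIN_AMPS = 2.2        # amps drawn via inverter + vendor PSU

    useful_watts = BUS_VOLTAGE * EQUIPMENT_AMPS  # ~6.4 W
    drawn_watts = BUS_VOLTAGE * CHAIN_AMPS       # ~105.6 W
    wasted_watts = drawn_watts - useful_watts    # ~99.2 W, mostly heat
    efficiency = useful_watts / drawn_watts      # ~6%

    print(f"Useful load:      {useful_watts:.1f} W")
    print(f"Drawn from bank:  {drawn_watts:.1f} W")
    print(f"Wasted as heat:   {wasted_watts:.1f} W")
    print(f"Chain efficiency: {efficiency:.1%}")

On those figures, roughly 94% of the power drawn from the battery bank
was conversion loss dissipated as heat.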

Regards

Tim McCullagh

HaleNET


_______________________________________________
AusNOG mailing list
AusNOG at lists.ausnog.net
http://lists.ausnog.net/mailman/listinfo/ausnog
