[AusNOG] Data Centre Design was Internode goes Carbon Neutral

McDonald Richards macca at vocus.com.au
Thu Nov 19 14:37:04 EST 2009


I'm guilty of skimming the message below, which did say "forgetting about
current carrier & power infrastructure" - but sure. If you are serving
content globally, it's cheaper from a communications perspective the closer
you move to the main global Internet corridors. From SA you're paying for
the extra transmission hop over to Sydney (via Melbourne?), and if your
target truly is global, hosting in the USA is obviously the best option,
where you can cut transmission costs significantly.

 

A major requirement I'd see for anybody looking to build a DC, green, red or
blue, is access to competitive communications infrastructure. I don't know
of many IT or facilities managers who would put "green power" high on their
requirements list when seeking new sites.

 

Macca

 

 

 

From: Mark Smith [mailto:mark.smith at team.adam.com.au] 
Sent: Thursday, 19 November 2009 2:31 PM
To: McDonald Richards
Cc: ausnog at ausnog.net
Subject: Re: [AusNOG] Data Centre Design was Internode goes Carbon Neutral

 

McDonald Richards wrote: 

A green DC is one thing, but you have to consider access to infrastructure.
SA and Tasmania are not exactly ideal places to be distributing your content
from if you are trying to reach a global, or even national, audience...
 
  

Can you be more specific about the infrastructure SA is apparently missing?



Macca
 
 
-----Original Message-----
From: ausnog-bounces at lists.ausnog.net
[mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Daniel Hooper
Sent: Thursday, 19 November 2009 1:21 PM
To: ausnog at ausnog.net
Subject: Re: [AusNOG] Data Centre Design was Internode goes Carbon Neutral
 
I'd like to see ideas on the ideal place in .AU to build a green DC.
 
I'm still pondering whether the location would be chosen for cheap cooling or
cheap power (forgetting about current carrier & power infrastructure).
 
For example, you could build somewhere in SA and take advantage of geothermal
power, or you could build in Tasmania where (correct me if I'm wrong) the
average summer temperature is 22 degrees.
 
-Dan
 
-----Original Message-----
From: ausnog-bounces at lists.ausnog.net
[mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of lists
Sent: Thursday, 19 November 2009 10:38 AM
To: Matt Carter; 'Curtis Bayne'; Mark Prior
Cc: ausnog at ausnog.net
Subject: Re: [AusNOG] Data Centre Design was Internode goes Carbon Neutral
 
Thanks for everyone's input.  There have been some good points raised in this
discussion.
 
Like some others, I am one of those investing in technologies that reduce the
amount of power I use.  For those in that camp there seems to be a lot of
focus on designing data centres to suit existing building designs.  Correct
me if I am wrong, but many data centres are located in existing buildings in
capital cities.  These are generally the larger data centres, and as such
they are where the major data centre vendors are focusing their attention
and solutions.
 
Some have mentioned good data centre design, which raises the question: what
is good data centre design?
 
If we were to build a new data centre from scratch and wanted it to be
environmentally friendly and power efficient, how would we do it?
 
Leave aside issues like locating the data centre as near as possible to two
power grids and having good access to carrier fibre, and look at it purely
from an efficient power and cooling design point of view:
 
Do data centres need to be located in capital cities?
Do we need physical access to the data centres, or do we just need some
capable hands and legs?
When it comes to building design, which material is best for maintaining low
ingress or egress of heat and cold?
Is 240 volts the most efficient voltage to power such data centre equipment?
Perhaps 48 volts DC may address some heat issues.
Do we need raised floors?
Etc., etc.
 
Personally I think there may be a case for doing away with raised floors, and
there may also be a case for extracting heat from the racks and putting it to
alternative uses.  It may be possible to locate a data centre outside of a
metro area and provide redundant paths back to metro areas.  There are many
possibilities, and most of them relate in one way or another to the four or
five questions above.
 
While I commend Internode for taking the carbon neutral approach, I would
like to see some discussion regarding what is "needed" to enable a data
centre to maintain uptime and avoid equipment failure.  Many of the vendors
are now saying their equipment's working temperature range is 18 to 28
degrees C or even higher.  Perhaps 22 degrees is no longer what needs to be
achieved.
 
My analysis shows that equipment will survive at higher temps, and I am
always puzzled why we are using 240 volts instead of 48 V DC.  The excess
heat generated is substantial.  I did an install of some equipment in 2004
which had a 240 volt power supply that did PoE at 48 volts to the equipment.
Due to the location we wanted to solar power it.  We installed it on solar
power and it worked fine using the manufacturer's power supply and an
inverter.  The issue was that we needed to buy 16 solar panels and the
associated battery capacity to make it reliable.  We then decided to ask
"why", and found the equipment itself used 0.133 amps at 48 volts, yet it
drew 2.2 amps at 48 V when fed through the manufacturer's power supply, most
of which was dissipated as heat.  We built a new power supply, did away with
the inverter and the manufacturer's power supply, and used 4 solar panels and
a much smaller battery bank; 5 years later it is still working like it was in
2004 (with higher uptime than mains power), which is why I raised the
questions above.
 
Each of us on our own is not a big enough purchaser of equipment to influence
vendors, but collectively, over time, we may be able to drag the vendors
along.  We just need to work out what is in fact needed, as opposed to the
status quo.
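 
For what it's worth, those figures imply the inverter plus the manufacturer's
240 V supply were together drawing roughly 16 times the power the equipment
itself needed.  A minimal Python sketch of that arithmetic, using only the
numbers quoted above (it makes no attempt to split the losses between the
inverter and the 240 V supply, since the measurements don't show that):
 
    # Power figures quoted above; for a DC load, P = V * I.
    direct_w = 48 * 0.133   # equipment fed directly at 48 V DC: ~6.4 W
    via_psu_w = 48 * 2.2    # same load drawn from the 48 V battery bank
                            # via inverter + 240 V supply: ~105.6 W

    overhead_w = via_psu_w - direct_w
    efficiency = direct_w / via_psu_w

    print(f"Direct 48 V DC:           {direct_w:.1f} W")
    print(f"Via inverter + 240 V PSU: {via_psu_w:.1f} W")
    print(f"Conversion overhead:      {overhead_w:.1f} W "
          f"(~{efficiency:.0%} end-to-end efficiency)")
 
On those numbers, roughly 94% of the battery's output was going into
conversion losses rather than into the load.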
 
Regards
 
Tim McCullagh
 
HaleNET
 
 
  

 

-- 
Regards,

Mark Smith
Technology Group

Adam Internet

Office Level 2 / 117 King William Street, Adelaide, SA, 5000
Postal GPO Box 121, Adelaide, SA, 5001
Phone +61 (0)8 8423 4017 | Mobile +61 (0)41 22 44 871 | Fax +61 (0)8 8231 0223
