[AusNOG] Internode goes Carbon Neutral

Phillip Grasso phillip.grasso at gmail.com
Thu Nov 19 17:36:39 EST 2009


Perhaps you can learn something from what Google has done in this space.

http://www.google.com/corporate/green/datacenters/

Firstly, I'd consider WHO the target audience of your datacenter is, e.g. are
you selling colo or compute/render farms? The requirements can be very
different and will determine your datacenter design.

Secondly, consider best practices here (this is not the only source and
please keep looking around).
http://www.google.com/corporate/green/datacenters/best-practices.html
<snip>

*Measure PUE.* Know your data center's efficiency by measuring energy
consumption and monitoring PUE frequently.

*Manage air flow.* Good air flow management is fundamental to efficient
data center operation. Start by minimizing hot and cold air mixing and
eliminating hot spots.

*Adjust the thermostat.* Raising the cold aisle temperature will minimize
chiller energy use. Don't try to run at 70F in the cold aisle; run at 80F
instead. Virtually all equipment manufacturers allow this.

*Use free cooling.* Water or air-side economizers can greatly improve energy
efficiency.

*Optimize power distribution.*  Whenever possible use high-efficiency
transformers and UPS systems.

*Buy efficient servers.* Specify high-efficiency servers and data storage
systems. The Climate Savers Computing Initiative
<http://www.climatesaverscomputing.org/tools/smarter-computing-catalog/>
offers resources to identify power-efficient servers.

</snip>
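
To make the "Measure PUE" item above concrete, here is a minimal sketch of
the calculation. The meter readings are made-up illustrative figures, not
from Google's page:

# Rough PUE calculation: total facility energy divided by IT equipment energy.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# e.g. a month where the utility meter reads 500 MWh and the IT load
# (measured at the UPS output / PDUs) comes to 320 MWh:
print(f"PUE = {pue(500_000, 320_000):.2f}")   # -> PUE = 1.56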

Next, question some of the assumptions about why you're building the center.

Check: http://www.vibrant.com/blog/googles-hard-drive-study-on-sata-disks/

and http://labs.google.com/papers/disk_failures.pdf


Best of luck.


Best Regards

   Phill.




On Thu, Nov 19, 2009 at 2:34 PM, Bill Walker
<Bill.Walker at staff.snap.net.nz> wrote:

> "One obvious pitfall that comes to mind though is making sure you don't
> cause a flood in your DC from all the moisture that gets condensed out of
> the colder outside air that's coming into the DC to replace your hot
> exhaust."
>
> Proper design would mean that air would need to come through filters etc.,
> then into your cooler's intake (or the heat exchanger in a liquid system).
> Once in the cooler, its built-in (de)humidifier should sort out the
> moisture issue.
>
> Both APC and Rittal use Liquid Process Coolers, so within the data centre
> it's a sealed system. The cooling is done via an external chiller which uses
> a heat exchanger to cool the water from the sealed system.
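
To put rough numbers on the condensation concern raised above, here is a
small dew-point sketch using the standard Magnus approximation. The
coefficients are the commonly used 0-60 C values, and the outside-air
conditions are illustrative assumptions, not anyone's site data:

import math

# Magnus approximation coefficients (an estimate only, not a design tool).
A, B = 17.27, 237.7  # B is in degrees C

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point of air at temp_c and the given relative humidity."""
    gamma = (A * temp_c) / (B + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (B * gamma) / (A - gamma)

# Outside air at 12 C and 80% RH: any surface colder than about 8.7 C will
# condense moisture out of that air once it is drawn inside.
print(f"dew point = {dew_point_c(12, 80):.1f} C")   # -> 8.7 C
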
>
> On 19/11/2009, at 8:05 AM, Bill Walker wrote:
>
> > If you contain the cold aisle and have a common hot area, then you can
> > vent the hot air outside. IMO a free-air chiller is a better option, but
> > retrofitting a vent is much easier than a new chiller.
> >
> > Most of the major rack vendors can do in-row cooling / cold aisle
> > containment solutions. I've seen solutions from:
> >
> > APC
> > Rittal
> > and Rack Technologies
> >
> > The Rittal one stands out, as its cooler's intake only requires the water
> > to be at 15 degrees, which at least in NZ means we can use tap water in
> > the event of a chiller failure. Rack Technologies, on the other hand, will
> > give you the option of a fully tailored rack.
> >
> > Cheers,
> >
> > Bill
> >
> > -----Original Message-----
> > From: ausnog-bounces at lists.ausnog.net [mailto:
> ausnog-bounces at lists.ausnog.net] On Behalf Of Matt Carter
> > Sent: Thursday, 19 November 2009 12:42 p.m.
> > To: 'Curtis Bayne'; Mark Prior
> > Cc: ausnog at ausnog.net
> > Subject: Re: [AusNOG] Internode goes Carbon Neutral
> >
> >>
> >> What about venting hot air if the ambient temperature is lower than the
> >> hot aisle exhaust (especially if our hot aisle is contained)?
> >
> > Who here is actually doing this in .au? I know of a variety of DCs in the
> > New York area doing this, and I'm curious to know who has built, or is in
> > the process of building, DCs that have these innovations. It's something
> > our new DC will be doing from day 1; it would certainly be nice to chat to
> > some people who are also doing it and have experienced the challenges of
> > the Australian environment.
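
Not how any particular plant actually runs, but a minimal sketch of the
venting decision being discussed: reject the hot-aisle exhaust outside
whenever ambient air is cooler than it, and supply outside air directly when
it is cool enough for the cold aisle. The thresholds are illustrative
assumptions only:

def cooling_mode(outside_c: float, hot_aisle_exhaust_c: float,
                 cold_aisle_setpoint_c: float = 27.0,
                 margin_c: float = 2.0) -> str:
    """Pick a cooling mode for a hot-aisle-contained room.

    Thresholds are illustrative; a real plant would also consider humidity,
    filtration and chiller staging.
    """
    if outside_c <= cold_aisle_setpoint_c - margin_c:
        # Outside air is cold enough to supply the cold aisle directly.
        return "full free cooling: vent hot aisle, draw in outside air"
    if outside_c < hot_aisle_exhaust_c - margin_c:
        # Can't supply directly, but rejecting the hot exhaust outside still
        # reduces the load on the chillers.
        return "partial: vent hot aisle, trim with mechanical cooling"
    return "recirculate and run the chillers"

print(cooling_mode(outside_c=18.0, hot_aisle_exhaust_c=38.0))
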
> >
> >> Datacentre efficiency doesn't have to be hard or expensive; most of the
> >> time it is a combination of common sense and pragmatic implementation.
> >
> > One of the very interesting things I am seeing is the use of other
> > technologies to overcome outright design deficiencies (which then excludes
> > other future options of a traditional space). E.g. in-row and in-rack
> > cooling has become rather popular in certain areas, and I don't disagree
> > it has its applications: yep, sure, you have a container to work with,
> > your ceilings are 20 ft high, you've been given a tin shed, etc. BUT, if
> > you have the capacity to build the DC from scratch and to design it
> > properly, with good hot/cold separation, overhead plenums, etc., and it's
> > going to be around for a while, then the assertion that things like in-row
> > are better than a properly designed DC is, imho, a fallacy. Especially
> > when you consider things like the option to vent the hot aisle to the
> > outside, which I don't see how you could do easily with in-rack or in-row
> > cooling (but I'm not an expert in this field either).
> >
> > <non colo provider related stuff skipped>
> >
> >> I would be very interested to hear from anyone who is employing any of
> >> these optimization strategies in their organization.
> >
> > As you mention, a lot of this is common sense from the lessons others have
> > learned over many years. E.g. even today, DCs are being built where the
> > practice is still to use a perimeter cooling design, and it's just left
> > like that. Is there any containment between hot and cold? No. Is there any
> > exhaust chimney into an overhead plenum in the roof space? No. Have we
> > even made any attempt to adjust the grilles so we have at least an even
> > distribution of air through the rack, instead of 10x as much CFM at the
> > ends versus the middle? No. If we have a bit of a mish-mash and poor
> > hot/cold aisle containment, have we attempted any CFD modelling and/or
> > reorganised? Then this DC is shocked and horrified when <insert vendor
> > here> comes along and shows the massive savings they can make by moving to
> > another cooling methodology!! That said, I do appreciate we all do what we
> > can with the budgets we have, and that is a factor, but if we are trying
> > to claim a 'state of the art' facility ...........
> >
> > --matt
> >
> _______________________________________________
> AusNOG mailing list
> AusNOG at lists.ausnog.net
> http://lists.ausnog.net/mailman/listinfo/ausnog
>

