[AusNOG] Back of envelope II
shane at short.id.au
Fri Mar 6 20:42:49 EST 2009
Sure, I understand that point of view, but it isn't really the point I
was getting at.
I can see the benefit in being able to 'partition' the resources you
allocate to your customers on their 'dedicated' servers, whilst allowing
them to burst and use more CPU cycles than they're allocated when
they need it.
I was talking more specifically about environments with intensive,
high IO loads. I'd actually like to see some hard figures for the
'virtualization tax' you pay when you stick a heavily loaded machine
inside VMware (and even VMI vs 'legacy' virtualization).
Admittedly, I've had a bit of a dodgy run with ESX specifically-- I've
had VM's 'crash' to the point where they refuse to be stopped/started,
requiring a reboot of the host to fix; endless clock skew problems
with 64-bit hosts (with clocks getting so far out of whack the machines
completely lock); and some weird networking problems with certain Intel
NICs.
Again, I'm happy for someone to convince me otherwise, but at the
moment, I'll keep using it in moderation.
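For anyone who wants those hard figures, one quick-and-dirty way to get them is to time fsync'd writes on bare metal and again inside a guest, then compare. This is only a rough sketch in plain Python (the function name and defaults are mine, nothing here is a VMware tool), a back-of-envelope probe rather than a proper benchmark:

```python
#!/usr/bin/env python3
"""Rough fsync-latency probe: run on bare metal, then inside the VM,
and compare the two rates to put a number on the IO-path overhead."""
import os
import tempfile
import time

def fsync_ops_per_sec(iterations=200, block=4096):
    """Time small synchronous writes; return fsync'd writes per second."""
    payload = b"x" * block
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, payload)
            os.fsync(fd)  # force each write through the whole IO stack
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return iterations / elapsed

if __name__ == "__main__":
    print(f"{fsync_ops_per_sec():.0f} fsync'd 4KiB writes/sec")
```

Run it against the same storage in both places; the ratio of the two numbers is a crude measure of the virtualisation tax on the synchronous write path.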
On 06/03/2009, at 7:24 PM, Darren Moss wrote:
> Hi Shane,
> Funny you should mention that.
> We don't use virtualisation for "sharing" the same resources, we
> generally use it to 'loan' resources for tasks operating on a grunty
> host.
> For example, we have customers who host their financials and payroll
> apps with us.
> So, we 'loan' CPU cycles (that they would not normally have access to)
> from the app server when utilisation allows.
> This means they can run payroll and reporting overnight when nobody
> is utilising CPU cycles.
> We also run newsletter broadcast systems (Linux/PHP/Apache/MySQL) and
> they run well on VM's.
> When we send 100,000 or more emails in a short period, the VM config is
> able to 'loan' cycles to the task at hand, which is a feature we don't
> have on non-VM machines.
> Windows virtualisation has been average at best: laggy and CPU-hungry
> just to run basic apps.
> Linux has been much better, however we think this is because the OS is
> much more capable of multi-threading and does not have the overheads of
> a GUI. (horses for courses, of course.. We do have a lot of Windows
> boxes and they work fine).
> Blades are good.... However we have found you really need to be starting
> a new project or moving more than just a few servers to justify the cost
> and redesign of infrastructure. Something we've discovered is that we
> can re-use the old infrastructure at an alternate datacentre to act as a
> backup to the blade server farms.
> Darren Moss
> General Manager, Director
> [p] 1300 131 083 x105 [f] 03 9532 6100 [m] 0421 042 999
> [e] Darren.Moss at em3.com.au [w] www.em3.com.au
> [h] www.em3.com.au/TechnicalSupport
> Reach me by dialing Extension 105.
> em3 People and Technology | Managed Technology Experts
> postal: PO Box 2333, Moorabbin VIC 3189
> New Zealand: Airedale Street, Auckland City
> postal: PO Box 39573, Howick 2045 NZ
> [p] (09) 92 555 26 [m] 021 841 541
> Managed IT Services : Specialist Application Hosting: Hosted Microsoft
> Exchange Server : Disaster Recovery
> Blackberry and iPhone email : Spam Filtering and Virus Control :
> Security & Load Testing : Technical Support
> Find out more about our Business Technology services at
> This communication may contain confidential or privileged information
> intended solely for the individual or entity above. If you are not the
> intended recipient you must not use, interfere with, disclose, copy or
> retain this email and you should notify the sender immediately by
> email or by contacting our office on +61 1300 131 083. Opinions
> expressed here are not necessarily those of em3 People and Technology
> Pty Ltd.
> -----Original Message-----
> From: ausnog-bounces at lists.ausnog.net
> [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Shane Short
> Sent: Friday, 6 March 2009 9:05 PM
> To: Skeeve Stevens
> Cc: ausnog at ausnog.net
> Subject: Re: [AusNOG] Back of envelope II
> Ouch is right. :)
> I'm one of those not-so-big fans of Virtualisation, but mostly because
> most of the workloads I'm used to dealing with consume an entire host
> full of resources (mail/web clusters). However, if it's one of those
> lower-usage scenarios, then sure, throw it in a VM.
> A previous employer of mine recently moved their entire mail/web
> clustering (previously running on PowerEdge 1855 blades, which were
> running pretty hot at the best of times) into VMware, and I suddenly
> started hearing about customers having long mail delays..
> I guess it's my ISP background which makes me a bit iffy on the whole
> thing, if not done properly it gets expensive and introduces single
> points of failure.
> If someone other than corp. IT people can convince me it's the way to
> go, I'll listen. :)
> On Fri, 6 Mar 2009 14:36:30 +1100, Skeeve Stevens
> <skeeve at eintellego.net> wrote:
>> Skeeve Stevens, CEO/Technical Director
>> eintellego Pty Ltd - The Networking Specialists
>> skeeve at eintellego.net / www.eintellego.net
>> Phone: 1300 753 383, Fax: (+612) 8572 9954
>> Cell +61 (0)414 753 383 / skype://skeeve
>> NOC, NOC, who's there?
>> Disclaimer: Limits of Liability and Disclaimer: This message is for the
>> named person's use only. It may contain sensitive and private proprietary
>> or legally privileged information. You must not, directly or indirectly,
>> use, disclose, distribute, print, or copy any part of this message if you
>> are not the intended recipient. eintellego Pty Ltd and each legal entity
>> in the Tefilah Pty Ltd group of companies reserve the right to monitor all
>> e-mail communications through its networks. Any views expressed in this
>> message are those of the individual sender, except where the message
>> states otherwise and the sender is authorised to state them to be the
>> views of any such entity. Any reference to costs, fee quotations,
>> contractual transactions and variations to contract terms is subject to
>> separate confirmation in writing signed by an authorised representative of
>> eintellego. Whilst all efforts are made to safeguard inbound and outbound
>> e-mails, we cannot guarantee that attachments are virus-free or compatible
>> with your systems and do not accept any liability in respect of viruses or
>> computer problems experienced.
>>> -----Original Message-----
>>> From: Campbell, Alex [mailto:Alex.Campbell at ogilvy.com.au]
>>> Sent: Friday, 6 March 2009 2:32 PM
>>> To: Skeeve Stevens; Nathan Gardiner
>>> Cc: ausnog at ausnog.net
>>> Subject: RE: [AusNOG] Back of envelope II
>>> VI Foundation (the $6k package below) doesn't achieve server redundancy
>>> as it doesn't include VMotion, HA etc.
>>> To get VMotion you need VI Enterprise, which is $19,595 USD for a 6-CPU
>>> Acceleration Kit. I don't think that price includes support,
>>> which is mandatory.
>>> -----Original Message-----
>>> From: ausnog-bounces at lists.ausnog.net
>>> [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Skeeve Stevens
>>> Sent: Friday, 6 March 2009 2:21 PM
>>> To: Nathan Gardiner
>>> Cc: ausnog at ausnog.net
>>> Subject: Re: [AusNOG] Back of envelope II
>>> I disagree. There are some services/applications that lend themselves
>>> to clustering and many which do not unless a lot of expensive work is
>>> involved. Windows Servers, Citrix, Oracle and other DB servers,
>>> Exchange and so on are not easy to provide hardware redundancy for
>>> without significant cost.
>>> I don't think the costs of VMware are that excessive.
>>> VMware Infrastructure Foundation Acceleration Kit for 6 Processors (ESX
>>> Foundation, vCenter Server Foundation) + Gold (12x5) 1 Year Support:
>>> US$3624 / AU$6194
>>> Gives you everything you want. Not free, no, but very reasonably priced
>>> for what you get.
>>> I do agree however, if the application is simple and can be dealt with
>>> by a load balancer or reverse proxy, such as web hosting, SMTP or other
>>> simple solutions, then that is the way to go.
>>> Skeeve Stevens, CEO/Technical Director
>>>> -----Original Message-----
>>>> From: Nathan Gardiner [mailto:ngardiner at gmail.com]
>>>> Sent: Friday, 6 March 2009 1:56 PM
>>>> To: Skeeve Stevens
>>>> Cc: ausnog at ausnog.net
>>>> Subject: Re: [AusNOG] Back of envelope II
>>>> VMware ESX is an expensive way to achieve server redundancy, if that's
>>>> your only goal. SAN redundancy can be achieved through multipath on
>>>> Linux, with equivalent solutions on Windows. Network redundancy can be
>>>> achieved through bonding or teaming of NIC adaptors.
>>>> The equivalent of what you are achieving through virtualisation is
>>>> possible by deploying several hosts with the same function and using
>>>> content switches, or even OSPF/anycast, to allow a single node to be
>>>> taken down without (any/much) operational impact. Shared SAN storage
>>>> and clustered filesystems can allow several nodes (with the correct
>>>> application intelligence) to access the same data volumes.
>>>> Virtualisation works well and reduces cost, but is not without
>>>> limitation. High network utilisation can saturate shared network
>>>> connections, high CPU can cause latency across the host, and high SAN
>>>> utilisation can cause storage latency. High memory utilisation can
>>>> cause swapping, which in turn causes significant latency. You can
>>>> always scale VMware hosts but there is a cost involved - the higher
>>>> you scale to deal with infrequent utilisation, the less of an
>>>> advantage you gain by virtualising (not to mention licensing costs).
>>>> On Fri, Mar 6, 2009 at 1:34 PM, Skeeve Stevens
>>>> <skeeve at eintellego.net> wrote:
>>>>> The ONLY solid way that I know to do good server redundancy is
>>>>> Virtual Platforms that support SAN, Fibre Channel/iSCSI with
>>>>> We manage multiple instances of VMware ESX/ESXi that have 2+ heads
>>>>> backed into SANs, with both heads fed into Cisco switches - nearly
>>>>> always 3560G/3750G stacked configurations.
>>>>> Those have never gone down, even when upgrading the physical heads
>>>>> - VMs just migrate between heads.
>>>>> Some say VMs aren't appropriate for some applications... I would
>>>>> debate that, as even in a dedicated VM solution there are not many
>>>>> apps that wouldn't happily work with that, given dedicated NIC,
>>>>> Storage, and RAM access.
>>>>>> -----Original Message-----
>>>>>> From: ausnog-bounces at lists.ausnog.net [mailto:ausnog-
>>>>>> bounces at lists.ausnog.net] On Behalf Of Michael Bethune
>>>>>> Sent: Friday, 6 March 2009 12:14 PM
>>>>>> To: ausnog at ausnog.net
>>>>>> Subject: [AusNOG] Back of envelope II
>>>>>> Thanks folks for all the responses.
>>>>>> Is it possible to do auto-failover redundant switching, and what
>>>>>> in the Cisco range would do it?
>>>>>> I remember using a dual Cisco Catalyst setup, but you ended up with a
>>>>>> pair of tails, 1 from each Catalyst, with a heartbeat connecting the
>>>>>> two together. Has the state of the art moved on to allow you to have
>>>>>> transparent (to connected hosts) redundant switching?
>>>>>> AusNOG mailing list
>>>>>> AusNOG at lists.ausnog.net