[AusNOG] VMWare (WAS: Re: Back of envelope II)
Daniel Hooper
dhooper at gold.net.au
Sat Mar 7 19:25:28 EST 2009
I've had nothing but joy with ESXi. We moved approximately 50 physical servers onto an IBM blade server and it's been sweet all the way, apart from some screw-ups with the RAID controllers in the SAN. I would like to point out, though, that Windows 2003 seems to be the only real guest that performs as well on ESX as it does on a physical host, and it took a lot of tuning of those guests to get them running sweetly. Things like disabling NTFS file access timestamps, making sure they never run out of DRAM, and slipstreaming the installs to cull a lot of the unneeded services and processes from starting up (to conserve DRAM and storage space) are what I've spent the last five months doing.
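For reference, the last-access tweak is just a one-liner on Win2k3 (a sketch only, the registry path is quoted from memory, and it needs a reboot to take effect):

    rem disable NTFS last-access timestamp updates (reboot required)
    fsutil behavior set disablelastaccess 1
    rem or set the equivalent registry value directly
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f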
One thing I've found with virtualisation is that the bottleneck is rarely CPU, but rather DRAM and storage performance, and those are things we can keep throwing money at easily.
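(If you want to sanity-check where the bottleneck actually is, esxtop/resxtop on the host makes it pretty obvious - a rough sketch, counter names from memory:)

    # on the ESX host (or via resxtop for ESXi), watch the usual suspects:
    #   c = CPU view:    %RDY per VM (high ready time means CPU contention)
    #   m = memory view: ballooning and swap activity
    #   d = disk view:   DAVG/cmd device latency per adapter
    esxtop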
-Dan
________________________________________
From: ausnog-bounces at lists.ausnog.net [ausnog-bounces at lists.ausnog.net] On Behalf Of Trent Lloyd [lathiat at bur.st]
Sent: Friday, 6 March 2009 11:44 PM
To: ausnog at ausnog.net
Subject: [AusNOG] VMWare (WAS: Re: Back of envelope II)
Then there was the time I had my storage servers dump their cache.
Now, admittedly, that is a bad thing, but the coolest part was that VMFS
completely corrupted its allocation table, of which there is only one
copy, and the VMs aren't allocated sequentially. Even VMware support
couldn't help (and told me they see this at least twice a day); they
said they were working on a product to back up the allocation tables,
but it wasn't available yet. A complete backup restore job, that was.
The same problem caused various bits of corruption on other ext3
filesystems on the storage array, but none of them were completely
destroyed, just a few broken files.
So I moved to NFS and am now using a more trustworthy filesystem (ext3) :)
- I hear quite a lot of people are doing this now. As long as you
don't need a couple of the VMFS benefits (multi-host access to shared
LUNs, etc.; you can still VMotion with shared NFS), it seems to
work well. And I fixed the root cause of the cache dumping, but
that's a whole 'nother story: dodgy Dell hardware (an MD3000).
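(For anyone wanting to try the NFS route, adding the datastore from the service console is about this much work - a sketch only, the filer name and export path below are made up:)

    # add an NFS export as a datastore named nfs-vmstore (example names)
    esxcfg-nas -a -o filer01.example.net -s /vol/vmstore nfs-vmstore
    # list the NAS datastores the host knows about
    esxcfg-nas -l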
I've also had quite a few issues where VMs refused to start, even with
VMware support's help again (and a guy who seemed to have a very good
idea of what he was doing) - a VM was hung and couldn't be restarted
(even though the others were going fine), so I had to do a host reboot
and take out the rest. All sorts of little problems like that have
repeated quite a few times and been driving me bananas. At one point
none of my VMs would power up; I really hate licensing servers (and no,
this wasn't the infamous bug where no-one in the world's VMs would power up).
Although I should note I'm not bad-mouthing VMware support; these were
just problems not even they could fix. Overall I have found that as long
as I file a Priority 1 case I get some very knowledgeable engineers. Not
so much on the lower-priority cases, though.
If you want to run a lot of separate servers that don't do a whole
lot, it's good overall. I've found ESXi to be much more reliable
than ESX so far, but they do share most of the core, so I think it's
been mostly luck and a slightly newer version. I also see some very
nice management and resource-moving abilities; unfortunately the
problems I've run into get in the way of me getting too excited about
that. It does *seem* nice, though.
The one feature I would consider killer and would love to see: Dell blade
chassis feed all their blades as new channels into Dell/Avocent KVMs that
you can switch to on the KVM directly. It would be awesome if all the VMs
popped up in the KVM too and I could get a console there, but I doubt
that's going to happen.
Trent
On 06/03/2009, at 7:51 PM, Darren Moss wrote:
> Hi Shane,
>
> I see your point.
> As a rule of thumb, we would _not_ place a heavily loaded machine on a
> virtualised platform as it will just make matters worse.
>
> And a quick comment on Rick's message, yes, we have seen this too on
> both Windows and Linux.
> Even on the Xen platform we run for Linux, hung machines can be
> terminated... However this sometimes affects the overall stability of
> the virtualised host, which usually results in a 2am reboot. Yuck.
> Usually no volunteers for that either.
>
> Cheers.
>
>
> Regards,
>
>
> Darren Moss
> General Manager, Director
> [p] 1300 131 083 x105 [f] 03 9532 6100 [m] 0421 042 999
> [e] Darren.Moss at em3.com.au [w] www.em3.com.au
> [h] www.em3.com.au/TechnicalSupport
>
> Reach me by dialing Extension 105.
>
> em3 People and Technology | Managed Technology Experts
> postal: PO Box 2333, Moorabbin VIC 3189
>
> New Zealand Airedale Street, Auckland City
> postal: PO Box 39573, Howick 2045 NZ
> [p] (09) 92 555 26 [m] 021 841 541
>
>
>
>
> Managed IT Services : Specialist Application Hosting: Hosted Microsoft
> Exchange Server : Disaster Recovery
> Blackberry and iPhone email : Spam Filtering and Virus Control :
> Security & Load Testing : Technical Support
>
> Find out more about our Business Technology services at
> http://services.em3.com.au
>
>
> -----Original Message-----
> From: Shane Short [mailto:shane at short.id.au]
> Sent: Friday, 6 March 2009 9:43 PM
> To: Darren Moss
> Cc: ausnog at ausnog.net
> Subject: Re: [AusNOG] Back of envelope II
>
> Hi Darren,
> Sure, I understand that point of view, but it isn't really the point I
> was getting at.
> I can see the benefit in being able to 'partition' the resources you
> allocate to your customers on their 'dedicated' servers, whilst allowing
> them to burst and have more CPU cycles than they're allocated when they
> need it.
>
> I was talking more specifically about intensive, high-I/O-load
> environments. I'd actually like to see some hard figures for the
> 'virtualization tax' you pay when you stick a heavily loaded machine
> inside VMware (and even VMI vs 'legacy' virtualization).
>
> Admittedly, I've had a bit of a dodgy run with ESX specifically: I've
> had VMs 'crash' to the point where they refuse to be stopped/started,
> requiring a reboot of the host to fix; endless clock skew problems with
> 64-bit hosts (with clocks getting so far out of whack the machines
> completely lock); and some weird networking problems with certain Intel
> NICs.
>
> Again, I'm happy for someone to convince me otherwise, but at the
> moment, I'll keep using it in moderation.
>
> -Shane
>
> On 06/03/2009, at 7:24 PM, Darren Moss wrote:
>
>> Hi Shane,
>>
>> Funny you should mention that.
>> We don't use virtualisation for "sharing" the same resources, we
>> generally use it to 'loan' resources for tasks operating on a grunty
>> server.
>>
>> For example, we have customers who host their financials and payroll
>> apps with us.
>> So, we 'loan' CPU cycles (that they would not normally have access to)
>> from the app server when utilisation allows.
>> This means they can run payroll and reporting overnight when nobody
>> else is utilising CPU cycles.
>>
>> We also run newsletter broadcast systems (Linux/PHP/Apache/MySQL) and
>> they run well on VMs.
>> When we send 100,000 or more emails in a short period, the VM config
>> is able to 'loan' cycles to the task at hand, which is a feature we
>> don't have on non-VM machines.
>>
>> Windows virtualisation has been average at best. Laggy and CPU
>> intensive just to run basic apps.
>> Linux has been much better, however we think this is because the OS is
>> much more capable of multi-threading and does not have the overheads
>> of a GUI. (Horses for courses, of course... We do have a lot of Windows
>> boxes and they work fine.)
>>
>> Blades are good.... However we have found you really need to be
>> starting a new project or moving more than just a few servers to
>> justify the cost and redesign of infrastructure. Something we've
>> discovered is that we can re-use the old infrastructure at an
>> alternate datacentre to act as a backup to the blade server farms.
>>
>> Cheers.
>>
>>
>>
>>
>> -----Original Message-----
>> From: ausnog-bounces at lists.ausnog.net
>> [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Shane Short
>> Sent: Friday, 6 March 2009 9:05 PM
>> To: Skeeve Stevens
>> Cc: ausnog at ausnog.net
>> Subject: Re: [AusNOG] Back of envelope II
>>
>> Ouch is right. :)
>>
>> I'm one of those not-so-big fans of virtualisation, but mostly because
>> most of the workloads I'm used to dealing with consume an entire host
>> full of resources (mail/web clusters). However, if it's one of those
>> low-usage scenarios, then sure, throw it in a VM.
>>
>> A previous employer of mine recently moved their entire mail/web
>> clusters (previously running on PowerEdge 1855 blades, which were
>> running pretty hot at the best of times) into VMware, and I suddenly
>> started hearing about customers having long mail delays...
>>
>> I guess it's my ISP background that makes me a bit iffy on the whole
>> thing; if not done properly, it gets expensive and introduces single
>> points of failure.
>>
>> If someone other than corporate IT people can convince me it's the way
>> to go, I'll listen. :)
>>
>> -Shane
>>
>> On Fri, 6 Mar 2009 14:36:30 +1100, Skeeve Stevens
>> <skeeve at eintellego.net>
>> wrote:
>>> Ouch
>>>
>>> --
>>> Skeeve Stevens, CEO/Technical Director eintellego Pty Ltd - The
>>> Networking Specialists skeeve at eintellego.net / www.eintellego.net
>>> Phone: 1300 753 383, Fax: (+612) 8572 9954 Cell +61 (0)414 753 383 /
>>> skype://skeeve
>>> --
>>> NOC, NOC, who's there?
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Campbell, Alex [mailto:Alex.Campbell at ogilvy.com.au]
>>>> Sent: Friday, 6 March 2009 2:32 PM
>>>> To: Skeeve Stevens; Nathan Gardiner
>>>> Cc: ausnog at ausnog.net
>>>> Subject: RE: [AusNOG] Back of envelope II
>>>>
>>>> VI Foundation (the $6k package below) doesn't achieve server
>>>> redundancy, as it doesn't include VMotion, HA etc.
>>>>
>>>> To get VMotion you need VI Enterprise which is $19,595 USD for a 6
>>>> CPU Acceleration Kit. I don't think that price includes
>>>> support/maintenance which is mandatory.
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: ausnog-bounces at lists.ausnog.net
>>>> [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Skeeve
>>>> Stevens
>>>> Sent: Friday, 6 March 2009 2:21 PM
>>>> To: Nathan Gardiner
>>>> Cc: ausnog at ausnog.net
>>>> Subject: Re: [AusNOG] Back of envelope II
>>>>
>>>> I disagree. There are some services/applications that lend
>>>> themselves to clustering and many which do not unless a lot of
>>>> expense is involved. For Windows Servers, Citrix, Oracle and other DB
>>>> servers, Exchange and so on, it is not easy to provide hardware
>>>> redundancy without significant cost.
>>>>
>>>> I don't think the costs of VMware are that excessive.
>>>>
>>>>
>>>> http://store.vmware.com/DRHM/servlet/ControllerServlet?Action=DisplayProductDetailsPage&SiteID=vmware&Locale=en_US&Env=BASE&productID=83617500
>>>>
>>>> VMware Infrastructure Foundation Acceleration Kit for 6 Processors
>>>> (VI Foundation, vCenter Server Foundation) + Gold (12x5) 1 Year
>>>> Support
>>>> US$3624 / AU$6194
>>>>
>>>> Gives you everything you want. Not free, no, but very reasonably
>>>> priced for what you get.
>>>>
>>>> I do agree, however, that if the application is simple and can be
>>>> dealt with by a load balancer or reverse proxy, such as web hosting,
>>>> SMTP or other simple solutions, then that is the way to go.
>>>>
>>>> ...Skeeve
>>>>
>>>>
>>>> --
>>>>
>>>> Skeeve Stevens, CEO/Technical Director
>>>>
>>>>> -----Original Message-----
>>>>> From: Nathan Gardiner [mailto:ngardiner at gmail.com]
>>>>> Sent: Friday, 6 March 2009 1:56 PM
>>>>> To: Skeeve Stevens
>>>>> Cc: ausnog at ausnog.net
>>>>> Subject: Re: [AusNOG] Back of envelope II
>>>>>
>>>>> VMWare ESX is an expensive way to achieve server redundancy, if
>>>>> that's your only goal. SAN redundancy can be achieved through
>>>>> multipath on Linux, with equivalent solutions on Windows. Network
>>>>> redundancy can be achieved through bonding or teaming of NIC adaptors.
>>>>>
>>>>> The equivalent of what you are achieving through virtualisation is
>>>>> possible by deploying several hosts with the same function and
>>>>> using content switches, or even OSPF/anycast, to allow a single
>>>>> node to be taken down without (any/much) operational impact.
>>>>> Shared SAN storage and clustered filesystems can allow several nodes
>>>>> (with the correct application intelligence) to access the same data
>>>>> volumes.
>>>>>
>>>>> Virtualisation works well and reduces cost, but is not without
>>>>> limitation. High network utilisation can saturate shared network
>>>>> connections, high CPU can cause latency across the host, high SAN
>>>>> utilisation can cause storage latency. High memory utilisation can
>>>>> cause swapping, which in turn causes significant latency. You can
>>>>> always scale VMWare hosts but there is a cost involved - the higher
>>>>> you scale to deal with infrequent utilisation, the less of an
>>>>> advantage you gain by virtualising (not to mention licensing costs
>>>>> on top).
>>>>>
>>>>>
>>>>> Nathan
>>>>>
>>>>> On Fri, Mar 6, 2009 at 1:34 PM, Skeeve Stevens
>>>> <skeeve at eintellego.net>
>>>>> wrote:
>>>>>> The ONLY solid way that I know to do good server redundancy is with
>>>>>> Virtual Platforms that support SAN, Fibre Channel/iSCSI with
>>>>>> diverse heads.
>>>>>>
>>>>>> We manage multiple instances of VMware ESX/ESXi that have 2+ heads
>>>>>> backed into SANs, with both heads fed into Cisco switches - nearly
>>>>>> always 3560G/3750G-stacked configurations.
>>>>>>
>>>>>> Those have never gone down, even when upgrading the physical
>>>>>> hardware - VMs just migrate between heads.
>>>>>>
>>>>>> Some say VMs aren't appropriate for some applications... I would
>>>>>> debate that, as even in a dedicated VM solution there are not many
>>>>>> apps that wouldn't happily work given dedicated NIC, storage, CPU
>>>>>> and RAM access.
>>>>>>
>>>>>> ...Skeeve
>>>>>>
>>>>>> --
>>>>>> Skeeve Stevens, CEO/Technical Director eintellego Pty Ltd - The
>>>>>> Networking Specialists skeeve at eintellego.net / www.eintellego.net
>>>>>> Phone: 1300 753 383, Fax: (+612) 8572 9954 Cell +61 (0)414 753 383
>>>>>> / skype://skeeve
>>>>>> --
>>>>>> NOC, NOC, who's there?
>>>>>>
>>>>>>
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: ausnog-bounces at lists.ausnog.net [mailto:ausnog-
>>>>>>> bounces at lists.ausnog.net] On Behalf Of Michael Bethune
>>>>>>> Sent: Friday, 6 March 2009 12:14 PM
>>>>>>> To: ausnog at ausnog.net
>>>>>>> Subject: [AusNOG] Back of envelope II
>>>>>>>
>>>>>>> Thanks folks for all the responses.
>>>>>>>
>>>>>>> Is it possible to do auto-failover redundant switching, and what,
>>>>>>> if anything, in the Cisco range would do it?
>>>>>>>
>>>>>>> I remember using a dual Cisco Catalyst setup, but you ended up with
>>>>>>> a pair of tails, one from each Catalyst, with a heartbeat connecting
>>>>>>> the two Catalysts together. Has the state of the art moved on to
>>>>>>> allow you to have transparent (to the connected hosts) redundant
>>>>>>> switching?
>>>>>>>
>>>>>>> Michael.
>>>>>>>
>
_______________________________________________
AusNOG mailing list
AusNOG at lists.ausnog.net
http://lists.ausnog.net/mailman/listinfo/ausnog
More information about the AusNOG mailing list