[AusNOG] Assistance with Cisco vPC configuration on 4 x Cisco Nexus 3000 switches

Ahad Aboss ahad at swiftelnetworks.com
Mon Nov 26 18:39:43 EST 2018


Spot on, Jason. Since each SW pair is treated as a single switch
(logically), you can run a single port channel between them.

Your suggested setup is also ideal for data center interconnects and
multi-layer vPC environments.


Ahad

On Mon, Nov 26, 2018 at 2:21 PM Jason Leschnik <jason at leschnik.me> wrote:

> Oh and obviously the normal  😁
>
> feature lacp ; feature vpc ; feature interface-vlan
>
> On Mon, 26 Nov 2018 at 14:19, Jason Leschnik <jason at leschnik.me> wrote:
>
>> Hi all,
>>
>> With vPC on both pairs, all 4 links between the pairs can be aggregated
>> into a single port-channel like the one pictured below. Practically you're
>> going to set up two different vPC domains (e.g. vpc domain 1 and vpc domain
>> 2) on the two pairs (11,12) and (13,14), with a port-channel configuration
>> like the following example:
>>
>> Assuming e1/1-2 are the downstream links (between pairs) and e1/10-11 are
>> the links between vPC peer switches
>>
>> *# Setup vPC*
>> sw11# vpc domain 11 ; peer-keepalive destination <mgmt-ip sw12> ;
>> peer-switch ; peer-gateway ; int e1/10-11 ; channel-group 1 mode active ;
>> int po1 ; vpc peer-link
>> sw12# vpc domain 11 ; peer-keepalive destination <mgmt-ip sw11> ;
>> peer-switch ; peer-gateway ; int e1/10-11 ; channel-group 1 mode active ;
>> int po1 ; vpc peer-link
>>
>> sw13# vpc domain 13 ; peer-keepalive destination <mgmt-ip sw14> ;
>> peer-switch ; peer-gateway ; int e1/10-11 ; channel-group 1 mode active ;
>> int po1 ; vpc peer-link
>> sw14# vpc domain 13 ; peer-keepalive destination <mgmt-ip sw13> ;
>> peer-switch ; peer-gateway ; int e1/10-11 ; channel-group 1 mode active ;
>> int po1 ; vpc peer-link
>>
>> ! Peer-switch will allow both vPC peers to propagate the same STP BPDU
>> information by sharing a vMAC
>> ! Peer-gateway allows each vPC peer to route on behalf of the other; this
>> works around some non-RFC-compliant behaviour seen in certain NAS vendors'
>> products when using FHRP
>> ! It's perfectly fine to use mgmt0 interfaces for the peer-keepalive;
>> it's just a stream of UDP packets to src/dst port 3200. This link can go
>> down with no ill effect on vPC
>> ! Note: the conventions used here for numbering port-channels and domains
>> are just examples; use whatever numbering system works for you. A
>> longer-form sketch of the sw11 side is included below.
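>>
>> ! A rough longer-form sketch of the sw11 side in config mode. The vrf
>> ! management keyword and the trunk line are assumptions (management is the
>> ! default keepalive VRF); IPs are placeholders as above:
>> sw11(config)# vpc domain 11
>> sw11(config-vpc-domain)# peer-keepalive destination <mgmt-ip sw12> vrf management
>> sw11(config-vpc-domain)# peer-switch
>> sw11(config-vpc-domain)# peer-gateway
>> sw11(config-vpc-domain)# interface ethernet 1/10-11
>> sw11(config-if-range)# channel-group 1 mode active
>> sw11(config-if-range)# interface port-channel 1
>> sw11(config-if)# switchport mode trunk
>> sw11(config-if)# vpc peer-link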
>>
>> *# Setup the port-channels*
>> sw11# int e1/1-2 ; channel-group 1314 mode active ; int po1314 ; vpc 1314
>> sw12# int e1/1-2 ; channel-group 1314 mode active ; int po1314 ; vpc 1314
>>
>> sw13# int e1/1-2 ; channel-group 1112 mode active ; int po1112 ; vpc 1112
>> sw14# int e1/1-2 ; channel-group 1112 mode active ; int po1112 ; vpc 1112
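>>
>> ! And a longer-form sketch of the downstream side on sw11 (switchport mode
>> ! trunk is an assumption; the vpc number just has to match on both
>> ! switches in the pair):
>> sw11(config)# interface ethernet 1/1-2
>> sw11(config-if-range)# channel-group 1314 mode active
>> sw11(config-if-range)# interface port-channel 1314
>> sw11(config-if)# switchport mode trunk
>> sw11(config-if)# vpc 1314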
>>
>> [image: vpc.png]
>>
>> Regards,
>> Jason.
>>
>> On Mon, 26 Nov 2018 at 11:10, Ahad Aboss <ahad at swiftelnetworks.com>
>> wrote:
>>
>>> Radeck,
>>>
>>> To avoid blocked ports, you will need to configure your upstream links
>>> as follows:
>>>
>>> Build a port-channel from SW13 to SW11/12.
>>> Build a port-channel from SW14 to SW11/12.
>>>
>>> Physically, SW11 and SW12 look like 2 x switches, but logically, SW13 and
>>> SW14 treat SW11/12 as a single switch.
>>>
>>> I suggest using at least 2 x 10GE ports between SW13 & SW14, as SFPs do
>>> fail from time to time.
>>> You can use the MGMT interface for the vPC peer-keepalive, or a dedicated
>>> link.
>>>
>>> You can increase your port-channel capacity as your traffic grows.
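>>>
>>> Once it's all cabled and configured, it's worth a quick sanity check on
>>> each switch with the standard NX-OS show commands:
>>>
>>> show vpc
>>> show vpc peer-keepalive
>>> show vpc consistency-parameters global
>>> show port-channel summary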
>>>
>>> Ahad
>>>
>>> On Mon, Nov 26, 2018 at 10:16 AM Radek Tkaczyk <radek at tkaczyk.id.au>
>>> wrote:
>>>
>>>> Hi Guys,
>>>>
>>>>
>>>>
>>>> So the feedback so far is that we should also link SW-13 and SW-14
>>>> directly – updated as below.
>>>>
>>>>
>>>>
>>>> The primary purpose for this design is to ensure redundancy across
>>>> switches and to provide approx. 50 x 10Gbps ports and 50 x 1Gbps ports –
>>>> this also leaves heaps of room for growth.
>>>>
>>>>
>>>>
>>>> Still looking for someone to go over the config with, so if you are
>>>> interested (paid gig) please let me know.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> Radek
>>>>
>>>>
>>>>
>>>> From: AusNOG [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Radek
>>>> Tkaczyk
>>>> Sent: Sunday, 25 November 2018 4:34 PM
>>>> To: Jacob Taylor <me at jacobtaylor.id.au>
>>>> Cc: <ausnog at lists.ausnog.net>
>>>> Subject: Re: [AusNOG] Assistance with Cisco vPC configuration on 4 x
>>>> Cisco Nexus 3000 switches
>>>>
>>>>
>>>>
>>>> Hi Jake,
>>>>
>>>>
>>>>
>>>> That's something I wanted to check: whether it was needed/recommended.
>>>>
>>>>
>>>>
>>>> Can certainly put it in if it will help achieve better performance and
>>>> redundancy.
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> Radek
>>>>
>>>>
>>>> On 25 Nov 2018, at 4:26 pm, Jacob Taylor <me at jacobtaylor.id.au> wrote:
>>>>
>>>> Hi Radek,
>>>>
>>>>
>>>>
>>>> Not personally familiar with vPC; more so with Arista MLAG and Juniper MC-AE.
>>>>
>>>>
>>>>
>>>> In the diagram there isn’t a peer link between 13 & 14 - is that a
>>>> mistake in the diagram or the actual design?
>>>>
>>>>
>>>>
>>>> If you intend to build 2x20G bonds to two standalone nexus switches,
>>>> that’ll work fine.
>>>>
>>>>
>>>>
>>>> If you are trying to achieve a 40G bowtie between the two pairs, I'm
>>>> fairly certain that won't work (unless Cisco has some special black magic
>>>> to transport signalling/MAC synchronisation over the bond itself).
>>>>
>>>>
>>>>
>>>> Cheers,
>>>>
>>>> Jake
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 25 Nov 2018, at 15:04, Radek Tkaczyk <radek at tkaczyk.id.au> wrote:
>>>>
>>>> Hi Guys,
>>>>
>>>>
>>>>
>>>> I have a need to configure vPC on 4 x Cisco Nexus 3000 switches at one
>>>> of our data centres – a design that we will replicate to other data centres
>>>> as well.
>>>>
>>>>
>>>>
>>>> I think I have the config down pat, but I’d like someone with another
>>>> pair of eyes to go over it with me to ensure it’s 100% correct.
>>>>
>>>>
>>>>
>>>> Is there anyone who can give me a hand with this configuration? Happy to
>>>> pay for someone's time to go over it to ensure we are doing this
>>>> correctly.
>>>>
>>>>
>>>>
>>>> The physical setup that I'm looking for:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <image001.png>
>>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> Radek
>>>>
>>>> _______________________________________________
>>>> AusNOG mailing list
>>>> AusNOG at lists.ausnog.net
>>>> http://lists.ausnog.net/mailman/listinfo/ausnog
>>>>
>>>
>>
