<div dir="ltr"><br><div>It looks like the outage was largely due to a max-prefix issue then (or lack thereof). And their change management processes don't seem to come into play (except perhaps during restoration?). Given that this was from prefixes received over an exchange, I'm curious to know why no-one else seems to have suffered as it's unlikely just 1 peer would be affected.</div><div><br></div><div>Something glaringly missing from the Senate submission is information about why the restoration took so long. 6 hours is an embarrasingly long time to fix what was essentially a max-prefix trip. I would really like to know more details about:</div><div><br></div><div>- OOB access</div><div>- Remote power / reboot capability</div><div>- Potential issues about comms between engineers and otherwise accessing a downed network - i bet it took a long time to contact some key engineers.</div><div><br></div><div>Again it looks like they explained what happened (max prefix trip and then engineers working + onsite for 6 hours to mitigate). But not why they feel 6 hours was an acceptable duration - the submission seems to imply 6 hours is a normal investigation time. This aspect really needs to be picked apart further.</div><div><br></div><div>Outages happen - it's a fact of life. But prevention only goes so far, you need to build and test robust mitigation strategies and incident management plans.</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 17 Nov 2023 at 13:36, Christopher O'Shea <<a href="mailto:casper.oshea@gmail.com">casper.oshea@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">I wouldn't be so quick to blame it on a single thing. We have all been there, An incident always comes down to many things not going the way you think. <br><br>Reading between the lines, I see that a peer's network creates larger than "normal" routes, and seeing they called out IPv6 in their submission to Senate [1]<br>Lack of filtering of v6 for that peer due to an oversight or misunderstanding of the template/group between v4 and v6.<br><br>Then, when it was shared with their PE routers (Which seem to be Cisco) On the ASK9K (Not sure what they use), the default limit of 524288 [2] for v6 could lead to the session's termination by default. <br><br>We should read these reports and understand if the same thing could happen to your network, what protection you have to stop this, and your device's default behaviour. <br><br>I would like to know more about their out-of-band and why it had issues. (Could it be that DNS broke, issue getting to internal documentation or was the password vault access broken, or the IP limit of the OOB device was too tight). 
<br><div><br clear="all"><div><div dir="ltr" class="gmail_signature">Chris O'Shea<br><br>[1] <a href="https://www.aph.gov.au/DocumentStore.ashx?id=2ed95079-023d-49d5-87fd-d9029740629b&subId=750333" target="_blank">https://www.aph.gov.au/DocumentStore.ashx?id=2ed95079-023d-49d5-87fd-d9029740629b&subId=750333</a> reports of the Optus outage<br>[2] <a href="https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/routing/command/reference/b-routing-cr-asr9000/bgp-commands.html#wp3192417938" target="_blank">https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/routing/command/reference/b-routing-cr-asr9000/bgp-commands.html#wp3192417938</a> </div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Nov 17, 2023 at 2:02 AM Tony Wicks <<a href="mailto:tony@wicks.co.nz" target="_blank">tony@wicks.co.nz</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">To be fair, Assuming there were config issues (i.e. the lack of maximum-prefixes and the lack of filtering preventing large route tables hitting devices that can not carry full tables) the behaviour of a network device when its RIB/FIB or memory is exceeded also significantly comes into play. Dropping BGP is fine, crashing the router so it requires a hard reset is another case entirely. In my experience (I have not used Cisco's in a telco environment for many years however) Cisco devices have been much more pre-disposed to crash catastrophically than over vendor devices like Nokia or Juniper.<br>
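For what it's worth, what the box does when a limit is hit is itself configurable on most platforms, so "dropping BGP" versus something messier is partly a choice you make up front. Roughly, on IOS XR-style kit (illustrative placeholder values only, not anyone's actual config):

router bgp 64496
 neighbor 2001:db8:1::1
  address-family ipv6 unicast
   ! tear the session down at the limit and auto-retry after 30 minutes
   maximum-prefix 1000 80 restart 30
   ! or just log a warning and leave the session up:
   ! maximum-prefix 1000 80 warning-only

Either way the failure mode is bounded. What you really want to avoid is the box trying to hold a table it can't and falling over.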
-----Original Message-----
From: AusNOG <ausnog-bounces@lists.ausnog.net> On Behalf Of DaZZa
Sent: Friday, November 17, 2023 2:38 PM
To: Andrew Oakeley <andrew@oakeley.com.au>
Cc: michael.bethune@australiaonline.au; Luke Thompson <luke.t@tncrew.com.au>; ausnog@lists.ausnog.net
Subject: Re: [AusNOG] Optus downtime chat + affecting SMS verification to Telstra?

What a load of crap.

The root cause was that they're morons and configured the routers incorrectly.

Cisco had nothing to do with it. I'll bet the routers behaved exactly as they were intended to behave.
_______________________________________________
AusNOG mailing list
AusNOG@lists.ausnog.net
https://lists.ausnog.net/mailman/listinfo/ausnog