<div dir="ltr"><div><div><div>Assuming the architects have done basic due diligence, how does one DDOS an HTTPS site exactly? (assuming the DOS was TCP and wasn't at a lower layer just exhausting bandwidth).<br><br></div>And if they're running on SoftLayer, did they really have no ability to scale out elastically?<br><br></div>Kind regards<br><br></div>Paul Wilkins<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 10 August 2016 at 14:28, Mark Delany <span dir="ltr"><<a href="mailto:g2x@juliet.emu.st" target="_blank">g2x@juliet.emu.st</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 10Aug16, James Braunegg allegedly wrote:<br>
> No need for Geo Blocking.. that's hard work<br>
<span class="">><br>
> Just only advertise the route locally within Australia i.e... to Optus, Telstra and on peering exchanges... Job done..<br>
<br>
</span>Nope. Job not done. This sort of single-bullet approach is probably<br>
why they failed.<br>
<br>
If you want scale and resiliency there are many many things you do to<br>
ensure success. For example how would an AU-only route announcement<br>
protect against a DDOS initiated here? Australians love their ancient<br>
Windows boxen so there are plenty of locally available bots for rent.<br>
<br>
It's hard to know where to even begin with the census site as they got<br>
it wrong in so many ways. It's obvious they never even did a mental<br>
walk thru of what-ifs.<br>
<br>
Based on HTTP responses with failure text, we can guess that they<br>
had a coupled system when a de-coupled one would have been more<br>
resilient. They relied on physical scaling which is obviously<br>
impossible to augment in any reasonable time frame. They did not do a<br>
trial run of anything to try and get a sense of the traffic profile so<br>
they were completely guessing. Why not get everyone to register a week<br>
beforehand to get a feel for the traffic and load? Their servers were<br>
centralized, which is an obvious no-no. Even their DNS setup was such<br>
that they couldn't swing traffic quickly if they had to.<br>
<br>
Their efforts at switching routing during the evening suggest that<br>
they thought it was some sort of traffic-based DOS but, as others<br>
observed, there is not a lot of evidence that that was actually the<br>
case. It looks like all they knew was that their service was failing<br>
and they were scrambling to deal with it. Did they do a practice run<br>
with an actual DDOS? Their 6h DNS TTL suggests not, as that's one of<br>
the first things you want to be able to change rapidly.<br>
<br>
I also saw no evidence of their ability to gracefully degrade. Either<br>
they were up or they were down. No ability to redistribute the<br>
traffic, nor to have the browser-based JS reach for an alternative<br>
site or for the site to do less work when it got too busy, such as<br>
dump and defer validation.<br>
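The dump-and-defer idea could look something like this minimal sketch (threshold, function names and the in-memory queue are all assumptions for illustration; a real deployment would persist to durable storage):

```python
import queue

# Hypothetical sketch of "dump and defer validation": when the site is
# too busy, skip inline validation, persist the raw submission, and
# validate it later, so the front end keeps accepting traffic.

MAX_INLINE_LOAD = 100      # assumed threshold (requests in flight)
deferred = queue.Queue()   # stand-in for a durable queue or store

def validate(form):
    # Placeholder check; real census validation would be far richer.
    return bool(form.get("name"))

def handle_submission(form, current_load):
    if current_load > MAX_INLINE_LOAD:
        deferred.put(form)          # degrade gracefully: store now...
        return "accepted-deferred"  # ...validate off-peak
    return "accepted" if validate(form) else "rejected"

def drain_deferred():
    # Off-peak worker: validate everything queued during the rush.
    results = []
    while not deferred.empty():
        results.append("accepted" if validate(deferred.get()) else "rejected")
    return results
```

Under light load a bad submission is rejected inline; under heavy load everything is accepted cheaply and sorted out later, which is exactly the kind of pressure valve the census site lacked.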
<br>
Their one bullet seems to have been provisioning twice as much<br>
front-end server capacity as they thought they'd need. A mere 2x<br>
margin for a completely new system with an unknown traffic profile?<br>
That's pretty scandalous for such a high-profile site right there.<br>
<span class="HOEnZb"><font color="#888888"><br>
<br>
Mark.<br>
</font></span><div class="HOEnZb"><div class="h5">______________________________<wbr>_________________<br>
AusNOG mailing list<br>
<a href="mailto:AusNOG@lists.ausnog.net">AusNOG@lists.ausnog.net</a><br>
<a href="http://lists.ausnog.net/mailman/listinfo/ausnog" rel="noreferrer" target="_blank">http://lists.ausnog.net/<wbr>mailman/listinfo/ausnog</a><br>
</div></div></blockquote></div><br></div>