[AusNOG] International traffic speeds

Glen Turner gdt at gdt.id.au
Tue Jul 23 23:19:10 EST 2013


On 23/07/2013, at 8:51 PM, Andrew Yager wrote:
> 
> Looking for a convincing way to get some concept of the "real" international speeds we are getting on our network that doesn't involve transit provider FUD.
> 
> Does anyone have any good suggestions of good ways to test such things in ways that have reliable speed and coverage? (Particularly interested in testing PIPE and Vocus in case anyone was wondering...)

If only it were that easy :-)

The undersea links are typically an order of magnitude faster than your machine's interface, so the idea that the average end-user can measure the real throughput of those links isn't realistic. (Nor is it helpful: I can assure you that the undersea links do in fact run at their designed rate. The useful measurement is of that link loaded up as part of the ISP's network.) Conversely, the cost per Mbps of undersea capacity is impressively high, and so it's very tempting for some ISPs to think "some congestion at traffic peaks won't hurt".

From an end-user's point of view the question is "is the ISP the bottleneck?" That is, when TCP decides it has reached the bandwidth-delay product, is that decision the result of the performance of the server, the links, or the receiver? If the result indicates the links, is that result acceptable (for example, most people will be satisfied if the result is the bitrate of the tail link to the tester's site)?
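As a back-of-the-envelope check, the window TCP is actually sustaining can be inferred from the achieved throughput and the round-trip time. A sketch, with assumed figures chosen purely for illustration:

```python
# Illustrative arithmetic only: at steady state, TCP's effective
# window (in bytes) is roughly achieved throughput * RTT.
def effective_window_bytes(throughput_bps: float, rtt_s: float) -> float:
    """Window TCP must sustain to achieve throughput_bps over rtt_s."""
    return throughput_bps * rtt_s / 8

# Assumed figures: 200 Mbps achieved across a ~130 ms Australia-US path.
window = effective_window_bytes(200e6, 0.130)
print(f"effective window ~ {window / 1e6:.2f} MB")
```

If that inferred window sits at the sender's or receiver's configured buffer limit, the end systems are the bottleneck, not the links.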

That TCP behaviour is easy enough to determine if you can test from both ends against a server which instruments TCP's actions. Google for Web100, NDT and NPAD. These aren't simple to set up (kernel patching, etc). Google make these tools available via M-Lab, but geolocation steers you to a nearby server, which means you are not testing the undersea link. If you explicitly choose a distant server then you may be testing the performance of the ISP hosting the M-Lab server, not of the ISP you wish to test. If you're serious, the only choice is to arrange hosting of your own server.

Note carefully that these sorts of tests can very rapidly turn into debugging performance issues on the end systems. For example, Windows and Linux under-provision TCP buffering -- both default to 4MB, which is fine within the US but needs to be 16MB for Australia-to-US traffic at 1Gbps. So it's pointless to test without increasing that first. And just how much under- or over-buffering does your local network have (you'll have seen a lot of fingers pointed at ADSL routers making poor AQM choices)?
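Those 4MB and 16MB figures fall straight out of the bandwidth-delay product. A minimal sketch, assuming roughly 32 ms for a US domestic path and 128 ms for an Australia-US path (round numbers, not measurements):

```python
# Bandwidth-delay product: the buffer TCP needs to keep the pipe full.
def required_buffer_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes of TCP buffer needed to fill bandwidth_bps over rtt_s."""
    return bandwidth_bps * rtt_s / 8

# Assumed RTTs, for illustration only.
us_path = required_buffer_bytes(1e9, 0.032)  # ~32 ms domestic US
au_path = required_buffer_bytes(1e9, 0.128)  # ~128 ms Australia-US
print(f"US path needs ~{us_path / 1e6:.0f} MB")  # ~4 MB
print(f"AU path needs ~{au_path / 1e6:.0f} MB")  # ~16 MB
```

The same 4MB default that comfortably fills a 1Gbps US path caps an Australian path at roughly a quarter of the link rate.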

And then there are middleboxes -- routers are lovingly designed, but most firewalls are just expensive software running on inadequately few, inadequately fast CPUs with inadequately wide buses to the interfaces. For most high-speed networks the site firewalls are the throughput bottleneck, not the link speeds.

Even after that sort of testing there are still additional performance parameters. For example, a lot of ISPs don't deploy jumbo frames, and that can make a huge difference to real-world performance (one sure way to make a network slower is to use an MTU which requires every 4KB disk I/O block to be cut into three packets).
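The "cut into three" arithmetic can be checked directly. A sketch, assuming 40 bytes of IPv4+TCP header per packet (options would add more):

```python
import math

def packets_per_block(block_bytes: int, mtu: int, header: int = 40) -> int:
    """Packets needed to carry one disk block, assuming 'header' bytes
    of IPv4+TCP overhead per packet (an assumption; options add more)."""
    payload = mtu - header
    return math.ceil(block_bytes / payload)

print(packets_per_block(4096, 1500))  # 3 packets at the standard MTU
print(packets_per_block(4096, 9000))  # 1 packet with jumbo frames
```

At a 1500-byte MTU each 4KB block costs three packets (and three headers, and three interrupts at each hop's worst case); at a 9000-byte jumbo MTU it costs one.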

In short, acceptance testing of network performance isn't straightforward. I'd recommend using artificial tests to check baseline performance, and then using instrumented replays of your production services for the acceptance test.

I don't work in the world of commercial providers, so I'm not going to make a recommendation. But good engineering is good engineering wherever you find it, and indications and contraindications of that are what I'd be looking for in discussions with vendors.

-glen