[AusNOG] IPv4 Exhaustion date changed to December.
Geoff Huston
gih at apnic.net
Tue Jun 22 17:29:56 EST 2010
On 22/06/2010, at 4:10 PM, Mark Andrews wrote:
>
> In message <5BC5686B-1897-4935-A49A-BECFFCDE2E7D at apnic.net>, Geoff Huston writes:
>>
>> On 22/06/2010, at 9:37 AM, Mark Andrews wrote:
>>>>>
>>>>
>>>> I disagree. With the added latency, overhead and resulting poor speed, I
>>>> personally think tunnels are only good for testing and a "hey I am on the
>>>> v6 net!". I think a poor experience with tunnels would taint the view of
>>>> IPv6 from an end user point of view.
>>>
>>> Hogwash. NAT vs encapsulation is about equal cost in the CPE device.
>>> You only have to size/distribute the tunnel endpoints on the ISP's
>>> side. The extra 20 bytes per packet is nothing.
>>
>>
>> In theory this is true. In practice the world is not quite so
>> straightforward.
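(On the raw numbers Mark quotes, the per-packet cost really is small - a quick back-of-envelope, purely illustrative: for a full-length packet the 20-byte protocol-41 outer header is around 1.3% of the wire size, though it also pulls the usable IPv6 MTU down to 1480 on a 1500-byte link. Which suggests that where tunnels go slow, the time is going into path stretch rather than the header itself.)

# Back-of-envelope cost of the 20-byte 6in4 (protocol 41) outer IPv4 header.
# Assumes no IPv4 options; figures are illustrative only.
OUTER_IPV4_HEADER = 20  # bytes added to every tunnelled packet

for inner in (64, 576, 1480):          # small, medium and full-size inner packets
    wire = inner + OUTER_IPV4_HEADER
    print(f"{inner:4d}B inner -> {wire}B on the wire "
          f"({OUTER_IPV4_HEADER / wire:.1%} overhead)")

print("usable IPv6 MTU on a 1500B link:", 1500 - OUTER_IPV4_HEADER)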
>>
>> We've added fine-grained timers to the web server and have started
>> timing the delivery of extremely small gif images, comparing IPv4 and
>> IPv6 delivery times. The results are interesting: while the time to
>> deliver the IPv6 object has been comparable to IPv4 when the client has
>> a conventional unicast IPv6 address, when the client's source address
>> identifies it as a 6to4 or Teredo client, the server-side measurement
>> shows the tunnelled IPv6 delivery times as being, on average, slower.
>>
>> http://www.potaroo.net/stats/1x1/6uv6typesdiff.png
>
> And where was the 6to4/Teredo traffic being relayed?
For the 6to4 I can't readily tell - it's something the client knows but the server cannot tell. I have not done any diving into the Teredo relays in any detail.
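(The tunnel type, on the other hand, is trivially visible in the source address itself, which is all the classification behind that graph needs. Something along these lines - a sketch of the idea only, not the actual measurement code:)

import ipaddress

# Classify a client by its IPv6 source address: the well-known prefixes give
# away the tunnel type, though not which relay the packets went through.
SIX_TO_FOUR = ipaddress.ip_network("2002::/16")  # 6to4, RFC 3056
TEREDO      = ipaddress.ip_network("2001::/32")  # Teredo, RFC 4380

def classify(src: str) -> str:
    addr = ipaddress.ip_address(src)
    if addr.version == 4:
        return "v4"
    if addr in SIX_TO_FOUR:
        return "6to4"
    if addr in TEREDO:
        return "teredo"
    return "v6 unicast"

# e.g. tag each fetch of the 1x1 gif with (classification, elapsed time)
print(classify("2002:c000:2aa::1"))  # 6to4   (192.0.2.170 embedded)
print(classify("2001::1"))           # teredo (Teredo prefix)
print(classify("2001:db8::1"))       # v6 unicast (documentation prefix)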
> Did you see
> any significant difference in traffic from free.fr where the endpoints
> of the tunnels are topologically close to each other as they use 6rd?
The 6rd approach is somewhat different to 6to4, and from the server it looks like unicast and behaves a lot like unicast (as the tunnel spans the ISP's infrastructure, but no further). So to the extent that the Asia Pacific region sees 6rd traffic, it is part of the V6 unicast measurement, which is as fast as V4.
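(For anyone who hasn't looked at 6rd: the customer's delegated prefix is built from the ISP's own IPv6 prefix plus bits of the customer's IPv4 address, per RFC 5969, so both tunnel endpoints sit inside the ISP and the traffic emerges as ordinary unicast. A rough sketch of the mapping, using made-up documentation parameters rather than free.fr's real ones:)

import ipaddress

# 6rd (RFC 5969): delegated prefix = ISP 6rd prefix + customer IPv4 bits.
# The prefix and mask length here are documentation examples only.
SIXRD_PREFIX  = ipaddress.ip_network("2001:db8::/32")  # ISP's 6rd prefix
IPV4_MASK_LEN = 0  # leading IPv4 bits common to all customers and omitted

def sixrd_delegated_prefix(customer_v4: str) -> ipaddress.IPv6Network:
    v4 = int(ipaddress.ip_address(customer_v4))
    v4_bits = 32 - IPV4_MASK_LEN
    keep = v4 & ((1 << v4_bits) - 1)            # IPv4 bits carried into the prefix
    plen = SIXRD_PREFIX.prefixlen + v4_bits     # resulting delegated prefix length
    base = int(SIXRD_PREFIX.network_address)
    return ipaddress.IPv6Network((base | (keep << (128 - plen)), plen))

print(sixrd_delegated_prefix("192.0.2.170"))  # -> 2001:db8:c000:2aa::/64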
>
>>>> Really I think native transit is the best way to bring it out to the
>>>> consumer. Someone like Internode (or another large ISP) providing CPEs
>>>> with native IPv6 enabled by default (on the WAN and LAN) (correct me if I
>>>> am wrong and this is already happening) would be an excellent way to
>>>> improve IPv6 adoption, probably without the end user even knowing it!
>>>
>>> Native is better if only because there is likely to be less configuration.
>>
>> Or less opportunity to get the 6to4 tunnel relay addresses confused.
>>
>> The relatively longer delivery times for 6to4 clients are interesting
>> because the reverse path from the server to the client is V4 all the way,
>> as the server has a 6to4 tunnel interface and routes 2002::/16 down that
>> tunnel interface. So the longer time to deliver an object to a 6to4 client
>> may well be because they are using a remote 6to4 relay server and the path
>> of client -> relay server -> V6 web server is far longer than the direct
>> client -> web server path using V4.
>
> Yep.
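(The reason no relay appears on the way back: a 6to4 address carries the client's public IPv4 address in bits 16-47, so the server's tunnel interface can encapsulate straight to that endpoint. A toy illustration, nothing more:)

import ipaddress

# A 6to4 address is 2002:VVVV:VVVV::/48 where VVVVVVVV is the client's public
# IPv4 address (RFC 3056), so the V4 tunnel endpoint is derivable directly
# from the V6 source address - no relay needed on the return path.
def embedded_v4(sixtofour_addr: str) -> ipaddress.IPv4Address:
    a = int(ipaddress.IPv6Address(sixtofour_addr))
    assert (a >> 112) == 0x2002, "not a 6to4 address"
    return ipaddress.IPv4Address((a >> 80) & 0xFFFFFFFF)

print(embedded_v4("2002:c000:2aa::1"))  # -> 192.0.2.170 (documentation address)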
And the sooner we all get off tunnels the better, but right now 3.5% of the clients who head to the APNIC web sites use 6to4 when they use V6, so that is still a large enough proportion of the client base to say to the ISPs: it really makes commercial sense to put up a 6to4 relay and terminate your customers' 6to4 traffic and send it onward and outward as native V6. Your customers will like you, and everyone else who puts up V6 services will like you too. By all means, do native IPv6 as well, but provide a local tunnel server for 6to4.
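(For what it's worth, the customer side needs nothing new for this: a 6to4 CPE derives its prefix mechanically from its public IPv4 address and sends outbound tunnelled traffic to the well-known anycast relay address 192.88.99.1 (RFC 3068), so a relay inside the ISP simply makes that hop short. A hypothetical sketch of the derivation:)

import ipaddress

# Each customer's 6to4 prefix is fixed by its public IPv4 address:
# 2002:<IPv4 in hex>::/48 (RFC 3056). Outbound traffic heads for the anycast
# relay 192.88.99.1 (RFC 3068); the closer the relay, the shorter the detour.
def sixtofour_prefix(public_v4: str) -> ipaddress.IPv6Network:
    v4 = int(ipaddress.IPv4Address(public_v4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(sixtofour_prefix("192.0.2.170"))  # -> 2002:c000:2aa::/48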
Geoff