[AusNOG] So who's read an RFC or Internet Draft?

Bevan Slattery bevan at slattery.net.au
Sun Oct 11 15:18:09 EST 2015


Keep it up.  It's excellent.  Great for the Ausnog Archives too.

[b]

> On 11 Oct 2015, at 11:59 AM, Geoff Huston <gih at apnic.net> wrote:
> 
> I have no idea if this is interesting or not for others - in some sense it's 
> just re-telling old history, but if you have an interest in the 
> design of networking protocols you may find this learned experience useful.
> 
> 
>> On 10 Oct 2015, at 1:22 PM, Mark Smith <markzzzsmith at gmail.com> wrote:
>> 
>> Hi Geoff,
>> 
>> On 7 October 2015 at 06:42, Geoff Huston <gih at apnic.net> wrote:
>>>> "Addresses
>>>> 
>>>>  Addresses are variable length strings of 4 bit chunks prefixed by a
>>>>  length.  As address chunks are processed they are removed from their
>>>>  position at the head of the address chunk string and placed at the
>>>>  end of the string.  This chunk by chunk circular shifting of the
>>>>  address allows each node in the hop by hop processing of a message
>>>>  to examine the part of the address it consumes with out knowing how
>>>>  much address preceeds or follows that part."
>>>> 
>>>> It is also interesting as there was no source address in the
>>>> "internet protocol message"!
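>>>> 
>>>> The chunk-by-chunk circular shifting described in that quoted text can
>>>> be sketched in Python (a hypothetical illustration; the chunk values
>>>> and per-node behaviour are assumptions, not taken from the draft):
>>>> 

```python
def consume_chunk(address):
    """Process one hop of a chunked, variable-length address.

    Per the quoted draft text: the chunk at the head of the address
    string is examined by this node, then rotated to the end, so no
    node needs to know how much address precedes or follows the part
    it consumes.
    """
    head = address[0]                      # the 4-bit chunk this node uses
    rotated = address[1:] + [address[0]]   # circular shift by one chunk
    return head, rotated

# Hypothetical 3-chunk address; each chunk is a 4-bit value (0-15).
addr = [0x3, 0xA, 0x1]
chunk, addr = consume_chunk(addr)  # first hop consumes 0x3
print(chunk, addr)                 # 3 [10, 1, 3]
```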
>>> 
>>> 
>>> 
>>> Yes, at one point IP played with the concept of variable length addresses,
>>> which also appeared later in the OSI NSAP address structure, as I recall.
>> 
>> Yes, OSI NSAPs are variable length. Going by what Christian Huitema
>> said in his "IPv6: The New Internet Protocol" book, one of the reasons
>> for fixed length IPv6 addresses was that even though OSI NSAPs could
>> be variable length, people opted for the simplicity of using the same
>> OSI NSAP length anyway. I think most humans tend towards simplicity if
>> they can (a.k.a. laziness).
> 
> I recall it as one of the big debate topics at the time in the IETF. The outcome
> was the argument that every variable-length system has a maximum length it has to cope with,
> and every host has to cope with a maximal-length address. So all that variable-length
> addresses save is bits on the wire, not bits in the hosts. Given that there was no great
> desire to pursue header compression at the time, the outcome in the IPv6 design
> was to eschew variable-length addresses and go big.
> 
> (and then we got sucked in by what turned out to be a specious 8+8 approach 
> which never worked, so we chopped off the lower 64 bits anyway! Sod’s Law
> I suppose, that now that IPv6 carries 128 bits everywhere but only 64 of them 
> are significant.)
> 
>> 
>>> There is a story here I was told by Jon about the involvement of a gentleman from
>>> Digital who was adamant that Digital’s equipment could not process
>>> variable length addresses at wire speed, and he pushed hard for fixed
>>> boundaries in the packet. The compromise was 32-bit fixed-size addresses.
>>> 
>>> (The same argument resurfaced in the IPv6 chained extension header structure.
>>> What goes around comes around!)
>> 
>> (following for the general AusNOG audience, I'm sure I don't need to
>> tell you how to suck eggs :-) )
>> 
>> I think one of the fundamental questions to ask or consider is where
>> the specific packet fields are intended to be processed - in the
>> network somewhere, or at the ends/hosts?
>> 
>> I think the Internet is scaled by pushing complexity to or towards the
>> edge i.e., into the hosts. I think that is a form of solving a problem
>> using the "divide-and-conquer" approach, or horizontal rather than
>> vertical scaling. Pushing complexity to the edge means processing in
>> the network is and should be kept as simple as possible. I think that
>> is then the argument for simpler fixed length fields, such as
>> addresses, rather than variable length ones in fields that are to be
>> processed by the network. Vertical scaling (e.g., buying a bigger
>> router or link), if there is no choice to scale horizontally, is
>> either easier, or only actually achievable, if the thing
>> being vertically scaled is simple rather than complex.
>> 
>> Only one of the IPv6 EHs is intended to be processed in the network -
>> the Hop-by-Hop EH - and that is why it is required to be directly
>> after the IPv6 header. The rest are to be processed by the end or host
>> destination of the packet, which means that other than the HBH EH, all
>> the other EHs are "end" fields rather than "network" fields. EH
>> processing in the network at high speed may be costly, hard and/or
>> impossible. Even HBHs are in some cases being punted to software
>> processing in the control plane of routers, and that can make the
>> control plane vulnerable to a denial of service attack, so in some
>> cases packets with HBHs are being intentionally dropped rather than
>> punted to software by some networks.
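>> 
>> The chained EH structure just described can be sketched as follows (a
>> hypothetical Python illustration, not any real stack's parser; the
>> sample packet bytes are assumptions):
>> 

```python
# Next Header protocol numbers below are the real IANA values;
# everything else (packet bytes, helper names) is illustrative.
EXT_HEADERS = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment", 60: "Dest Opts"}

def walk_chain(first_next_header, payload):
    """Follow the Next Header chain until an upper-layer header appears.

    Each extension header starts with a Next Header byte and (except the
    fixed 8-byte Fragment header) a length byte counting 8-octet units
    beyond the first 8 octets.  This sequential chaining is why mid-path
    introspection costs cycles: nothing tells a device where the
    transport header is without parsing every EH in front of it.
    """
    nh, offset, seen = first_next_header, 0, []
    while nh in EXT_HEADERS:
        seen.append(EXT_HEADERS[nh])
        next_nh = payload[offset]                        # Next Header field
        ext_len = 8 if nh == 44 else (payload[offset + 1] + 1) * 8
        nh, offset = next_nh, offset + ext_len
    return seen, nh  # EHs traversed, upper-layer protocol number

# A packet whose first Next Header is Hop-by-Hop (0), with a single
# 8-byte HBH EH (NH=6 for TCP, length=0, padded with a PadN option):
hbh = bytes([6, 0, 1, 4, 0, 0, 0, 0])
print(walk_chain(0, hbh))  # (['Hop-by-Hop'], 6)
```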
> 
> 
> That’s all good computer science, no doubt. These days all network gear
> looks deeply into every packet and pulls out whatever hints about flow state
> it can. And because it’s a chained structure, this introspection costs cycles.
> 
> https://www.nanog.org/sites/default/files/monday_general_freedman_flow.pdf
> 
> So while the protocol designers _thought_ that the network switch would
> never peek into end-to-end signalling information, they were just plain wrong.
> Today’s networks glom up data however and wherever they can, and they perform
> packet introspection well down into TCP and UDP protocol headers, and at times
> further into the application protocol.
> 
> 
> 
>> 
>>> At the _IP layer_ who needs a source? I’ve forgotten who pushed for the
>>> source to be added to the IP packet, if I ever knew, but in IPv4 the model
>>> was that IP information flowed forward, and nothing was meant to head
>>> backward, so the source address was unnecessary.
>>> 
>>> (You could argue that IPv6 PTB treatment is a basic violation of this
>>> ‘forward flow’ principle, and you could well be correct!)
>> 
>> I think for pure destination based delivery, certainly you're right.
>> Having source addresses in packets certainly helps with
>> troubleshooting though!
>> 
>> It is a while since I looked into it, however it is my understanding
>> that the IEEE dropped the carrying of source addresses in 802.11 frames
>> (i.e., IIRC, those under the 802.3 frames) because technically they're
>> not needed, as all traffic is (or was) between a station and an
>> access point, so each receiving device inherently knows what device
>> sent the frame, because there is no ambiguity - it is a point-to-point
>> link. However, since then the IEEE have had to define a "4 address
>> format" mechanism, which puts source addresses back in frames, so that
>> things such as wifi bridges could be built. Dropping the source
>> addresses may have saved half a dozen bytes in a frame, but it has now
>> created compatibility issues.
>> 
>> So it is probably better to always include a source address in frames
>> or packets regardless, for troubleshooting and other possible but
>> unforeseeable future reasons.
> 
> 
> heh heh
> 
> protocol design is full of compromises when you are dealing with intangibles
> to compromise on. As you say, source addresses in IP “help”. They sure “help” 
> with NATs, and once the source address was out in the header, it was used in ways nobody ever
> imagined at the time. Today’s Internet crucially depends on having source IP
> addresses in the IP header, even though if you jumped back into the time machine
> and popped out in the early 1980s, you’d be very hard pressed to justify why
> the additional 32 bits of IP header was ‘useful’ at the IP level!
> 
> 
> (I’m happy to continue this conversation on the list, but if you find it off topic
> to ausnog just send me (or Mark) personal mail and we’ll continue it privately!)
> 
> 
> cheers,
> 
>   Geoff
> 
> 
> _______________________________________________
> AusNOG mailing list
> AusNOG at lists.ausnog.net
> http://lists.ausnog.net/mailman/listinfo/ausnog
