[AusNOG] Tech Q: Looking for Cheapest 10G Switches for Hobby Lan.
Andrew Oskam
percy at th3interw3bs.net
Thu Jul 22 00:58:06 EST 2010
Wow that's some serious gear lol.
Sent from my iPhone
-------------
Andrew Oskam
On 20/07/2010, at 10:47 AM, "Sean K. Finn" <sean.finn at ozservers.com.au> wrote:
> Network map of competing LAN currently running 10Gb Switching for those that are interested:
>
> http://mattbermingham.com/images/Network/Network.jpg
>
> S
>
> -----Original Message-----
> From: Mark Smith [mailto:nanog at 85d5b20a518b8f6864949bd940457dc124746ddc.nosense.org]
> Sent: Monday, 19 July 2010 8:09 PM
> To: Sean K. Finn
> Cc: 'Narelle'; 'ausnog at lists.ausnog.net'
> Subject: Re: [AusNOG] Tech Q: Looking for Cheapest 10G Switches for Hobby Lan.
>
> On Mon, 19 Jul 2010 11:30:54 +1000
> "Sean K. Finn" <sean.finn at ozservers.com.au> wrote:
>
>> The other thing to be aware of here:
>>
>> Jumbo frames increase latency, obviously.
>>
>> Were you referring to L2 latency or L3 ?
>>
>
> Layer 3 I suppose, but I'm not really sure it matters. Since it was a
> measure of the time between when the packet entered the host, was
> processed, and was sent back out, and the processing was functionally
> the same, the significant point is that the 450 MHz 32-bit P3
> with a PCI 100 Mbps NIC was consistently faster than the 1.6 GHz
> (clocked down from 2.4 GHz) 64-bit Q6600 with a PCIe 1 Gbps NIC at
> performing the same processing. I'll admit it wasn't very scientific,
> because my goal wasn't specifically to measure the differences - it was
> to test the code out. The size of the difference was surprising though,
> which is why it stood out. Up until that point I'd have assumed there
> was nothing a machine built in 1998, with a network card of a similar
> vintage, could do faster than a machine from 2008 running the same
> code. Then again, even cheap (~$70) yet decent-quality 1 Gbps cards do
> far more processing onboard, such as TCP Segmentation Offload and
> 802.1Q VLAN tag stripping, and that all adds processing latency, even
> when the frames don't carry those protocols.
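[A rough way to reproduce this kind of host-processing comparison at the IP layer, without the Ethernet-level tooling discussed later in the thread, is a loopback UDP echo. This is only a sketch: it measures stack plus scheduling latency on one box, not wire-side NIC latency, and the packet size and sample count are arbitrary choices.]

```python
# Crude analogue of the experiment described above: time how long a small
# UDP datagram takes to be received, processed, and echoed back.
# Loopback only, so no NIC/driver latency is included - real per-NIC
# comparisons need wire-side timestamps (e.g. from tcpdump).
import socket
import threading
import time

def echo_server(sock):
    # Echo each datagram straight back to its sender; b"quit" stops us.
    while True:
        data, addr = sock.recvfrom(2048)
        if data == b"quit":
            break
        sock.sendto(data, addr)

def measure_rtt_us(addr, samples=200):
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter()
        client.sendto(b"x" * 64, addr)      # 64-byte payload, arbitrary
        client.recvfrom(2048)
        rtts.append(time.perf_counter() - t0)
    client.sendto(b"quit", addr)
    client.close()
    # Report the minimum: it is the sample least affected by scheduler noise.
    return min(rtts) * 1e6

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))               # ephemeral port on loopback
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

min_rtt_us = measure_rtt_us(server.getsockname())
print(f"min loopback RTT: {min_rtt_us:.1f} us")
```

[Taking the minimum over many samples is the usual trick for latency microbenchmarks; means get dragged around by cache misses and context switches, which is exactly the effect described further down.]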
>
>
>
>> S
>>
>>
>>
>> -----Original Message-----
>> From: ausnog-bounces at lists.ausnog.net [mailto:ausnog-bounces at lists.ausnog.net] On Behalf Of Mark Smith
>> Sent: Monday, 19 July 2010 8:16 AM
>> To: Narelle
>> Cc: ausnog at lists.ausnog.net
>> Subject: Re: [AusNOG] Tech Q: Looking for Cheapest 10G Switches for Hobby Lan.
>>
>> On Sun, 18 Jul 2010 19:50:22 +1000
>> Narelle <narellec at gmail.com> wrote:
>>
>>> On Sat, Jul 17, 2010 at 10:20 AM, Mark Smith
>>> <nanog at 85d5b20a518b8f6864949bd940457dc124746ddc.nosense.org> wrote:
>>>> Want to make their head explode? In some experimentation a while back, I
>>>> measured latency of packet processing of an old Netgear FA312 100Mbps
>>>> NIC in a 450 MHz P3, versus an Intel 1 Gbps PCIe NIC in a Q6600 quad core. The
>>>> Netgear had lower latency, and IIRC, significantly lower latency. Of
>>>> course, all those measurements were in microseconds, and therefore
>>>> probably irrelevant to actual gaming, but I don't think they actually
>>>> care. The length of the cable was measurable too at that scale. (If you
>>>> get pictures of a fight breaking out at a LAN party over old 100Mbps
>>>> NICs, and who's sitting closest to the switch, send them to me
>>>> please :-) )
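[A back-of-envelope check on "the length of the cable was measurable too": signal in twisted-pair copper propagates at roughly two-thirds the speed of light. The velocity factor below is an assumption; it varies by cable type.]

```python
# One-way propagation delay for a copper cable run.
C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66   # typical-ish for Cat5e/Cat6 twisted pair (assumed)

def cable_delay_ns(length_m):
    """One-way propagation delay in nanoseconds for length_m of cable."""
    return length_m / (VELOCITY_FACTOR * C) * 1e9

for metres in (2, 10, 50, 100):
    print(f"{metres:>3} m -> {cable_delay_ns(metres):6.1f} ns")
```

[So a 2 m patch lead versus a 100 m run differs by about half a microsecond each way, which is indeed visible when your NIC-to-NIC latencies are in the 4-8 microsecond range.]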
>>>
>>> How did you measure this?
>>>
>>
>> You could probably do it using conventional IP pings; I was doing it
>> while testing an implementation of the Ethernet v2.0 Configuration
>> Testing Protocol I wrote for the Linux kernel, which basically provides
>> an Ethernet-layer ping, so it didn't have the overheads of IP packet
>> processing or firewalling. I ran a tcpdump on the interface in question,
>> and then compared the in and out timestamps. I think it was also
>> measuring the effects of the CPU instruction cache. IIRC, with a fast
>> packet rate, the first in/out difference was around 40 microseconds,
>> then would drop to between 4 and 8 microseconds. My guess at the time
>> was that the fast packet rate was keeping the protocol code in the
>> instruction cache. I didn't investigate further. Measuring in/out time
>> differences at the source interface, subtracting in/out latency on the
>> destination interface, indicated cable and/or switch latency.
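[The timestamp arithmetic described above amounts to diffing tcpdump's per-packet timestamps for an outbound frame and its matching reply on the same interface. A minimal sketch, with made-up capture lines; real tcpdump output varies by version and by the protocol captured.]

```python
# Diff two tcpdump timestamps to get the in/out latency at one interface.
# The MAC addresses and frame text below are illustrative, not real capture data.
from datetime import datetime, timedelta

def parse_ts(line):
    """Pull the leading HH:MM:SS.ffffff timestamp off a tcpdump line."""
    return datetime.strptime(line.split()[0], "%H:%M:%S.%f")

out_line = "10:15:01.000123 00:11:22:33:44:55 > 66:77:88:99:aa:bb, loopback ..."
in_line  = "10:15:01.000131 66:77:88:99:aa:bb > 00:11:22:33:44:55, loopback ..."

delta = parse_ts(in_line) - parse_ts(out_line)
print(f"in/out difference: {delta / timedelta(microseconds=1):.0f} us")
# -> in/out difference: 8 us
```

[As the thread notes, repeating this at the far-end interface and subtracting its in/out latency leaves the cable and switch contribution.]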
>>
>>> In all of this discussion (great btw) the actual software architecture
>>> hasn't really been mentioned. IME that's usually where all the
>>> bottlenecks come from, rarely the LAN...
>>>
>>>
>>> --
>>>
>>>
>>> Narelle
>>> narellec at gmail.com
>> _______________________________________________
>> AusNOG mailing list
>> AusNOG at lists.ausnog.net
>> http://lists.ausnog.net/mailman/listinfo/ausnog