<div dir="ltr">That's a good point - I am using conntrack, as I'm also running a couple of VRF's with NAT, as well as doing policy routing for sending traffic from a couple of legacy subnets out to the provider they belong to (due to ingress filtering on that provider's border)<div>

I just re-tested with iperf in UDP mode, and I can happily push 960Mbit/s between VLANs on the bond, which pushes one CPU core to 91%. I wonder if it might be worthwhile moving those VRFs to a router in a VM and doing away with conntrack... Hmm! Thanks for the insight :)
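
(For reference, the test was along these lines - iperf2 syntax, the address is invented; -b sets the offered UDP rate so the result isn't limited by TCP windowing:)

    iperf -s -u                             # receiver, on one VLAN
    iperf -c 10.20.30.40 -u -b 1000M -t 30  # sender, on the other VLAN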
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On 15 August 2014 11:38, Alexander Neilson <span dir="ltr"><<a href="mailto:alexander@neilson.net.nz" target="_blank">alexander@neilson.net.nz</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I suspect that some of this could be also the firewall rule changes etc on the boxes.<br>
<br>
Not sure what exactly they fixed but I found that traffic going through the devices seems to be processed by conn track or at least it went through some of the conn track processing before being handled even when conn track was turned off (this was the bug I reported)<br>
<br>
I could easily understand that something might be affecting TCP at this point and would let UDP go fairly well. But I guess we will see on release.<br>
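
(For anyone wanting to reproduce that: connection tracking on RouterOS is toggled roughly like this - a sketch, and the exact option names vary between versions:)

    /ip firewall connection tracking set enabled=no
    /ip firewall connection tracking print    # verify it actually shows "no"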

Regards
Alexander

Alexander Neilson
Neilson Productions Limited

alexander@neilson.net.nz
021 329 681
022 456 2326
<div class="HOEnZb"><div class="h5"><br>
On 15/08/2014, at 1:26 pm, Matt Perkins <<a href="mailto:matt@spectrum.com.au">matt@spectrum.com.au</a>> wrote:<br>

> I would suspect it would go faster than 620Mb/s, but perhaps TCP with its overhead does that badly. Was your test traffic UDP or TCP, and what sort of packet sizes? I have a CCR with 5 bonded interfaces; in a back-to-back test between the CCR and a Linux box using btest with UDP (didn't test TCP) it was well over 4G.
>
> Matt.
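
(The RouterOS end of a test like that looks roughly like this - a sketch, the btest server address is invented; UDP packet size matters a lot when forwarding is PPS-bound:)

    /tool bandwidth-test address=10.0.0.2 protocol=udp direction=both duration=30s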
>
> On 15/08/2014 11:17 am, Damien Gardner Jnr wrote:
>> I don't believe so - I'm running 2x 1Gbps ports in a bond to my switch, trying to pass traffic between two VLANs on that bond. When testing, I make sure to pick machines with the right MAC address combos so traffic goes in one interface of the bond and out the other, but it just won't push much past 620Mbps. While iperf is running, I have two CPU cores at 99%. If I pass the traffic between two hosts on the same VLAN, it happily hits the full 1Gbps.
>>
>> From what I could see in the docs, you can't use fastpath on bonded interfaces? I haven't put a HUGE amount of effort into it, as I don't *need* the bandwidth (usual inter-VLAN traffic is more in the tens of Mbps); it was just an irritation, as I would have thought a router with a 10G interface could handle a full 1G through its 1G interfaces ;) Considering the power saving compared to the DL360 G3 it replaced, I can live with the small bandwidth hit :)
>>
>> --DG
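
(Context for the MAC-combo picking above: with a layer-2 transmit hash, the bond chooses the outgoing slave from the MAC pair alone, so some src/dst pairs always land on the same link. On a Linux bond the equivalent knob is the following - a sketch, bond0 assumed:)

    # Default layer2 policy: slave = (last byte of src MAC XOR last byte
    # of dst MAC) modulo slave count - hence only the "right" MAC combos
    # spread flows across both links.
    echo layer2 > /sys/class/net/bond0/bonding/xmit_hash_policy
    cat /proc/net/bonding/bond0   # shows mode and current hash policy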

-- 
Damien Gardner Jnr
VK2TDG. Dip EE. GradIEAust
rendrag@rendrag.net - http://www.rendrag.net/
--
We rode on the winds of the rising storm,
 We ran to the sounds of thunder.
We danced among the lightning bolts,
 and tore the world asunder