<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On 20 September 2015 at 10:50, Roland Dobbins <span dir="ltr"><<a href="mailto:rdobbins@arbor.net" target="_blank">rdobbins@arbor.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">On 20 Sep 2015, at 7:49, Matt Palmer wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
I also have a vague recollection that Google's experimental QUIC protocol may legitimately use UDP 80, for much the same reason.<br>
</blockquote>
<br></span>
Yes - this is madness on Google's part, IMHO, and several folks intend to bring it up at the next IETF meeting in Yokohama, FWIW.<br>
What are the issues with it? I can see it's reinventing a lot of TCP (e.g.
reliability), but QUIC does solve head-of-line blocking within TCP. For
protocols like h2, where multiple requests/responses are multiplexed onto the
one network connection, blocking delivery of the other requests because of a
single lost segment isn't ideal.
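To make the head-of-line point concrete, here's a toy sketch (none of this is
real TCP or QUIC code; the stream names, payloads and the "lost" segment are
all invented). It shows why one lost packet stalls every multiplexed stream
when delivery is over a single ordered byte stream, but only the affected
stream when ordering is kept per stream:

# Each segment: (connection_seq, stream_id, part_within_stream, payload)
SEGMENTS = [
    (1, "A", 1, "first half of /a"),
    (2, "B", 1, "first half of /b"),   # pretend this packet is lost
    (3, "A", 2, "second half of /a"),
    (4, "B", 2, "second half of /b"),
]
LOST = {2}
arrived = sorted(s for s in SEGMENTS if s[0] not in LOST)

def tcp_like(segments):
    """One ordered byte stream (TCP): the gap at connection seq 2 blocks
    delivery of seqs 3 and 4, even though seq 3 belongs to stream A."""
    delivered, expected = [], 1
    for seq, stream, part, payload in segments:
        if seq != expected:
            break                       # head-of-line blocking at the gap
        delivered.append((stream, payload))
        expected += 1
    return delivered

def quic_like(segments):
    """Ordering per stream (QUIC): stream A is delivered in full; only
    stream B waits for retransmission of its own missing part."""
    expected, buffered, delivered = {}, {}, []
    for _, stream, part, payload in segments:
        if part == expected.setdefault(stream, 1):
            delivered.append((stream, payload))
            expected[stream] += 1
            # flush any later parts of this stream that arrived out of order
            while expected[stream] in buffered.get(stream, {}):
                delivered.append((stream, buffered[stream].pop(expected[stream])))
                expected[stream] += 1
        else:
            buffered.setdefault(stream, {})[part] = payload
    return delivered

print("TCP-like :", tcp_like(arrived))   # only ('A', 'first half of /a')
print("QUIC-like:", quic_like(arrived))  # both halves of /a are delivered

Over a TCP-style ordered stream the lost segment carrying /b also stalls the
second half of /a; with per-stream ordering only /b has to wait for the
retransmission.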
-- 
Bradley Falzon
brad@teambrad.net