Saturday, August 31, 2013
Recently the IETF HTTP working group met in Berlin, Germany and discussed the concept of mandatory-to-offer TLS for HTTP/2, proposed by Mark Nottingham. The current approach to transport security means only about one in five web transactions gets the protection of TLS. Right now all of those choices are made by the content owner via the scheme of the URL in the markup.
It is time that the Internet infrastructure simply gave users a secure-by-default transport environment - that doesn't seem like a radical statement to me. From Firesheep to the Google sniff-WiFi-while-you-map car to PRISM, there is ample evidence that secure transports are simply table stakes on today's Internet.
This movement in the IETF working group is welcome news and I'm going to do everything I can to help iron out the corner cases and build a robust solution.
My point of view for Firefox has always been that we would only implement HTTP/2 over TLS. That obviously covers https://, but it has been my hope to find a way to use it for http:// schemes too on servers that support it. This is just transport-level security - for web-level security the different schemes still represent different origins. If cleartext needs to be used it would be done with HTTP/1, and someday in the distant future HTTP/1 would be put out to pasture. This roughly matches Google Chrome's public stance on the issue.
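As a concrete illustration, here is a minimal sketch in Go of how a client might offer HTTP/2 during the TLS handshake and fall back to HTTP/1.1 if the server declines. This is my own example, not Firefox code; the host is a placeholder, and "h2" is the eventual registered ALPN token (the drafts in circulation at the time used interim identifiers).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Offer HTTP/2 and HTTP/1.1 in preference order; the server picks one
	// during the handshake, so no extra round trip is spent negotiating.
	conf := &tls.Config{
		NextProtos: []string{"h2", "http/1.1"},
	}

	// example.org:443 is a placeholder endpoint for the sketch.
	conn, err := tls.Dial("tcp", "example.org:443", conf)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	fmt.Println("negotiated protocol:", conn.ConnectionState().NegotiatedProtocol)
}
```

If the server knows nothing about HTTP/2, the negotiated protocol simply comes back as "http/1.1" (or empty) and the client carries on exactly as it does today.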
Mandatory-to-offer does not ban the practice of cleartext - if the client does not want TLS, a compliant cleartext channel can still be established. That might be appropriate inside a data center, for instance - but Firefox would be unlikely to do it.
This approach also does not ban intermediaries completely. https:// URIs of course remain end to end and can only be tunneled (as is the case with HTTP/1), but http:// URIs could be proxied by appropriately configured clients by having the HTTP/2 stream terminated on the proxy. It would prevent "transparent proxies", which are fundamentally man-in-the-middle attacks anyhow.
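To make that distinction concrete, here is a toy forward-proxy sketch of my own (plain HTTP/1 plumbing, nothing from the working group drafts): CONNECT traffic for https:// is spliced blindly so TLS stays end to end, while http:// requests are terminated at the proxy and re-issued upstream - the same role an explicitly configured HTTP/2 proxy would play for http:// streams.

```go
package main

import (
	"io"
	"log"
	"net"
	"net/http"
)

func handle(w http.ResponseWriter, r *http.Request) {
	if r.Method == http.MethodConnect {
		// https:// - the proxy never sees plaintext, it just moves bytes.
		upstream, err := net.Dial("tcp", r.Host)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		hj, ok := w.(http.Hijacker)
		if !ok {
			upstream.Close()
			http.Error(w, "hijacking unsupported", http.StatusInternalServerError)
			return
		}
		client, _, err := hj.Hijack()
		if err != nil {
			upstream.Close()
			return
		}
		client.Write([]byte("HTTP/1.1 200 Connection Established\r\n\r\n"))
		go func() { io.Copy(upstream, client); upstream.Close() }()
		go func() { io.Copy(client, upstream); client.Close() }()
		return
	}

	// http:// - the request terminates here and is re-issued upstream.
	// (A real proxy would also strip hop-by-hop headers; omitted for brevity.)
	r.RequestURI = "" // must be cleared before reusing the request client-side
	resp, err := http.DefaultTransport.RoundTrip(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	for k, vv := range resp.Header {
		for _, v := range vv {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	// :8080 is an arbitrary listening port for the sketch.
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handle)))
}
```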
Comments over here: https://plus.google.com/100166083286297802191/posts/XVwhcvTyh1R
Tuesday, August 13, 2013
Go Read "Reducing Web Latency - the Virtue of Gentle Aggression"
One of the more interesting networking papers I've come across lately is Reducing Web Latency: the Virtue of Gentle Aggression. It details how poorly TCP's packet loss recovery techniques map onto HTTP and proposes a few techniques (some more backwards compatible than others) for improving things. It's a large collaborative effort from Google and USC. Go read it.
The most important piece of information is that TCP flows that experienced a loss took five times longer to complete than flows that did not, and 10% of flows fell into the "some loss" bucket. For the full analysis go read the paper, but the shorthand headline is that the short "mice" flows of HTTP/1 are composed mostly of tail packets, and traditionally tail loss is only recovered using horrible timeout-based strategies.
This is the best summary of the problem that I've seen - but it's been understood for quite a while. It is one of the motivators of HTTP/2 and a theme underlying several of the blog posts Google has made about QUIC.
The "aggression" concept used in the paper is essentially the inclusion of redundant data in the TCP flow under some circumstances. This can, at the cost of extra bandwidth, bulk up mice flows so that their tail losses are more likely to be recovered with single RTT mechanisms such as SACK based fast recovery instead of timeouts. Conceivably this extra data could also be treated as an error correction coding which would allow some losses to be recovered independent of the RTT.
Saturday, August 3, 2013
Internet Society Briefing Panel @ IETF 87
This past week saw IETF 87, in Berlin, Germany, come and go. As usual I met a number of interesting new folks, caught up with old acquaintances I respect (a number of whom now work for Google - it seems to happen when you're not looking at them :)), and got some new ideas into my head (or refined old ones).
On Tuesday I was invited to be part of a panel for the ISOC lunch with Stuart Cheshire and Jason Livingood. We were led and moderated by the inimitable Leslie Daigle.
The panel's topic was "Improving the Internet Experience, All Together Now." I made the case that the standards community needs to think a little less about layers and a little more about the end goal - by doing so we can provide more efficient building blocks. Otherwise developers find themselves tempted to give up essential properties like security and congestion control in order to build more responsive applications. But we don't need to make that tradeoff.
The audio is available at http://www.internetsociety.org/internet-society-briefing-panel-ietf-87 - unfortunately the recording is full of intolerable squelch until just after the 22:00 mark (I'm the one speaking at that point). The good news is that most of what you'd miss is "hold music", introductions, and a recap of the panel topic that is also described on the web page.