The Google transport networking crew (QUIC, TCP, etc.) deserves a shout out for identifying and fixing a nearly decade-old Linux kernel TCP bug that I think will have an outsized impact on performance and efficiency for the Internet.
Their patch addresses a problem with cubic congestion control, which is the default algorithm on many Linux distributions. The problem can be roughly summarized as the controller mistakenly treating the lack of congestion reports over a quiescent period as positive evidence that the network is not congested, and therefore that it should send at a faster rate when sending resumes. When put like this, it's obvious that an endpoint that is not moving any traffic cannot use the lack of errors as information in its feedback loop.
The end result is that applications that oscillate between transmitting lots of data and then lying quiescent for a bit before returning to high rates of sending will transmit way too fast when returning to the sending state. The consequence of this is self-induced packet loss along with retransmissions, wasted bandwidth, out-of-order packet delivery, and application-level stalls.
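To make that concrete, here is a toy sketch of the cubic window calculation - simplified, not the kernel code, and with purely illustrative numbers. Cubic grows the window along a cubic curve keyed off the time since the last congestion event, so if a quiescent period is counted in that elapsed time the target has silently ballooned by the time sending resumes.

```python
# Toy illustration of the failure mode (simplified; not the kernel code).
# Cubic grows cwnd along W(t) = C*(t - K)^3 + W_max, where t is the time
# since the last congestion event. If idle time is counted in t, the
# target keeps climbing even though nothing is being sent.
C, BETA = 0.4, 0.7            # standard cubic constants
w_max = 100.0                 # cwnd (in packets) at the last loss event
K = ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)

def cubic_target(t):
    return C * (t - K) ** 3 + w_max

print(cubic_target(1.0))         # one second after the loss: a modest window
print(cubic_target(1.0 + 30.0))  # resume after 30s of quiet with the idle
                                 # time counted: an enormous target, so the
                                 # sender bursts far too fast
```

The fix, conceptually, is to discount the quiescent time so the curve resumes from where it left off rather than from where the wall clock says it should be.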
Unfortunately a number of common web use cases are clear triggers here. Any use of persistent connections, where the burst of web data on two separate pages is interspersed with time for the user to interpret the data, is an obvious trigger. A far more dangerous class of triggers is likely to be the various HTTP-based adaptive streaming media formats, where a series of chunks of media are transferred over time on the same HTTP channel. And of course, Linux is a very popular platform for serving media.
As with many bugs, it all seems so obvious afterwards - but tracking this stuff down is the work of quality, non-glamorous engineering. Remember that TCP is robust enough that it seems to work anyhow - even at the cost of reduced network throughput in this case. Kudos to the Google team for figuring it out, fixing it up, and especially for open sourcing the result. The whole web, including Firefox users, will benefit.
Thanks!
Tuesday, September 22, 2015
Brotli Content-Encoding for Firefox 44
The best way to make data appear to move faster over the Web is to move less of it, and lossless compression has always been a core tenet of good web design.
Sometimes that is done via gzip applied over the top of text resources (html, js, css); other times it is accomplished via the compression inherent in the file format of media elements. Modern sites apply gzip to all of their text as a best practice.
Time marches on, and it turns out we can often do a better job than the venerable gzip. Until recently, new formats struggled with matching the decoding rates of gzip, but lately a new contender named brotli has shown impressive results. It has been able to improve on gzip anywhere from 20% to 40% in terms of compression ratios while keeping up on the decoding rate. Have a look at the author's recent comparative results.
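If you want to sanity check those ratios against your own content, a rough comparison is easy to script. This is just a sketch - it assumes the brotli Python bindings are installed, and the exact numbers depend heavily on the input, so feed it your own html/js/css.

```python
# Rough local comparison of gzip vs brotli output sizes.
# Assumes the "brotli" Python bindings (pip install brotli); ratios vary
# a lot with the input.
import gzip
import sys

import brotli

data = open(sys.argv[1], "rb").read()
gz = gzip.compress(data, compresslevel=9)
br = brotli.compress(data, quality=11)
print("original: %d  gzip: %d  brotli: %d" % (len(data), len(gz), len(br)))
```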
The deployed WOFF2 font file format already uses brotli internally.
If all goes well in testing, Firefox 44 (ETA January 2016) will negotiate brotli as a content-encoding for https resources. The negotiation will be done in the usual way via the Accept-Encoding request header and the token "br". Servers that wish to encode a response with brotli can do so by adding "br" to the Content-Encoding response header. Firefox won't decode brotli outside of https - so make sure to use the HTTP content negotiation framework instead of doing user agent sniffing.
[edit note - around Oct 6 2015 the token was changed to br from brotli. The token brotli was only ever deployed on nightly builds of Firefox 44.]
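For the curious, here is a minimal sketch of what that negotiation looks like from the client side. The hostname is just illustrative and it assumes the brotli Python bindings for the decode step - a browser, of course, does all of this for you.

```python
# Sketch of the Accept-Encoding / Content-Encoding negotiation for "br".
# example.com is illustrative; assumes the "brotli" bindings for decoding.
import http.client

import brotli

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/", headers={"Accept-Encoding": "br, gzip"})
resp = conn.getresponse()
body = resp.read()

# Only brotli-decode if the server actually chose the "br" coding.
if resp.getheader("Content-Encoding") == "br":
    body = brotli.decompress(body)
```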
We expect Chrome will deploy something compatible in the near future.
The brotli format is defined by this document working its way through the IETF process. We will work with the authors to make sure the IANA registry for content codings is updated to reference it.
You can get tools to create brotli-compressed content here, and there is a Windows executable, which I can't vouch for, linked here.
Friday, March 27, 2015
Opportunistic Encryption For Firefox
Firefox 37 brings more encryption to the web through opportunistic encryption of some http:// based resources. It will be released the week of March 31st.
OE provides unauthenticated encryption over TLS for data that would otherwise be carried via clear text. This creates some confidentiality in the face of passive eavesdropping, and also provides you much better integrity protection for your data than raw TCP does when dealing with random network noise. The server setup for it is trivial.
These are indeed nice bonuses for http:// - but it still isn't as nice as https://. If you can run https you should - full stop. Don't make me repeat it :) Only https protects you from active man-in-the-middle attackers.
But if you have a long tail of legacy content that you cannot yet migrate to https, commonly due to mixed-content rules and interactions with third parties, OE provides a mechanism for an encrypted transport of http:// data. That's a strict improvement over the cleartext alternative.
Two simple steps to configure a server for OE:
- Install a TLS-based h2 or spdy server on a separate port. 443 is a good choice :). You can use a self-signed certificate if you like because OE is not authenticated.
- Add a response header Alt-Svc: h2=":443", or spdy/3.1=":443" if you are using a spdy-enabled server like nginx (a minimal sketch of this step follows below).
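As a concrete illustration of that second step, here is a minimal sketch of a cleartext origin advertising the alternate service. It only demonstrates the Alt-Svc response header - the TLS-speaking h2 (or spdy) server on port 443 is configured separately - and the ports are just the usual defaults.

```python
# Minimal sketch: the cleartext http:// origin advertises an h2-over-TLS
# alternative on port 443 via Alt-Svc. The h2/TLS listener itself is not
# shown here; it is the separate server configured in step one.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AltSvcHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"served over cleartext; an encrypted route is advertised\n"
        self.send_response(200)
        self.send_header("Alt-Svc", 'h2=":443"')  # or 'spdy/3.1=":443"'
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Port 80 needs privileges; any port your http:// origin already uses works.
HTTPServer(("", 80), AltSvcHandler).serve_forever()
```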
This mapping is saved and used in the future. It is important to understand that while the transaction is being routed to a different port, the origin of the resource hasn't changed (i.e. if the cleartext origin was http://www.example.com:80, then the origin, including the http scheme and the port 80, is unchanged even if it is routed to port 443 over TLS). OE is not available with HTTP/1 servers because that protocol does not carry the scheme as part of each transaction, which is a necessary ingredient for the Alt-Svc approach.
You can control how long the Alt-Svc mappings last, along with some other details; the Internet-Draft is helpful as a reference. We will be tracking the technology as it matures; the recent HTTP working group meeting in Dallas decided it was ready to proceed to last call status in the working group.
Wednesday, February 18, 2015
HTTP/2 is Live in Firefox
The Internet is chirping loudly today with news that draft-17 of the HTTP/2 specification has been anointed a Proposed Standard. Huzzah! Some reports talk about it as the future of the web - but the truth is that future is already here today in Firefox.
9% of all Firefox release channel HTTP transactions are already happening over HTTP/2. There are actually more HTTP/2 connections made than SPDY ones. This is well-exercised technology.
- Firefox 35, in current release, uses a draft ID of h2-14 and you will negotiate it with google.com today.
- Firefox 36, in Beta to be released NEXT WEEK, supports the official final "h2" protocol for negotiation. I expect lots of new server side work to come on board rapidly now that the specification has stabilized. Firefox 36 also supports draft IDs -14 and -15. You will negotiate -15 with Twitter as well as Google using this channel.
- Firefox 37 and 38 have the same levels of support - adding draft-16 to the mix. The important part is that the final h2 ALPN token remains fixed. These releases also implement the relevant IETF drafts for opportunistic security over h2 via the Alternative Services (Alt-Svc) mechanism. A blog post on that will follow as we get closer to release - but feel free to reach out and experiment with it on these early channels.
For both SPDY and HTTP/2 the killer feature is arbitrary multiplexing on a single, well congestion-controlled channel. It amazes me how important this is and how well it works. One metric I particularly enjoy is the fraction of connections created that carry just a single HTTP transaction (and thus make that transaction bear all the overhead). For HTTP/1, 74% of our active connections carry just a single transaction - persistent connections just aren't as helpful as we all want. But in HTTP/2 that number plummets to 25%. That's a huge win for overhead reduction. Let's build the web around that.
Wednesday, January 7, 2015
HTTP/2 Dependency Priorities in Firefox 37
Next week Firefox 35 will be in general release, and Firefox 37 will be promoted to the Developer Edition channel (aka Firefox Aurora).
HTTP/2 support will be enabled by default for the first time on a release channel in Firefox 35. Use it in good health on sites like https://twitter.com. It's awesome.
This post is about a feature that landed in Firefox 37 - the use of HTTP/2 priority dependencies and groupings. In earlier releases prioritization was done strictly through relative weightings - similar to SPDY or UNIX nice values. H2 lets us take that a step further and say that some resources are dependent on other resources in addition to using relative weights between transactions that are dependent on the same thing.
There are some simple cases where you really want a strict dependency relationship - for example two frames of the same video should be serialized on the wire rather than sharing bandwidth. More complicated relationships can be expressed through the use of grouping streams. Grouping streams are H2 streams that are never actually opened with a HEADERS frame - they exist simply to be nodes in the dependency tree that other streams depend on.
The canonical use case involves better prioritization for loading html pages that include js, css, and lots of images. When doing so over H1 the browser will actually defer sending the request for the images while the js and css load - the reason is that the transfer of the js/css blocks any rendering of the page and the high byte count of the images slows down the higher priority js/css if done in parallel. The workaround, not requesting the images at all while the js/css is loading, has some downsides - it incurs at least one extra round trip delay and it doesn't utilize the available bandwidth effectively in some cases.
The weighting mechanisms of H2 (and SPDY) can help here - and they are what are used prior to Firefox 37. Full parallelism is restored, but some unwanted bandwidth sharing still goes on.
I've implemented a scheme for H2 using 5 fixed dependency groups (known informally as leader, follower, unblocked, background, and speculative). They are created with the PRIORITY frame upon session establishment and every new stream depends on one of them.
Streams for things like js and css are dependent on the leader group and images are dependent on the follower group. The use of group nodes, rather than being dependent on the js/css directly, greatly simplifies dependency management when some streams complete and new streams of the same class are created - no reprioritization needs to be done when the group nodes are used.
This is experimental - the tree organization and its weights will evolve over time. Various types of resource loads will still have to be better classified into the right groups within Firefox, and that too will evolve over time. There is an obvious implication for prioritization of tabs as well, and that will also follow over time. Nonetheless it's a start - and I'm excited about it.
One last note to H2 server implementors, if you should be reading this: there is a very strong implication here that you need to pay attention to the dependency information and not simply implement the individual resource weightings. Given the tree above, think about what would happen if there were a stream dependent on leaders with a weight of 1 and a stream dependent on speculative with a weight of 255. Because the entire leader group exists at the same level of the tree as background (and, by inclusion, speculative), the leader descendants should dominate the bandwidth allocation due to the relative weights of those group nodes - but looking only at the weights of the individual streams gives the incorrect conclusion.
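To see why, here is a small sketch of tree-aware allocation: siblings split their parent's share in proportion to their weights, and only then do the individual stream weights matter. The group weights below are made-up placeholders rather than Firefox's actual values - only the shape of the tree follows the description above.

```python
# Sketch of weighted, tree-aware bandwidth allocation. Group weights are
# placeholders; only leaves (real streams) receive bandwidth.
def allocate(tree, node, share, out):
    children = tree.get(node, [])
    if not children:
        out[node] = share
        return
    total = sum(weight for _, weight in children)
    for child, weight in children:
        allocate(tree, child, share * weight / total, out)

tree = {
    "root":        [("leaders", 200), ("background", 1)],  # placeholder weights
    "background":  [("speculative", 1)],
    "leaders":     [("stream-A", 1)],    # e.g. a css load with weight 1
    "speculative": [("stream-B", 255)],  # e.g. a prefetch with weight 255
}

shares = {}
allocate(tree, "root", 1.0, shares)
print(shares)  # stream-A ends up with ~99.5% despite its tiny stream weight
```

A server that looked only at the two stream weights would hand nearly all of the bandwidth to the speculative stream, which is exactly the wrong outcome.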