The Google transport networking crew (QUIC, TCP, etc.) deserves a shout-out for identifying and fixing a nearly decade-old Linux kernel TCP bug that I think will have an outsized impact on performance and efficiency for the Internet.
Their patch addresses a problem with cubic congestion control, which is the default algorithm on many Linux distributions. The problem can be roughly summarized as the controller mistakenly characterizing the lack of congestion reports over a quiescent period as positive evidence that the network is not congested, and therefore deciding it should send at a faster rate when sending resumes. When put like this, it's obvious that an endpoint that is not moving any traffic cannot use the lack of errors as information in its feedback loop.
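To make the mechanics concrete, here is a minimal Python sketch of the idea - not the kernel patch itself. Cubic grows the congestion window along a curve keyed to the time since the last congestion event, and the fix amounts to discounting time spent idle. The function name and the example numbers below are my own illustrative assumptions; the constants are the commonly published cubic parameters.

    # Simplified model of the cubic window curve: W(t) = C*(t - K)^3 + W_max,
    # where t is time since the last congestion event and K is the point
    # where the curve regains W_max.
    C = 0.4      # cubic scaling constant
    BETA = 0.7   # multiplicative decrease factor after a loss

    def cubic_window(w_max, elapsed, idle):
        # The bug: 'elapsed' kept ticking during quiescent periods, so a
        # sender waking from idle jumped far up the curve and burst packets.
        # Discounting idle time means growth reflects only time actually
        # spent sending.
        k = (w_max * (1 - BETA) / C) ** (1.0 / 3.0)
        t = elapsed - idle  # the buggy behavior is equivalent to idle = 0
        return C * (t - k) ** 3 + w_max

    # 12 seconds after a loss at w_max=100 packets, 10 of them spent idle:
    print(cubic_window(100, 12, 0))   # buggy view: ~289 packets, a burst
    print(cubic_window(100, 12, 10))  # fixed view: ~96 packets, sane

As I understand the fix, the kernel achieves the same effect by moving the curve's epoch start forward across idle periods rather than subtracting at computation time.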
The end result is that applications that oscillate between transmitting lots of data and lying quiescent for a bit before returning to high rates of sending will transmit way too fast when returning to the sending state. The consequence of this is self-induced packet loss, along with retransmissions, wasted bandwidth, out-of-order packet delivery, and application-level stalls.
Unfortunately, a number of common web use cases are clear triggers here. Any use of persistent connections, where the burst of web data on two separate pages is interspersed with time for the user to interpret the data, is an obvious trigger. A far more dangerous class of triggers is likely to be the various HTTP-based adaptive streaming media formats, where a series of chunks of media are transferred over time on the same HTTP channel. And of course, Linux is a very popular platform for serving media.
As with many bugs, it all seems so obvious afterwards - but tracking this stuff down is the work of quality, non-glamorous engineering. Remember that TCP is robust enough that it seems to work anyhow - even at the cost of reduced network throughput in this case. Kudos to the Google team for figuring it out, fixing it up, and especially for open sourcing the result. The whole web, including Firefox users, will benefit.
Thanks!
Tuesday, September 22, 2015
Brotli Content-Encoding for Firefox 44
The best way to make data appear to move faster over the Web is to move less of it, and lossless compression has always been a core tenet of good web design.
Sometimes that is done via over-the-top gzip of text resources (html, js, css), and other times it is accomplished via the compression inherent in the file format of media elements. Modern sites apply gzip to all of their text as a best practice.
Time marches on, and it turns out we can often do a better job than the venerable gzip. Until recently, new formats struggled to match gzip's decoding rate, but lately a new contender named brotli has shown impressive results. It improves on gzip anywhere from 20% to 40% in terms of compression ratio while keeping up on decoding rate. Have a look at the author's recent comparative results.
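If you want to sanity-check those numbers on your own content, a few lines of Python will do it. This assumes the Brotli package from pip alongside the standard library's gzip, and "index.html" is just a placeholder for whatever resource you care about.

    # Compare gzip and brotli on one of your own text resources.
    # Assumes: pip install Brotli ; 'index.html' is a placeholder path.
    import gzip
    import brotli

    text = open("index.html", "rb").read()

    gz = gzip.compress(text, compresslevel=9)   # gzip at max effort
    br = brotli.compress(text, quality=11)      # brotli at max quality

    print("original:", len(text), "bytes")
    print("gzip -9 :", len(gz), "bytes")
    print("brotli  :", len(br), "bytes")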
The deployed WOFF2 font file format already uses brotli internally.
If all goes well in testing, Firefox 44 (ETA January 2016) will negotiate brotli as a content-encoding for https resources. The negotiation will be done in the usual way via the Accept-Encoding request header and the token "br". Servers that wish to encode a response with brotli can do so by adding "br" to the Content-Encoding response header. Firefox won't decode brotli outside of https - so make sure to use the HTTP content negotiation framework instead of doing user agent sniffing.
[edit note - around Oct 6 2015 the token was changed to "br" from "brotli". The "brotli" token was only ever deployed on nightly builds of Firefox 44.]
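For illustration, here is roughly what that negotiation looks like from the server side, sketched as a tiny Python WSGI app. In practice you would let your web server or CDN handle this; everything here beyond the standard header names and the "br" token is my own assumption.

    # Minimal WSGI sketch of brotli content negotiation. Serve it over
    # https in practice, since Firefox only decodes brotli there.
    import brotli  # pip install Brotli

    def app(environ, start_response):
        body = b"<html>hello</html>"
        headers = [("Content-Type", "text/html"),
                   ("Vary", "Accept-Encoding")]  # caches must key on encoding

        # Honor the "br" token from Accept-Encoding; never sniff the UA.
        accept = environ.get("HTTP_ACCEPT_ENCODING", "")
        offered = [t.strip().split(";")[0] for t in accept.split(",")]
        if "br" in offered:
            body = brotli.compress(body)
            headers.append(("Content-Encoding", "br"))

        start_response("200 OK", headers)
        return [body]

Running it under the standard library's wsgiref.simple_server is enough to watch the header exchange with curl or a browser.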
We expect Chrome will deploy something compatible in the near future.
The brotli format is defined by this document working its way through the IETF process. We will work with the authors to make sure the IANA registry for content codings is updated to reference it.
You can get tools to create brotli-compressed content here, and there is a Windows executable I can't vouch for linked here.