
Wednesday, February 24, 2010

Forwarding Decisions with Bloom Filters

Really neat paper: "Hash, Don't Cache: fast packet forwarding for enterprise edge routers". The paper and abstract are both available online. The authors are Minlan Yu and Jennifer Rexford, both of Princeton.

The authors share a lament of mine - caching forwarding decisions is attractive but no longer realistic as an implementation. The diversity of addresses (including those generated randomly by attackers) pollutes the cache and sends far too much traffic to the slow fallback processing path. So we end up with routers with one class of memory and no caches, and generally that memory is made entirely of the really expensive fast stuff.

But the suggestion in this paper of using Bloom filters instead of caches is a really cool one. Essentially, maintain one filter per outgoing interface in fast memory (SRAM, probably) and evaluate them at forwarding time, in parallel if you can. Usually you get just a single hit and can forward the packet onwards. You need only enough fast RAM for the Bloom filters (i.e. not much), whereas all the traditional trie data and routing-update code can live in slow and cheap DRAM.

Of course, due to the false-positive nature of Bloom filters it is possible that you'll get more than one match, which requires some kind of (probably slower) fallback plan for that packet. But the problem is nowhere near as dire as it is with caches. With caches, misses force valid entries out of the cache and then all traffic slows down as the hit rate drops; with Bloom filters only the packets that hit a false positive are impacted, and almost all of the traffic goes through the fast path unimpeded. The paper puts the false positive rate somewhere on the order of 1 in tens of thousands (depending on a bunch of factors, but that gives a feel for it). Furthermore, a cache can be intentionally attacked with a diversity of addresses in order to flood it and degrade service for everyone, whereas under Bloom filters such an attack is no more or less likely to impact service than any other kind of packet.
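To make the scheme concrete, here is a minimal sketch in C. It is my own illustration, not the authors' implementation: the filter sizes and hash function are arbitrary, and it matches on flat 32-bit addresses rather than dealing with the longest-prefix-match wrinkles the real paper handles.

#include <stdint.h>

#define NUM_IFACES  8
#define FILTER_BITS (1 << 20)   /* 1 Mbit of "SRAM" per interface */
#define NUM_HASHES  4

static uint8_t filter[NUM_IFACES][FILTER_BITS / 8];

/* Derive the i-th bit position for an address (murmur-style mixing). */
static uint32_t hash_addr(uint32_t addr, uint32_t seed)
{
    uint32_t h = addr ^ seed;
    h ^= h >> 16; h *= 0x85ebca6b;
    h ^= h >> 13; h *= 0xc2b2ae35;
    h ^= h >> 16;
    return h % FILTER_BITS;
}

/* Route updates run against cheap DRAM structures, then set bits here. */
static void filter_add(int iface, uint32_t addr)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash_addr(addr, i);
        filter[iface][bit / 8] |= 1 << (bit % 8);
    }
}

static int filter_match(int iface, uint32_t addr)
{
    for (uint32_t i = 0; i < NUM_HASHES; i++) {
        uint32_t bit = hash_addr(addr, i);
        if (!(filter[iface][bit / 8] & (1 << (bit % 8))))
            return 0;
    }
    return 1;   /* all bits set: present, or a false positive */
}

/* Returns the output interface, or -1 to punt to the slow path. */
static int lookup(uint32_t addr)
{
    int match = -1;
    for (int i = 0; i < NUM_IFACES; i++) {
        if (filter_match(i, addr)) {
            if (match >= 0)
                return -1;      /* ambiguous false positive: use the trie */
            match = i;
        }
    }
    return match;
}

int main(void)
{
    filter_add(2, 0x0a000001);             /* pretend 10.0.0.1 exits iface 2 */
    return lookup(0x0a000001) == 2 ? 0 : 1;
}

The attractive property lives in lookup(): zero hits or an ambiguous double hit punts to the slow DRAM path, and the Bloom filter math makes that rare.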

What can't you apply a Bloom filter to? It's all pretty cool. The paper has a number of other details on how to minimize false positives and efficiently process route updates.

There is related work from the same authors on the BUFFALO architecture which, after just a quick glance, appears to apply similar principles to LAN switching.

J. Rexford is also an author of Web Protocols and Practice: HTTP/1.1, Networking Protocols, Caching, and Traffic Measurement, which is probably the most practical description of the major web protocol I've ever seen.

Tuesday, February 16, 2010

Googling Harder

A while back I mentioned that Google thinks TCP ought to be more aggressive.

I must admit, this matches my own bias. I can barely count the number of applications I have watched wait on network I/O when there was plenty of CPU and idle bandwidth available. It's maddening. Sometimes it's slow start or another aspect of congestion control; sometimes it's an outdated mechanism like the Nagle algorithm.
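The Nagle half of that complaint at least has a well-known per-socket escape hatch. A minimal sketch, with error handling left to the caller:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle so small writes go out immediately instead of waiting
 * to be coalesced with later data. */
static int disable_nagle(int sock)
{
    int one = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}

There is no equally simple knob for the congestion-control side, which is what makes the slide set below interesting.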

Well, Google is back at it with this slide set. (PDF)

They make the argument for increasing the initial cwnd. More provocatively, they argue that the Web has already done so in a de facto way by going to aggressive numbers of independent parallel HTTP connections (where you essentially get new cwnd credits just for opening a new TCP stream). Clever argument. Maybe you want to pace the data after 3 or 4 packets based on the RTT of the handshake, so you don't overrun any buffers unnecessarily.
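A back-of-the-envelope version of that pacing idea, with made-up numbers (my own arithmetic sketch, not something from the slides):

#include <stdio.h>

int main(void)
{
    double rtt_ms    = 80.0;  /* measured from the SYN / SYN-ACK exchange */
    int    init_cwnd = 10;    /* segments; the slides argue for more than 3 */
    int    burst     = 4;     /* first few segments go out back-to-back */

    /* Spread the rest of the initial window evenly across one RTT
     * instead of bursting it into some poor bottleneck buffer. */
    double gap_ms = rtt_ms / (init_cwnd - burst);
    printf("send %d segments now, then one every %.1f ms\n", burst, gap_ms);
    return 0;
}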

Frankly, this kind of thing can be implemented on the server side without ever telling the peer. It would make some sense for Google to just do this for a few different values of cwnd on a tiny fraction of their traffic, see if the packet loss rates change, and then publish that.

Monday, August 10, 2009

Making the switch

Well that's funny.

Just a few hours after my last post, which suggested that virtio based networking might be getting bested by the not-in-userspace v-bus, Michael Tsirkin posts an in-kernel backend for virtio, which puts the two on more or less the same footing.

Fire up the benchmarks?

(not) switching contexts.

A lot of bits have been spilled over virtual network performance in v-bus vs. virtio-net/virtio-pci (aka AlacrityVM vs. traditional KVM/QEMU), including some pretty sensational(ist?) performance graphs: here.

There are lots of details (and details do matter), but the first-order issue can probably be summed up thusly, from Avi Kivity on lkml:

The current conjecture is that ioq outperforms virtio because the host side of ioq is implemented in the host kernel, while the host side of virtio is implemented in userspace.


Perhaps context switching isn't such a minor detail after all.

Tuesday, July 21, 2009

Caution - (S)Low Bridge Ahead

This post will not be satisfying. Someone has posted some great data points about virtualized packet forwarding, which is great. But they don't make a lot of sense. Which is not great. Nor is it satisfying.

Oh well, I'm sure there will be a followup sometime in the future.

In this thread, Or Gerlitz posts a new networking type for qemu (and, by extension, kvm), which are of course popular Linux host virtualization packages. The networking type is "raw" and the driver couldn't be simpler: a (v)lan interface on the host is opened with an AF_PACKET socket, and all of the packets that appear there are shoved through to the guest interface, and vice versa.
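For flavor, the host side of that approach is roughly this - my own minimal sketch, not the actual patch:

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a raw AF_PACKET socket to a host interface; read() then yields
 * whole frames seen on that interface and write() injects them. */
int open_raw(const char *ifname)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0)
        return -1;

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = if_nametoindex(ifname);

    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}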

This is a pretty direct way of doing things, but it has the unfortunate side effect that all of the guests and the host itself are aggregated onto one upstream switch port without any kind of bridge, switch, or router in between. This means that unless the upstream switch can do a u-turn when forwarding (and most of them will not), all of the guests and the host are isolated from each other. The normal way of doing things is to attach the guests and host together with tun/tap devices and run a bridge on the host. This bridge does all the necessary forwarding so that everybody has full connectivity, and it lets you run iptables and ebtables on the host to boot.
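The tap side of the traditional approach is about the same amount of code - again an illustrative sketch of my own, with the device name made up:

#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Open /dev/net/tun and ask for a tap device; frames written to fd pop
 * out of the tap interface, which the host bridge then forwards. */
int open_tap(const char *name)      /* e.g. "tap0" */
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;   /* ethernet frames, no header */
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}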

That's all well and good, but the really interesting part was the motivation for routing around tun/tap/bridge in the first place: the poster runs a test with short UDP transmissions over GigE. Running it between two real (non-VM) hosts he sees 450K packets per second. The post doesn't mention what hardware is involved, so we'll just take it as a black-box baseline. Switching the sender to be a qemu guest with traditional tap/bridge networking, it plummets to just 195K. The "raw" interface gets that back up to 240K - which is still a far cry from 450K, eh?

Tap mode has three times as many context switches as the raw version. I don't think I saw a number for the non-VM test. Other than that, nothing - including the profiles - really jumps out.

The whole thread is worth reading - but the main data points are here and here.

Monday, July 20, 2009

What is eating those Google SYN-ACKs?

In this post, I mentioned Google was seeing huge packet loss on SYN-ACKs from their servers. At the time it looked like 2%. That sounded nuts.

It still sounds nuts.

Someone else on the mailing list posted about that, and Jerry Chu of Google confirmed it:

Our overall pkt retransmission rate often goes over 1%. I was
wondering if SYN/SYN-ACK pkts are less likely to be dropped
by some routers due to their smaller size so we collected traces
and computed SYN-ACK retransmissions rate on some servers.
We confirmed it to be consistent with the overall pkt drop rate,
i.e., > 1% often.


You could imagine why the overall retransmission rate might be higher than the real drop rate: jitter and various fast-retransmit algorithms can retransmit things that just haven't been acknowledged quite yet. Even SYNs might be dropped at the host (rather than in the network) due to queue overflows and such. But we're talking about SYN-ACKs from busy servers toward what one would expect to be pretty idle Google-searching clients. And these SYN-ACKs have giant timeouts (3 seconds - which is why Jerry was writing in the first place), so it certainly isn't a matter of over-aggressive retransmission. The only explanation seems to be packet loss. At greater than 1%.

wow.

This probably has more to do with the global nature of Google's audience than anything else. But still, TCP can really suck at loss rates that high. It must be very different from the desktop Internet I know (which is a fair-to-middling cable service, not one of the fancy fiber-to-the-home setups that are becoming more common).

I wonder exactly where those losses happen.

Tuesday, July 14, 2009

Google Thinks TCP Should Be More Aggressive by Default

Really interesting post from Jerry Chu of Google. He says Google has data which shows that we ought to lower the initial RTO, increase the initial cwnd, drop the min RTO, and reduce the delayed-ACK timeout in TCP.

Based on my own anecdotal data, I've done stuff like that in products I've worked on. Let's face it - 3 seconds is a freaking eternity. Processors, networks, and buses have all scaled, but these constants remain the same. Jerry says Google has data that shows this is important. As the Google data set is no doubt much more extensive than any I've worked with, that's a really welcome post.
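For context, the defaults being argued against (my summary of the relevant RFCs, not figures from Jerry's post): the initial RTO is 3 seconds per RFC 2988, and the initial window per RFC 3390 works out to just 3 segments for a typical MSS:

#include <stdio.h>

static int imin(int a, int b) { return a < b ? a : b; }
static int imax(int a, int b) { return a > b ? a : b; }

int main(void)
{
    int mss = 1460;   /* typical ethernet-derived MSS */

    /* RFC 3390: IW = min(4*MSS, max(2*MSS, 4380 bytes)) */
    int iw = imin(4 * mss, imax(2 * mss, 4380));
    printf("initial window: %d bytes (~%d segments)\n", iw, iw / mss);
    return 0;
}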

Probably the most important data point Jerry shares is that "up to a few percentage points" of connections in his data set exhibit a SYN-ACK retransmission from the Google servers. Wow. (At least) 1 in 50 SYN-ACKs needs to be retransmitted? That's not my experience at all, and if true at Google scale it is absolutely fascinating. Are they generally seeing 2% packet loss on Google transmissions? There's no way that they are seeing that - Google would appear to suck! So what's going on? Why are SYN-ACKs retransmitted more than anything else? (And I'm assuming they are indeed lost, because otherwise lowering the timeout wouldn't be the right remedy.)

Sunday, June 28, 2009

'Violation' is so pejorative

Ben Hutchings says:

>[...] we also have architectural issues in violating layered
> software design

Meanwhile, in the real world, we want to avoid copying data, so an skb doesn't belong to any specific protocol layer.


Thank You Ben!

Abstractions aren't inherently good. They are great if they help you build or maintain things that are otherwise too complex to understand or too tedious to work on - but we have to vigilantly remember that hiding those details can also restrict the quality of what we build, because some things are just inherently complex.