How exactly & specifically does layer 3 LACP destination address hashing work?
Solution 1:
What you're looking for is commonly called a "transmit hash policy" or "transmit hash algorithm". It controls the selection of a port from a group of aggregate ports with which to transmit a frame.
Getting my hands on the 802.3ad standard has proven difficult because I'm not willing to spend money on it. Having said that, I've been able to glean some information from a semi-official source that sheds some light on what you're looking for. Per this presentation from the 2007 Ottawa, ON, CA IEEE High Speed Study Group meeting, the 802.3ad standard does not mandate particular algorithms for the "frame distributor":
This standard does not mandate any particular distribution algorithm(s); however, any distribution algorithm shall ensure that, when frames are received by a Frame Collector as specified in 43.2.3, the algorithm shall not cause a) Mis-ordering of frames that are part of any given conversation, or b) Duplication of frames. The above requirement to maintain frame ordering is met by ensuring that all frames that compose a given conversation are transmitted on a single link in the order that they are generated by the MAC Client; hence, this requirement does not involve the addition (or modification) of any information to the MAC frame, nor any buffering or processing on the part of the corresponding Frame Collector in order to re-order frames.
So, whatever algorithm a switch / NIC driver uses to distribute transmitted frames must adhere to the requirements as stated in that presentation (which, presumably, was quoting from the standard). There is no particular algorithm specified, only a compliant behavior defined.
Even though there's no algorithm specified, we can look at a particular implementation to get a feel for how such an algorithm might work. The Linux kernel "bonding" driver, for example, has an 802.3ad-compliant transmit hash policy that applies the function (see bonding.txt in the Documentation/networking directory of the kernel source):
Destination Port = (((<source IP> XOR <dest IP>) AND 0xFFFF)
    XOR (<source MAC> XOR <destination MAC>)) MOD <ports in aggregate group>
This causes both the source and destination IP addresses, as well as the source and destination MAC addresses, to influence the port selection.
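To make the arithmetic concrete, here's a minimal Python sketch of that formula. The addresses are made-up examples, and a real driver works on raw header bytes rather than parsed strings, but the XOR / AND / MOD steps are the same:

import ipaddress

def l2l3_hash_port(src_ip, dst_ip, src_mac, dst_mac, num_ports):
    # Fold each address down to an integer so the formula above is visible.
    sip = int(ipaddress.ip_address(src_ip))
    dip = int(ipaddress.ip_address(dst_ip))
    smac = int(src_mac.replace(":", ""), 16)
    dmac = int(dst_mac.replace(":", ""), 16)
    return (((sip ^ dip) & 0xFFFF) ^ (smac ^ dmac)) % num_ports

# One client talking to one server over a 2-port aggregate: every frame of
# this conversation hashes to the same port, which preserves frame ordering.
print(l2l3_hash_port("203.0.113.10", "192.0.2.1",
                     "00:16:3e:aa:bb:cc", "00:16:3e:dd:ee:ff", 2))

The port index only changes when one of those four fields changes, which is why all frames of a single conversation stay on a single link.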
The destination IP address used in this type of hashing would be the address that's present in the frame. Take a second to think about that. In an Ethernet frame headed away from your server toward the Internet, the router's IP address isn't encapsulated anywhere. The router's MAC address is present in the header of such a frame, but the router's IP address isn't. The destination IP address encapsulated in the frame's payload will be the address of the Internet client making the request to your server.
A transmit hash policy that takes into account both source and destination IP addresses, assuming you have a widely varied pool of clients, should do pretty well for you. In general, more widely varied source and/or destination IP addresses in the traffic flowing across such an aggregated infrastructure will result in more efficient aggregation when a layer 3-based transmit hash policy is used.
Your diagrams show requests coming directly to the servers from the Internet, but it's worth pointing out what a proxy might do to the situation. If you're proxying client requests to your servers then, as chris mentions in his answer, you may cause bottlenecks. If that proxy is making the request from its own source IP address, instead of from the Internet client's IP address, you'll have fewer possible "flows" in a strictly layer 3-based transmit hash policy.
A transmit hash policy could also take layer 4 information (TCP / UDP port numbers) into account, so long as it keeps within the requirements of the 802.3ad standard. Such an algorithm is in the Linux kernel, as you reference in your question. Beware that the documentation for that algorithm warns that, due to fragmentation, traffic may not necessarily flow along the same path and, as such, the algorithm isn't strictly 802.3ad-compliant.
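The kernel's exact formula isn't reproduced here, but a simplified, hypothetical sketch of a layer 3+4 style policy just folds the port numbers into the same kind of hash:

import ipaddress

def l3l4_hash_port(src_ip, dst_ip, src_port, dst_port, num_ports):
    # Ports and IPs both feed the hash, so two TCP connections between the
    # same pair of hosts can land on different links.
    sip = int(ipaddress.ip_address(src_ip))
    dip = int(ipaddress.ip_address(dst_ip))
    return (((src_port ^ dst_port) ^ (sip ^ dip)) & 0xFFFF) % num_ports

# Same two hosts, two different client source ports -> possibly different links.
print(l3l4_hash_port("203.0.113.10", "192.0.2.1", 49152, 80, 2))
print(l3l4_hash_port("203.0.113.10", "192.0.2.1", 49153, 80, 2))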
Solution 2:
Very surprisingly, a few days ago our testing showed that xmit_hash_policy=layer3+4 did not have any effect between two directly connected Linux servers; all traffic used one port. Both ran Xen with one bridge that had the bonding device as a member. Most obviously, the bridge could cause the problem, except that it does not make sense at all considering that IP+port based hashing would be used.
I know some people actually manage to push 180MB+ over bonded links (e.g. Ceph users), so it does work in general. Possible things to look at:
- We used old CentOS 5.4
- The OP's example would mean the second LACP "unhashes" the connections - does that make sense, ever?
What this thread and documentation reading etc etc has shown me:
- Generally, everyone knows a lot about this and is good at reciting theory from the bonding howto or even the IEEE standards, whereas practical experience is close to none.
- The RHEL documentation is incomplete at best.
- The bonding documentation is from 2001 and not current enough
- layer2+3 mode is apparently not in CentOS (it doesn't show in modinfo, and in our test it dropped all traffic when enabled)
- It does not help that SUSE (BONDING_MODULE_OPTS), Debian (-o bondXX) and RedHat (BONDING_OPTS) all have different ways to specify per-bond mode settings
- The CentOS/RHEL5 kernel module is "SMP safe" but not "SMP capable" (see the Facebook high-performance talk) - it does NOT scale above one CPU, so for bonding a higher CPU clock beats many cores
If anyone ends up with a good high-performance bonding setup, or really knows what they're talking about, it would be awesome if they took half an hour to write a new small howto that documents ONE working example using LACP, no odd stuff, and bandwidth greater than one link
Solution 3:
If your switch sees the true L3 destination, it can hash on that. Basically, if you've got 2 links, think of link 1 as handling odd-numbered destinations and link 2 as handling even-numbered destinations. I don't think they ever use the next-hop IP unless configured to do so, but that's pretty much the same as using the MAC address of the target.
The problem you're going to run into is that, depending on your traffic, the destination will always be the single server's single IP address, so you'll never use that other link. If the destination is the remote system on the Internet, you'll get even distribution, but if it is something like a web server, where your system is the destination address, the switch will always send traffic over only one of the available links.
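A toy illustration of that failure mode, assuming a switch that hashes purely on the destination IP (the addresses here are invented examples):

import ipaddress

def dst_only_link(dst_ip, num_links):
    # Pick a link from the destination IP alone, odd/even style.
    return int(ipaddress.ip_address(dst_ip)) % num_links

# Many different Internet clients all talking to one web server: the
# destination is always the server, so every flow picks the same link.
server = "192.0.2.1"
clients = ["198.51.100.7", "203.0.113.42", "192.0.2.200"]
print({client: dst_only_link(server, 2) for client in clients})

Every value comes out identical, and the second link sits idle.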
You'll be in even worse shape if there is a load balancer somewhere in there, because then the "remote" IP will always be either the load balancer's IP or the server's. You could get around that a bit by using lots of IP addresses on the load balancer and the server, but that's a hack.
You may want to expand your horizon of vendors a bit. Other vendors, such as Extreme Networks, can hash on things like:
L3_L4 algorithm—Layer 3 and Layer 4, the combined source and destination IP addresses and source and destination TCP and UDP port numbers. Available on SummitStack and Summit X250e, X450a, X450e, and X650 series switches.
So basically as long as the client's source port (which typically changes a lot) changes, you'll evenly distribute the traffic. I'm sure other vendors have similar features.
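A quick sanity check of that claim, simulating many client connections against a 2-link aggregate with a toy L3/L4-style hash (this is a simplification for illustration, not Extreme's actual algorithm):

from collections import Counter

def l3_l4_link(src_ip_last_octet, src_port, dst_port, num_links):
    # Fold the varying parts of the flow into a link index.
    return (src_ip_last_octet ^ src_port ^ dst_port) % num_links

# One client IP, ephemeral source ports 49152-49171, all hitting port 80:
counts = Counter(l3_l4_link(10, port, 80, 2) for port in range(49152, 49172))
print(counts)  # roughly half the connections end up on each link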
Even hashing on source and destination IP would be enough to avoid hot-spots, so long as you don't have a load balancer in the mix.