ARP broadcasts flooding the network and high CPU usage
Solution 1:
Solved.
The issue is with SCCM 2012 SP1 and a service called ConfigMgr Wake-up Proxy. This 'feature' does not exist in SCCM 2012 RTM.
Within 4 hours of turning this off in the client settings policy, we saw steady drops in CPU usage. By the time the 4 hours were up, ARP-related CPU usage was down to merely 1-2%!
In summary, this service does MAC address spoofing! I cannot believe how much havoc it caused.
Below is the full text from Microsoft TechNet, as I feel it is important to understand how this relates to the issue posted. For anyone who is interested, here are the technical details.
Configuration Manager supports two wake on local area network (LAN) technologies to wake up computers in sleep mode when you want to install required software, such as software updates and applications: traditional wake-up packets and AMT power-on commands.
Beginning with Configuration Manager SP1, you can supplement the traditional wake-up packet method by using the wake-up proxy client settings. Wake-up proxy uses a peer-to-peer protocol and elected computers to check whether other computers on the subnet are awake, and to wake them if necessary. When the site is configured for Wake On LAN and clients are configured for wake-up proxy, the process works as follows:
Computers that have the Configuration Manager SP1 client installed and that are not asleep on the subnet check whether other computers on the subnet are awake. They do this by sending each other a TCP/IP ping command every 5 seconds.
If there is no response from other computers, they are assumed to be asleep. The computers that are awake become manager computers for the subnet.
Because it is possible that a computer might not respond because of a reason other than it is asleep (for example, it is turned off, removed from the network, or the proxy wake-up client setting is no longer applied), the computers are sent a wake-up packet every day at 2 P.M. local time. Computers that do not respond will no longer be assumed to be asleep and will not be woken up by wake-up proxy.
To support wake-up proxy, at least three computers must be awake for each subnet. To achieve this, three computers are non-deterministically chosen to be guardian computers for the subnet. This means that they stay awake, despite any configured power policy to sleep or hibernate after a period of inactivity. Guardian computers honor shutdown or restart commands, for example, as a result of maintenance tasks. If this happens, the remaining guardian computers wake up another computer on the subnet so that the subnet continues to have three guardian computers.
Manager computers ask the network switch to redirect network traffic for the sleeping computers to themselves.
The redirection is achieved by the manager computer broadcasting an Ethernet frame that uses the sleeping computer’s MAC address as the source address. This makes the network switch behave as if the sleeping computer has moved to the same port that the manager computer is on. The manager computer also sends ARP packets for the sleeping computers to keep the entry fresh in the ARP cache. The manager computer will also respond to ARP requests on behalf of the sleeping computer and reply with the MAC address of the sleeping computer.
During this process, the IP-to-MAC mapping for the sleeping computer remains the same. Wake-up proxy works by informing the network switch that a different network adapter is using the port that was registered by another network adapter. However, this behavior is known as a MAC flap and is unusual for standard network operation. Some network monitoring tools look for this behavior and can assume that something is wrong. Consequently, these monitoring tools can generate alerts or shut down ports when you use wake-up proxy. Do not use wake-up proxy if your network monitoring tools and services do not allow MAC flaps.
When a manager computer sees a new TCP connection request for a sleeping computer and the request is to a port that the sleeping computer was listening on before it went to sleep, the manager computer sends a wake-up packet to the sleeping computer, and then stops redirecting traffic for this computer.
The sleeping computer receives the wake-up packet and wakes up. The sending computer automatically retries the connection and this time, the computer is awake and can respond.
Ref: http://technet.microsoft.com/en-us/library/dd8eb74e-3490-446e-b328-e67f3e85c779#BKMK_PlanToWakeClients
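As a side note on the MAC-flap behaviour described above: on Cisco Catalyst switches (such as the 3750 discussed in this thread) these flaps typically surface as %SW_MATM-4-MACFLAP_NOTIF syslog messages, and you can check which ports a given MAC is bouncing between. The MAC address and port names below are placeholders, and exact syntax can vary by IOS version.
!! Example of the kind of log message the wake-up proxy MAC flaps tend to generate (placeholder MAC/ports)
%SW_MATM-4-MACFLAP_NOTIF: Host 001a.2b3c.4d5e in vlan 1 is flapping between port Gi1/0/12 and port Gi1/0/24
!! Which port does the switch currently have that MAC learned on?
show mac address-table address 001a.2b3c.4d5e
!! Recent flap messages, if buffered logging is enabled
show logging | include MACFLAP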
Thank you to everyone who has posted here and assisted with the troubleshooting process; it is very much appreciated.
Solution 2:
ARP / Broadcast storm
- We see a large volume of broadcast packets from VLAN 1; VLAN 1 is used for desktop devices. We use 192.168.0.0/20 ...
- Wireshark shows that hundreds of computers are flooding the network with ARP broadcasts ...
Your ARP Input process is high, which means the switch is spending a lot of time processing ARPs (see the quick check after the list below). One very common cause of ARP flooding is a loop between your switches. If you have a loop, then you can also get the mac flaps you mentioned above. Other possible causes of ARP floods are:
- IP address misconfigurations
- A layer2 attack, such as arp spoofing
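To confirm that ARP processing really is what is consuming the switch CPU before chasing causes, a quick check in Cisco IOS (output filtering assumed to be available on your version):
!! Show the busiest processes first; ARP Input near the top confirms the CPU is burning on ARP
show processes cpu sorted
!! Or filter straight to the ARP Input line
show processes cpu sorted | include ARP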
First eliminate the possibility of the misconfigurations or layer2 attack mentioned above. The easiest way to do this is with arpwatch on a Linux machine (even if you have to use a live CD on a laptop). If you have a misconfiguration or a layer2 attack, arpwatch gives you messages like this in syslog, which list the mac addresses that are fighting over the same IP address...
Oct 20 10:31:13 tsunami arpwatch: flip flop 192.0.2.53 00:de:ad:85:85:ca (00:de:ad:3:d8:8e)
When you see "flip flops", you have to track down the source of the mac addresses and figure out why they're fighting over the same IP.
- Large number of MAC flaps
- Spanning tree has been verified by Cisco TAC & CCNP/CCIE qualified individuals. We shut down all redundant links.
Speaking as someone who has been through this more times than I would like to recall, don't assume you found all redundant links... just make your switchports behave at all times.
Since you're getting a large number of mac flaps between switchports, it's hard to find where the offenders are (suppose you find two or three mac addresses that send lots of arps, but the source mac addresses keep flapping between ports). If you aren't enforcing a hard limit on mac-addresses per edge port, it is very difficult to track these problems down without manually unplugging cables (which is what you want to avoid). Switch loops cause an unexpected path in the network, and you could wind up with hundreds of macs learned intermittently from what should normally be a desktop switchport.
The easiest way to slow down the mac-moves is with port-security. On every access switchport in Vlan 1 that is connected to a single PC (without a downstream switch), configure the following interface-level commands on your cisco switches...
switchport mode access
switchport access vlan 1
!! switchport nonegotiate disables some Vlan-hopping attacks via Vlan1 -> another Vlan
switchport nonegotiate
!! If no IP Phones are connected to your switches, then you could lower this
!! Beware of people with VMWare / hubs under their desk, because
!! "maximum 3" could shutdown their ports if they have more than 3 macs
switchport port-security maximum 3
switchport port-security violation shutdown
switchport port-security aging time 5
switchport port-security aging type inactivity
switchport port-security
spanning-tree portfast
!! Ensure you don't have hidden STP loops because someone secretly cross-connected a
!! couple of desktop ports
spanning-tree bpduguard enable
In most mac/ARP flooding cases, applying this configuration to all your edge switch ports (especially any with portfast) will get you back to a sane state, because the config will shut down any port that exceeds three mac-addresses and disable a secretly looped portfast port. Three macs per port is a number that works well in my desktop environment, but you could raise it to 10 and probably be fine. After you have done this, any layer 2 loops are broken, rapid mac flaps will cease, and diagnosis becomes much easier.
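A few show commands help verify port-security and find any ports it has err-disabled; optionally, errdisable recovery re-enables violated ports automatically after a timeout instead of waiting for a manual shut/no shut. The interface name and the 600-second interval below are placeholders to adjust:
!! Summary of secure ports and violation counts
show port-security
!! Per-port detail: learned macs, aging, violation mode
show port-security interface GigabitEthernet1/0/10
!! Ports that port-security or bpduguard have err-disabled
show interfaces status err-disabled
!! Optional global config: auto-recover those ports after 10 minutes
errdisable recovery cause psecure-violation
errdisable recovery cause bpduguard
errdisable recovery interval 600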
Another couple of global commands that are useful for tracking down ports associated with a broadcast storm (mac-move) and flooding (threshold)...
mac-address-table notification mac-move
mac address-table notification threshold limit 90 interval 900
After you finish, optionally do a clear mac address-table to accelerate healing from a potentially full CAM table.
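The notification data can be reviewed afterwards, and the clear command takes a dynamic keyword so that only learned entries are flushed (syntax shown for a 3750-class switch; it may differ slightly on other platforms):
!! Review the mac-move and threshold notification history
show mac address-table notification mac-move
show mac address-table notification threshold
!! Flush dynamically learned entries
clear mac address-table dynamic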
- Ran show mac address-table on different switches and on the core itself (on the core, for example, my desktop is plugged in directly), and we can see several different MAC hardware addresses being registered to the interface, even though that interface has only one computer attached to it...
This whole answer assumes your 3750 doesn't have a bug causing the problem (but you did say that Wireshark indicated PCs that are flooding). What you're showing us is obviously wrong when there is only one computer attached to Gi1/1/3, unless that PC has something like VMware on it.
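To see exactly what a single access port has learned, and how full the CAM table is overall, something like the following works on the 3750 (the interface name reuses the Gi1/1/3 example from above):
!! All macs currently learned on one access port
show mac address-table interface GigabitEthernet1/1/3
!! How much of the CAM table is in use
show mac address-table count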
Misc thoughts
Based on a chat conversation we had, I probably don't have to mention the obvious, but I will for the sake of future visitors...
- Putting any users in Vlan1 is normally a bad idea (I understand you may have inherited a mess)
- Regardless of what TAC tells you, 192.168.0.0/20 is too large to manage in a single switched domain without risks of layer2 attacks. The larger your subnet is, the greater your exposure to layer2 attacks like this, because ARP is an unauthenticated protocol and a router must at least read every valid ARP from that subnet.
- Storm-control on layer2 ports is usually a good idea as well; however, enabling storm-control in a situation like this will take out good traffic along with the bad. After the network has healed, apply some storm-control policies on your edge ports and uplinks (see the sketch below).
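A minimal storm-control sketch for edge ports, assuming thresholds that suit a typical desktop environment; the interface range and the 5%/2% levels are placeholders to tune for your own traffic profile:
interface range GigabitEthernet1/0/1 - 48
 !! Suppress broadcast traffic above 5% of port bandwidth, resume below 2%
 storm-control broadcast level 5.00 2.00
 !! Send a trap/log message rather than err-disabling the port
 storm-control action trap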
Solution 3:
The real question is why hosts are sending so many ARPs in the first place. Until this is answered, the switch(es) will continue to have a hard time dealing with the ARP storm. Netmask mismatch? Low host ARP timers? One (or more) hosts having an "interface" route? A rogue wireless bridge somewhere? "Gratuitous ARP" gone insane? DHCP server "in-use" probing? It doesn't sound like an issue with the switches, or layer 2; you have hosts doing bad things.
My debugging process would be to unplug everything and watch closely as things are reattached, one port at a time. (I know it's miles from ideal, but at some point you have to cut your losses and attempt to physically isolate any possible sources.) Then I'd work towards understanding why select ports are generating so many ARPs.
(Would a lot of those hosts happen to be Linux systems? Linux has had a very d***med stupid ARP cache management system; the fact that it will "re-verify" an entry in mere minutes is broken in my book. It tends to be less of an issue in small networks, but a /20 is not a small network.)
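As part of the reattach-and-watch process described above, a SPAN session lets you mirror each suspect port to a capture station running Wireshark instead of guessing. A minimal sketch in Cisco IOS; the session number and interface names are placeholders:
!! Mirror the suspect access port (both directions) to the port where the capture laptop sits
monitor session 1 source interface GigabitEthernet1/0/5 both
monitor session 1 destination interface GigabitEthernet1/0/48
!! Verify the session
show monitor session 1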