How high can system load go?
Solution 1:
This brings up the question: how high can system load go? For instance, is it possible for it to go up to 2.00, or even 100.00?
Absolutely. Looking at the uptime
man page:
System load averages is the average number of processes that are either in a runnable or uninterruptable state. A process in a runnable state is either using the CPU or waiting to use the CPU. A process in uninterruptable state is waiting for some I/O access, eg waiting for disk. The averages are taken over the three time intervals. Load averages are not normalized for the number of CPUs in a system, so a load average of 1 means a single CPU system is loaded all the time while on a 4 CPU system it means it was idle 75% of the time.
So if you have a lot of processes waiting to run (or a lot of processes blocked waiting for I/O), you're going to have a high load average. This article talks about it in more detail, and has useful links to other resources.
On an unloaded system, the load average will typically be in the range 0 <= load_average <= n, where n is the number of cores on your system.
Solution 2:
I've seen live systems hit the thousands. Load average is a relative measure, based on the number of waiting processes, of how much competition there is for the kernel's attention and for a slice of CPU time. If the machine is swamped with jobs or is crashing, getting that time can take a long while.
What level is acceptable depends on the machine, the number of cores, the kind of kernel job scheduler in use, and the jobs you expect it to do. I have some machines that are quite happy in the ~10 range but bog down if they hit ~40-50. Others become noticeably laggy at 2 and would be unusable at 10.
It's not unusual for the load to be high during boot, since lots of things are being done at once and the machine is winding up. I would consider ~1 quite a normal load to hit during boot for a desktop Linux, settling down to ~0.1 while doing nothing.
Solution 3:
On Linux, the system load average values are made up of processes in one of three different states. In general, one could say that the load average is the number of processes waiting for CPU time or consuming CPU time. The three values in the load average overview are the load average over the past minute, the last 5 minutes and the last 15 minutes.
The three different states of processes counted towards the load average are: (1) processes running on the CPU, (2) processes waiting for CPU time and (3) processes in uninterruptible sleep. The last category, while not generating CPU load, can increase the system load average significantly.
For example, a dozen processes waiting for reads from a disk that is very busy or unavailable will generate a load average of 12 as processes in uninterruptable sleep, but your CPU can be perfectly idle in the meantime.
So, yes, load average can easily go up to double digits. How bad that is depends largely on your hardware. If you have 16 cores, having 16 processes waiting for CPU time is not so bad. On a single core machine, having 3 processes waiting for CPU time can be very bad.
Solution 4:
Make a simple C program that runs infinite loops in 10000 threads, and give it a very low priority (+20). Your load will be 10000, while your system will still be usable. It will use only a little RAM (a few megabytes at most).
Admittedly this is an unusual configuration, and you won't find it on real systems.
The system load is just the mean number of processes waiting for a CPU time slot, no less and no more. Here is another answer about the proper way to interpret system load.
In day-to-day experience, a load above ~30 usually indicates some problem.
Solution 5:
A few seconds after killing a process that was eating an old 450 MHz CPU: