Gustafson's law vs Amdahl's law

Amdahl's law

Suppose you have a sequential code and that a fraction f of its computation can be parallelized and run on N processing units working in parallel, while the remaining fraction 1 - f cannot be parallelized. Amdahl's law states that the speedup achieved by parallelization is

S = 1 / ((1 - f) + f / N)
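
As a sanity check, here is a minimal Python sketch (function and variable names are just for this example) that evaluates the formula above; with a parallel fraction f = 0.9, the speedup saturates at 1 / (1 - f) = 10 no matter how large N gets:

    def amdahl_speedup(f, N):
        # f: parallelized fraction of the work, N: number of processing units
        return 1.0 / ((1.0 - f) + f / N)

    f = 0.9  # assumed parallel fraction, chosen just for this example
    for N in (1, 2, 4, 16, 256, 65536):
        print(f"N = {N:6d}: speedup = {amdahl_speedup(f, N):.3f}")
    # The speedup approaches, but never exceeds, 1 / (1 - f) = 10.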

Gustafson's law

Amdahl's point of view focuses on a fixed problem size, since it deals with a code taking a fixed amount of sequential computation time. Gustafson's objection is that massively parallel machines make previously infeasible computations possible, since they enable computations on very large data sets in a fixed amount of time. In other words, a parallel platform does more than speed up the execution of a code: it enables dealing with larger problems.

Suppose you have an application taking a time ts to execute on N processing units. Of that computing time, a fraction (1 - f) is spent running sequentially. Accordingly, this application would run on a fully sequential machine in a time t equal to

t = (1 - f)·ts + N·f·ts

If we increase the problem size, we can increase the number of processing units so that the time spent in the parallel part stays equal to f·ts. In this case, the sequential execution time increases with N, which thus becomes a measure of the problem size. The speedup then becomes

S = t / ts = (1 - f) + N·f

The efficiency would then be

E = S / N = (1 - f) / N + f

so that the efficiency tends to f for increasing N. The pitfall of these rather optimistic speedup and efficiency figures is that, as the problem size increases, communication costs increase as well, and Gustafson's law does not account for them.
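
The derivation above translates directly into a few lines of Python (a sketch with illustrative names); for f = 0.9 it shows the scaled speedup growing linearly with N while the efficiency approaches f:

    def gustafson_speedup(f, N):
        # Scaled speedup: S = t / ts = (1 - f) + N * f
        return (1.0 - f) + N * f

    def gustafson_efficiency(f, N):
        # Efficiency: E = S / N = (1 - f) / N + f
        return gustafson_speedup(f, N) / N

    f = 0.9  # assumed parallel fraction, chosen just for this example
    for N in (1, 4, 16, 256, 65536):
        print(f"N = {N:6d}: S = {gustafson_speedup(f, N):10.2f}, "
              f"E = {gustafson_efficiency(f, N):.4f}")
    # E tends to f = 0.9 as N grows; communication costs are ignored here,
    # which is exactly the simplification Gustafson's law makes.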


GPGPU

There are interesting discussions on Amdahl's law as applied to General Purpose Graphics Processing Units; see:

Amdahl's law and GPU
Amdahl's Law for GPU
Is Amdahl's law accepted for GPUs too?


We are looking at the same problem from different perspectives. Amdahl's law asks: if you have, say, 100 CPUs, how much faster can you solve the same problem?

Gustafson's law asks the opposite: if a parallel computer with 100 CPUs can solve this problem in 30 minutes, how long would it take a computer with just ONE such CPU to solve the same problem?
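
Assuming, purely for illustration, that 99% of the work is parallelizable, Gustafson's scaling answers this with (1 - 0.99)·30 + 100·0.99·30 = 2970.3 minutes, i.e., about two days instead of half an hour.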

Gustafson's law often reflects real situations better. For example, we cannot use a 20-year-old PC to play most of today's video games, because it is just too slow.


Amdahl's law

Generally, if a fraction f of a job is impossible to divide into parallel parts, the whole thing can get at most 1/f times faster in parallel, no matter how many processors you use. That is Amdahl's law.
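
For example, if f = 0.05, i.e., 5% of the job is inherently serial, then no number of processors can make the whole job more than 1/0.05 = 20 times faster.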

Gustafson's law

Generally, with p participants and an un-parallelizable fraction f, the whole thing can get p + (1 - p)·f times faster. That is Gustafson's law.
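
As a minimal Python sketch (names and numbers are purely illustrative), the two statements can be compared for the same serial fraction; note that here f is the un-parallelizable fraction, as in the two statements above:

    def amdahl_limit(f):
        # Upper bound on Amdahl speedup with serial fraction f (N -> infinity)
        return 1.0 / f

    def gustafson_scaled_speedup(p, f):
        # Scaled speedup with p participants and serial fraction f
        return p + (1.0 - p) * f

    f = 0.05  # assumed serial fraction, chosen just for this example
    for p in (10, 100, 1000):
        print(f"p = {p:5d}: Gustafson {gustafson_scaled_speedup(p, f):8.2f}, "
              f"Amdahl limit {amdahl_limit(f):.1f}")
    # Gustafson's scaled speedup keeps growing with p,
    # while Amdahl's speedup can never exceed 1/f = 20.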