What is FLOP/s and is it a good measure of performance?
It's a pretty decent measure of performance, as long as you understand exactly what it measures.
FLOPS is, as the name implies, FLoating point OPerations per Second, though exactly what constitutes a FLOP might vary by CPU (some CPUs can perform addition and multiplication as one operation, others can't, for example). That means that as a performance measure, it is fairly close to the hardware, which means that 1) you have to know your hardware to compute the ideal FLOPS on the given architecture, and 2) you have to know your algorithm and implementation to figure out how many floating point ops it actually consists of.
In any case, it's a useful tool for examining how well you utilize the CPU. If you know the CPU's theoretical peak performance in FLOPS, you can work out how efficiently you use the CPU's floating point units, which are often among the hardest to utilize efficiently. A program which runs at 30% of the FLOPS the CPU is capable of has room for optimization. One which runs at 70% is probably not going to get much more efficient unless you change the basic algorithm. For math-heavy algorithms like yours, that is pretty much the standard way to measure performance. You could simply measure how long the program takes to run, but that varies wildly depending on the CPU. But if your program runs at 50% CPU utilization (relative to the peak FLOPS count), that is a somewhat more constant value (it'll still vary between radically different CPU architectures, but it's a lot more consistent than execution time).
But knowing that "My CPU is capable of X GFLOPS, and I'm only actually achieving a throughput of, say, 20% of that" is very valuable information in high-performance software. It means that something other than the floating point ops is holding you back, and preventing the FP units from working efficiently. And since the FP units constitute the bulk of the work, that means your software has a problem.
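As a rough sketch of that calculation (the core count, clock rate, FLOPs-per-cycle figure, and the measured numbers below are all made-up placeholders; substitute your own CPU's specifications and your own measurements):

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical peak: cores x clock (GHz) x FLOPs per cycle per core.
       All placeholder numbers -- look up your own CPU's figures. */
    double cores = 4.0, clock_ghz = 3.0, flops_per_cycle = 8.0;
    double peak_gflops = cores * clock_ghz * flops_per_cycle;   /* 96 GFLOP/s */

    /* Hypothetical measurement: total FLOPs your code performed and how long it took. */
    double work_gflop = 500.0;   /* billions of floating point ops, e.g. from counting */
    double elapsed_s  = 26.0;    /* wall-clock seconds */
    double achieved_gflops = work_gflop / elapsed_s;

    printf("peak %.1f GFLOP/s, achieved %.1f GFLOP/s, utilization %.0f%%\n",
           peak_gflops, achieved_gflops, 100.0 * achieved_gflops / peak_gflops);
    return 0;
}
```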
It's easy to measure "My program runs in X minutes", and if you feel that is unacceptable then sure, you can go "I wonder if I can chop 30% off that", but you don't know if that is possible unless you work out exactly how much work is being done, and exactly what the CPU is capable of at peak. How much time do you want to spend optimizing this, if you don't even know whether the CPU is fundamentally capable of running any more instructions per second?
It's very easy to prevent the CPU's FP unit from being utilized efficiently, by having too many dependencies between FP ops, or by having too many branches or similar preventing efficient scheduling. And if that is what is holding your implementation back, you need to know that. You need to know that "I'm not getting the FP throughput that should be possible, so clearly other parts of my code are preventing FP instructions from being available when the CPU is ready to issue one".
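To make the dependency point concrete, here is a small illustrative sketch (mine, not part of the original point): both functions below do the same number of FLOPs, but the first chains every addition onto the previous one, while the second keeps four independent accumulators so the CPU can overlap additions. (The summation order, and thus the rounding, differs slightly between the two.)

```c
#include <stddef.h>

/* One long dependency chain: each add must wait for the previous one. */
double sum_serial(const double *x, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Four independent accumulators: the CPU can keep several adds in flight,
   so the FP units sit idle less often. */
double sum_unrolled(const double *x, size_t n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i + 0];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; i++)        /* leftover elements */
        s0 += x[i];
    return (s0 + s1) + (s2 + s3);
}
```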
Why do you need other ways to measure performance? What's wrong with just working out the FLOPS count as your boss asked you to? ;)
"compare the results with benchmarks" and do what?
FLOPS means you need
1) FLOPs per some unit of work.
2) time for that unit of work.
Let's say you have some input file that does 1,000 iterations through some loop. The loop is a handy unit of work. It gets executed 1,000 times. It takes an hour.
The loop has some adds and multiplies and a few divides and a square root. You can count adds, multiplies and divides. You can count them in the source, looking for +, * and /. You can find the assembler-language output from the compiler and count them there, too. You may get different numbers. Which one is right? Ask your boss.
You can count the square roots, but you don't know what it really does in terms of multiplies and adds. So, you'll have to do something like benchmark multiply vs. square root to get a sense of how long a square root takes.
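If it helps, a crude sketch of such a micro-benchmark might look like this (a rough illustration only; real micro-benchmarking needs more care with compiler optimization, warm-up, and timer resolution):

```c
#include <math.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long n = 100000000L;          /* 100 million operations of each kind */
    volatile double x = 1.000000001;    /* volatile so the work isn't hoisted out of the loop */
    volatile double sink = 0.0;         /* volatile so the result isn't thrown away */

    clock_t t0 = clock();
    for (long i = 0; i < n; i++) sink = x * x;
    clock_t t1 = clock();
    for (long i = 0; i < n; i++) sink = sqrt(x);
    clock_t t2 = clock();

    double mul_s  = (double)(t1 - t0) / CLOCKS_PER_SEC;
    double sqrt_s = (double)(t2 - t1) / CLOCKS_PER_SEC;
    printf("multiply: %.2f s, sqrt: %.2f s, sqrt/multiply ~ %.1f\n",
           mul_s, sqrt_s, sqrt_s / mul_s);
    return 0;
}
```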
Now you know the FLOPs in your loop. And you know the time to run it 1,000 times. So you know FLOPs per second -- your FLOPS.
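Put together, the bookkeeping might look like the sketch below. Every number in it is a placeholder: the per-iteration counts stand in for whatever you counted in your source or assembler output, and the flops-per-square-root weight stands in for whatever your multiply-vs-sqrt benchmark suggested.

```c
#include <stdio.h>

int main(void)
{
    /* Placeholder counts from reading the source (or the assembler output). */
    double adds_per_iter  = 12.0;
    double muls_per_iter  = 10.0;
    double divs_per_iter  = 2.0;
    double sqrts_per_iter = 1.0;
    double flops_per_sqrt = 8.0;   /* placeholder weight from a multiply-vs-sqrt benchmark */

    double flops_per_iter = adds_per_iter + muls_per_iter + divs_per_iter
                          + sqrts_per_iter * flops_per_sqrt;

    double iterations = 1000.0;    /* the unit of work runs 1,000 times */
    double elapsed_s  = 3600.0;    /* ... and takes an hour             */

    double total_flops = flops_per_iter * iterations;
    printf("%.0f FLOPs total, %.2f FLOPs per second\n",
           total_flops, total_flops / elapsed_s);
    return 0;
}
```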
Then you look at LINPACK and find you're slower. Now what? Your program isn't LINPACK, and it's slower than LINPACK. Odds are really good that your code will be slower: unless it has been written and optimized over the same number of years as LINPACK, it will be.
Here's the other part. Your processor has some defined FLOPS rating against various benchmarks. Your algorithm is not one of those benchmarks, so you fall short of the benchmarks. Is this bad? Or is this the obvious consequence of not being a benchmark?
What's the actionable outcome going to be?
Measurement against some benchmark code base is only going to tell you that your algorithm isn't the benchmark algorithm. It's a foregone conclusion that you'll be different; usually slower.
Obviously, the result of measuring against LINPACK will be (a) you're different and therefore (b) you need to optimize.
Measurement is only really valuable when done against yourself. Not some hypothetical instruction mix, but your own instruction mix. Measure your own performance. Make a change. See if your performance -- compared with yourself -- gets better or worse.
FLOPS don't matter. What matters is time per unit of work. You'll never match the design parameters of your hardware because you're not running the benchmark that your hardware designers expected.
LINPACK doesn't matter. What matters is your code base and the changes you're making to change performance.
I'd just like to add a couple of finer points:
Division is special. Since most processors can do an addition, comparison, or multiplication in a single cycle, those are all counted as one flop. But division always takes longer. How much longer depends on the processor, but there is something of a de facto standard in the HPC community to count one division as 4 flops.
If a processor has a fused multiply-add instruction that does a multiplication and an addition in a single instruction -- generally A += B * C -- that counts as 2 operations.
Always be careful in distinguishing between single-precision flops and double-precision flops. A processor that is capable of so many single-precision gigaflops may only be capable of a small fraction of that many double-precision gigaflops. The AMD Athlon and Phenom processors can generally do half as many double-precision flops as single precision. The ATI Firestream processors can generally do 1/5th as many double-precision flops as single precision. If someone is trying to sell you a processor or a software package and they just quote flops without saying which, you should call them on it.
The terms megaflop, gigaflop, teraflop, etc. are in common use. These refer to factors of 1000, not 1024. E.g., 1 megaflop = 1,000,000 flop/sec not 1,048,576. Just as with disk drive sizes, there is some confusion on this.
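Putting the division and FMA conventions above into code, a hypothetical kernel might be annotated like this (the 4-flops-per-division figure is just the informal convention mentioned above, not a hardware fact):

```c
#include <math.h>

/* A hypothetical kernel, annotated with the counting conventions above. */
double kernel(double a, double b, double c, double d)
{
    a = fma(b, c, a);   /* fused multiply-add: one instruction, counted as 2 flops */
    a = a / d;          /* division: conventionally counted as 4 flops             */
    a = a + b;          /* addition: 1 flop                                        */
    return a;           /* 2 + 4 + 1 = 7 flops for this kernel, by convention      */
}
```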
Old question with old, if popular, answers that are not exactly great, IMO.
A “FLOP” is a floating-point math operation. “FLOPS” can mean either of two things:
- The simple plural of “FLOP” (e.g. “operation X takes 50 FLOPs”)
- The rate of FLOPs in the first sense (i.e. floating-point math operations per second)
Where it is not clear from context which of these is meant, the two are often disambiguated by writing the former as “FLOPs” and the latter as “FLOP/s”.
FLOPs are so-called to distinguish them from other kinds of CPU operations, such as integer math operations, logical operations, bitwise operations, memory operations, and branching operations, which have different costs (read “take different lengths of time”) associated with them.
The practice of “FLOP counting” dates back to the very early days of scientific computing, when FLOPs were, relatively speaking, extremely expensive, taking many CPU cycles each. An 80387 math coprocessor, for example, took something like 300 cycles for a single multiplication. This was at a time before pipelining and before the gulf between CPU clock speeds and memory speeds had really opened up: memory operations took just a cycle or two, and branching (“decision making”) was similarly cheap. Back then, if you could eliminate a single FLOP in favor of a dozen memory accesses, you made a gain. If you could eliminate a single FLOP in favor of a dozen branches, you made a gain. So, in the past, it made sense to count FLOPs and not worry much about memory references and branches because FLOPs strongly dominated execution time because they were individually very expensive relative to other kinds of operation.
More recently, the situation has reversed. FLOPs have become very cheap — any modern Intel core can perform about two FLOPs per cycle (although division remains relatively expensive) — and memory accesses and branches are comparatively much more expensive: an L1 cache hit costs maybe 3 or 4 cycles, a fetch from main memory costs 150–200. Given this inversion, it is no longer the case that eliminating a FLOP in favor of a memory access will result in a gain; in fact, that's unlikely. Similarly, it is often cheaper to “just do” a FLOP, even if it's redundant, rather than decide whether to do it or not. This is pretty much the complete opposite of the situation 25 years ago.
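As a toy illustration of the “just do the FLOP” point (a sketch, not a claim about any particular CPU): both functions below compute a scaled copy of an array, but the first tries to save a multiplication behind a branch, and on a modern pipelined core that branch is usually the more expensive part.

```c
#include <stddef.h>

/* Branchy version: skips the multiply when the weight is zero.
   The "saved" FLOP is paid for with a potentially hard-to-predict branch. */
void scale_branchy(double *y, const double *x, const double *w, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (w[i] != 0.0)
            y[i] = x[i] * w[i];
        else
            y[i] = 0.0;
    }
}

/* Branchless version: always does the multiply. More FLOPs on paper, but
   usually at least as fast. (Assumes finite inputs: x[i] * 0.0 differs
   from 0.0 when x[i] is NaN or infinite.) */
void scale_branchless(double *y, const double *x, const double *w, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = x[i] * w[i];
}
```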
Unfortunately, the practice of blind FLOP-counting as an absolute metric of algorithmic merit has persisted well past its sell-by date. Modern scientific computing is much more about memory bandwidth management — trying to keep the execution units that do the FLOPs constantly fed with data — than it is about reducing the number of FLOPs. The reference to LINPACK (which was essentially obsoleted by LAPACK 20 years ago) leads me to suspect that your employer is probably of a very old school that hasn't internalized the fact that establishing performance expectations is not just a matter of FLOP counting any more. A solver that does twice as many FLOPs could still be twenty times faster than another if it has a much more favorable memory access pattern and data layout.
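To make the access-pattern point concrete, here is a minimal sketch: the two loops below perform exactly the same number of FLOPs, but the second walks the array contiguously (C stores matrices row-major) and will typically be far faster on a cached, bandwidth-limited machine.

```c
#include <stddef.h>

#define N 4096

/* Column-by-column traversal: strides through memory, poor cache behaviour. */
double sum_colmajor(double a[N][N])
{
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

/* Row-by-row traversal: same FLOP count, contiguous accesses, far friendlier
   to the cache and to hardware prefetching. */
double sum_rowmajor(double a[N][N])
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}
```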
The upshot of all this is that performance assessment of computationally intensive software has become a lot more complex than it used to be. The fact that FLOPs have become cheap is hugely complicated by the massive variability in the costs of memory operations and branches. When it comes to assessing algorithms, simple FLOP counting simply doesn't inform overall performance expectations any more.
Perhaps a better way of thinking about performance expectations and assessment is provided by the so-called roofline model. It is far from perfect, but it has the advantage of making you think about the trade-off between floating-point throughput and memory bandwidth at the same time, providing a more informative and insightful “2D picture” in which performance measurements can be compared against performance expectations.
It's worth a look.
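As a very rough sketch of the roofline idea (the peak-compute and bandwidth numbers below are made up; plug in your machine's): attainable throughput is capped either by the peak FLOP rate or by memory bandwidth times the kernel's arithmetic intensity (FLOPs per byte moved), whichever is lower.

```c
#include <stdio.h>

/* Roofline: attainable GFLOP/s = min(peak compute, bandwidth * arithmetic intensity). */
static double roofline(double peak_gflops, double bw_gbytes, double flops_per_byte)
{
    double mem_bound = bw_gbytes * flops_per_byte;
    return mem_bound < peak_gflops ? mem_bound : peak_gflops;
}

int main(void)
{
    /* Placeholder machine: 100 GFLOP/s peak compute, 20 GB/s memory bandwidth. */
    double peak = 100.0, bw = 20.0;

    /* E.g. a dot product moves ~16 bytes per 2 flops: intensity ~ 0.125 flop/byte. */
    printf("dot product:  %.1f GFLOP/s attainable\n", roofline(peak, bw, 0.125));
    /* A well-blocked matrix multiply can reach intensities of 10+ flops/byte. */
    printf("blocked GEMM: %.1f GFLOP/s attainable\n", roofline(peak, bw, 10.0));
    return 0;
}
```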