Java Math.min/max performance
When I run your (suitably-modified) code using Math.max
on an old (1.6.0_27) JVM, the hot loop looks like this:
0x00007f4b65425c50: mov %r11d,%edi ;*getstatic array
; - foo146::bench@81 (line 40)
0x00007f4b65425c53: mov 0x10(%rax,%rdx,4),%r8d
0x00007f4b65425c58: mov 0x14(%rax,%rdx,4),%r10d
0x00007f4b65425c5d: mov 0x18(%rax,%rdx,4),%ecx
0x00007f4b65425c61: mov 0x2c(%rax,%rdx,4),%r11d
0x00007f4b65425c66: mov 0x28(%rax,%rdx,4),%r9d
0x00007f4b65425c6b: mov 0x24(%rax,%rdx,4),%ebx
0x00007f4b65425c6f: rex mov 0x20(%rax,%rdx,4),%esi
0x00007f4b65425c74: mov 0x1c(%rax,%rdx,4),%r14d ;*iaload
; - foo146::bench@86 (line 40)
0x00007f4b65425c79: cmp %edi,%r8d
0x00007f4b65425c7c: cmovl %edi,%r8d
0x00007f4b65425c80: cmp %r8d,%r10d
0x00007f4b65425c83: cmovl %r8d,%r10d
0x00007f4b65425c87: cmp %r10d,%ecx
0x00007f4b65425c8a: cmovl %r10d,%ecx
0x00007f4b65425c8e: cmp %ecx,%r14d
0x00007f4b65425c91: cmovl %ecx,%r14d
0x00007f4b65425c95: cmp %r14d,%esi
0x00007f4b65425c98: cmovl %r14d,%esi
0x00007f4b65425c9c: cmp %esi,%ebx
0x00007f4b65425c9e: cmovl %esi,%ebx
0x00007f4b65425ca1: cmp %ebx,%r9d
0x00007f4b65425ca4: cmovl %ebx,%r9d
0x00007f4b65425ca8: cmp %r9d,%r11d
0x00007f4b65425cab: cmovl %r9d,%r11d ;*invokestatic max
; - foo146::bench@88 (line 40)
0x00007f4b65425caf: add $0x8,%edx ;*iinc
; - foo146::bench@92 (line 39)
0x00007f4b65425cb2: cmp $0x1ffff9,%edx
0x00007f4b65425cb8: jl 0x00007f4b65425c50
Apart from the weirdly-placed REX prefix (not sure what that's about), here you have a loop that's been unrolled 8 times that does mostly what you'd expect: loads, comparisons, and conditional moves. Interestingly, if you swap the order of the arguments to max, here it outputs the other kind of 8-deep cmovl chain. I guess it doesn't know how to generate a 3-deep tree of cmovls or 8 separate cmovl chains to be merged after the loop is done.
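To see what those "separate cmovl chains merged after the loop" would look like at the source level, here is a sketch of my own (not from the original benchmark) using 4 independent accumulators instead of 8; each accumulator carries its own dependency chain, and the chains are only combined once the loop is done:

static int maxUnrolled(int[] array) {
    // Four independent running maxima; their dependency chains could in
    // principle execute in parallel instead of forming one serial cmovl chain.
    int m0 = Integer.MIN_VALUE, m1 = Integer.MIN_VALUE;
    int m2 = Integer.MIN_VALUE, m3 = Integer.MIN_VALUE;
    int i = 0;
    for (; i + 3 < array.length; i += 4) {
        m0 = Math.max(m0, array[i]);
        m1 = Math.max(m1, array[i + 1]);
        m2 = Math.max(m2, array[i + 2]);
        m3 = Math.max(m3, array[i + 3]);
    }
    for (; i < array.length; i++) m0 = Math.max(m0, array[i]); // leftover tail
    return Math.max(Math.max(m0, m1), Math.max(m2, m3));       // merge after the loop
}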
With the explicit OpsMath.max, it turns into a rat's nest of conditional and unconditional branches that's unrolled 8 times. I'm not going to post the loop; it's not pretty. Basically each mov/cmp/cmovl above gets broken into a load, a compare, and a conditional jump to where a mov and a jmp happen. Interestingly, if you swap the order of the arguments to max, here it outputs an 8-deep cmovle chain instead. EDIT: As @maaartinus points out, said rat's nest of branches is actually faster on some machines because the branch predictor works its magic on them and these are well-predicted branches.
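For concreteness, the "swap the order of the arguments" change that both paragraphs refer to is just a change at the call site, something like this (a sketch; the variable names are mine, not from the original code):

// One argument order...
for (int i = 0; i < array.length; i++) max = Math.max(max, array[i]);
// ...versus the swapped one, which flips the comparison direction the
// JIT emits (cmovl vs. cmovle in the listings above).
for (int i = 0; i < array.length; i++) max = Math.max(array[i], max);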
I would hesitate to draw conclusions from this benchmark. You have benchmark construction issues: you have to run it a lot more times than you are, and you have to factor your code differently if you want to time Hotspot's fastest code. Beyond the wrapper code, you aren't measuring how fast your max is, or how well Hotspot understands what you're trying to do, or anything else of value here. Both implementations of max will result in code that's entirely too fast for any sort of direct measurement to be meaningful within the context of a larger program.
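One way around those construction issues (my suggestion, not part of the original answer) is to use a harness like JMH, which takes care of warmup, forking, and dead-code elimination. A minimal sketch, assuming JMH is on the classpath:

import java.util.Random;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MaxBench {
    int[] array;

    @Setup
    public void setup() {
        array = new int[1 << 20];
        Random r = new Random(42);
        for (int i = 0; i < array.length; i++) array[i] = r.nextInt();
    }

    @Benchmark
    public int mathMax() {
        int m = Integer.MIN_VALUE;
        for (int v : array) m = Math.max(m, v);
        return m; // returning the result keeps the loop from being dead-code eliminated
    }
}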
It's hard to tell why Math.max is slower than Ops.max, but it's easy to tell why this benchmark strongly favors branching over conditional moves: on the n-th iteration, the probability of

Math.max( array[i], max );

being not equal to max is the probability that array[n-1] is bigger than all previous elements, which for random data is roughly 1/n. Obviously, this probability gets lower and lower with growing n, and given

final int[] array = new int[(8*1024*1024)/4];

it's rather negligible most of the time. The conditional move instruction is insensitive to the branching probability; it always takes the same amount of time to execute. The conditional move instruction is faster than branch prediction if the branch is very hard to predict. On the other hand, branch prediction is faster if the branch can be predicted well with high probability. Currently, I'm unsure about the speed of conditional move compared to the best and worst case of branching.1
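That probability argument is easy to check empirically. Here's a quick sketch of my own (not from the answer) that counts how often the running max actually changes over a random array; the count grows like the harmonic number H(n) ≈ ln n, so for a million elements the "max changed" branch is taken only about 14 times:

import java.util.Random;

public class MaxUpdateCount {
    public static void main(String[] args) {
        int n = 1_000_000;
        Random r = new Random(42);
        int max = Integer.MIN_VALUE;
        int updates = 0;
        for (int i = 0; i < n; i++) {
            int v = r.nextInt();
            if (v > max) { max = v; updates++; } // the rarely-taken branch
        }
        // Expected number of updates is about ln(n), i.e. ~13.8 for n = 10^6.
        System.out.println("updates: " + updates);
    }
}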
In your case all but the first few branches are fairly predictable. From about n == 10 onward, there's no point in using conditional moves, as the branch is all but guaranteed to be predicted correctly and can execute in parallel with other instructions (I guess you need exactly one cycle per iteration).
This seems to happen for algorithms computing minimum/maximum or doing some inefficient sorting (good branch predictability means low entropy per step).
1 Both conditional move and predicted branch take one cycle. The problem with the former is that it needs both of its operands ready, and this may take an additional instruction. In the end the critical path may get longer and/or the ALUs saturated while the branching unit is idle. Often, but not always, branches can be predicted well in practical applications; that's why branch prediction was invented in the first place.
As for the gory details of timing conditional move vs. branch prediction best and worst case, see the discussion below in the comments. My own benchmark shows that conditional move is significantly faster than branch prediction when branch prediction encounters its worst case, but I can't ignore contradictory results. We need some explanation for what exactly makes the difference. Some more benchmarks and/or analysis could help.
Using JDK 8:
java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode)
On Ubuntu 13.10, I ran the following:
import java.util.Random;
import java.util.function.BiFunction;

public class MaxPerformance {
    private final BiFunction<Integer, Integer, Integer> max;
    private final int[] array;

    public MaxPerformance(BiFunction<Integer, Integer, Integer> max, int[] array) {
        this.max = max;
        this.array = array;
    }

    public double time() {
        long start = System.nanoTime();
        int m = Integer.MIN_VALUE;
        for (int i = 0; i < array.length; ++i) m = max.apply(m, array[i]);
        m = Integer.MIN_VALUE;
        for (int i = 0; i < array.length; ++i) m = max.apply(array[i], m);
        // total time over number of calls to max
        return ((double) (System.nanoTime() - start)) / (double) array.length / 2.0;
    }

    public double averageTime(int repeats) {
        double cumulativeTime = 0;
        for (int i = 0; i < repeats; i++)
            cumulativeTime += time();
        return cumulativeTime / (double) repeats;
    }

    public static void main(String[] args) {
        int size = 1000000;
        Random random = new Random(123123123L);
        int[] array = new int[size];
        for (int i = 0; i < size; i++) array[i] = random.nextInt();
        double tMath = new MaxPerformance(Math::max, array).averageTime(100);
        double tAlt1 = new MaxPerformance(MaxPerformance::max1, array).averageTime(100);
        double tAlt2 = new MaxPerformance(MaxPerformance::max2, array).averageTime(100);
        System.out.println("Java Math: " + tMath);
        System.out.println("Alt 1: " + tAlt1);
        System.out.println("Alt 2: " + tAlt2);
    }

    public static int max1(final int a, final int b) {
        if (a >= b) return a;
        return b;
    }

    public static int max2(final int a, final int b) {
        return (a >= b) ? a : b; // same as JDK implementation
    }
}
And I got the following results (average nanoseconds taken for each call to max):
Java Math: 15.443555810000003
Alt 1: 14.968298919999997
Alt 2: 16.442204045
So in the long run it looks like the second implementation (Alt 1) is the fastest, although by a relatively small margin.
In order to have a slightly more scientific test, it makes sense to compute the max of pairs of elements where each call is independent of the previous one. This can be done by using two randomized arrays instead of one, as in this benchmark:
import java.util.Random;
import java.util.function.BiFunction;

public class MaxPerformance2 {
    private final BiFunction<Integer, Integer, Integer> max;
    private final int[] array1, array2;

    public MaxPerformance2(BiFunction<Integer, Integer, Integer> max, int[] array1, int[] array2) {
        this.max = max;
        this.array1 = array1;
        this.array2 = array2;
        if (array1.length != array2.length) throw new IllegalArgumentException();
    }

    public double time() {
        long start = System.nanoTime();
        int m = Integer.MIN_VALUE;
        for (int i = 0; i < array1.length; ++i) m = max.apply(array1[i], array2[i]);
        m += m; // to avoid optimizations!
        return ((double) (System.nanoTime() - start)) / (double) array1.length;
    }

    public double averageTime(int repeats) {
        // warm up rounds:
        double tmp = 0;
        for (int i = 0; i < 10; i++) tmp += time();
        tmp *= 2.0;
        double cumulativeTime = 0;
        for (int i = 0; i < repeats; i++)
            cumulativeTime += time();
        return cumulativeTime / (double) repeats;
    }

    public static void main(String[] args) {
        int size = 1000000;
        Random random = new Random(123123123L);
        int[] array1 = new int[size];
        int[] array2 = new int[size];
        for (int i = 0; i < size; i++) {
            array1[i] = random.nextInt();
            array2[i] = random.nextInt();
        }
        double tMath = new MaxPerformance2(Math::max, array1, array2).averageTime(100);
        double tAlt1 = new MaxPerformance2(MaxPerformance2::max1, array1, array2).averageTime(100);
        double tAlt2 = new MaxPerformance2(MaxPerformance2::max2, array1, array2).averageTime(100);
        System.out.println("Java Math: " + tMath);
        System.out.println("Alt 1: " + tAlt1);
        System.out.println("Alt 2: " + tAlt2);
    }

    public static int max1(final int a, final int b) {
        if (a >= b) return a;
        return b;
    }

    public static int max2(final int a, final int b) {
        return (a >= b) ? a : b; // same as JDK implementation
    }
}
Which gave me:
Java Math: 15.346468170000005
Alt 1: 16.378737519999998
Alt 2: 20.506475350000006
The way your test is set up makes a huge difference to the results. The JDK version seems to be the fastest in this scenario, this time by a relatively large margin compared to the previous case.
Somebody mentioned Caliper. Well, if you read the wiki, one of the first things they say about micro-benchmarking is not to do it: this is because it's hard to get accurate results in general. I think this is a clear example of that.