Is Objects.requireNonNull less efficient than the old way?
Let's look at the implementation of requireNonNull in Oracle's JDK:
public static <T> T requireNonNull(T obj) {
    if (obj == null)
        throw new NullPointerException();
    return obj;
}
So that's very simple. The JVM (Oracle's, anyway) includes an optimizing two-stage just-in-time compiler to convert bytecode to machine code. It will inline trivial methods like this if it can get better performance that way.
So no, not likely to be slower, not in any meaningful way, not anywhere that would matter.
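For concreteness, here is a minimal sketch of the two styles being compared; the Widget class and its field are invented for illustration rather than taken from the question:

import java.util.Objects;

class Widget {
    private final String name;

    Widget(String name) {
        // "Old way": a hand-written check and throw.
        // if (name == null) {
        //     throw new NullPointerException("name must not be null");
        // }
        // this.name = name;

        // Objects.requireNonNull performs the same check and throw, and
        // returns its argument so the assignment can happen inline.
        this.name = Objects.requireNonNull(name, "name must not be null");
    }
}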
So my question: is there any evidence of a performance penalty being incurred by using the Objects.requireNonNull methods?
The only evidence that would matter would be performance measurements of your codebase, or of code designed to be highly representative of it. You can test this with any decent performance-testing tool, but unless your colleague can point to a real-world example of a performance problem in your codebase related to this method (rather than a synthetic benchmark), I'd tend to assume the two of you have bigger fish to fry.
As a bit of an aside, I noticed your sample method is a private method. So only code your team is writing calls it directly. In those situations, you might look at whether you have a use case for assertions rather than runtime checks. Assertions have the advantage of not executing in "released" code at all, and thus being faster than either alternative in your question. Obviously there are places you need runtime checks, but those are usually at gatekeeping points, public methods and such. Just FWIW.
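A hedged sketch of what that split could look like (the OrderService and Order classes are invented for illustration); note that assertions only execute when the JVM is started with -ea / -enableassertions:

import java.util.Objects;

class Order {}

class OrderService {
    // Public entry point: external callers reach this, so keep a runtime
    // check that always executes.
    public void submit(Order order) {
        Objects.requireNonNull(order, "order must not be null");
        process(order);
    }

    // Private helper: only our own code calls it, so an assertion suffices.
    // Without -ea the check does not run at all, which is what makes it
    // cheaper than either form of runtime check in "released" code.
    private void process(Order order) {
        assert order != null : "order must not be null";
        // ... actual work ...
    }
}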
Yes, there is evidence that the difference between a manual null check and Objects.requireNonNull() is negligible. OpenJDK committer Aleksey Shipilev created benchmarking code that demonstrates this while fixing JDK-8073479; here are his conclusion and performance numbers:
TL;DR: Fear not, my little friends, use Objects.requireNonNull.
Stop using these obfuscating Object.getClass() checks,
those rely on non-related intrinsic performance, potentially
not available everywhere.
Runs are done on i5-4210U, 1.7 GHz, Linux x86_64, JDK 8u40 EA.
The explanations are derived from studying the generated code
("-prof perfasm" is your friend here), the disassembly is skipped
for brevity.
Out of box, C2 compiled:
Benchmark                  Mode  Cnt  Score   Error  Units
NullChecks.branch          avgt   25  0.588 ± 0.015  ns/op
NullChecks.objectGetClass  avgt   25  0.594 ± 0.009  ns/op
NullChecks.objectsNonNull  avgt   25  0.598 ± 0.014  ns/op
Object.getClass() is intrinsified.
Objects.requireNonNull is perfectly inlined.
where branch, objectGetClass and objectsNonNull are defined as follows:
@Benchmark
public void objectGetClass() {
    o.getClass();
}

@Benchmark
public void objectsNonNull() {
    Objects.requireNonNull(o);
}

@Benchmark
public void branch() {
    if (o == null) {
        throw new NullPointerException();
    }
}
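Those snippets leave out the enclosing class and the field o they read from. A minimal JMH harness they could plausibly sit in is sketched below; the state scope, field initialisation and class layout are assumptions here, not Shipilev's actual benchmark source from JDK-8073479:

import java.util.Objects;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class NullChecks {

    // Always non-null, so each benchmark measures only the cost of the check
    // itself, never the cost of constructing and throwing the exception.
    Object o = new Object();

    @Benchmark
    public void branch() {
        if (o == null) {
            throw new NullPointerException();
        }
    }

    @Benchmark
    public void objectGetClass() {
        o.getClass();
    }

    @Benchmark
    public void objectsNonNull() {
        Objects.requireNonNull(o);
    }
}

In a benchmark you write yourself, returning the checked value (or sinking it into a JMH Blackhole) is the safer pattern, since it stops the JIT from treating the method body as dead code.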
Formally speaking, your colleague is right:
- If someMethod() or the corresponding trace is not hot enough, the bytecode is interpreted and an extra stack frame is created.
- If someMethod() is called at the 9th level of depth from the hot spot, the requireNonNull() call shouldn't be inlined because of the MaxInlineLevel JVM option.
- If the method is not inlined for either of the above reasons, the argument by T.J. Crowder comes into play if you use concatenation to produce the error message (see the sketch after this list).
- Even if requireNonNull() is inlined, the JVM still spends a little time and space performing the check.
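On the concatenation point, a small sketch of the difference; the class, method and parameters are invented for illustration:

import java.util.Objects;

class ConcatenationCost {
    Object check(Object customer, int lineNo) {
        // Eager concatenation: the message is built on every call, even
        // though it is only needed when customer is actually null.
        Objects.requireNonNull(customer, "customer was null at line " + lineNo);

        // Supplier overload (Java 8+): the lambda runs only on the failure
        // path, so the happy path pays no concatenation cost.
        return Objects.requireNonNull(customer, () -> "customer was null at line " + lineNo);
    }
}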
On the other hand, there is the FreqInlineSize JVM option, which prohibits inlining of methods that are too big (in bytecodes). Only the method's own bytecodes are counted, without accounting for the size of the methods it calls. Thus, extracting pieces of code into separate methods can sometimes be useful; in the case of requireNonNull(), that extraction has already been done for you.
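As a hedged illustration of that extraction idea (class and method names are invented, and the exact size threshold is governed by -XX:FreqInlineSize and varies between JVM versions):

class AccountService {
    // The hot method stays small: only its own bytecodes count against the
    // inlining size limit, not the bytecodes of the methods it calls.
    double feeFor(Account account) {
        if (account == null) {
            throw missingAccount();   // rare path lives in its own method
        }
        return account.balance * 0.01;
    }

    // The bulky error-message construction is extracted so that it does not
    // inflate feeFor()'s own bytecode size.
    private IllegalArgumentException missingAccount() {
        return new IllegalArgumentException(
                "account must not be null; check the caller's lookup logic");
    }
}

class Account {
    double balance;
}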
If you want evidence ... then the way to get it is to write a micro-benchmark.
(I recommend looking at the Caliper project first! Or JMH ... per Boris's recommendation. Either way, don't try to write a micro-benchmark from scratch. There are too many ways to get it wrong.)
However, you can tell your colleague two things:
The JIT compiler does a good job of inlining small method calls, and it is likely that this will happen in this case.
If it didn't inline the call, the chances are that the difference in performance would only be 3 to 5 instructions, and it is highly unlikely that it would make a significant difference.