Local Type Inference vs. Instance Fields
The motivation for forbidding type inference for fields and method returns is that APIs should be stable; field access and method invocation are linked by descriptor at runtime, so if a change to the implementation caused the inferred type to change (modulo erasure), existing compiled clients could break in terrible ways. So using inference for implementation, but not for API, is a sensible guiding principle.
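To make the descriptor argument concrete, here is a minimal sketch under assumed names (Library, Client and names() are made up for illustration); the return type is part of the method descriptor that compiled callers link against, so even a source-compatible change to it is binary incompatible, and inferred return types would let such a change slip in as a mere implementation edit:

import java.util.ArrayList;
import java.util.List;

// Library, version 1: callers compiled against this record the descriptor
// ()Ljava/util/ArrayList; for names() in their class files.
class Library {
    public static ArrayList<String> names() {
        return new ArrayList<>(List.of("Ada", "Grace"));
    }
}

class Client {
    public static void main(String[] args) {
        // Source-compatible use of the API: only List operations are relied on.
        List<String> names = Library.names();
        System.out.println(names);
        // If Library is later recompiled with names() returning List<String>,
        // this source still compiles, but running the previously compiled
        // Client fails with NoSuchMethodError because the recorded descriptor
        // no longer matches. Inferred return types would make such a change
        // possible without touching anything that looks like API.
    }
}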
It is reasonable to ask "so, what about private fields and methods?" And indeed, we could well have chosen to do that. Like all design decisions, this is a tradeoff; it would enable inference to be used in more places, in exchange for more complexity in the user model. (I don't care as much about complexity in the spec or the compiler; that's our problem.) It is easier to reason about "inference for local variables yes, fields and methods no" than adding various epicyclic considerations like "but fields and methods are OK if they are private". Drawing the line where we did also means that the compatibility consequences of changing a field or method from private to nonprivate don't have accidental interactions with inference.
So the short answer is, doing it this way makes the language simpler, without making the feature dramatically less useful.
Various reasons:

- Visibility and type are orthogonal; one shouldn't impact the other. If private variables could be initialized with var, you'd have to change that when making them protected or public.
- Because var uses the right-hand side to infer the type, such private fields would always need to be initialized right away. If you moved the initialization into a constructor, you'd have to make the type explicit (see the sketch after this list).
- With var the compiler can infer types that you currently can't express in Java (e.g. intersection types like Comparable & Serializable). You might of course end up relying on those specific types, and if you have to stop using var at some point for any reason, you might have to refactor quite a lot to keep your code working.
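To sketch the second point (the Example class and its field are made up for illustration): once initialization moves into a constructor, the field declaration has no initializer expression left to infer from, so its type must be spelled out, while locals inside the constructor can still use var:

import java.util.ArrayList;
import java.util.List;

class Example {
    // no initializer on the declaration itself, so there is nothing to infer from
    private final List<String> names;

    Example(int capacity) {
        // a local variable can keep using var because its initializer is right here
        var initial = new ArrayList<String>(capacity);
        initial.add("default");
        this.names = initial;
    }
}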
It’s not like it was entirely impossible to turn these variables into fields that can be inspected via Reflection. E.g., you can do
var l = new ArrayList<String>();
l.add("text");
System.out.println(l);
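// the anonymous class below captures l in a synthetic field; inspect that field's generic type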
System.out.println(
new Object(){ { var x = l; } }.getClass().getDeclaredFields()[0].getGenericType()
);
In the current version, it just prints ArrayList, so the actual generic type has not been stored in the class file of the anonymous inner class, and it's unlikely that this will change, as supporting this kind of introspection is not an actual goal. It's also just a special case that the inferred type is denotable, like ArrayList<String>. To illustrate a different case:
var acs = true ? new StringBuilder() : CharBuffer.allocate(10);
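// acs has an inferred intersection type: methods of both Appendable and CharSequence are available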
acs.append("text");
acs.subSequence(1, 2);
System.out.println(
new Object(){ { var x = acs; } }.getClass().getDeclaredFields()[0].getGenericType()
);
The type of acs is an intersection type of Appendable and CharSequence, as demonstrated by invoking a method of either interface on it. But since it is not specified whether the compiler infers #1 extends Appendable&CharSequence or #1 extends CharSequence&Appendable, it is unspecified whether the code will print java.lang.Appendable or java.lang.CharSequence.
I don’t think that this is an issue for a synthetic field, but for an explicitly declared field, it might be.
However, I doubt that the expert group considered such impacts in detail. Instead, the decision not to support field declarations (and hence skip lengthy thinking about the implications) was made right from the start, as local variables were always the intended target of this feature. The number of local variables is much higher than the number of field declarations, so reducing the boilerplate for local variable declarations has the biggest impact.