jspecify: what happens to type variables when @NullnessUnknown (and @NotNull) type parameters are instantiated?
Capturing a comment thread here
Let’s say there’s a class Foo<T> that’s not annotated at all, i.e., T’s effective specifier is “unknown nullness”:
public class Foo<T> {
private T value;
T getOrThrow() { return checkNotNull(value); }
T getOrNull() { return value; }
void set(T value) { this.value = checkNotNull(value); }
}
Now in some other Java file you do:
@DefaultNotNull
public class Bar {
String baz(Foo<String> x) {
return x.getOrNull();
}
}
Meaning x has the fully explicit type @NotNull Foo<@NotNull String>. So when considering method calls on x, should we (unsoundly) assume that getOrNull() will return a @NotNull String? That set’s parameter is a @NotNull String?
@cpovirk points out:
I commented on this somewhere else, saying that it was pretty scary: Even if Foo is @DefaultNullnessUnknown, then sure, Bar’s Foo<String> can mean Foo<@NotNull String>. But I hope that doesn’t mean that x.getOrNull() is considered to return @NotNull String. I’m hoping that @NotNull gets trumped by the effectively @NullnessUnknown T return type of getOrNull(), per: “If the bound’s effective specifier is @NotNull or @NullnessUnknown then that is the type variable usage’s effective specifier.”
I wrote the sentence quoted above but I don’t think I anticipated this application of it. We could possibly do it that way but I think we’ll want to be sure either way.
In general I believe the mental model I’ve been working with is that when a type parameter gets instantiated, the instantiation gets “pasted” everywhere the corresponding type variable is used. So T getOrNull() becomes @NotNull String getOrNull(). I think that’s what you want to do if T is declared as <T extends @Nullable Object>. But if T is bounded by @NullnessUnknown, as in the above example, then we could indeed just consider occurrences of T to have unknown nullness no matter how T is instantiated. And while the language quoted above just discusses type variables, and we’ve been trying not to specify implementation checking and what happens on method calls, it does seem like we want to formalize what Foo<@NotNull String> means in terms of methods declared in Foo.
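To make the unsoundness of naive “pasting” concrete, here’s a minimal self-contained sketch. The names mirror the example above, but since JSpecify annotations have no runtime effect (and may not be on the classpath), the intended nullness appears only in comments:

```java
// Unannotated generic class: T's effective specifier is "unknown nullness".
class Foo<T> {
    private T value; // null until set() is called

    T getOrNull() { return value; }

    void set(T value) { this.value = value; }
}

public class PasteDemo {
    public static void main(String[] args) {
        Foo<String> x = new Foo<>(); // caller's view: Foo<@NotNull String>
        // Naive "pasting" would type this result as @NotNull String,
        // yet at runtime it is null here:
        String s = x.getOrNull();
        System.out.println(s == null); // prints "true" -- trusting the
                                       // pasted @NotNull would be unsound
    }
}
```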
Playing with IntelliJ’s code completion popups, it seems that Kotlin, given x: Foo<String>, indeed types x.getOrNull() : String!, i.e., with unknown nullness, as @cpovirk would like. Given y: Foo<String?>, it types y.getOrNull() : String?, i.e., as @Nullable (which also seems sensible, but I’m not sure how to specify that).
I’ll note that the inverse situation also creates headaches. What I mean is, if Foo was instead declared class Foo<T extends @NotNull Object>, and given a variable Foo<@NullnessUnknown String> z, then should we expect z.getOrNull() to return a @NotNull String or a @NullnessUnknown String? Should we expect z.set(expr) to expect a @NotNull String argument or a @NullnessUnknown String? I think the answer here is that it’s sensible to use @NotNull String for both the method result and the method parameter type, though it is weird to get any @NotNull even though we had Foo<@NullnessUnknown String> z. Especially considering that if Foo was declared class Foo<T extends @Nullable Object> then we presumably would expect the same z.getOrNull() to return @NullnessUnknown String.
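A sketch of this inverse case (hypothetical class name Box; the annotations are again shown only as comments, since this assumes plain Java with no checker running). The question is which type a tool should assign at the two marked sites:

```java
// Stands in for class Foo<T extends @NotNull Object> from the discussion.
class Box<T /* extends @NotNull Object */> {
    private T value;

    T getOrNull() { return value; }

    void set(T value) { this.value = value; }
}

public class InverseDemo {
    public static void main(String[] args) {
        // z's type argument has unknown nullness: Foo<@NullnessUnknown String>.
        Box<String> z = new Box<>();
        z.set("ok");              // site 1: @NotNull String or @NullnessUnknown String?
        String r = z.getOrNull(); // site 2: same question for the result type
        System.out.println(r);
    }
}
```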
Maybe the asymmetry pointed out in the last example is ok. I’ve been trying to avoid that very asymmetry with wildcards, but if we consider it ok here then I think we may need to revisit it there for consistency.
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 60
An amendment to something I said before:
I recently said that I was down on the idea of different rules for what @kevinb9n is considering calling “in-types” and “out-types”.
I still stand by that if I am more precise about it: The thing I’m down on is what we might call different “substitution” (and maybe “defaulting”) rules for in-types vs. out-types.
Very broadly speaking, though, I do expect for us to sometimes see “different behavior” for in-types and out-types. This behavior would arise naturally when, for example, we handle parametric nullability: Given a type parameter <T extends @Nullable Object>, we’d forbid dereferencing a parameter of type T (it might be null!), but we’d also forbid returning null from a method that returns T (it might not be null!). This is in some sense asymmetric: “Is T @Nullable or not?”
But it’s still symmetric in terms of substitution rules. That is, we’d still consider the type to be T in both cases: It’s just that we’re checking conversion rules in “the opposite direction”: @Nullable isn’t convertible to T, but neither is T convertible to @NotNull.
(Similar logic may apply for unspecified nullness, but I haven’t thought it through deeply.)
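The two “directions” can be sketched as follows (hypothetical class Holder; checker behavior is described in comments, since this assumes plain Java with no checker present):

```java
// With <T extends @Nullable Object>, a nullness checker would flag both
// commented-out lines below -- one per "direction" of the conversion check.
class Holder<T /* extends @Nullable Object */> {
    private T stored;

    int lengthOf(T t) {
        // return t.toString().length(); // rejected: t might be null
        return (t == null) ? 0 : t.toString().length();
    }

    T get() {
        // return null; // rejected: T might be instantiated non-null
        return stored;
    }

    void set(T t) { stored = t; }
}

public class ParametricDemo {
    public static void main(String[] args) {
        Holder<String> h = new Holder<>();
        System.out.println(h.lengthOf(null));  // prints 0
        System.out.println(h.lengthOf("abc")); // prints 3
    }
}
```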
So even though we see “different behavior” for in-types and out-types, it’s not because we have different substitution rules for those types: It’s just that, thanks to normal type-checking (and sometimes PECS), we sometimes check whether T is convertible to a given nullness, and we sometimes check the reverse. And for “in-between” types like parametric nullability, the two don’t have the completely opposite behavior that we’d expect from a simpler system with only @NotNull and @Nullable.
I think this is probably all reasonably well understood already, but my earlier comments may have been misleading. Plus, I’m still not 100% sure that I have expressed it right 😃