For a very long time, Scala has had a feature named type projection, which lets one refer to a type member defined in an existing type (i.e., to “project it” or “pick it out”). The syntax is T#A, where T is some type that is known to contain a type member named A.

Type projection was shown to be unsound, which means that it can be used to subvert the type system and cause crashes at runtime. The feature is consequently being removed from Scala 3, the upcoming major version of the Scala language.

In this article, I describe how type projection works with several examples, and explain why it is unsound.

This post is meant to be understandable by anyone with basic knowledge of typed programming languages.

Projecting Nested Classes

The most common use of type projection is to refer to a class defined in some other class.

For example, if we have:

class Foo {
  class Bar
}

then the type Foo#Bar refers to instances of the inner class Bar, no matter which instance of Foo they were created from. Therefore, the following code type checks:

val foo1 = new Foo
val bar1: foo1.Bar = new foo1.Bar

val foo2 = new Foo
val bar2: foo2.Bar = new foo2.Bar

val bar: Foo#Bar = if (???) bar1 else bar2

This broadly corresponds to the way nested classes are referred to in Java, where the syntax is just Foo.Bar. Scala reserves the dot operator for accessing the members of values, as in foo1.Bar, which is why it has to use a different symbol for type projection.

Finally, note that thanks to Scala’s path-dependent types, foo1.Bar is actually a different type than foo2.Bar, although they are both subtypes of Foo#Bar.
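To see this distinction concretely, here is a small self-contained sketch; the commented-out line shows what the compiler rejects:

```scala
class Foo {
  class Bar
}

val foo1 = new Foo
val foo2 = new Foo

// Each Foo instance has its own Bar type:
// val bad: foo2.Bar = new foo1.Bar // does not compile: type mismatch

// But both path-dependent types are subtypes of the projection Foo#Bar,
// so instances created from different Foos can share a common type:
val bars: List[Foo#Bar] = List(new foo1.Bar, new foo2.Bar)
```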

Projecting Other Nested Types

Here is where it gets slightly trickier: in Scala, one can define type members (i.e., type aliases) inside classes, and these type members can be left abstract, to be given a concrete definition only in subclasses. For example:

abstract class Foo {
  type Bar // an abstract type; could be anything
  def process(x: Bar): Bar // an abstract method using the type
}
class FooInt extends Foo {
  type Bar = Int // here we refine Bar to mean 'Int'
  def process(x: Bar): Bar = x + 1
}
class FooString extends Foo {
  type Bar = String // here we refine Bar to mean 'String'
  def process(x: Bar): Bar = s"Hello, $x!"
}

We can still refer to these abstract type members from the outside, either through a proper value (using the dot operator) or through a type projection:

val foo1 = new FooInt
val foo2 = new FooString

// Referring to the abstract type through value 'foo':
def test(foo: Foo)(x: foo.Bar): foo.Bar = {
  val res = foo.process(x)
  println(s"Result of process is: $res")
  res
}
val f1 = test(foo1)(41) // Result of process is: 42
val f2 = test(foo2)("World") // Result of process is: Hello, World!

// Referring to the abstract type through a projection:
val bar: Foo#Bar = if (???) f1 else f2

In the above, f1 has type foo1.Bar, but the compiler knows that foo1.Bar == FooInt#Bar == Int, so we can use it in expressions like f1 + 1, which has type Int.

So far so good. But notice that we cannot do anything with a value of type Foo#Bar, since it could really be anything. That does not make projecting such types useless in general, though!

Type Projection to Encode Open Type Families

The reason type projection is already useful in this limited context is that we can use it on abstract prefix types. For example, we can write F#Bar where F is some abstract type that is known to be a subtype of Foo. When F is later resolved to be, for example, FooInt, then F#Bar will be resolved to FooInt#Bar, which is an alias of Int.

This is akin to what is called “open type families” in languages like Haskell, though Scala’s object-oriented foundations add much more flexibility and complexity to the feature.
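As a sketch of this resolution process (Scala 2 only, since Scala 3 rejects projections on abstract prefixes), reusing the Foo, FooInt and FooString definitions from above; the helper pair is a hypothetical name:

```scala
abstract class Foo {
  type Bar
}
class FooInt extends Foo { type Bar = Int }
class FooString extends Foo { type Bar = String }

// F is abstract here, yet F#Bar can still be projected out of it:
def pair[F <: Foo](x: F#Bar, y: F#Bar): (F#Bar, F#Bar) = (x, y)

// Once F is resolved to a concrete class, so is F#Bar:
val ints: (Int, Int) = pair[FooInt](1, 2)               // FooInt#Bar == Int
val strs: (String, String) = pair[FooString]("a", "b")  // FooString#Bar == String
```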

Example: Collections

Here is an example of that process in action. We can define a class hierarchy that represents collections of values of some element type E, where these collections can be indexed by some index type Index:

abstract class Collection[E] {
  type Index
  def get(idx: Index): Option[E]
  def indices: Iterator[Index]
}
abstract class Sequence[E] extends Collection[E] {
  type Index = Int
}
abstract class Mapping[K,V] extends Collection[V] {
  type Index = K
}

// An example Sequence implementation:
case class ArraySequence[E](underlying: Array[E]) extends Sequence[E] {
  def get(idx: Index): Option[E] =
    if (0 <= idx && idx < underlying.length) Some(underlying(idx))
    else None
  def indices = Iterator.range(0, underlying.length) // range end is exclusive
}

It makes sense to use a type member for Index, as opposed to a type parameter, because the Index is completely determined by the collection type. Having it as a type parameter to Collection would needlessly complicate its interface.
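For contrast, here is a sketch of what the parameterized alternative could look like (PCollection and phead are hypothetical names): every signature that touches the collection must now thread the extra index type parameter along, even when it does not care about it.

```scala
// Hypothetical alternative: Index as a type parameter instead of a type member
abstract class PCollection[E, Index] {
  def get(idx: Index): Option[E]
  def indices: Iterator[Index]
}

// Every generic function must now mention the index type I, even if unused:
def phead[A, I](col: PCollection[A, I]): Option[A] = {
  val ite = col.indices
  if (ite.hasNext) col.get(ite.next())
  else None
}

// A quick anonymous implementation backed by an array:
val xs = new PCollection[Int, Int] {
  private val a = Array(1, 2, 3)
  def get(idx: Int): Option[Int] =
    if (0 <= idx && idx < a.length) Some(a(idx)) else None
  def indices: Iterator[Int] = Iterator.range(0, a.length)
}
```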

We can use Collection with traditional path-dependent types, as in:

def head[A](col: Collection[A]): Option[A] = {
  val ite = col.indices
  if (ite.hasNext) col.get(ite.next())
  else None
}

This works because ite has type Iterator[col.Index], so we can pass ite.next() to col.get, which takes an argument of type col.Index and returns an Option[A].
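To see head in action, here is the example assembled into a self-contained sketch, with a couple of calls (the sample values are made up):

```scala
abstract class Collection[E] {
  type Index
  def get(idx: Index): Option[E]
  def indices: Iterator[Index]
}
abstract class Sequence[E] extends Collection[E] {
  type Index = Int
}
case class ArraySequence[E](underlying: Array[E]) extends Sequence[E] {
  def get(idx: Index): Option[E] =
    if (0 <= idx && idx < underlying.length) Some(underlying(idx))
    else None
  def indices: Iterator[Int] = Iterator.range(0, underlying.length)
}

def head[A](col: Collection[A]): Option[A] = {
  val ite = col.indices
  if (ite.hasNext) col.get(ite.next())
  else None
}

val first = head(ArraySequence(Array(10, 20, 30))) // Some(10)
val none  = head(ArraySequence(Array.empty[Int]))  // None
```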

But path-dependent types are sometimes too restrictive: they only work when the abstract type we manipulate lives in a single, specific instance.

Example: Matrices

What if we want to abstract over the index type of several instances at once? For example, consider defining a matrix class which abstracts over the type of collections in which it stores its elements:

// Type parameter Cols is required to be a subtype of Collection[Double]:
abstract class Matrix[Cols <: Collection[Double]] {
  type Row = Cols{type Index = Cols#Index}
  val rows: Collection[Row]
  
  // Type representing a position in the matrix
  type Position = (rows.Index, Cols#Index)
  
  def get(p: Position): Option[Double] =
    rows.get(p._1).flatMap(r => r.get(p._2))
}

We refer to the type used for indexing into the rows collection as rows.Index.

However, we cannot do the same for columns, since there is no column instance we can refer to (in fact, there may be no columns at all if rows is empty!). This is why we use type projection: Cols#Index refers to whatever index is used by the type that will eventually be used as Cols.

To make that work, we need to make sure that the Collection type used to store the columns (which we call Row) has an Index type that is exactly Cols#Index — otherwise, if it were allowed to be refined to more precise index types in subclasses, the get method implementation would not type check! We do this by using the Cols{type Index = Cols#Index} type refinement.

We can now define a subclass of Matrix representing dense matrices:

case class DenseMatrix(rows: Sequence[Sequence[Double]])
  extends Matrix[Sequence[Double]]

As well as a subclass representing sparse matrices, which uses BigInt to allow for indices that do not fit in an Int:

type SparseArr[E] = Mapping[BigInt, E]

case class SparseMatrix(rows: SparseArr[SparseArr[Double]])
  extends Matrix[SparseArr[Double]]

Other Approaches

The same kinds of abstractions demonstrated above can also be achieved using type classes. In fact, that is often the preferred approach, and the one taken by most functional programming libraries in the Scala ecosystem.

Type classes have the advantage that they decouple the types they talk about from the abstractions used to talk about them. For example, with a Collection type class, we could add static Index information “after the fact”, to types that were already defined without extending some Collection base class.

However, the type-class-based approach comes with its own tradeoffs: it requires passing implicit parameters along everywhere, which can be cumbersome. In contrast, type projection can remain entirely at the type level, without necessitating the presence of values in scope.

Moreover, in Scala, there is no way to make sure that type class instances are consistent: the same type can be given incompatible instances in different parts of an application, which can sometimes break some invariants. This problem does not exist with type projections.

Implicit resolution, on which type classes rely, is also a tricky matter and can become problematic when types become more involved, due to compiler limitations. The process of implicit search has restrictions to prevent divergence, which can limit expressiveness. On the other hand, type projections are actually quite reliable and also uniquely expressive, as we shall see now.

Expressiveness

Type projection is very expressive. As a matter of fact, it turns out that type projection makes Scala’s type system Turing Complete: it has been shown that projections can be used to encode the SKI calculus or the lambda calculus.

This means that Scala’s type system is undecidable: however complicated we make our type checker implementation trying to follow the Scala specification, it will never be complete (there will always be programs on which it crashes or never terminates).

That sounds crazy in its own right, and a good reason to drop the feature… until we realize that type projection is not the only feature that makes Scala’s type system undecidable! Other sources of undecidability include (probably) recursive bounded object types, and F-bounded polymorphism.

In fact, most practical languages turn out to have Turing Complete (and thus undecidable) type systems, such as C++, Rust, Haskell, and yes, even Java!

Soundness

I am not aware of any evidence (or even suspicion of evidence) that the kind of type projection we have seen thus far causes any sort of unsoundness in Scala’s type system. In fact, I am pretty confident that unsoundness only appears when bounds on abstract types are introduced into the mix, which we see in the next sections.

Projecting Bounded Abstract Types

There are often situations where we would like to specify some bounds on an abstract type, and leverage these bounds in type projections. Naturally, Scala lets us do that.

Upper-Bounded Abstract Types

For example, let us define a notion of “self type”: an abstract type member which represents some concrete subclass of the current class:

abstract class Foo {
  type Self <: Foo // an abstract type that's a subtype of Foo
}
final class FooA extends Foo {
  type Self = FooA // here we refine Self to mean 'FooA'
}
final class FooB extends Foo {
  type Self = FooB // here we refine Self to mean 'FooB'
}

Above, we have given an upper bound to type Self in Foo, meaning that any concrete refinement of Self in a subclass of Foo will have to be a subtype of that bound.

When we hold onto a value of type Foo#Self, we can use it anywhere a value of type Foo is expected, since we know Foo#Self is a subtype of Foo. Indeed, no matter which instance of Foo was used to instantiate the value, we know that its Self type member has to be a subtype of Foo.

But only having an upper bound is not always sufficient. Sometimes, we would like to “upcast” a given value x of type X to a type projection T#A, which means that we need a way to assert that T#A is a supertype of X — or, in other words, we need a lower bound for T#A.

Lower-Bounded Abstract Types

In the definition of Foo above, Self has no lower bound, which means that we can give it any definition we want as long as it is a subtype of Foo, including Nothing (the type of no value, which is a subtype of everything), or something that is not a “true self type” of the class, such as:

class FooA extends Foo {
  type Self = FooB // FooB is not directly related to this class!
}
class FooB extends Foo {
  type Self = FooA
}

We can fix these irregularities by using a lower bound on Foo#Self. More specifically, we can leverage the singleton type this.type, which in Scala represents the type of the current class instance. Note that .type denotes the type of a specific value, and so for example a.type and b.type will never be the same unless a and b are known to be the same value.
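A quick illustration of singleton types (the value a and the helper idAt are hypothetical names):

```scala
val a: String = "hello"
val b: String = "hello"

// a.type is the type of the value a specifically; only a itself inhabits it:
val a2: a.type = a
// val b2: a.type = b // does not compile: b is not known to be the value a

// A method over a.type can only ever be applied to a:
def idAt(x: a.type): a.type = x
```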

If we give this.type as the lower bound of Foo#Self, we are essentially making sure that Foo#Self is a “real” self type, in the sense that the current instance this is an instance of this type:

abstract class Foo {
  // an abstract type that's a supertype of this.type and a subtype of Foo:
  type Self >: this.type <: Foo
}

// FooA and FooB are unchanged

// Examples:
val foo1 = new FooA
val foo2: foo1.Self = foo1 // works because foo1.type is a subtype of foo1.Self == FooA
val foo3: Foo#Self = foo1 // also works with a type projection

This way, it becomes possible to type check the following program, for example:

def test(foo: Foo): foo.Self = foo // works because foo.type <: foo.Self
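Assembled into a self-contained, runnable sketch (the dependent helper is the same function as test above, renamed self here):

```scala
abstract class Foo {
  // lower-bounded by this.type, upper-bounded by Foo:
  type Self >: this.type <: Foo
}
final class FooA extends Foo { type Self = FooA }

def self(foo: Foo): foo.Self = foo // works because foo.type <: foo.Self

val a = new FooA
val sa: FooA = self(a) // the dependent result type narrows to a.Self == FooA
```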

We have seen how type projection works, how it can be used not only to project classes, but also abstract types, and we have seen that projected type members inherit their corresponding bounds.

So, what’s the problem?

Type Projection Is Unsound

The trouble is that Scala’s type language is expressive enough to talk about abstract types with bounds that may not make sense (also called bad bounds). For instance, we can express a type which contains some type member whose lower bound is not a subtype of its upper bound.

Scala does not let us do this directly. For example, the compiler will reject the following definition, because String is not a subtype of Int, so Bad#A has bad bounds:

class Bad { type A >: String <: Int } // error: bounds do not conform

However, such bad bounds can arise indirectly because of a feature called type intersection, which combines two types using the with operator (it will be renamed to & and made to work more consistently in Scala 3).

For example, A with B represents the type of values that inherit both from A and from B. Such a type is both a subtype of A and a subtype of B. If we view types as sets of values, A with B (or A & B) can be understood as the set intersection of A and B, i.e., the set which contains exactly those values that are both in A and in B.
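A small sketch of intersection in action, with hypothetical traits:

```scala
trait HasName { def name: String }
trait HasAge  { def age: Int }

// A value of the intersection type can be used as either component type:
def greet(x: HasName): String = s"Hello, ${x.name}"
def describe(x: HasName with HasAge): String = s"${x.name} is ${x.age}"

val p = new HasName with HasAge {
  val name = "Ada"
  val age  = 36
}
```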

Now, consider the following trait definitions (Scala’s traits are like Java’s interfaces):

trait BadLower { type A >: String }
trait BadUpper { type A <: Int }

It is easy to see that BadLower with BadUpper has bad bounds, as it is essentially equivalent to a trait with a member type A >: String <: Int.

Bad Bounds are Fine… Usually

Such object types are impossible to instantiate (they are not “realizable”). Indeed, Scala’s type checker can make sure, every time we create an object, that it does not contain bad bounds, so it will always prevent us from creating objects of types similar to BadLower with BadUpper.

This means that every time we have a value of some type in scope, we can trust its bounds. Essentially, Scala’s core type system treats values in scope as proofs that some bounds are correct (proofs that the lower bounds are indeed subtypes of the corresponding upper bounds). So we can use path-dependent types like foo.A without worrying, because they are rooted in values.

However, type projection does not follow this safety precaution!

Type Projection Is Not Rooted in Reality

Since type projection applies on types and not values, and since we can make up types with bad bounds, it follows that we should not be able to fully trust the bounds that we obtain from type projection.

But that is exactly what the Scala 2 compiler does…

Here is an example that exploits the unsoundness. Scalac will not let us directly leverage the bad bounds in BadLower with BadUpper, but we can outsmart it by decomposing the problem with abstraction:

trait BadLower { type A >: Any }
trait BadUpper { type A <: Nothing }

def oops0[T <: BadLower]: Any => T#A     = a => a
def oops1[T <: BadUpper]: T#A => Nothing = a => a
def oops[A <: BadLower with BadUpper]: Any => Nothing =
  oops0[A].andThen(oops1[A])

We define two functions oops0 and oops1 which independently leverage the information present in the bounds of BadLower and BadUpper, and we compose these two functions together into a function oops which… can convert anything into nothing, an obviously nonsensical function. Yet, Scalac type checks this code without complaining.

We can then make our program crash at runtime by calling (oops("hi"): Int) + 1, which results in a ClassCastException from the Java Virtual Machine.

Conclusions & Final Words

Type projection is a very powerful feature, which has a number of cool uses. It allows us to refer to type members belonging to some owner type, without necessarily having a value of the owner type in scope.

Due to the risk of bad bounds, it is not safe to trust the bounds of a type projection, but that is exactly what the current Scala compiler does. This can result in runtime crashes, meaning that the Scala 2 type system is unsound.

For this reason, Dotty (the future Scala 3 compiler) currently rejects any type projection based on a type that is not a class type. For example, it rejects T#A if T is a type parameter.

However, I am convinced that type projection is completely harmless as long as we disregard the lower bound of the projected type. This should allow us to retain virtually all of today’s uses of type projection (I have never seen type projection lower bounds used in the wild — apart from examples designed to show their unsoundness). But this will be the subject of a future article, where I will properly explain why I think it works.