Kinds of Types in Scala - Part 1: "Types, what are they?"
When I try to explain to someone why I prefer Scala to Java, and why functional programming works better in Scala, one of my arguments is that it has a better type system. But what exactly does that mean? What advantage does it have over statically typed languages like Java or Go?
In this mini-series I want to explain different concepts related to types. Originally this was supposed to be a single post, but it grew way too long for anyone to read.
Origin of types
First of all, let us think about what types themselves are. Historically, they were introduced to solve the issue of ambiguities in mathematics, which appeared under certain conditions, and were developed as part of the effort to formalize mathematics.
Before formalization, mathematicians had to face paradoxes like the set of all sets. Bertrand Russell described it as one of the entities we are unable to decide whether it could exist or not without building upon strict, formal definitions. Nowadays, set theory solves this problem using the Zermelo–Fraenkel axioms:
let us assume that the set of all sets could be constructed,
the power set of some set is always bigger than said set,
if this set is the set of all sets, it would have to be a strict superset of itself and of its own power set as well,
this leads to the conclusion that the set would have to be bigger than itself (not equal!),
by contradiction we conclude that building the set of all sets is impossible.
At the time, though, Bertrand Russell proposed another way to avoid the paradox:
each term should be assigned a type. It is usually denoted with a colon: $$2: nat$$,
if we wanted to define a function we would define on which types it operates: e.g. $$double: nat \rightarrow nat$$.
$$f: X \rightarrow Y$$ read as: $$f$$ is a function from $$X$$ (domain) to $$Y$$ (codomain).
He proposed that we should build objects bottom-up: if we define something using predicates, we can only use already defined objects and never other predicates. However, such a definition has a flaw: since we cannot reuse other predicates, we couldn't e.g. compare the cardinality of sets (how could you prove that the types match?).
His initial (ramified) type theory was later on refined using the axiom of reducibility, which allowed whole hierarchies of types to collapse, creating a simple type theory. It introduced the following rule:
terms can be converted using explicitly defined rules: e.g. $$2 + 2 \twoheadrightarrow 4$$ means that $$2 + 2$$ reduces to $$4$$, so it has the same type.
In 1940, Alonzo Church combined type theory with his own lambda calculus, creating the simply typed lambda calculus. Since then, more advanced type theories have appeared.
Types, intuitively
Type theory exists in parallel to set theory: one is not defined using the other. Because of that, we cannot, strictly speaking, compare types to sets.
But we can compare them on the level of intuition. Saying that the value $$a$$ belongs to a type $$A$$ ($$a: A$$) can be compared to saying that the value $$a$$ belongs to a set $$A$$ ($$a \in A$$). Saying that type $$A$$ is a supertype of $$B$$ and a subtype of $$C$$ is analogous to saying that $$B \subset A \subset C$$. When we say that $$t$$ is of both types $$X$$ and $$Y$$, it is the same as saying that $$t \in X \cap Y$$.
If we defined a set of all values, its counterpart in type theory would be called the top type: a type which contains (is a supertype of) all other types. Similarly, the empty set would have a counterpart in the form of the bottom type. To make it easier to remember, imagine types as a hierarchy where more generic (less restricted) types sit above and more specific ones (with more requirements) below. Then the type with no requirements would sit at the top and contain all values, while the type at the bottom, with the requirements of all the types above it (which must be contradictory), would be empty.
In Scala
If we look at how Scala implements types, we'll see that a judgment $$a: A$$ ($$a$$ is of type $$A$$) translates directly into the syntax:
val a: A
def f(a: A): B
When it comes to Scala syntax, we don't talk about a judgment. Instead, we talk about a type ascription. The notation for function types also comes from type theory: $$double: int \rightarrow int$$ becomes:
val double: Int => Int
Scala also has the notion of the top and bottom types. The top type - the type about which we have no assertions, and which contains every other type - is Any. In Java, the closest thing would be Object, except it is not a supertype of primitives. Scala's Any is a supertype of both objects (AnyRef) and primitives (AnyVal). When we see Any as an inferred type, it usually means we messed up and combined two unrelated types.
The bottom type - a type which is a subtype of all others, and has no inhabitants - in Scala is Nothing. We cannot construct a value of type Nothing, so it often appears as the type parameter of empty containers or as the result type of functions that can only throw.
(Quite often I see Null mentioned as another bottom type in Scala, alongside Nothing. It is a type extending everything that is an Object on the JVM. So it is kind of like Nothing without primitives, Unit/void, and functions that only throw.)
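A minimal sketch of how Nothing behaves in practice (fail and safeDiv are made-up names for illustration):

```scala
// A function that can only throw has result type Nothing:
def fail(msg: String): Nothing =
  throw new IllegalArgumentException(msg)

// Because Nothing is a subtype of every type, a call to fail
// type-checks wherever an Int (or anything else) is expected:
def safeDiv(a: Int, b: Int): Int =
  if (b == 0) fail("division by zero") else a / b

// Empty containers use Nothing as their element type;
// thanks to covariance, List[Nothing] is a subtype of any List[A]:
val empty: List[Int] = List.empty[Nothing]
```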
Also, a word about functions. In Scala we make a distinction between a function and a method. A function type is a value, which can be called:
val double: Int => Int = x => x * 2
A method, on the other hand, is not a value. It cannot be assigned, or directly passed, though via a process called eta-expansion a method can be turned into a function.
def double(x: Int): Int = x * 2
// val y = def double(x: Int): Int = x * 2 - error!
val y = { def double(x: Int): Int = x * 2 } // Unit
// however
val z: Int => Int = double _ // works!
A less accurate but simpler version: a function is an instance of one of the Function0, Function1, ..., Function22 traits, while a method is something bound to a JVM object. We make the distinction because the authors of Scala haven't solved this issue in a more elegant way.
Oh, and one more thing. With a method, you can declare an argument as implicit. You cannot do that with a function. For now - it will be possible with implicit function types in Dotty.
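A small sketch of that difference (greet and hello are made-up names; using a bare String as an implicit is bad practice in real code, it only keeps the example short):

```scala
// A method can take an implicit parameter list:
def greet(name: String)(implicit greeting: String): String =
  s"$greeting, $name"

implicit val hello: String = "Hello"

// The compiler fills in the implicit argument for us:
greet("Scala") // "Hello, Scala"

// A Function1 value, by contrast, has no way to declare
// an implicit parameter (in Scala 2.x):
// val greetF: String => String = ... // no place for implicits here
```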
Type inference and Least Upper Bounds
This hierarchy of types is quite important when it comes to type inference. We expect inference to provide us with the most concrete type possible, while complying with any type restrictions imposed in a given case. Let's take a look at a few cases:
val s = "test"
"test" has a type of String. There are no additional restrictions. So, the inferred type is String.
def a(b: Boolean) = if (b) Some(12) else None
Here, the returned value is of one of two types: Some[Int] or None.type (the type of the None companion object). How do we decide on a common type for these two?
As we can see, there are quite a lot of candidates for the type of the value returned by the function - any type which is above both Some[T] and None.type is a valid candidate. Each of them is an upper bound for the sought type. (By analogy, if we were looking for the type of a parameter that has to conform to several types, each type below all of them would be a lower bound.)
But having several upper bounds is not enough; it is ambiguous. However, it just so happens that there is always one type that is the smallest of them (smallest in the partial-order sense). Here, we can see that Option[T] is the least of the upper bounds that fulfill our requirements. We call it the least upper bound (LUB), and it is exactly the type our function will get.
Here is why knowledge about the top type comes in handy: if the inferred type is the top type (Any), some of our requirements are most likely wrong, e.g. we asked Scala to find a common type for a String and an Int (List("test", 10)). Because Any is the only upper bound they share, it automatically becomes the least upper bound and the inferred type. We hardly ever want such behavior, so it is no wonder that wartremover treats an inferred Any as a code smell.
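The two situations side by side, as a sketch:

```scala
// LUB of Some[Int] and None.type is Option[Int] - a useful type:
val maybe = if (true) Some(12) else None // inferred: Option[Int]

// LUB of String and Int is Any - almost certainly a bug:
val mixed = List("test", 10)             // inferred: List[Any]
```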
Algebraic Data Types
When we think of types as sets, there are 2 special constructions that help us build new sets from existing ones, and which complement each other. One is the Cartesian product and the other is the disjoint union (disjoint set sum).
Let's start with the Cartesian product. We can define it as a generalization of an ordered pair. A set of ordered pairs $$(A, B)$$ would be a set of values $$(a, b)$$, such that $$a \in A, b \in B$$, which we can distinguish by their position inside the brackets. More formally, we could define an ordered pair as (Kuratowski's definition):
$$(a, b) = \{ \{a\}, \{a, b\} \}$$
Such a definition helps us indicate the order using a construction that itself has no notion of order (that is, a set).
The Cartesian product generalizes this idea. It is an operator which takes 2 sets and creates a set of tuples from them:
$$A \times B = \{ (a, b): a \in A, b \in B \}$$
$$\{ x: p(x) \}$$ is set-builder notation meaning: a set made of elements $$x$$ for which the predicate $$p(x)$$ is true
Such a definition is not associative ($$(A \times B) \times C \ne A \times (B \times C)$$), which we can easily check: an element of the former is a pair $$((a, b), c)$$, while an element of the latter is a pair $$(a, (b, c))$$, and under Kuratowski's definition these are different sets.
If we consider records (well, any class, actually) as $$n$$-tuples of their attributes, and containers like lists as infinite tuples (where all elements after some index are empty), then we'll understand why they are called product types and why they often share the common scala.Product trait.
type IntTwice = (Int, Int)
case class TwoInts(first: Int, second: Int)
type IntList = List[Int]
// all of the above are products
(IntList is a type alias, another name for an existing type. We could use List[Int] instead of IntList and it would make no difference to Scala.)
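We can peek at the Product trait that case classes and tuples share:

```scala
case class TwoInts(first: Int, second: Int)

// Every case class implements scala.Product:
val p: Product = TwoInts(1, 2)
p.productArity      // 2 - the number of components of the product
p.productElement(0) // 1 - components accessed by index (untyped: Any)

// Tuples are products as well:
val t: Product = (1, 2)
```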
The other construct - a disjoint union, sum type or coproduct - is a sum of a finite number of sets, where no two sets have a non-empty intersection. The components are also tagged, that is, we can always tell from which subset any value of the union originated.
Scala 2.x doesn't allow us to build a union from already existing types. If we want to let the compiler know that types should form a coproduct, we need to use a sealed hierarchy:
sealed trait Either[+A, +B] {}
case class Left[+A, +B](a: A) extends Either[A, B]
case class Right[+A, +B](b: B) extends Either[A, B]
sealed trait Option[+A] {}
case class Some[+A](value: A) extends Option[A]
case object None extends Option[Nothing]
The sealed keyword makes sure that no new subtype of said trait can be created outside the current file, so the compiler will always know all the types that directly inherit the trait (and form the disjoint union). This allows pattern matching to be exhaustive: the compiler can tell us when we didn't match all the components of the union.
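A sketch of exhaustiveness checking over a made-up sealed hierarchy (Shape, Circle and Square are hypothetical names):

```scala
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Square(side: Double) extends Shape

// Matching on all components compiles cleanly; removing one of the
// cases would make the compiler warn: "match may not be exhaustive":
def area(shape: Shape): Double = shape match {
  case Circle(r) => math.Pi * r * r
  case Square(a) => a * a
}
```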
Dotty will introduce union types, which will let us build a union out of already existing types.
def f(intOrString: Int | String) = intOrString match {
case i: Int => ...
case s: String => ...
}
Product types and sum types together are known as algebraic data types. They are the foundation that data modeling in functional programming builds upon.
They also allow us to use generic programming in Scala with libraries like shapeless.
Compound and intersection types
We can calculate a sum of sets (sum type) and a Cartesian product of sets (product type), so why not an intersection? We can declare a type belonging to several types using... the with keyword (yup, it's more than just a mixin!).
trait Str { def str: String }
trait Count { def count: Int }
def repeat(cd: Str with Count): String =
Iterator.fill(cd.count)(cd.str).mkString
repeat(new Str with Count {
val str = "test"
val count = 3
})
// "testtesttest"
It follows the mathematical definition exactly:
$$x \in A \cap B \iff x \in A \land x \in B \iff x \in B \land x \in A \iff x \in B \cap A$$
which means that the order of composing types using with does not matter:
val sc: Str with Count = ???
val ca: Count with Str = ???
repeat(sc) // works as expected
repeat(ca) // also works!
Well, at least when it comes to types. Since we can compose types this way, we have to face the diamond problem sooner or later.
trait A { def value = 10 }
trait B extends A { override def value = super.value * 2 }
trait C extends A { override def value = super.value + 2 }
(new B with C {}).value // ???
(new C with B {}).value // ???
How would Scala deal with such cases? The answer is trait linearization:
trait X extends A with B with C
is the same as
trait AnonymousB extends A {
// B overrides A
override def value = super.value * 2
}
trait AnonymousC extends AnonymousB {
// C overrides AnonymousB
override def value = super.value + 2
}
trait X extends AnonymousC
(Implementation-wise, the types B and C are not lost.) If we switch the order:
trait Y extends A with C with B
then it is as if we wrote:
trait AnonymousC extends A {
// C overrides A
override def value = super.value + 2
}
trait AnonymousB extends AnonymousC {
// B overrides AnonymousC
override def value = super.value * 2
}
trait Y extends AnonymousB
It should be obvious now that
(new B with C {}).value // (10 * 2) + 2 = 22
(new C with B {}).value // (10 + 2) * 2 = 24
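We can confirm these results by running the example:

```scala
trait A { def value = 10 }
trait B extends A { override def value = super.value * 2 }
trait C extends A { override def value = super.value + 2 }

// linearization of B with C ends with C, so C's override runs last:
val bc = (new B with C {}).value // (10 * 2) + 2 = 22
// linearization of C with B ends with B, so B's override runs last:
val cb = (new C with B {}).value // (10 + 2) * 2 = 24
```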
In Dotty, intersection types will be denoted using &. The documentation claims that it will differ from with in that Dotty wants to guarantee there is no difference between A & B and B & A (the operator is commutative). Currently, while A with B can be used in place of B with A and it just works, their behavior might differ due to the change of order in trait linearization. Dotty will also handle intersections of parametric types (List[A] & List[B] = List[A & B]). Because with makes no such guarantees, we call the types created with it compound types instead.
Classes
In mathematics, classes came to life as a generalization of sets. In set theory, we build sets inductively from smaller elements. We cannot speak of the set of all sets, as it cannot be built that way. But what if we had another way of grouping objects, e.g. by some predicate?
A class is such group of objects for which some predicate (an indicator function, we could say) returns true.
Because a class does not have to be a set, we can talk e.g. about the class of all sets:
$$\{ x: isSet(x) \}$$
Programming languages borrowed the word class exactly from this concept. If we think about it, in all languages that have classes, a class is basically a specification of a type/set of instances, where each member must have some set of properties and methods. It is true for:
statically typed languages (like C/C++, where the class definition dictates the memory layout of an instance),
dynamically typed languages (like Python or Ruby, where a class is basically a factory object that creates other objects according to the spec),
or prototype-based languages (like JavaScript, where the question does it have a property/method? is extended to does it or its prototype object have a property/method?).
Of course, in Scala, a class (and a subclass) creates a new type (and subtype). The same can be said about traits.
Unit
This one is special. In lambda calculus, there are no constants, no nullary functions, and no multi-parameter functions. Everything is a single-argument function returning another single-argument function (therefore currying is the only way of achieving a function with arity $$\geq 2$$).
Similarly, in category theory, each arrow (function abstracted away from its body) goes from one object to one object (function domains/codomains abstracted away from specific values).
In such a situation, we have to somehow implement functions returning no (meaningful) value, as well as functions taking no parameters (nullary). Category theory names such an irrelevant garbage input an initial object or void: it assumes that for every object you can create an arrow from void to that object (for every type there is a nullary function returning a value of this type). Since you never need to actually pass any value of the void type (it won't be used anyway), the set might be pretty much empty, and an actual implementation can work around it (category theory doesn't care about the values, remember?). Thus the name void.
Similarly, there is also a final object or unit: for each object there is an arrow from that object to unit (for every type there is a function that can be fed values of this type and doesn't return anything useful). If it only serves the purpose of being returned, we don't attribute anything meaningful to it. No matter how many values of this type exist, we cannot tell them apart. For us, the unit might as well be a singleton. Thus the name unit.
While the creators of Scala didn't see a point in making void an actual type (and making people confuse it with Java's void), they did want to make sure that every function returns something. That something is (), the sole inhabitant of the Unit type.
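A short sketch (log is a made-up name):

```scala
// A side-effecting function returns Unit rather than "nothing":
val log: String => Unit = msg => println(msg)

// () is the only value of type Unit, so every Unit result
// is indistinguishable from any other:
val u: Unit = log("hello")
```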
Summary
We talked a bit about what (concrete) types are and how they are represented in Scala, and we learned a thing or two about type theory and a bare minimum about categories.
In the next post we'll see what parametric types and type constructors are, and what constraints we can put on type parameters to make the type system our friend.