Thomas M. Kehrenberg

Recreating logic in type theory


This is the second entry in my series on type theory. In this post, we will talk about how to recreate logic in type theory.

As you might know, in order to define ZFC set theory, you first define first-order (FO) logic separately, and then specify the ZFC axioms in the language of FO logic. But we want to be more ambitious here! We want to encode FO logic with the type concepts we already have! If that doesn’t make any sense right now, I hope it will by the end of this article.

But before we get there, a quick refresher on type-valued functions.

Type-valued functions

In the definition of sum types, we briefly mentioned that $B(t)$ is a type-valued function:

$$(t, x):\sum_{t:A}B(t)~.$$

We can of course precisely specify $B$’s type:

$$B:\prod_{t:A}\mathcal{U}~.$$

What is maybe surprising about this is that an instance of a type, like the function $B$, becomes part of a type. But this is how it works in type theory. In fact, we could have written the type of all groups a little differently:

$$\begin{aligned} f(T)&:\equiv \left(\prod_{x:T}\prod_{y:T}T \right)\times T\\ \mathsf{Group}&:\equiv \sum_{T:\mathcal{U}} f(T) \end{aligned}$$

The function $f$ here doesn’t only return a type; it also takes a type as input! A function on types can use the operators $\sum$, $\prod$, $+$, and $\times$, and can get quite complicated. And, given any function $f$ with codomain $\mathcal{U}$ (and arbitrary domain $X$), the following is a type: $\sum_{x:X}f(x)$, namely, the disjoint union over all types in the image of $f$.
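If you’d like to see this in executable form: a dependently typed language such as Lean 4 (not used in this series; just a sketch for illustration) can express this type-valued function and the sum type directly. The names `f` and `Group'` here are my own, not the post’s:

```lean
-- A type-valued function: maps a carrier type T to the type of
-- (binary operation, distinguished element) pairs.
abbrev f (T : Type) : Type := (T → T → T) × T

-- The sum type over all carrier types, as in the Group definition above.
abbrev Group' : Type 1 := Σ T : Type, f T

-- One element of this type: the naturals with addition and zero.
example : Group' := ⟨Nat, (· + ·), 0⟩
```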

What to use for truth values?

OK, why do I mention all this? Because it will be relevant to how we will represent truth values (booleans in programming languages) in our encoding of logic.

The truth values in first-order logic are, shockingly, true and false. Propositions are statements that have a truth value – they’re either true or false. (So, boolean variables/expressions in a programming language.)

And as mentioned in the beginning, we can talk about subsets of, say, $\mathbb{N}$ by talking about predicate functions (or just predicates) on $\mathbb{N}$. A predicate function is a function that takes a single argument and returns a truth value – true or false.

So, you see, truth values are needed everywhere. How to represent them?

Our first naïve approach might perhaps be to represent “true” and “false” with the values of the type $\mathbf{2}$, which is the type that has exactly two elements. Let’s call them $0_\mathbf{2}$ and $1_\mathbf{2}$ (distinct from $0:\mathbb{N}$ and $1:\mathbb{N}$). So, let’s say $0_\mathbf{2}$ represents “false” and $1_\mathbf{2}$ represents “true”.

A predicate function $\phi$ on the natural numbers would then have the type:

$$\phi:\prod_{n:\mathbb{N}}\mathbf{2}~.$$

So, if $\phi$ represents some property we care about, like whether a number is even, and we want to prove that the number 6 is even, then what we want to show at the end is

$$\phi(6)\equiv 1_\mathbf{2}~.$$
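In a language with a built-in boolean type, this naïve encoding is exactly what predicates look like. A sketch in Lean 4 (for illustration; `isEven` is my name, standing in for $\phi$):

```lean
-- The naive encoding: a predicate returns a value of a two-element type.
-- Lean's Bool plays the role of 𝟐.
def isEven (n : Nat) : Bool := n % 2 == 0

-- "6 is even" is then the judgmental equality isEven 6 ≡ true:
example : isEven 6 = true := rfl
```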

However, doing it like this doesn’t really play to the strengths of type theory. We would have to manually define all the logic operations on $\mathbf{2}$, which isn’t really better than how it’s done in set theory. We would like to re-use our type constructs in the encoding of logic.

So, what will we do instead?

Types for truth values

We will not use two values for “true” and “false”, but two types! Of course! Specifically, the empty type and the unit type.

The empty type has, as the name suggests, no elements at all. It can be represented by $\mathbf{0}$ (a bold zero) or by $\bot$. In a computer program, this could be seen as the return type of a function that crashes and never returns at all. In some programming languages, this type is called “Never”. (Note that this is different from the “nil” or “None” types in programming languages; those correspond to the unit type which we’ll talk about below.) The defining quality of the empty type is that for any other type $T$, there exists a unique function mapping from $\mathbf{0}$ to $T$:

$$\mathsf{rec}_\mathbf{0}:\prod_{T:\mathcal{U}}\prod_{x:\mathbf{0}}T$$

And this function is the “empty function” – there is no $x:\mathbf{0}$ after all. The empty function is quite weird, but unfortunately we will encounter it quite a lot. It just maps nothing to nothing. If you think of a function as a list of input-output pairs, then the empty function is a list with zero elements.
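In Lean 4, the empty type is called `Empty`, and the unique function into any type can be written down directly (a sketch for illustration; `rec₀` is my name for it):

```lean
-- The unique (empty) function from the empty type into any type T.
-- `nomatch x` discharges the impossible case: there are no x : Empty.
def rec₀ (T : Type) : Empty → T := fun x => nomatch x
```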

The unit type has exactly one element (I mentioned it once before) and is written $\mathbf{1}$ (a bold one) or $\top$. The one element that is in $\mathbf{1}$ is written as “$\star$” or “()”; the latter is meant to be a tuple without any entries.[1] We will use “$\star$” because we have enough parentheses as it is. The unit type has the property that for any other type $A$, there exists a unique function from $A$ to $\mathbf{1}$ (so exactly the opposite of the property of the empty type).[2] That function is the constant function which returns $\star$ for all inputs, $(\lambda x.\star)$. We won’t make much use of this property though.
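The unit type and its unique incoming function, again sketched in Lean 4 (`toUnit` is an illustrative name):

```lean
-- The unique function from any type A into the unit type:
-- the constant function returning the sole element ().
def toUnit (A : Type) : A → Unit := fun _ => ()
```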

A predicate is then a type-valued function (hence the refresher!) on some domain $X$:

$$\phi: \prod_{x:X}\mathcal{U}~.$$

More specifically, a predicate function returns either $\mathbf{0}$ or $\mathbf{1}$. Note: mapping to either $\mathbf{0}$ or $\mathbf{1}$ is very different from mapping to an element of either $\mathbf{0}$ or $\mathbf{1}$; after all, $\mathbf{0}$ doesn’t have any elements.

And if $\phi$ is again the predicate for even numbers, then we’ll have, for example, $\phi(6)\equiv\mathbf{1}$. Now, this might look suspiciously similar to our “naïve” solution based on $\mathbf{2}$, so what is the advantage? To unlock those advantages, we’ll note that we can express $\phi(6)\equiv\mathbf{1}$ also as $\star:\phi(6)$, because if $\star$ is an element of the type $\phi(6)$, then $\phi(6)$ must be the unit type!

So, if $P$ is some proposition, then we will encode the logic statement

$$P \text{ is true.}$$

as

$$\star:P ~~\text{ is well-typed.}$$

in our type theory, where the proposition $P$ is a type-valued expression. This concept is known as propositions-as-types. If $\star:P$ is not well-typed, then $P$ should be interpreted as a false statement.

At this point, I’ll remind you that the point of a logic system is to make truth-preserving transformations. So, if we represent truth with the fact that $\star$ is a member of the type that represents the proposition, then all our inference steps need to preserve that property or our logic system is useless. We will show in the following that we can make our logic truth-preserving.

I can understand if this feels a bit mind-bending right now. But I hope that you will see shortly that this all makes sense and isn’t a terrible idea. In fact, it will be rather elegant.

Existential quantifier

Now that we have decided how to represent true and false propositions in our theory, we would also like to use the other parts of first-order logic, like existential quantification. Can we again do something really clever with types here?

Let’s see… Say that $\phi$ is a predicate on the natural numbers with type $\prod_{n:\mathbb{N}}\mathcal{U}$. It might represent evenness as before. As $\phi$ is a type-valued function, we can construct a sum type from it:

$$(n,p):\sum_{n:\mathbb{N}}\phi(n)~.$$

Now, what does it mean if there exists some pair $(n, p)$ such that the above is well-typed? Or in other words: what does it mean if the above sum type is inhabited? (We say that a type is inhabited if it has at least one element.)

As you’ll recall, the elements of sum types are pairs. In this case, the first element of the pair is a natural number $n$, and the second element is an element of either $\mathbf{0}$ or $\mathbf{1}$, depending on whether $\phi(n)$ is $\mathbf{0}$ or $\mathbf{1}$. However… $\mathbf{0}$ doesn’t have any elements! So actually, all elements of $\sum_{n:\mathbb{N}}\phi(n)$ must have $\mathbf{1}$ as the type of the second entry of the pair. In other words, a pair of type $\sum_{n:\mathbb{N}}\phi(n)$ can only exist for those $n:\mathbb{N}$ where $\phi(n)\equiv \mathbf{1}$. For those $n$, the pair is given by $(n, \star)$, where the second entry is $\star:\mathbf{1}$.

So, if I can find some $n$ such that

$$(n, \star):\sum_{n:\mathbb{N}}\phi(n)$$

is well-typed, then this must mean that there exists an $n$ such that $\star:\phi(n)$ is well-typed, or equivalently, there exists an $n$ such that $\phi(n)$ is inhabited.

It’s worth repeating again: if the type $\sum_{x:T}\phi(x)$ is inhabited, then there exists an $x:T$ such that $\phi(x)$ is inhabited. So, in a way, the $\sum_{x:T}(\cdots)$ operator is acting like an existential quantifier: $\exists(x: T).(\cdots)$.

However, to really make this work, we’ll have to change our definition of truth a bit.

If we say, “as long as a type is inhabited at all, it represents a true proposition,” then the step from $\psi(y)$ to $\sum_{x:T}\psi(x)$ is truth-preserving. Because if $\psi(y)$ is true (i.e., is inhabited) for some $y:T$, then $\sum_{x:T}\psi(x)$ is also true (inhabited). We have successfully reasoned that if I have a concrete $y:T$ with property $\psi$, then I can conclude that there exists an element in $T$ for which $\psi$ holds. This rule is known as existential introduction in first-order logic.
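Existential introduction is a one-liner in Lean 4, where a proof of an existential is literally a pair of a witness and a proof (a sketch for illustration):

```lean
-- "There exists an even natural number": witness 6, proof by computation.
example : ∃ n : Nat, n % 2 = 0 := ⟨6, rfl⟩
```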

So, in the propositions-as-types system, the sum type operator $\sum$ acts like the existential quantifier. This has been our first step towards encoding all of first-order logic as operations on types in type theory. But don’t be alarmed – when we actually write down propositions later, we will not transcribe them as types every time. Rather, it’s sufficient to know that we could always do that, and that everything we’re doing is well-defined.

As mentioned, we have to tweak our definition of truth a bit, because the members of $\sum_{x:T}\phi(x)$ actually look like $(x, \star)$ and not just $\star$. The new definition will just check for inhabitedness.

So,

$$P \text{ is true.}$$

gets encoded as

$$P \text{ is inhabited.}$$

where $P$ is again a type. The members of $P$ are sometimes called the witnesses for the proposition’s truth.[3]

So, if we encode this first-order-logic statement:

$$\exists(x:T).\phi(x) ~\text{ is true.}$$

in type theory, we get

$$\sum_{x:T}\phi(x) ~\text{ is inhabited.}$$

Great!

Universal quantifier

Let’s do the universal quantifier, $\forall$, next. In order to figure this out, it helps to think about what object would be proof enough that $\phi(x)$ holds for all elements in $T$. One such object is a (total) function that maps each $x$ to an element in $\phi(x)$, because for a (total) function like that to exist, the types $\phi(x)$ for all $x:T$ must have at least one element, right? You can’t map a value to an empty type! That is, if there is even one $x:T$ such that $\phi(x)\equiv\mathbf{0}$, then no such function can exist. So, the existence of the function is sufficient proof that all $\phi(x)$ are inhabited.

What is the type of such a witness function? We know that function types are written $\prod_{x:T} B$, but that assumes that the return type is the same for all $x$. What we need instead is a function type where the return type varies with the input value:

$$f:\prod_{x:T}\phi(x)$$

This is called a dependent function type. It’s a bit of a strange type – most programming languages don’t have an equivalent to this, though some very fancy functional programming languages do. It’s usually not that useful, but it is very useful to encode a universal quantifier: $\forall(x:T).\phi(x)$.
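In Lean 4, this is precisely how $\forall$-statements are proved: a proof is a dependent function that, for each input, returns a witness of the proposition at that input (an illustrative sketch):

```lean
-- A dependent function: for every n, an element of the type n + 0 = n.
-- The return type depends on the value of the argument n.
example : ∀ n : Nat, n + 0 = n := fun n => rfl
```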

One way to think about dependent function types is as a generalization of tuples. For example, take the following tuple type: $A\times B\times C$. Given an index type that has three elements, $\mathbf{3}$, we can also realize this with a dependent function type. We need a type-valued function $V:\prod_{n:\mathbf{3}}\mathcal{U}$ that is defined like this:

$$V(1_\mathbf{3})\equiv A,\; V(2_\mathbf{3})\equiv B,\; V(3_\mathbf{3})\equiv C$$

With this, an element of $\prod_{n:\mathbf{3}}V(n)$ is equivalent to a 3-tuple.
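The tuple-as-dependent-function view can be sketched in Lean 4; here `Three`, `V`, and `triple` are illustrative names, with `Nat`, `Bool`, and `String` standing in for $A$, $B$, $C$:

```lean
-- An index type with exactly three elements, playing the role of 𝟑.
inductive Three | one | two | three

-- The type-valued function V from the text.
def V : Three → Type
  | .one   => Nat
  | .two   => Bool
  | .three => String

-- An element of the dependent function type is equivalent to a 3-tuple.
def triple : (n : Three) → V n
  | .one   => 42
  | .two   => true
  | .three => "third entry"
```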

Another way to see dependent function types is as functions that return a union, except that the type is specified more precisely. We previously discussed the example of the division function on the rational numbers, which returned the special value $\star:\mathbf{1}$ for division by zero. But instead of saying that the return type is $\mathbb{Q}+\mathbf{1}$, we can be more precise! We could specify a dependent return type, which is $\mathbb{Q}$ for all inputs except where the second argument is 0. So, if

$$f(0)\equiv \mathbf{1}\text{ and }f(y)\equiv\mathbb{Q}\text{ otherwise}$$

then this is a more precise type for $\mathsf{div}$:

$$\mathsf{div}:\prod_{x:\mathbb{Q}}\prod_{y:\mathbb{Q}}f(y)~.$$

However, more realistically, you would probably just restrict the type of the second argument to be $\mathbb{Q}_{\neq 0}$.

A more common use case is “generic” functions. The truth is, we already used a dependent function type once before – in the type of the generic identity function:

$$\mathsf{id}:\prod_{T:\mathcal{U}}\left(\prod_{x:T}T\right)$$

If we define $B(T):\equiv\prod_{x:T}T$, then the above type can be written as $\prod_{T:\mathcal{U}}B(T)$. So, this was actually a dependent function all along.
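The generic identity function, written as a dependent function in Lean 4 (a sketch; `id'` avoids clashing with Lean’s built-in `id`):

```lean
-- The return type T → T depends on the value of the first (type) argument.
def id' : (T : Type) → T → T := fun _ x => x

example : id' Nat 5 = 5 := rfl
```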

Anyway, we now know that this first-order logic statement:

$$\forall(x:T).\phi(x) ~\text{ is true}$$

can be encoded as

$$\prod_{x:T}\phi(x) ~\text{ is inhabited.}$$

Complete first-order logic

Now we only have $\land$, $\lor$, and $\neg$ left. The first two will be very easy, the last one will be trouble. So, let’s start with the easy ones. Say that types $P$ and $Q$ may or may not be inhabited; what new type could we construct that will be inhabited if and only if both are inhabited? Well, a pair (aka 2-tuple) of course! You can only form a pair out of elements from $P$ and $Q$ if both have elements. So, the logic statement $P\land Q$ will be encoded as $P\times Q$.

If you take a moment to think about logical or, it should be pretty clear that we can encode $P\lor Q$ as $P+Q$.
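Both encodings are direct in Lean 4, where `And` is a pair-like structure and `Or` is a tagged union (an illustrative sketch):

```lean
-- Conjunction: a pair of witnesses.
example (P Q : Prop) (p : P) (q : Q) : P ∧ Q := ⟨p, q⟩

-- Disjunction: inject a witness of the left disjunct.
example (P Q : Prop) (p : P) : P ∨ Q := Or.inl p
```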

Finally, negation: $\neg$. If I have a type $P$ that may or may not be inhabited, what new type could we construct that will be inhabited exactly when $P$ is not inhabited and vice versa? The trick that we’ll use for this is that if $P$ is uninhabited, then we can still use it as the domain for the empty function. The empty function maps from $\mathbf{0}$ to any type $T$, and… does just absolutely nothing, as we discussed. So, if $P$ is uninhabited, then $\prod_{p:P}T$ is still inhabited by the empty function for any type $T$.

However, if $P$ is actually inhabited, then $\prod_{p:P}T$ could still have an element – namely just a normal function that maps $p:P$ to some element of $T$. So we can’t use $\prod_{p:P}T$ to represent “$\neg P$” because it doesn’t fulfill the requirement that “$\neg P$” be inhabited if and only if $P$ is not inhabited.

But we can fix this by using $\mathbf{0}$ as the codomain: $\prod_{p:P}\mathbf{0}$. If $P\equiv \mathbf{0}$, this has an element (the empty function), but if there exists $p:P$, then this can’t have an element because what are you going to map this $p$ to?

So, we encode $\neg P$ as $\prod_{p:P}\mathbf{0}$. Where is the promised trouble? The trouble is that with this encoding, $\neg\neg P$ does not imply $P$. To be clear, it works in the other direction: if I have a $p:P$, then I know that $\prod_{p:P}\mathbf{0}$ is uninhabited, and so this is inhabited:

$$\prod_{f:\prod_{p:P}\mathbf{0}}\mathbf{0}$$

(by the empty function). This means that $P$ implies $\neg\neg P$. However, when we try to prove the opposite direction, we run into the problem that we can’t produce an element of $P$ out of nothing – there is just no rule that allows us to construct an element of $P$ out of, essentially, the empty function.
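The direction that does work can be spelled out in Lean 4, where `¬P` is definitionally `P → False` (a sketch for illustration):

```lean
-- From p : P, refute any would-be refutation np : P → False.
example (P : Prop) : P → ¬¬P := fun p np => np p
```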

At this point, we must realize the shocking truth: what we have constructed here is not classical first-order logic, but intuitionistic logic! (Also called constructive logic, as it’s used for constructive mathematics.) In intuitionistic logic, proof by contradiction does not work in general and, more pertinently, double-negation elimination is not a thing.

What should have tipped us off from the beginning was that we used two different operators, $\sum$ and $\prod$, to encode $\exists$ and $\forall$, because in classical logic, you only need one of them and the other is just the negation of the first. In intuitionistic logic, on the other hand, $\exists$ and $\forall$ are different operators that can’t be defined in terms of each other. For example, in intuitionistic logic, the following does not hold:

$$\neg\big(\forall(x:X).P(x)\big)\Rightarrow \exists(x:X).\big(\neg P(x)\big)~.$$

The intuitive argument for why this does not hold is that you need to construct a concrete $x$ for which $\neg P(x)$ holds, but there is no way to do that from just knowing $\neg\big(\forall(x:X).P(x)\big)$.

Intuitionistic logic has its charms, but it would be nice to be able to use classical logic as well. In order to do that, we will simply assume double-negation elimination as an axiom! As it turns out, such an axiom causes problems with the encoding of $\exists$ and $\lor$ as $\sum$ and $+$, but this is not a big problem, because those encodings are not actually needed to define classical logic. It suffices if we have $\forall$, $\land$, and $\neg$.

It might seem a bit unsatisfactory that we have to assume an additional axiom in order to get classical logic, but on the other hand, it’s remarkable that we got even this far without assuming anything specific to logic. Also, we will later see that double-negation elimination follows from the axiom of choice, which we probably want to assume anyway, so this isn’t so bad.

Which is interesting: in a deep way, it is the axiom of choice which allows us to do classical logic.

Anyway, before we define the axiom, let’s introduce some simplified notation.

Function types revisited

At this point I should come clean and admit that type theorists don’t actually write their function types like this: $\prod_{x:X}Y$. If the codomain is a constant type, and not a type-valued function, then function types are usually written in this simplified notation:

$$X\to Y \;:\equiv\;\prod_{x:X}Y ~.$$

So, a function from $X$ to $Y$ can be written as $f:X\to Y$. However, if it’s a dependent function, you still have to write its type as $\prod_{x:X}B(x)$.

(It’s actually interesting that “$f:X\to Y$” with the colon is a common notation even outside of type theory, because if you use set theory as your foundation, then I think this should actually be “$f\in (X\to Y)$”; but I guess in set theory you’re not supposed to read “$X\to Y$” as a set with elements, like we do in type theory, where $X\to Y$ is just a shorthand for $\prod_{x:X}Y$.)

With this simplified notation, we can write the negation of $P$ as $P\to\mathbf{0}$. And the type of the generic identity function is this:

$$\mathsf{id} :\prod_{T:\mathcal{U}}T\to T~.$$

Note that for the first argument (the type parameter) we still need the standard notation, because the other types depend on the value of that first argument.

Functions that have multiple arguments and use currying are written like this:

$$\mathsf{add}:\mathbb{R}\to(\mathbb{R}\to\mathbb{R})~.$$

However, we usually leave out the parentheses if their scope extends to the end of the expression:

$$\mathsf{add}:\mathbb{R}\to\mathbb{R}\to\mathbb{R}~.$$

As the arrow notation may already suggest, we can encode logical implication as a function type. If a function $f:P\to Q$ exists, and $P$ is inhabited by, say, $p:P$, then we can simply apply the function, $f(p)$, to get an element of $Q$ and conclude that $Q$ is inhabited as well. Conversely, if we have $f:P\to Q$, but $P$ is not inhabited, then we can’t conclude anything about $Q$ from this (except that $f$ must be the empty function). So, $P\to Q$ behaves exactly like logical implication does.
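In Lean 4, modus ponens is literally function application (an illustrative sketch):

```lean
-- Implication as a function type: apply f : P → Q to a witness p : P.
example (P Q : Prop) (f : P → Q) (p : P) : Q := f p
```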

Law of excluded middle

Our goal was to recreate classical first-order logic within type theory. The main problem was that double-negation elimination (abbreviated DNE) didn’t hold. The idea was then to assume DNE as an axiom; i.e., to assume that the following type is inhabited:

$$\mathrm{DNE}:\equiv \prod_{A:\mathcal{U}}\big((A\to\mathbf{0})\to\mathbf{0}\big)\to A~.$$

Before we do that, we’ll note that, given all the other constructions we already have, DNE is equivalent to the law of excluded middle (abbreviated LEM):

$$\mathrm{LEM}:\equiv\prod_{A:\mathcal{U}} A+(A\to\mathbf{0})~.$$

LEM says: for any proposition $A$, we have either $A$ or $\neg A$.

In order to better get to know DNE and LEM, let’s verify that they are indeed equivalent in our logic system. Going from LEM to DNE is easier, so let’s do that first.

(The proof will showcase how constructive mathematics works in type theory. In order to prove a proposition, we’ll have to explicitly construct an element of the corresponding type! This style of reasoning certainly takes some getting used to.)

So, we are given a function of type LEM, which, for any type $A$, gives us an element $\hat{a}:A+(A\to\mathbf{0})$. We want to construct a function of type DNE, that maps the function $g:(A\to\mathbf{0})\to\mathbf{0}$ to an element of $A$.

Given $\hat{a}$, an element of a union type, we can do a case analysis. If $\hat{a}$ corresponds to an element of $A$, we’re already done: that’s what we wanted.

If $\hat{a}$ corresponds to an element of $A\to\mathbf{0}$, then we can map this with $g$ to an element of $\mathbf{0}$. Getting an element of $\mathbf{0}$ is always exciting, because as I mentioned when introducing the empty type, there is a function $\mathbf{0}\to C$ for any type $C$, which means there is also one for $A$; call it $f:\mathbf{0}\to A$. So, if we have an element of $\mathbf{0}$, we can also get an element of $A$. Of course, (assuming our theory is consistent) there are no elements of $\mathbf{0}$, so (again assuming our theory is consistent) there can be no element of $A\to\mathbf{0}$ which we could then map to $\mathbf{0}$ with $g$, so the element of $A+(A\to\mathbf{0})$ must have been an element of $A$ all along. But in any case, we have conclusively demonstrated that we can construct an element of $A$, given $\hat{a}:A+(A\to\mathbf{0})$ and $g:(A\to\mathbf{0})\to\mathbf{0}$. Thus, LEM implies DNE.

The fact that we can get anything from an element of $\mathbf{0}$ corresponds to the principle of explosion in classical logic.

The opposite direction – from DNE to LEM – is slightly trickier, but it’s also a short proof. First, we’ll have to convince ourselves that, regardless of DNE, for any type $B$, a function of the following type exists:

$$f:\Big(\big(B+(B\to\mathbf{0})\big)\to\mathbf{0}\Big)\to\mathbf{0}$$

So, it’s a function that takes in another function – let’s call it $g$ – and maps that to the empty type, $\mathbf{0}$. $g$ in turn is a function mapping an element from $B+(B\to\mathbf{0})$ to $\mathbf{0}$. Why do we want this function $f$? Because if we had this, then by DNE (double-negation elimination), we would have an element of $B+(B\to\mathbf{0})$, which is exactly what LEM says exists. So, let’s explicitly construct $f$! We know that it gets $g:\big(B+(B\to\mathbf{0})\big)\to\mathbf{0}$ as input:

$$f(g)\equiv\;?$$

So, we will probably want to call $g$ at some point. However, for that we need an element of $B+(B\to\mathbf{0})$. We won’t be able to just construct an element of $B$, but $B\to\mathbf{0}$ seems potentially doable. In fact, can’t we use $g$ itself to construct an element of $B\to\mathbf{0}$? Yes, we can!

$$\Big(\lambda b.g\big(\mathsf{left}(b)\big)\Big):B\to\mathbf{0}$$

As a reminder, $\mathsf{left}$ allows us to construct an element of $B+(B\to\mathbf{0})$ from a $b:B$. Great, now we’ll just wrap this function with $\mathsf{right}$ and pass it to $g$, and we’ll have it:

$$f(g):\equiv g\Big(\mathsf{right}\big(\lambda b.g(\mathsf{left}(b))\big)\Big)~,$$

which gives $f$ the promised type $\Big(\big(B+(B\to\mathbf{0})\big)\to\mathbf{0}\Big)\to\mathbf{0}$.

So, we managed to construct $f$ for an arbitrary type $B$, which means that the double-negated version of LEM is already always inhabited in our theory. Now we just apply DNE to it and get the regular LEM.
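The whole construction fits in one line of Lean 4, with DNE as a hypothesis (a sketch mirroring the proof above; `g` is the assumed refutation of LEM):

```lean
-- DNE implies LEM: feed g the right-injected function (λ b. g (left b)).
example (dne : ∀ A : Prop, ¬¬A → A) (B : Prop) : B ∨ ¬B :=
  dne (B ∨ ¬B) (fun g => g (Or.inr (fun b => g (Or.inl b))))
```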

This concludes our first proof in constructive mathematics; I hope it wasn’t too bad!

To construct classical logic we can now just assume either DNE or LEM, right? Unfortunately, there is one last slight complication, which we will discuss in the next post!


To summarize what we have learned: we can use types as truth values, which then allows us to re-use $\prod$, $\sum$, $\times$, … as $\forall$, $\exists$, $\land$, … However, this only gives us intuitionistic logic, so we’ll need to assume an axiom in order to get classical logic.


  1. Haskell, for example, writes the single element of the unit type as (). However, the unit type itself is also written as (), so () :: () in Haskell. ↩︎

  2. For the category theorists: if we view types as objects in a category, then this means that the empty type is an initial object and the unit type is a terminal object. ↩︎

  3. Or, more old-fashioned, they’re called the proofs of the proposition, which is why we use the letter pp to represent them. ↩︎

 

 

