PRL Project

The Book

Implementing Mathematics with The Nuprl Proof Development System

Introduction to Type Theory

Sections 2.1 to 2.4 introduce a sequence of approximations to Nuprl, starting with a familiar formalism, the typed lambda calculus. These approximations are not exactly subsets of Nuprl, but the differences between these theories and subtheories of Nuprl are minor. These sections take small steps toward the full theory and relate each step to familiar ideas. Section 2.5 summarizes the main ideas and can be used as a starting point for readers who want a brief introduction. The last two sections relate the idea of a type in Nuprl to the concept of a set and to the concept of a data type in programming languages.

The Typed Lambda Calculus

A type is a collection of objects having similar structure. For instance, integers, pairs of integers and functions over integers are at least three distinct types. In both mathematics and programming the collection of functions is further subdivided based on the kind of input for which the function makes sense, and these divisions are also called types, following the vocabulary of the very first type theories [Whitehead & Russell 25]. For example, we say that the integer successor function is a function from integers to integers, that inversion is a function from invertible functions to functions (as in ``subtraction is the inverse of addition''), and that the operation of functional composition is a function from two functions to their composition. One modern notation for the type of functions from type $A$ into type $B$ is $A \!\rightarrow\! B$ (read as $A$ ``arrow'' $B$). Thus integer successor has type $int \!\rightarrow\! int$, and the curried form of the composition of integer functions has type $(int \!\rightarrow\! int) \!\rightarrow\! ((int \!\rightarrow\! int) \!\rightarrow\! (int \!\rightarrow\! int))$. For our first look at types we will consider only those built from type variables $A,B,C,\ldots $ using arrow. Hence we will have $(A \!\rightarrow\! B), (A \!\rightarrow\! B) \!\rightarrow\! C, ((A \!\rightarrow\! B) \!\rightarrow\! (A \!\rightarrow\! B))$, etc., as types, but not $int \!\rightarrow\! int$. This will allow us to examine the general properties of functions without being concerned with the details of concrete types such as integers.

One of the necessary steps in defining a type is choosing a notation for its elements. In the case of the integers, for instance, we use notations such as $0, +1, -2, +3, \ldots $. These are the defining notations, or canonical forms, of the type. Other notations, such as $1+1$, $2*3$ and $2-1$, are not defining notations but are derived notations or noncanonical forms.

Informally, functions are named in a variety of ways. We sometimes use special symbols like $+$ or $*$. Sometimes in informal mathematics one might abuse the notation and say that $x+1$ is the successor function. Formally, however, we regard $x+1$ as an ambiguous value of the successor function and adopt a notation for the function which distinguishes it from its values. Bertrand Russell [Whitehead & Russell 25] wrote $\hat{x} +1$ for the successor function, while Church [Church 51] wrote $\lambda x.x+1$. Sometimes one sees the notation $()+1$, where $()$ is used as a ``hole'' for the argument. In Nuprl we adopt the lambda notation, using $\backslash x.x+1$ as a printable approximation to $\lambda x.x+1$. This notation is not suitable for all of our needs, but it is an adequate and familiar place to start.

A Formal System

We will now define a small formal system for deriving typing relations such as $(\backslash x.x)$ $in$ $(A \!\rightarrow\! A)$. To this end we have in mind the following two classes of expression. A type expression has the form of a type variable $A,B,C,\ldots $ (an example of an atomic type) or the form $(T_1 \!\rightarrow\! T_2)$, where $T_1$ and $T_2$ are type expressions. If we omit parentheses then arrow associates to the right; thus $A \!\rightarrow\! B \!\rightarrow\! C$ is $A \!\rightarrow\! (B \!\rightarrow\! C)$. An object expression has the form of a variable, $x,y,z,\ldots $, of an abstraction, $\backslash x.b$, or of an application, $a(b)$, where $a$ and $b$ are object expressions. We say that $b$ is the body of $\backslash x.b$ and the scope of $\backslash x$, a binding operator.

In general, a variable $y$ is bound in a term $t$ if $t$ has a subterm of the form $\backslash y.b$. Any occurrence of $y$ in $b$ is bound. A variable occurrence in $t$ which is not bound is called free. We say that a term $a$ is free for a variable $x$ in a term $t$ as long as no free variable of $a$ becomes bound when $a$ is substituted for each free occurrence of $x$. For example, $z$ is free for $x$ in $\backslash y.x$, but $y$ is not. If $a$ has a free variable which becomes bound as a result of a substitution then we say that the variable has been captured. Thus $\backslash y$ ``captures'' $y$ if we try to substitute $y$ for $x$ in $\backslash y.x$. If $t$ is a term then $t[a/x]$ denotes the term which results from replacing each free occurrence of $x$ in $t$ by $a$, provided that $a$ is free for $x$ in $t$. If $a$ is not free for $x$ then $t[a/x]$ denotes $t$ with $a$ replacing each free $x$ and the bound variables of $t$ which would capture free variables of $a$ being renamed to prevent capture. $t[a_1,\ldots ,a_n/x_1, \ldots ,x_n]$ denotes the simultaneous substitution of $a_i$ for $x_i$. We agree that two terms which differ only in their bound variable names will be treated as equal everywhere in the theory, so $t[a/x]$ will denote the same term inside the theory regardless of capture. Thus, for example, $(\backslash y.x(y))[t/x]$ = $\backslash y.t(y)$ and $(\backslash x.x(y))[t/x]$ = $\backslash x.x(y)$ and $(\backslash y.x(y))[y/x]$ = $\backslash z.y(z)$.
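These substitution conventions can be made executable. The following Python sketch is ours, not Nuprl's: variables are strings, $\backslash x.b$ is encoded as `("lam", x, b)`, $a(b)$ as `("app", a, b)`, and `subst(t, a, x)` computes $t[a/x]$, renaming bound variables to prevent capture.

```python
# A miniature of the object-expression syntax (our encoding, for
# illustration): strings are variables, ("lam", x, b) is \x.b, and
# ("app", f, a) is f(a).
import itertools

_fresh = itertools.count()

def free_vars(t):
    """Set of free variables of a term."""
    if isinstance(t, str):
        return {t}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])  # application

def subst(t, a, x):
    """t[a/x]: replace free occurrences of x in t by a, renaming bound
    variables of t that would capture free variables of a."""
    if isinstance(t, str):
        return a if t == x else t
    if t[0] == "app":
        return ("app", subst(t[1], a, x), subst(t[2], a, x))
    _, y, b = t
    if y == x:                       # x is bound here: nothing to substitute
        return t
    if y in free_vars(a) and x in free_vars(b):
        z = f"v{next(_fresh)}"       # rename \y to a fresh \z to avoid capture
        b = subst(b, z, y)
        y = z
    return ("lam", y, subst(b, a, x))
```

Run on the examples from the text, `subst` leaves $(\backslash x.x(y))[t/x]$ unchanged and turns $(\backslash y.x(y))[y/x]$ into $\backslash z.y(z)$ for a fresh $z$.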

When we write $(\backslash x.x) \;in\;(A \!\rightarrow\! A)$ we mean that $\backslash x.x$ names a function whose type is $A \!\rightarrow\! A$. To be more explicit about the role of $A$, namely that it is a type variable, we declare $A$ in a context or environment. The environment has the single declaration $A:U_1$, which is read ``$A$ is a type.''

For $T$ a type expression and $t$ an object expression, $t \;in\;T$ will be called a typing. To separate the context from the typing we use the symbol $\mbox{\tt >>}$. To continue the example above, the full expression of our goal is $A:U_1 \;\mbox{\tt >>}\;(\backslash x.x) \;in\;(A \!\rightarrow\! A)$.

In general we use the following terminology. Declarations are either type declarations, in which case they have the form $A:U_1$, or object declarations, in which case they have the form $x:T$ for $T$ a type expression. A hypothesis list has the form of a sequence of declarations; thus, for instance, $A:U_1, B:U_1, x:A$ is a hypothesis list. In a proper hypothesis list the types referenced in object declarations are declared first (i.e., to the left of the object declaration). A typing has the form $t \;in\;T$, where $t$ is an object expression and $T$ is a type expression. A goal has the form $H \;\mbox{\tt >>}\;t \;in\;T$, where $H$ is a hypothesis list and $t \;in\;T$ is a typing.

We will now give rules for proving goals. The rules specify a finite number of subgoals needed to achieve the goal. The rules are stated in a ``top-down'' form and are called, following Bates [Bates 79], refinement rules. The general shape of a refinement rule is:

goal by $rule\: name$
    subgoal 1
    ...
    subgoal n.

Here is a sample rule.

H \;\mbox{\tt >>}\;(\backslash x.b) \;in\;(S \!\rightarrow\! T) \mbox{ by intro}\\
\mbox{\ \ \ \ }1. H,x:S \;\mbox{\tt >>}\;b \;in\;T\\
\mbox{\ \ \ \ }2. H \;\mbox{\tt >>}\;S \;in\;U1\\

It reads as follows: to prove that $(\backslash x.b)$ is a function in $(S \!\rightarrow\! T)$ in the context $H$ (or from hypothesis list $H$) we must achieve the subgoals $H,x:S \;\mbox{\tt >>}\;b \;in\;T$ and $H \;\mbox{\tt >>}\;S \;in\;U_1$. That is, we must show that the body of the function has the type $T$ under the assumption that the free variable in the body has type $S$ (a proof of this will demonstrate that $T$ is a type expression), and that $S$ is a type expression.

A proof is a finite tree whose nodes are pairs consisting of a subgoal and a rule name or a placeholder for a rule name. The subgoal part of a child is determined by the rule name of the parent. The leaves of a tree have rule parts that generate no subgoals or have placeholders instead of rule names. A tree in which there are no placeholders is complete. We will use the term proof to refer to both complete and incomplete proofs.

Figure 2.1: Rules for the Typed Lambda Calculus
(1)\; H,x:A,H' \;\mbox{\tt >>}\;x \;in\;A \mbox{ by hyp x}\\
(2)\; H,A:U1,H' \;\mbox{\tt >>}\;A \;in\;U1 \mbox{ by hyp A}\\
(3)\; H \;\mbox{\tt >>}\;(\backslash x.b) \;in\;(A \!\rightarrow\! T) \mbox{ by intro [new y]}\\
\mbox{\ \ \ \ }1. H,y:A \;\mbox{\tt >>}\;b[y/x] \;in\;T\\
\mbox{\ \ \ \ }2. H \;\mbox{\tt >>}\;A \;in\;U1\\
(4)\; H \;\mbox{\tt >>}\;(A \!\rightarrow\! T) \;in\;U1 \mbox{ by intro}\\
\mbox{\ \ \ \ }1. H \;\mbox{\tt >>}\;A \;in\;U1\\
\mbox{\ \ \ \ }2. H \;\mbox{\tt >>}\;T \;in\;U1\\
(5)\; H \;\mbox{\tt >>}\;f(a) \;in\;T \mbox{ by intro S}\\
\mbox{\ \ \ \ }1. H \;\mbox{\tt >>}\;f \;in\;(S \!\rightarrow\! T)\\
\mbox{\ \ \ \ }2. H \;\mbox{\tt >>}\;a \;in\;S\\

Figure 2.1 gives the rules for the small theory. Note that in rule (3) the square brackets indicate an optional part of the rule name; if the new y part is missing then the variable x is used, so that subgoal 1 is

1. H,x:A \;\mbox{\tt >>}\;b \;in\;T.

The ``new variable'' part of a rule name allows the user to rename variables so as to prevent capture.
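To see the content of these rules concretely, here is a small syntax-directed checker, a sketch only: it collapses the refinement rules into two mutually recursive Python functions rather than modeling Nuprl's top-down proof trees, and the tuple encoding and all names are ours.

```python
# A minimal checker for the judgement H >> t in T (our encoding, for
# illustration).  Terms are strings, ("lam", x, b) or ("app", f, a);
# types are strings or ("arrow", S, T); hypotheses are a dict mapping
# variables to types, with "U1" marking a type declaration A:U1.

def is_type(H, T):
    """H >> T in U1: T is a declared type variable or an arrow."""
    if isinstance(T, str):
        return H.get(T) == "U1"
    return T[0] == "arrow" and is_type(H, T[1]) and is_type(H, T[2])

def infer(H, t):
    """Read a type off a variable (rule hyp) or an application
    (rule intro S); lambdas can only be checked against a given type."""
    if isinstance(t, str):
        return H[t]                        # hyp
    if t[0] == "app":
        arrow = infer(H, t[1])             # intro S
        assert arrow[0] == "arrow" and check(H, t[2], arrow[1])
        return arrow[2]
    raise ValueError("cannot infer a type for a lambda")

def check(H, t, T):
    """H >> t in T."""
    if isinstance(t, tuple) and t[0] == "lam":
        _, x, b = t                        # intro [new y]
        return (T[0] == "arrow" and is_type(H, T[1])
                and check({**H, x: T[1]}, b, T[2]))
    return infer(H, t) == T
```

With `H = {"A": "U1"}` the checker confirms the goal $A:U_1 \;\mbox{\tt >>}\;(\backslash x.x) \;in\;(A \!\rightarrow\! A)$ from the text.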

We say that an initial goal has the form

$A_1:U_1,A_2:U_1,...,A_n:U_1 \;\mbox{\tt >>}\;t \;in\;T$,
where the $A_i$ are exactly the free type variables of $T$. The Nuprl system allows only initial goals with empty hypothesis lists. Also, in full Nuprl we do not distinguish type variables from any other kind of variable. We have introduced these special notions here as pedagogical devices.

Figure 2.2 describes a complete proof of a simple fact. This proof provides simultaneously a derivation of $(A \!\rightarrow\! A) \;in\;U_1$, showing that $(A \!\rightarrow\! A)$ is a type expression; a derivation of $(\backslash y.\backslash x.y(x))(\backslash v.v)$, showing that this is an object expression; and type information about all of the subterms, e.g.,

$\backslash v.v$ is in $(A \!\rightarrow\! A)$;
$\backslash x.y(x)$ is in $(A \!\rightarrow\! A)$ given $y:(A \!\rightarrow\! A)$;
$\backslash y.\backslash x.y(x)$ is in $(A \!\rightarrow\! A) \!\rightarrow\! (A \!\rightarrow\! A)$;
$(\backslash y.\backslash x.y(x))(\backslash v.v)$ is in $(A \!\rightarrow\! A)$.
There is a certain conceptual economy in providing all of this information in one format, but the price is that some of the information is repeated unnecessarily. For example, we show that $A \;in\;U_1$ three separate times. This is an inherent difficulty with the style of ``simultaneous proof'' adopted in Nuprl. In chapter 10 we discuss ways of minimizing its negative effects; for example, one could prove $A:U_1 \;\mbox{\tt >>}\;(A \!\rightarrow\! A) \;in\;U_1$ once as a lemma and cite it as necessary, thereby sparing some repetition.

Figure 2.2: A Sample Proof in the Small Type Theory
[Proof tree garbled in this rendering: starting from the goal $A:U1 \;\mbox{\tt >>}\;(\backslash y.\backslash x.y(x))(\backslash v.v) \;in\;(A \!\rightarrow\! A)$, it is refined by intro steps down to leaves such as $A:U1 \;\mbox{\tt >>}\;A \;in\;U1$, proved by hyp A.]

It is noteworthy that from a complete proof of an initial goal of the form $H \;\mbox{\tt >>}\;t \;in\;T$ we know that $t$ is a closed object expression (one with no free variables) and $T$ is a type expression whose free variables are declared in $H$. Also, in all hypothesis lists in all subgoals any expression appearing on the right side of a declaration is either $U_1$ or a type expression whose variables are declared to the left. Moreover, all free variables of the conclusion in any subgoal are declared exactly once in the corresponding hypothesis list. In fact, no variable is declared at a subgoal unless it is free in the conclusion. Furthermore, every subterm $t'$ receives a type in a subproof $H' \;\mbox{\tt >>}\;t' \;in\;T'$, and in an application, $f(a)$, $f$ will receive a type $(T_1 \!\rightarrow\! T_2)$ and $a$ will receive the type $T_1$. Properties of this variety can be proved by induction on the construction of a complete proof. For full Nuprl many properties like these are proved in the Ph.D. theses of R. W. Harper [Harper 85] and S. F. Allen [Allen 86].

Computation System

The meaning of lambda terms $\backslash x.b$ is given by computation rules. The basic rule, called beta reduction, is that $(\backslash x.b)(a)$ reduces to $b[a/x]$; for example, $(\backslash x.\backslash y.x)(\backslash v.v)$ reduces to $\backslash y.(\backslash v.v)$. The strategy for computing applications $f(a)$ involves reducing $f$ until it has the form $\backslash x.b$, then computing $(\backslash x.b)(a)$. This method of computing with noncanonical forms $f(a)$ is called head reduction or lazy evaluation, and it is not the only possible way to compute. For example, we might reduce $f$ to $\backslash x.b$ and then continue to perform reductions in the body $b$. Such steps might constitute computational optimizations of functions. Another possibility is to reduce $a$ first until it reaches canonical form before performing the beta reductions. This corresponds to call-by-value computation in a programming language.

In Nuprl we use lazy evaluation, although for the simple calculus of typed lambda terms it is immaterial how we reduce. Any reduction sequence will terminate--this is the strong normalization result [Tait 67,Stenlund 72]--and any sequence results in the same value according to the Church-Rosser theorem [Church 51,Stenlund 72]. Of course, the number of steps taken to reach this form may vary considerably depending on the order of reduction.
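Head reduction is easy to sketch in a few lines. The following Python fragment reuses our tuple encoding of terms; the deliberately naive, non-renaming `subst` is adequate for the closed example terms used here, as flagged in the comments.

```python
# Head (lazy) reduction for pure lambda terms, as a sketch (our
# encoding): strings are variables, ("lam", x, b) is \x.b, and
# ("app", f, a) is f(a).

def subst(t, a, x):
    # Naive substitution without renaming: sufficient for the closed
    # terms below, where no capture can occur.
    if isinstance(t, str):
        return a if t == x else t
    if t[0] == "app":
        return ("app", subst(t[1], a, x), subst(t[2], a, x))
    _, y, b = t
    return t if y == x else ("lam", y, subst(b, a, x))

def eval_lazy(t):
    """Reduce t to canonical (lambda) form by head reduction: in f(a),
    reduce f to \\x.b, then beta-reduce without evaluating a first."""
    while isinstance(t, tuple) and t[0] == "app":
        f = eval_lazy(t[1])            # reduce the operator only
        assert f[0] == "lam"
        t = subst(f[2], t[2], f[1])    # beta: (\x.b)(a) -> b[a/x]
    return t
```

On the text's example, $(\backslash x.\backslash y.x)(\backslash v.v)$ reduces to $\backslash y.(\backslash v.v)$; note that the evaluator stops at the outermost lambda, leaving redexes inside the body untouched, which is exactly the lazy behavior described above.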

Extending the Typed Lambda Calculus

Dependent Function Space

It is very useful to be able to describe functions whose range type depends on the input. For example, we can imagine a function on integers of the form $\backslash x. if \; even(x) \; then \; 2 \; else \; (\backslash x.2)$. The type of this function on input $x$ is $if \; even(x) \; then \; int \; else \; (int \!\rightarrow\! int)$. Call this type expression $F(x)$; then the function type we want is written $x:int \!\rightarrow\! F(x)$ and denotes those functions $f$ whose value on input $n$ belongs to $F(n)$ ( $f(n) \;in\;F(n)$).
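The flavor of such a dependent type can be suggested even in an untyped language. In this hypothetical Python sketch (the names `f` and `in_F` are ours), `f` is the function above and `in_F(x, v)` tests membership of `v` in $F(x)$.

```python
# A Python sketch of the dependent type x:int -> F(x) from the text,
# where F(x) = if even(x) then int else (int -> int).

def f(x):
    """\\x. if even(x) then 2 else (\\x.2)"""
    return 2 if x % 2 == 0 else (lambda y: 2)

def in_F(x, v):
    """Membership in F(x): for even x an integer is expected,
    for odd x a function on integers."""
    return isinstance(v, int) if x % 2 == 0 else callable(v)
```

For every $n$, `f(n)` lands in $F(n)$: an integer when $n$ is even, a function when $n$ is odd.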

In the general case of pure functions we can introduce such types by allowing declarations of parameterized types or, equivalently, type-valued functions. These are declared as $B:(A \!\rightarrow\! U_1)$. To introduce these properly we must think of $U_1$ itself as a type, but a large type. We do not want to say $U_1 \;in\;U_1$ to express that $U_1$ is a type because this leads to paradox in the full theory. It is in the spirit of type theory to introduce another layer of object, or in our terminology, another ``universe'', called $U_2$. In addition to the types in $U_1$, $U_2$ contains so-called large types, namely $U_1$ and types built from it such as $A \!\rightarrow\! U_1$, $U_1 \!\rightarrow\! U_1$, $A \!\rightarrow\! (B \!\rightarrow\! U_1)$ and so forth. To say that $U_1$ is a large type we write $U_1 \;in\;U_2$. The new formal system allows the same class of object expressions but a wider class of types. Now a variable $A,B,C,\ldots $ is a type expression, the constant $U_1$ is a type expression, if $T$ is a type expression (possibly containing a free occurrence of the variable $x$ of type $S$) then $x:S \!\rightarrow\! T$ is a type expression, and if $F$ is an object expression of type $S \!\rightarrow\! U_1$ then $F(x)$ is a type expression. The old form of function space results when $T$ does not depend on $x$; in this case we still write $S \!\rightarrow\! T$.

Figure 2.3: Rules for Dependent Functions
(0) \; H \;\mbox{\tt >>}\;U1 \;in\;U2 \mbox{ by intro}\\
(1) \; H \;\mbox{\tt >>}\;(x:A \rightarrow B) \;in\;U1 \mbox{ by intro [new y]}\\
\mbox{\ \ \ \ }H,y:A \;\mbox{\tt >>}\;B[y/x] \;in\;U1\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;A \;in\;U1\\
(2) \; H \;\mbox{\tt >>}\;(\backslash x.b) \;in\;(x:A \rightarrow B) \mbox{ by intro [new y]}\\
\mbox{\ \ \ \ }H,y:A \;\mbox{\tt >>}\;b[y/x] \;in\;B[y/x]\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;A \;in\;U1\\
(3) \; H \;\mbox{\tt >>}\;f(a) \;in\;T[a/x] \mbox{ by intro } x:A \rightarrow T\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;f \;in\;(x:A \rightarrow T)\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;a \;in\;A\\

The new rules are listed in figure 2.3. With these rules we can prove the following goals.

\;\mbox{\tt >>}\;(\backslash A.\backslash x.x) \;in\;(A:U1 \rightarrow (A \rightarrow A))\\
\;\mbox{\tt >>}\;(\backslash A.\backslash B.\backslash g.\backslash b.\backslash x.b(g(x))) \;in\;(A:U1 \rightarrow (B:(A \rightarrow U1) \rightarrow (g:(A \rightarrow A) \rightarrow (b:(x:A \rightarrow
B(x)) \rightarrow (x:A \rightarrow B(g(x)))))))\\

With the new degree of expressiveness permitted by the dependent arrow we are able to dispense with the hypothesis list in the initial goal in the above examples. We now say that an initial goal has the form
$\;\mbox{\tt >>}\;t \;in\;T$, where $t$ is an object expression and $T$ is a type expression. One might expect that it would be more convenient to allow a hypothesis list such as $A:U_1, B:(A \!\rightarrow\! U_1)$, but such a list would have to be checked to guarantee well-formedness of the types. Such checks become elaborate with types of the form $c:(x:A \!\rightarrow\! y:B(x) \!\rightarrow\! U_1)$, and the hypothesis-checking methods would become as complex as the proof system itself. As the theory is enlarged it will become impossible to provide an algorithm which will guarantee the well-formedness of hypotheses. Using the proof system to show well-formedness will guarantee that the hypothesis list is well-formed.

Hidden in the explanation above is a subtle point which affects the basic design of Nuprl. The definition of a type expression involves the clause ``$F$ is an expression of type $S \!\rightarrow\! U_1$.'' Thus, in order to know that $t \;in\;T$ is an allowable initial goal, we may have to determine that a subterm of $T$ is of a certain type; in the example above, we must show that $B$ is of type $A \!\rightarrow\! U_1$. To define this concept precisely we would need some precise definition of the relation that $B$ is of type $S \!\rightarrow\! U_1$. This could be given by a type-checking algorithm or by an inductive definition, but in either case the definition would be as complex as the proof system that it is used to define.

Another approach to this situation is to take a simpler definition of an initial goal and let the proof system take care of ensuring that only type expressions can appear on the right-hand side of a typing. To this end, we define the syntactically simple concept of a readable expression and then state that an initial goal has the form $e_1 \;in\;e_2$, where $e_1$ and $e_2$ are these simple expressions. Using this approach, an expression is either:

a variable:
$A,B,C, ..., x,y,z$;
a constant:
$U_1$;
an application:
$f(a)$;
an abstraction:
$\backslash x.b$; or
an arrow:
$x:a \!\rightarrow\! b$,
where $a,b$ and $f$ are expressions. This allows expressions such as $\backslash x.U_1$ or $y:A \!\rightarrow\! \backslash x.x$, which do not make sense in this theory. However, the proof rules are organized so that if the initial goal $\;\mbox{\tt >>}\;t \;in\;T$ is proved then $T$ will be a type expression and $t$ will be an object expression of type $T$.

Cartesian Product

One of the most basic ways of building new objects in mathematics and programming involves the ordered pairing constructor. For example, in mathematics one builds rational numbers as pairs of integers and complex numbers as pairs of reals. In programming, a data processing record might consist of a name paired with basic information such as age, social security number, account number and value of the account, e.g., $<\! bloog, 37, 396\!-\!54\!-\!3900, 12268, .01\! >$. This item might be thought of as a single 5-tuple or as compound pair $<\! bloog, <\! 37, <\! 396\!-\!54\!-\!3900, <\! 12268, .01\! >\! >\! >\! >$. In Nuprl we write $<\! a,b\! >$ for the pair consisting of $a$ and $b$; $n$-tuples are built from pairs.

The rules for pairs are simpler than those for functions because the canonical notations are built in a simple way from components. We say that $<\! a,b\! >$ is a canonical value for elements of the type of pairs; the name $<\! a,b\! >$ is canonical even if $a$ and $b$ are not canonical. If $a$ is in type $A$ and $b$ is in type $B$ then the type of pairs is written $A\char93 B$ and is called the cartesian product. The Nuprl notation is very similar to the set-theoretic notation, where a cartesian product is written $A \times B$; we choose $\char93 $ as the operator because it is a standard ASCII character while $\times$ is not. In programming languages one might denote the cartesian product as $RECORD(A,B)$, as in the Pascal record type, or as $struct(A,B)$, as in Algol 68 structures.

The pair decomposition rule is the only Nuprl rule for products that is not as one might expect from cartesian products in set theory or from record types in programming. One might expect operations, say $1of()$ and $2of()$, obeying

$1of(<\! a,b\! >) = a$ and
$2of(<\! a,b\! >) = b.$
Instead of this notation we use a single form that generalizes both forms. One reason for this is that it allows a single form which is the inverse of pairing. Another more technical reason will appear when we discuss dependent products below. The form is
$spread(p;u,v.b)$,
where $p$ is an expression denoting a pair and where $b$ is any expression in $u$ and $v$. We think of $u$ and $v$ as names of the elements of the pair; these names are bound in $b$. Using $spread$ we can define the selectors $1of()$ and $2of()$ as
$1of(p) = spread(p;u,v.u)$ and
$2of(p) = spread(p;u,v.v).$
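Pairs, $spread$ and the derived selectors can be sketched directly in Python, with tuples standing for Nuprl pairs and a function argument standing for the binding $u,v.b$; the names below are ours.

```python
# The spread form and the derived selectors, sketched in Python.

def spread(p, body):
    """spread(p; u,v.b): bind the components of the pair p to u and v
    in b.  `body` plays the role of u,v.b, the binding being Python's."""
    u, v = p
    return body(u, v)

def one_of(p):    # 1of(p) = spread(p; u,v.u)
    return spread(p, lambda u, v: u)

def two_of(p):    # 2of(p) = spread(p; u,v.v)
    return spread(p, lambda u, v: v)
```

As in the text, both selectors are instances of the one inverse-of-pairing form.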

Figure 2.4: Rules for Cartesian Product
(1) \; H \;\mbox{\tt >>}\;A\char93 B \;in\;U1 \mbox{ by intro}\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;A \;in\;U1\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;B \;in\;U1\\
(2) \; H \;\mbox{\tt >>}\;<a,b> \;in\;A\char93 B \mbox{ by intro}\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;a \;in\;A\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;b \;in\;B\\
(3) \; H \;\mbox{\tt >>}\;spread(p;u,v.b) \;in\;T \mbox{ by intro A\char93 B [new u,v]}\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;p \;in\;A\char93 B\\
\mbox{\ \ \ \ }H,u:A,v:B \;\mbox{\tt >>}\;b \;in\;T\\

Figure 2.4 lists the rules for cartesian product. These rules allow us to assign types to pairs and to the spread terms. We will see later that Nuprl allows variations on these rules.

Dependent Products

Just as the function space constructor is generalized from $A \!\rightarrow\! B$ to $x:A \!\rightarrow\! B$, so too can the product constructor be generalized to $x:A\char93  B$, where $B$ can depend on $x$. For example, given the declarations $A:U_1$ and $F:A \!\rightarrow\! U_1$, $x:A\char93 F(x)$ is a type in $U_1$. The formation rule for dependent types becomes the following.

(1') H \;\mbox{\tt >>}\;(x:A\char93  B) \;in\;U1 \mbox{ by intro}\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;A \;in\;U1\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H,x:A \;\mbox{\tt >>}\;B \;in\;U1\\

The introduction rules change as follows.

(2') H \;\mbox{\tt >>}\;<a,b> \;in\;(x:A\char93  B) \mbox{ by intro}\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;a \;in\;A\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;b \;in\;B[a/x]\\
(3') H \;\mbox{\tt >>}\;spread(p;u,v.b) \;in\;T[p/z] \mbox{ by intro (x:A\char93  B) over z.T [new u,v]}\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;p \;in\;(x:A\char93  B)\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H, u:A,v:B[u/x] \;\mbox{\tt >>}\;b \;in\;T[<u,v>/z]\\

The term ``over $z.T$'' is needed in order to specify the substitution of $<\! u,v\! >$ in $T$.

Disjoint Union

A union operator represents another basic way of combining concepts. For example, if $T$ represents the type of triangles, $R$ the type of rectangles and $C$ the type of circles, then we can say that an object is a triangle or a rectangle or a circle by saying that it belongs to the type $T$ or $R$ or $C$. In Nuprl this type is written $T\vert R\vert C$.

In general if $A$ and $B$ are types, then so is their disjoint union, $A\vert B$. Semantically, not only is the union disjoint, but given an element of $A\vert B$, it must be possible to decide which component it is in. Accordingly, Nuprl uses the canonical forms $inl(a)$ and $inr(b)$ to denote elements of the union; for $a \;in\;A$, $inl(a)$ is in $A\vert B$, and for $b \;in\;B$, $inr(b)$ is in $A\vert B$.

To discriminate on disjuncts, Nuprl uses the form $decide(d;u.e;v.f)$. The interpretation is that if $d$ denotes a term of the form $inl(a)$ then

$decide(inl(a);u.e;v.f) = \mbox{$e[a/u]$},$
and if it denotes a term of the form $inr(b)$ then
$decide(inr(b);u.e;v.f) = \mbox{$f[b/v]$}.$
The variable $u$ is bound in $e$ and $v$ is bound in $f$. It is noteworthy that the type $A\vert B$ can be defined in terms of $\char93 $ and a two-element type such as $\{ 0,1 \}$.
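A small Python sketch (with tags of our choosing standing for the canonical forms) shows how $decide$ discriminates on the two injections; the function arguments play the roles of $u.e$ and $v.f$.

```python
# Disjoint-union injections and the decide form, sketched in Python.

def inl(a): return ("inl", a)   # left canonical form
def inr(b): return ("inr", b)   # right canonical form

def decide(d, e, f):
    """decide(d; u.e; v.f): apply e to a if d = inl(a),
    apply f to b if d = inr(b)."""
    tag, val = d
    return e(val) if tag == "inl" else f(val)
```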


Integers

The type of integers, $int$, is built into Nuprl. The canonical members of this type are $0,+1,-1,+2,-2,\ldots $. The operations of addition, $+$, subtraction, $-$, multiplication, $*$, and division, $/$, are built into the theory along with the modulus operation, $a \: mod \: b$, which gives the positive remainder of dividing $a$ by $b$. Thus $-5 \: mod \: 2 = 1$. Division of two integers produces the integer part of real number division, so $5/2 = 2$. For nonnegative integers $a$ and positive $b$ we have $a = b*(a/b)+a \: mod \: b$.

There are three noncanonical forms associated with the integers. The first form captures the fact that integer equality is decidable; $int\_ eq(a;b;s;t)$ denotes $s$ if $a=b \: in \: int$ and denotes $t$ otherwise. The second form captures the computational meaning of less than; $less(a;b;s;t)$ denotes $s$ if $a<b$ and $t$ otherwise. The third form provides a mechanism for definition and proof by induction and is written $ind(a;x,y.s;b;u,v.t)$. It is easiest to see this form as a combination of two simple induction forms over the nonnegative and nonpositive integers. Over the nonnegative integers ( $0, +1, +2, +3, \ldots$) the form denotes an inductive definition satisfying the following equations:

$ind(0;x,y.s;b;u,v.t) = b$
$ind(n+1;x,y.s;b;u,v.t) = $ $t[(n+1),ind(n;x,y.s;b;u,v.t)/u,v]$.
Over the nonpositive integers ( $0, -1, -2, \ldots$) the form denotes an inductive definition satisfying these equations:
$ind(0;x,y.s;b;u,v.t) = b$
$ind(n-1;x,y.s;b;u,v.t) = $ $s[(n-1),ind(n;x,y.s;b;u,v.t)/x,y]$.
For example, this form could be used to define $n!$ as $ind(n;x,y.1;1;u,v.u*v)$ if we assume that for $n<0, n!=1$.

In the form $ind(a;x,y.s;b;u,v.t)$ $a$ represents the integer argument, $b$ represents the value of the form if $a=0$, $x,y.s$ represents the inductive case for negative integers, and $u,v.t$ represents the inductive case for positive integers. The variables $x$ and $y$ are bound in $s$, while $u$ and $v$ are bound in $t$.
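The defining equations can be transcribed into a Python sketch, with two-argument functions standing for $x,y.s$ and $u,v.t$; the factorial definition from the text then comes out directly. The names are ours.

```python
# The integer induction form ind(a; x,y.s; b; u,v.t), sketched in Python.

def ind(a, s, b, t):
    if a == 0:
        return b                          # ind(0; x,y.s; b; u,v.t) = b
    if a > 0:
        # ind(n+1; ...) = t[(n+1), ind(n; ...)/u,v]
        return t(a, ind(a - 1, s, b, t))
    # ind(n-1; ...) = s[(n-1), ind(n; ...)/x,y]
    return s(a, ind(a + 1, s, b, t))

def factorial(n):
    """n! = ind(n; x,y.1; 1; u,v.u*v), with n! = 1 for n < 0."""
    return ind(n, lambda x, y: 1, 1, lambda u, v: u * v)
```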

Atoms and Lists

The type of atoms is provided in order to model character strings. The canonical elements of the type $atom$ are ``...'', where ... is any character string. Equality on atoms is decidable using the noncanonical form $atom\_ eq(a;b;s;t)$, which denotes $s$ when $a=b \;in\;atom$ and $t$ otherwise.

Nuprl also provides the type of lists over any other type $A$; it is denoted $A \; list$. The canonical elements of the type $A \; list$ are $nil$, which corresponds to the empty list, and $a.b$, where $a$ is in $A$ and $b$ is in $A \; list$. For example, the list of the first three positive integers in descending order is denoted $3.(2.(1.nil))$.

It is customary in the theory of lists to have head and tail functions such that

$head(a.b) = a$ and
$tail(a.b) = b$.
These and all other functions on lists that are built inductively are defined in terms of the list induction form $list\_ ind(a;b;h,t,v.t)$. The meaning of this form is given by the following equations.
$list\_ ind(nil;b;h,t,v.t) = b$
$list\_ ind(a.r;b;h,t,v.t) = t[a,r,list\_ ind(r;b;h,t,v.t)/h,t,v]$
With this form the tail function can be defined as $list\_ ind(a;nil;h,t,v.t)$. The basic definitions and facts from list theory appear in chapter 11.
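The same transcription works for lists. The following Python sketch (our names) uses Python lists for Nuprl lists and recovers the tail function exactly as in the text.

```python
# The list induction form list_ind(a; b; h,t,v.t), sketched in Python.
# A Python list stands for a Nuprl list: l[0] is the head a and l[1:]
# the tail r of a.r.

def list_ind(a, b, step):
    """step plays the role of h,t,v.t, taking the head, the tail and
    the value of the form on the tail."""
    if not a:
        return b                          # list_ind(nil; b; ...) = b
    h, r = a[0], a[1:]
    # list_ind(a.r; b; ...) = t[a, r, list_ind(r; b; ...)/h,t,v]
    return step(h, r, list_ind(r, b, step))

def tail(a):
    """tail = list_ind(a; nil; h,t,v.t): return the tail component."""
    return list_ind(a, [], lambda h, t, v: t)
```

Other inductively built functions come out the same way; for instance, `list_ind(a, 0, lambda h, t, v: v + 1)` computes the length of `a`.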

Equality and Propositions as Types

So far we have talked exclusively about types and their members. We now want to talk about simple declarative statements. In the case of the integers many interesting facts can be expressed as equations between terms. For example, we can say that a number $n$ is even by writing $n \; mod \; 2 = 0 \;in\;int$. In Nuprl the equality relation on $int$ is built-in; we write $x = y \;in\;int$. In fact, each type $A$ comes with an equality relation written $x = y \;in\;A$. The idea that types come equipped with an equality relation is very explicit in the writings of Bishop [Bishop 67]. For example, in Foundations of Constructive Analysis, he says, ``A set is defined by describing what must be done to construct an element of the set, and what must be done to show that two elements of the set are equal.'' The notion that types come with an equality is central to Martin-Löf's type theories as well.

The equality relations $x = y \;in\;A$ play a dual role in Nuprl in that they can be used to express type membership relations as well as equality relations within a type. Since each type comes with such a relation, and since $a = b \;in\;A$ is a sensible relation only if $a$ and $b$ are members of $A$, it is possible to express the idea that $a$ belongs to $A$ by saying that $a = a \;in\;A$ is true. In fact, in Nuprl the form $a \;in\;A$ is really shorthand for $a = a \;in\;A$.

The equality statement $a = a \;in\;A$ has the curious property that it is either true or nonsense. If $a$ has type $A$ then $a = a \;in\;A$ is true; otherwise, $a = a \;in\;A$ is not a sensible statement because $a = b \;in\;A$ is sensible only if $a$ and $b$ belong to $A$. Another way to organize type theory is to use a separate form of judgement to say that $a$ is in a type, that is, to regard $a \;in\;A$ as distinct from $a = a \;in\;A$. That is the approach taken by Martin-Löf. It is also possible to organize type theory without built-in equalities at all except for the most primitive kind. We only need equality on some two-element type, say a type of booleans, $\{ \mbox{true}, \mbox{false} \}$; we could then define equality on $int$ as a function from $int$ into $\{ \mbox{true}, \mbox{false} \}$.

The fact that each type comes equipped with equality complicates an understanding of the rules, as we see when we look at functions. If we define a function $f \;in\;(A \!\rightarrow\! B)$ then we expect that if $a_1 = a_2 \;in\;A$ then $f(a_1) = f(a_2) \;in\;B$. This is a key property of functions: they respect equality. In order to guarantee this property there are a host of rules of the form that if part of an expression is replaced by an equal part then the results are equal. For example, the following are rules.

H \;\mbox{\tt >>}\;spread(a;x,y.t) = spread(b;x,y.t) \;in\;T\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;a = b \;in\;A\char93 B\\
H \;\mbox{\tt >>}\;f(a) = f(b) \;in\;T\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;f \;in\;(A \!\rightarrow\! T)\\
\mbox{\ \ \ \ }\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;a = b \;in\;A.

Propositions as Types

An equality form such as $a = b \;in\;A$ makes sense only if $A$ is a type and $a$ and $b$ are elements of that type. How should we express the idea that $a = b \;in\;A$ is well-formed? One possibility is to use the same format as in the case of types. We could imagine a rule of the following form.

H \;\mbox{\tt >>}\;(a = b \;in\;A) \;in\;U1 \mbox{ by intro}\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;A \;in\;U1\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;a \;in\;A\\
\mbox{\ \ \ \ }H \;\mbox{\tt >>}\;b \;in\;A\\

This rule expresses the right ideas, and it allows well-formedness to be treated through the proof mechanism in the same way that well-formedness is treated for types. In fact, it is clear that such an approach will be necessary for equality forms if it is necessary for types because it is essential to know that the $A$ in $a = b \;in\;A$ is well-formed.

Thus an adequate deductive apparatus is at hand for treating the well-formedness of equalities, provided that we treat $a = b \;in\;A$ as a type. Does this make sense on other grounds as well? Can we imagine an equality as denoting a type? Or should we introduce a new category, called Prop for proposition, and prove $H \;\mbox{\tt >>}\;(a = b \;in\;A$) in Prop? The constructive interpretation of truth of any proposition $P$ is that $P$ is provable. Thus it is perfectly sensible to regard a proposition $P$ as the type of its proofs. For the case of an equality we make the simplifying assumption that we are not interested in the details of such proofs because those details do not convey any more computational information than is already contained in the equality form itself. It may be true that there are many ways to prove $a = b \;in\;A$, and some of these may involve complex inductive arguments. However, these arguments carry only ``equality information,'' not computational information, so for simplicity we agree that equalities considered as types are either empty if they are not true or contain a single element, called $axiom$, if they are true.

Once we agree to treat equalities as types (and over $int$, to treat $a<b$ as a type also) then a remarkable economy in the logic is possible. For instance, we notice that the cartesian product of equalities, say $(a = b \;in\;A) \char93  (c = d \;in\;B)$, acts precisely as the conjunction $(a = b \;in\;A) \: \& \: (c = d \;in\;B)$. Likewise the disjoint union, $(a = b \;in\;A) \vert (c = d \;in\;B)$, acts exactly like the constructive disjunction. Even more noteworthy is the fact that the dependent product, say $x:int \char93  (x = 0 \;in\;int)$, acts exactly like the constructive existential quantifier, $\exists x:int. x=0 \;in\;int$. Less obvious, but also valid, is the interpretation of $x:A \!\rightarrow\! (x = x \;in\;A)$ as the universal statement, $\forall x:A. x=x \;in\;A$.

We can think of the types built up from equalities (and inequalities in the case of integer) using $\char93 $, $\vert$ and $\!\rightarrow\!$ as propositions, for the meaning of the type constructors corresponds exactly to that of the logical operators considered constructively. As another example of this, if $A$ and $B$ are propositions then $A \!\rightarrow\! B$ corresponds exactly to the constructive interpretation of $A \:\: implies \:\: B$. That is, proposition $A$ implies proposition $B$ constructively if and only if there is a method of building a proof of $B$ from a proof of $A$, which is the case if and only if there is a function $f$ mapping proofs of $A$ to proofs of $B$. However, given that $A$ and $B$ are types such an $f$ exists exactly when the type $A \!\rightarrow\! B$ is inhabited, i.e., when there is an element of type $A \!\rightarrow\! B$.
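Under this reading the inhabitants of a proposition-type are its proofs, and they can be modeled concretely. The following sketch uses hypothetical Python encodings (not Nuprl syntax): a proof of a conjunction is a pair, a proof of a disjunction is a tagged value, a proof of an existential is a witness paired with evidence, and a proof of an implication is a function on proofs.

```python
# Hypothetical encodings of proofs-as-data; none of these names are Nuprl's.
proof_and = ("proof_of_A", "proof_of_B")      # A # B : a pair of proofs
proof_or = ("inl", "proof_of_A")              # A | B : a tagged proof
proof_exists = (0, "axiom")                   # x:int # (x = 0 in int)
modus_ponens = lambda f, a: f(a)              # apply a proof of A -> B to a proof of A

# The witness of the existential is the first component of the pair.
assert proof_exists[0] == 0
assert modus_ponens(lambda a: ("proof_of_B", a), "proof_of_A")[0] == "proof_of_B"
```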

It is therefore sensible to treat propositions as types. Further discussion of this principle appears in chapters 3 and 11.

Is it sensible to consider any type, say $int$ or $int \: list$, as a proposition? Does it make sense to assert $int$? We can present the logic and the type theory in a uniform way if we agree to take the basic form of assertion as ``type $A$ is inhabited.'' Therefore, when we write the goal $H \;\mbox{\tt >>}\;A$ we are asserting that given that the types in $H$ are inhabited, we can build an element of $A$. When we want to mention the inhabiting object directly we say that it is extracted from the proof, and we write $H \;\mbox{\tt >>}\;A \: ext \: a$. This means that $A$ is inhabited by the object $a$. We write the form $H \;\mbox{\tt >>}\;A$ instead of $H \;\mbox{\tt >>}\;a \;in\;A$ when we want to suppress the details of how $A$ is inhabited, perhaps leaving them to be determined by a computer system as in the case of Nuprl.

When we write $A:U_1 \;\mbox{\tt >>}\;(\backslash x.x) \;in\;(A \!\rightarrow\! A)$ we are really asserting the equality

$A:U_1 \;\mbox{\tt >>}\;((\backslash x.x) = (\backslash x.x) \;in\;(A \!\rightarrow\! A))$.
This equality is a type. If it is true it is inhabited by $axiom$. The full statement is therefore
$A:U_1 \;\mbox{\tt >>}\;((\backslash x.x) = (\backslash x.x) \;in\;A \!\rightarrow\! A) \: ext \: axiom$.
As another example of this interpretation, consider the goal
$\;\mbox{\tt >>}\;int$.
This can be proved by introducing $0$, and from such a proof we would extract $0$ as the inhabiting witness. Compare this to the goal
$\;\mbox{\tt >>}\;0 \;in\;int$.
This is proved by introduction, and the inhabiting witness is $axiom$.

Sets and Quotients

We conclude the introduction of the type theory with some remarks about two more complex type constructors: the subtype constructor and the quotient type constructor. Informal reasoning about functions and types involves the concept of subtypes. A general way to specify subtypes uses a concept similar to the set comprehension idea in set theory; that is, $\{ x:A \vert B\} $ is the type of all $x$ of type $A$ satisfying the predicate $B$. For instance, the nonnegative integers can be defined from the integers as $\{ z:int \vert 0 <= z \}$. In Nuprl this is one of two ways to specify a subtype. Another way is to use the type $z:int \char93  0 <= z$. Consider now two functions on the nonnegative integers constructed in the following two ways.

$f: \{ z:int \vert 0 <= z \} \!\rightarrow\! int$
$g: (z:int \char93  0 <= z) \!\rightarrow\! int$
The function $g$ takes a pair $<\! x,p\! >$ as an argument, where $x$ is an integer and $p$ is a proof that the integer is nonnegative. The set construct is defined in such a way that $f$ takes only integers as arguments to the computation; the information that the argument is nonnegative can only be used noncomputationally in proofs.

The difference between these notions of subset is more pronounced with a more involved example. Suppose that we consider the following two types defining integer functions having zeros.

$F_1 = \{ f:int \!\rightarrow\! int \vert \: some \: y:int.f(y)=0 \;in\;int\} $
$F_2 = (f:int \!\rightarrow\! int \char93  \: some \: y:int.f(y)=0 \;in\;int)$
It is easy to define a function $g$ mapping $F_2$ into $int$ such that for all $p$ in $F_2$, $1of(p)(g(p))=0 \;in\;int$. (Notice that $p$ is a pair $< \! f,e \! >$, where $f:int \!\rightarrow\! int$ and $e$ is a proof that $f$ has a zero, so $1of(p) = f$.) That is, the function $g$ simply picks out the witness for the quantifier in $some \: y:int. \: f(y)=0 \;in\;int$. There is no such function $h$ from $F_1$ because the only input to $h$ is the function $f$, so in order to find a zero value $h$ would need to search through $int$ for a zero. In the language described so far there is no unbounded search operator to use in defining $h$.2.5
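The computational difference can be sketched in Python (a hypothetical encoding: a member of $F_2$ is modeled as a pair whose second component carries the zero witness):

```python
# A member of F_2 modeled as a pair <f, e>, where the evidence e
# is taken to be the witness y with f(y) = 0 (hypothetical encoding).
f = lambda x: x - 3          # f has a zero at 3
p = (f, 3)                   # the pair <f, witness>

def g(p):
    # g simply projects the witness out of the evidence component
    return p[1]

assert p[0](g(p)) == 0       # i.e., 1of(p)(g(p)) = 0 in int
```

A function on $F_1$ would receive only `f` itself, with no evidence component to project, which is why no such `g` can be written for it without unbounded search.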

One can think of the set constructor, $\{ x:A \vert B\} $, as serving two purposes. One is to provide a subtype concept; this purpose is shared with $(x:A \char93  B)$. The other is to provide a mechanism for hiding information to simplify computation.

The quotient operator builds a new type from a given base type, $A$, and an equivalence relation, $E$, on $A$. The syntax for the quotient is $(x,y):A//E$. In this type the equality relation is $E$, so the quotient operator is a way of redefining equality in a type.

In order to define a function $f:(x,y):A//E \!\rightarrow\! B$ one must show that the operation respects $E$, that is, $E(x,y)$ implies $f(x)=f(y) \;in\;B$. Although the details of showing $f$ is well-defined may be tedious, we are guaranteed that concepts defined in terms of $f$ and the other operators of the theory respect equality on $(x,y):A//E$. As an example of how quotienting changes the behavior of functions, consider defining the integers modulo 2 as a quotient type.

$N_2 = (x,y):int//(x \; mod \; 2 = y \; mod \; 2 \;in\;int)$
We can now show that successor is well-defined on $N_2$ by showing that if $x \; mod \; 2 = y \; mod \; 2 \;in\;int$ then $(x+1) \; mod \; 2 = (y+1) \; mod \; 2 \;in\;int$. On the other hand, the maximum function is not well-defined on $N_2$ because $0=2 \;in\;N_2$ but $max(1,0)=1$ and $max(1,2)=2$, meaning that it is not the case that $max(1,0)=max(1,2) \;in\;N_2$.
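A finite spot-check of this well-definedness condition can be written down directly (a sketch in Python, checking the condition only on a sample range of integers rather than proving it):

```python
def respects_mod2(f, lo=-4, hi=5):
    """Check (on a sample range) that x = y mod 2 implies f(x) = f(y) mod 2."""
    return all((f(x) - f(y)) % 2 == 0
               for x in range(lo, hi)
               for y in range(lo, hi)
               if (x - y) % 2 == 0)

assert respects_mod2(lambda x: x + 1)          # successor is well-defined on N_2
assert not respects_mod2(lambda x: max(1, x))  # max(1,-) is not: 0 = 2 in N_2,
                                               # but max(1,0) = 1 and max(1,2) = 2
```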


Semantics

This section is included for technical completeness; the beginning reader may wish to skip it on a first reading. Here we shall consider the Nuprl semantics only briefly; the complete introduction appears in section 8.1. The semantics of Nuprl is given in terms of a system of computation and in terms of criteria for something being a type, for equality of types, for something being a member of a given type and for equality between members of a given type.

The basic objects of Nuprl are called terms. They are built using variables and operators, some of which bind variables in the usual sense. Each occurrence of a variable in a term is either free or bound. Examples of free and bound variables from other contexts are:

  • Formulas of predicate logic, where the quantifiers ($\forall$, $\exists$) are the binding operators. In $\forall x. (P(x) \& Q(y))$ the two occurrences of $x$ are bound, and the occurrence of $y$ is free.
  • Definite integral notation. In $\int_x^y \sin x \, dx$ the occurrence of $y$ is free, the first occurrence of $x$ is free, and the other two occurrences are bound.
  • Function declarations in Pascal. In
    function Q(y:integer):integer;
    function P(x:integer):integer; begin P:=x+y end ;
    begin Q:=P(y) end ;
    taken as a whole, all occurrences of x and y are bound; within the declaration of P alone, however, x is bound and y is free.

By a closed term we mean a term in which no variables are free. Central to the definitions of computation in the system is a procedure for evaluating closed terms. For some terms this procedure will not halt, and for some it will halt without specifying a result. When evaluation of a term does specify a result, this value will be a closed term called a canonical term. Each closed term is either canonical or noncanonical, and each canonical term has itself as value.
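The flavor of such an evaluator can be sketched with a toy interpreter for closed lambda terms (a hypothetical encoding in Python; it omits capture-avoiding renaming, which is safe here only because we evaluate closed terms):

```python
# Terms: ("int", n), ("var", x), ("lam", x, body), ("app", f, a).
# Canonical terms ("int", "lam") have themselves as value.

def subst(t, x, s):
    """Substitute term s for free occurrences of variable x in t."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "lam":
        # the bound variable shadows x; otherwise substitute in the body
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    return t  # constants contain no variables

def evaluate(t):
    if t[0] == "app":
        f = evaluate(t[1])            # must yield a canonical function
        arg = evaluate(t[2])
        return evaluate(subst(f[2], f[1], arg))
    return t                          # canonical terms evaluate to themselves

identity = ("lam", "x", ("var", "x"))
assert evaluate(("app", identity, ("int", 0))) == ("int", 0)
assert evaluate(identity) == identity  # a canonical term is its own value
```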

Certain closed terms are designated as types; we may write ``$T$ type'' to mean that $T$ is a type. Types always evaluate to canonical types. Each type may have associated with it closed terms which are called its members; we may write ``$t \in T$'' to mean that $t$ is a member of $T$. The members of a type are the (closed) terms that have as values the canonical members of the type, so it is enough when specifying the membership of a type to specify its canonical members. Also associated with each type is an equivalence relation on its members called the equality in (or on) that type; we write ``$t = s \in T$'' to mean that $t$ and $s$ are members of $T$ which satisfy equality in $T$. Members of a type are equal (in that type) if and only if their values are equal (in that type).

There is also an equivalence relation $T = S$ on types called type equality. Two types are equal if and only if they evaluate to equal types. Although equal types have the same membership and equality, in Nuprl some unequal types also have the same membership and equality.

We shall want to have simultaneous substitution of terms, perhaps containing free variables, for free variables. The result of such a substitution is indicated thus:

$t[t_1,\dots, t_n/x_1,\dots,x_n]$,
where $0\le n$, $x_1,\dots,x_n$ are variables, and $t_1,\dots,t_n$ are the terms substituted for them in $t$.

What follows describes inductively the type terms in Nuprl and their canonical members. We use typewriter font to signify actual Nuprl syntax. The integers are the canonical members of the type int. There are denumerably many atom constants (written as character strings enclosed in quotes) which are the canonical members of the type atom. The type void is empty.

The type $A$|$B$ is a disjoint union of types $A$ and $B$. The terms inl($a$) and inr($b$) are canonical members of $A$|$B$ so long as $a\in A$ and $b\in B$. (The operator names inl and inr are mnemonic for ``inject left'' and ``inject right''.)

The canonical members of the cartesian product type $A$#$B$ are the terms <$a$,$b$> with $a\in A$ and $b\in B$. If $x$:$A$#$B$ is a type then $A$ is closed (all types are closed) and only $x$ is free in $B$. The canonical members of a type $x$:$A$#$B$ (``dependent product'') are the terms <$a$,$b$> with $a\in A$ and $b\in B[a/x]$. Note that the type from which the second component is selected may depend on the first component. The occurrences of $x$ in $B$ become bound in $x$:$A$#$B$. Any free variables of $A$, however, remain free in $x$:$A$#$B$. The $x$ in front of the colon is also bound, and indeed it is this position in the term which determines which variable in $B$ becomes bound.

The canonical members of the type $A$ list represent lists of members of $A$. The empty list is represented by nil, while a nonempty list with head $a$ and tail $b$ is represented by $a$.$b$, where $b$ evaluates to a member of the type $A$ list.
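These canonical forms can be mimicked with ordinary data (hypothetical Python stand-ins, not Nuprl syntax): tagged values for the disjoint union, pairs for products (where in a dependent product the shape of the second component may depend on the first), and nested pairs for lists.

```python
# Disjoint union A|B: inject left / inject right as tagged values.
inl = lambda a: ("inl", a)
inr = lambda b: ("inr", b)
assert inl(3)[0] == "inl" and inr("x")[0] == "inr"

# Dependent product x:int # (list of length x): the second component's
# "type" depends on the value of the first component.
dep_pair = (3, [0, 0, 0])
assert len(dep_pair[1]) == dep_pair[0]

# A list: nil and cons a.b, modeled as nested pairs.
nil = ()
cons = lambda a, b: (a, b)
assert cons(1, cons(2, nil)) == (1, (2, ()))
```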

A term of the form $t$($a$) is called an application of $t$ to $a$, and $a$ is called its argument. The members of type $A$->$B$ are called functions, and each canonical member is a lambda term, \$x$.$b$, whose application to any member of $A$ is a member of $B$. The canonical members of a type $x$:$A$->$B$, also called functions, are lambda terms whose applications to any member $a$ of $A$ are members of $B[a/x]$. In the term $x$:$A$->$B$ the occurrences of $x$ free in $B$ become bound, as does the $x$ in front of the colon. For these function types it is required that applications of a member to equal members of $A$ be equal in the appropriate type.

The significance of some constructors derives from the representation of propositions as types, where the proposition represented by a type is true if and only if the type is inhabited. The term $a$<$b$ is a type if $a$ and $b$ are members of int, and it is inhabited if and only if the value of $a$ is less than the value of $b$. The term ($a$=$b$ in $A$) is a type if $a$ and $b$ are members of $A$, and it is inhabited if and only if $a=b\in A$. The term ($a$=$a$ in $A$) is also written ($a$ in $A$); this term is a type and is inhabited if and only if $a\in A$.

Types of form {$A$|$B$} or {$x$:$A$|$B$} are called set types. The set constructor provides a device for specifying subtypes; for example, {x:int|0<x} has just the positive integers as canonical members. The type {$A$|$B$} is inhabited if and only if the types $A$ and $B$ are, and if it is inhabited it has the same membership as $A$. The members of a type {$x$:$A$|$B$} are the members $a$ of $A$ such that $B[a/x]$ is inhabited. In {$x$:$A$|$B$}, the $x$ before the colon and the free $x$'s of $B$ become bound.

Terms of the form $A$//$B$ and ($x$,$y$):$A$//$B$ are called quotient types. $A$//$B$ is a type only if $B$ is inhabited, in which case $a=a' \in A//B$ exactly when $a$ and $a'$ are members of $A$. Now consider ($x$,$y$):$A$//$B$. This term denotes a type exactly when $A$ is a type, $B[a,a'/x,y]$ is a type for $a$ and $a'$ in $A$, and the relation $\exists b.b \in B[a,a'/x,y]$ is an equivalence relation over $A$ in $a$ and $a'$. If ($x$,$y$):$A$//$B$ is a type then its members are the members of $A$; the difference between this type and $A$ only arises in the equality between elements. Briefly, $a=a' \in x,y:A//B$ if and only if $a$ and $a'$ are members of $A$ and $B[a,a'/x,y]$ is inhabited. In ($x,y$):$A$//$B$ the $x$ and $y$ before the colon and the free occurrences of $x$ and $y$ in $B$ become bound.

Now consider equality on the types already discussed. Members of int are equal (in int) if and only if they have the same value. The same goes for type atom. Canonical members of $A$|$B$, $A$#$B$, $x$:$A$#$B$ and $A$ list are equal if and only if they have the same outermost operator and their corresponding immediate subterms are equal (in the corresponding types). Members of $A$->$B$ or $x$:$A$->$B$ are equal if and only if their applications to any member $a$ of $A$ are equal in $B[a/x]$. The types $a$<$b$ and ($a$=$b$ in $A$) have at most one canonical member, namely axiom, so equality is trivial. Equality in {$x$:$A$|$B$} is just the restriction of equality in $A$ to {$x$:$A$|$B$}, as is the equality for {$A$|$B$}.

Now consider the so-called universes, U$k$ ($k$ positive). The members of U$k$ are types. The universes are cumulative; that is, if $j$ is less than $k$ then membership and equality in U$j$ are just restrictions of membership and equality in U$k$. U$k$ is closed under all the type-forming operations except formation of U$i$ for $i$ greater than or equal to $k$. Equality in U$k$ is the restriction of type equality to members of U$k$.

With the type theory in hand we now turn to the Nuprl proof theory. The assertions that one tries to prove in the Nuprl system are called judgements. They have the form

$\mbox{\tt$x_1$:$T_1$,$\dots$,$x_n$:$T_n$ » $S$ [ext $s$]}$,
where $x_1,\dots,x_n$ are distinct variables and $T_1,\dots,T_n,S$ and $s$ are terms ($n$ may be $0$), every free variable of $T_i$ is one of $x_1,\dots,x_{i-1}$, and every free variable of $S$ or of $s$ is one of $x_1,\dots,x_n$. The list \( \mbox{\tt$x_1$:$T_1$,\dots,$x_n$:$T_n$} \) is called the hypothesis list or assumption list, each $x_i$:$T_i$ is called a declaration (of $x_i$), each $T_i$ is called a hypothesis or assumption, $S$ is called the consequent or conclusion, $s$ the extract term (the reason will be seen later), and the whole thing is called a sequent.

The criterion for a judgement being true is to be found in the complete introduction to the semantics.2.6 Here we shall say a judgement
\( \mbox{\tt$x_1$:$T_1$,$\dots$,$x_n$:$T_n$ » $S$ [ext $s$]} \) is almost true if and only if
\( \forall t_1,\dots,t_n.\;\; s[t_1,\dots,t_n/x_1,\dots,x_n]
\in S[t_1,\dots,t_n/x_1,\dots,x_n] \)
whenever \(\forall i < n.\;\; t_{i+1}[t_1,\dots,t_i/x_1,\dots,x_i]\in
T_{i+1}[t_1,\dots,t_i/x_1,\dots,x_i] \).
That is, a sequent like the one above is almost true exactly when substituting terms $t_i$ of type $T_i$ (where $t_i$ and $T_i$ may depend on $t_j$ and $T_j$ for $j<i$) for the corresponding free variables in $s$ and $S$ results in a true membership relation between $s$ and $S$.

It is not always necessary to declare a variable with every hypothesis in a hypothesis list. If a declared variable does not occur free in the conclusion, the extract term or any hypothesis, then the variable (and the colon following it) may be omitted.

In Nuprl it is not possible for the user to enter a complete sequent directly; the extract term must be omitted. In fact, a sequent is never displayed with its extract term. The system has been designed so that upon completion of a proof, the system automatically provides, or extracts, the extract term. This is because in the standard mode of use the user tries to prove that a certain type is inhabited without regard to the identity of any member. In this mode the user thinks of the type (that is to be shown inhabited) as a proposition and assumes that it is merely the truth of this proposition that the user wants to show. When one does wish to show explicitly that $a=b\in A$ or that $a\in A$, one instead shows the type ($a$ = $b$ in $A$) or the type ($a$ in $A$) to be inhabited.2.7

The system can often extract a term from an incomplete proof when the extraction is independent of the extract terms of any unproven claims within the proof body. Of course, such unproven claims may still contribute to the truth of the proof's main claim. For example, it is possible to provide an incomplete proof of the untrue sequent » 1<1 [ext axiom], the extract term axiom being provided automatically.

Although the term extracted from a proof of a sequent is not displayed in the sequent, the term is accessible by other means through the name assigned to the proof in the user's library. In the current system proofs named in the user's library cannot be proofs of sequents with hypotheses.

Relationship to Set Theory

Type theory is similar to set theory in many ways, and one who is unfamiliar with the subject may not see readily how to distinguish the two. This section is intended to help. A type is like a set in these respects: it has elements, there are subtypes, and we can form products, unions and function spaces of types. A type is unlike a set in many ways too; for instance, two sets are considered equal exactly when they have the same elements, whereas in Nuprl types are equal only when they have the same structure. For example, void and {x:int|x<x} are both types with no members, but they are not equal.

The major differences between type theory and set theory emerge at a global level. That is, one cannot say much about the difference between the type int of integers and the set Z of integers, but one can notice that in type theory the concept of equality is given with each type, so we write x=y in int and x=y in int#atom. In set theory, on the other hand, equality is an absolute concept defined once for all sets. Moreover, set theory can be organized so that all objects of the theory are sets, while type theory requires certain primitive elements, such as individual integers, pairs, and functions, which are not types. Another major global difference between the theories concerns the method for building large types and large sets. In set theory one can use the union and power set axioms to build progressively larger sets. In fact, given any indexed family of sets, $\{ S(x)\mid x\in A \}$, the union of these sets exists. In type theory there are no union and power type operators. Given a family of types S(x) indexed by A, they can be put together into a disjoint union, x:A#S(x), or into a product, x:A->S(x), but there is no way to collect only the members of the S(x). Large unstructured collections of types can be obtained only from the universes, U1,U2,... .

Another global difference between the two theories is that set theory typically allows so-called impredicative set formation in that a set can be defined in terms of a collection which contains the set being defined. For instance, the subgroup $H$ of a group $G$ generated by elements $h_1,\ldots,h_n$ is often defined to be the least among all subgroups of $G$ containing the $h_i$. However, this definition requires quantifying over a collection containing the set being defined. The type theory presented here depends on no such impredicative concepts.

For set theories, such as Myhill's CST [Myhill 75], which do not employ impredicative concepts, Peter Aczel [Aczel 77,Aczel 78] has shown a method of defining such theories in a type theory similar to Nuprl.

Both type theory and set theory can play the role of a foundational theory. That is, the concepts used in these theories are fundamental. They can be taken as irreducible primitive ideas which are explained by a mixture of intuition and appeal to defining rules. The view of the world one gets from inside each theory is quite distinct. It seems to us that the view from type theory places more of the concepts of computer science in sharp focus and proper context than does the view from set theory.

Relationship to Programming Languages

In many ways the formalism presented here will resemble a functional programming language with a rich type structure. The functions of Nuprl are denoted by lambda expressions, written $\backslash$x.t, and correspond to programs. The function terms do not carry any type information, and they are evaluated without regard to types. This is the evaluation style of ML [Gordon, Milner, & Wadsworth 79], and it contrasts with a style in which some type correctness is checked at runtime (as in PL/I). The programs of Nuprl are rather simple in comparison to those of modern production languages; there is no concurrency, and there are few mechanisms to optimize the evaluation (such as alternative parameter passing mechanisms, pointer allocation schemes, etc.).

The type structure of Nuprl is much richer than that of any programming language; for example, no such language offers dependent products, sets, quotients and universes. On the other hand, many of the types and type constructors familiar from languages such as Algol 68, Simula 67, Pascal and Ada are available in some form in Nuprl. We discuss this briefly below.

A typical programming language will have among its primitive types the integers, int, booleans, bool, characters, char, and real numbers (of finite precision), real. In Nuprl the type of integers, int, is provided; the booleans can be defined using the set type as {x:int | x=0 in int or x=1 in int}, the characters are given by atom, and various kinds of real numbers can be defined (including infinite precision), although no built-in finite precision real type is as yet provided.

Many programming languages provide ways to build tuples of values. In Algol the constructor is the structure; in Pascal and Ada it is the record and has the form ${\tt RECORD}\ x:A,\ y:B\ {\tt END}$ for the product of types $A$ and $B$. In Nuprl such a product would be written A#B just as it would be in ML.

In Pascal the variant record has the following form.

        CASE kind:(RECT,TRI,CIRC) OF
            RECT:(w,h:real);
            TRI :(x,y,a:real);
            CIRC:(r:real)
The elements of this type are either pairs, triples or quadruples, depending on the first entry. If the first entry is RECT then there are two more components, both reals. If the first entry is CIRC then there is only one other component, which is a real; if it is TRI then there are three real components. One might consider this type a discriminated union rather than a variant record. In any case, in Nuprl it is defined as an extension of the product operator which we call a dependent product. If real denotes the type of finite precision reals, and if the Pascal type (RECT,CIRC,TRI) is represented by the type consisting of the three atoms "RECT","CIRC" and "TRI", and if the function F is defined as
        F("RECT") = real#real
        F("CIRC") = real
        F("TRI")  = real#real#real
then the following type (writing Kind for the three-atom type above) represents the variant record.

        x:Kind#F(x)

In Nuprl, as in Algol 68, it is possible to form directly the disjoint union, written $A$|$B$, of two types $A$ and $B$. This constructor could also be used to define the variant record above as real#real|(real|real#real#real).
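The dependent-product reading of the variant record can be sketched as follows (a hypothetical Python encoding; F records how many real components each kind carries, mirroring the definition above):

```python
# A variant-record value as a dependent pair <kind, data>, where the
# shape of data depends on kind through F (hypothetical encoding).
F = {"RECT": 2, "CIRC": 1, "TRI": 3}   # number of real components per kind

def make(kind, *data):
    assert len(data) == F[kind]        # the data must match the kind's shape
    return (kind, data)

r = make("RECT", 3.0, 4.0)             # a rectangle: two reals
c = make("CIRC", 1.5)                  # a circle: one real
assert r[0] == "RECT" and len(r[1]) == F[r[0]]
assert c[0] == "CIRC" and len(c[1]) == 1
```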

One of the major differences between Nuprl types and those of most programming languages is that the type of functions from $A$ to $B$, written $A$->$B$, denotes exactly the total functions. That is, for every input $a$ of type $A$, a function in $A$->$B$ must produce a value in $B$. In Algol the type of functions from A to B, say PROC(x:A)B, includes those procedures which may not be well-defined on all inputs of A; that is, they may diverge on some inputs.

In contrast to the usual state of affairs with programming languages, the semantics of Nuprl ``programs'' is completely formal. There are rules to settle such issues as when two types or programs are equal, when one type is a subtype of another, when a ``program'' is correctly typed, etc. There are also rules for showing that ``programs'' meet their specifications. Thus Nuprl is related to programming languages in many of the ways that a programming logic or program verification system is.


2.1 Actually $A:U_1$ reads as ``$A$ is a type in universe $U_1$.'' We discuss universes later.
2.2 In full Nuprl the distinction between types and other objects is dropped.
2.3 In Nuprl the goal $ A:U1 \;\mbox{\tt >>}\;(\backslash x.x) \;in\;(A \!\rightarrow\! A)$ would be expressed initially as $ \;\mbox{\tt >>}\;A:U1\!\rightarrow\! ((\backslash x.x) \;in\;(A \!\rightarrow\! A))$. A rule of introduction would then create the context $ A:U1$.
2.4 A better term to use here might be the token ``yes'', which can be thought of as a summary of the proof. The term $axiom$ suggests that the facts are somehow basic or atomic, but in fact they may require considerable work to prove.
2.5 In chapter 12 we introduce a concept of partial function that will allow us to define $h:F_1 \!\rightarrow\! int$ such that $f(h(f))=0 \;in\;int$ using an unbounded search. The search is guaranteed to terminate because of the information about $f$.
2.6 Section 8.1, page [*].
2.7 Recall that the term ($a$ = $b$ in $A$) is a type whenever $a\in A$ and $b\in A$ and is inhabited just when $a=b\in A$. As a special case the term ($a$ in $A$), which is shorthand for ($a$ = $a$ in $A$), is a type and is inhabited just when $a\in A$.