TT ⇔ CT: categorical semantics; types correspond to objects and well-typed terms (WTT) to morphisms.
PT ⇔ CT: categorical logic; an algebraic way of thinking about logic and proofs.
PT ⇔ TT: thinking of propositions as types and well-typed terms as proofs.
When you are learning a new idea in one field, it is helpful to transpose it to the other two fields and seek its correspondence there.
Judgements. We start with three kinds of judgements:
$A\ type$ ─ $A$ is a type.
$a : A$ ─ $a$ is a value of type $A.$
$a_1 \equiv a_2 : A$ ─ $a_1$ and $a_2$ are definitionally equivalent values of type $A.$
A judgement of the form $x_1:A_1,...,x_n:A_n \vdash a(x_1,...,x_n):A$ is called a hypothetical judgement. One may regard such a judgement as a mapping from $A_1,...,A_n$ to $A.$ The term $a(x_1,...,x_n)$ is called an open term: it may involve the free variables $x_1,...,x_n.$ In practice, we often regard the open term as parametrized by $x_1,...,x_n$; a closed term of type $A$ is obtained once values for $x_1,...,x_n$ are plugged in.
Definitional equality is 1) an equivalence relation, 2) a congruence (see the typing rules), and 3) respected by maps. That is, $$\frac{\Gamma, x:A \vdash b:B \quad \Gamma \vdash a_1\equiv a_2: A}{\Gamma \vdash [a_1/x]b \equiv [a_2/x]b : B}.$$The last feature is called the principle of functionality. This principle and its related concepts will play a central role later, and in Homotopy Type Theory.
Definitional equality has its limits: it corresponds to calculating by simplification. For example, with addition defined by recursion on its first argument, $0+x \equiv x$ holds by computation, but $x+0 \equiv x$ cannot be derived definitionally even given all the arithmetic rules about natural numbers. There are equations that you wish to hold but that cannot be reached by simplification alone. These equations need not just calculation but proof, and thus cannot hold definitionally.
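This limit can be made concrete with a minimal sketch (hypothetical names, not a real proof assistant): natural numbers as syntax trees, addition by recursion on the first argument, and definitional equality checked by comparing normal forms. Closed sums compute all the way down, but the open term $x+0$ is stuck, so $x+0 \equiv x$ is not derivable by simplification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Zero: pass

@dataclass(frozen=True)
class Succ:
    pred: object

@dataclass(frozen=True)
class Var:                 # a free variable: blocks computation
    name: str

@dataclass(frozen=True)
class Add:
    left: object
    right: object

def normalize(t):
    """Simplify by the computation rules: 0 + n ~> n, S(m) + n ~> S(m + n)."""
    if isinstance(t, Add):
        left = normalize(t.left)
        if isinstance(left, Zero):
            return normalize(t.right)
        if isinstance(left, Succ):
            return Succ(normalize(Add(left.pred, t.right)))
        return Add(left, normalize(t.right))   # stuck on a variable
    return t

def defeq(a, b):
    """Definitional equality: identical normal forms."""
    return normalize(a) == normalize(b)

one = Succ(Zero())
assert defeq(Add(Zero(), one), one)                # 0 + 1 simplifies to 1
assert not defeq(Add(Var("x"), Zero()), Var("x"))  # x + 0 = x needs a proof
```

The second assertion is the point: the checker is not wrong, it simply cannot reach $x+0 \equiv x$ by calculation; that equation requires induction.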
Typing rules. We have the following kinds of rules to reason about types. Rules 1-5 come from Proof Theory and Rule 6 comes from Category Theory.
1. Formation rules: how to construct a type.
2. Introduction rules: how to create elements of a type.
3. Elimination rules: how to use elements of a type.
4. Congruence rules: how to interact with definitional equivalence.
5. Computation rules: how to calculate with types.
6. Uniqueness/Universality rules: characterize types precisely.
Negative types. E.g., Unit, product, and function. Elimination-oriented. Uniqueness of product: $\Gamma \vdash \langle fst(c), snd(c) \rangle \equiv c : A\times B.$ Computation of function is substitution. Elimination of function is application. Extensionality of functions can be written as $$\frac{\Gamma, x:A \vdash a_1(x)\equiv a_2(x):B}{\Gamma \vdash a_1\equiv a_2: A\rightarrow B}$$This means that functions are characterized by their input-output behaviors.
Positive types. E.g., Void and sum. Introduction-oriented. The uniqueness property of positive types is problematic, and cannot be shown as directly as for the negative types. Elimination rule for the Void type: $$\frac{\Gamma\vdash a:\emptyset}{\Gamma\vdash abort(a): A}$$This means that from a value of the Void type you can abort the program at any type. Note that one can never actually build a term of type $A$ in this way, since there is no closed term of type $\emptyset.$ One application of this rule: if you are in a conditional whose branches must agree in type, and one branch is contradictory, you can place an $abort$ element of type $A$ there so that both branches have the same type.
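The abort rule can be sketched in Python (illustrative encodings of my own: `Void` with no constructors, sums as `Inl`/`Inr` wrappers, and an `absurd` helper playing the role of $abort$). The contradictory branch typechecks at any result type precisely because it can never run.

```python
from dataclasses import dataclass
from typing import TypeVar

A = TypeVar("A")

class Void:
    """The empty type: no constructors, so no value of Void exists."""
    def __init__(self):
        raise TypeError("Void is uninhabited")

def absurd(v: Void) -> A:
    # abort(a): from an (impossible) value of Void, produce any type at all.
    raise AssertionError("unreachable: no closed term inhabits Void")

@dataclass
class Inl:
    value: object

@dataclass
class Inr:
    value: object

def from_right(s: "Inl | Inr") -> int:
    """Either[Void, int] -> int: the Inl branch is contradictory, and absurd
    gives it the same result type (int) as the Inr branch."""
    if isinstance(s, Inl):
        return absurd(s.value)     # never executed, but well-typed
    return s.value

assert from_right(Inr(7)) == 7
```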
Negative types vs. positive types. Approximately, the two kinds differ most noticeably in how they behave under elimination. Reasoning about a negative type has nothing to do with its structure. For example, a product is characterized by its behavior under $\texttt{fst}$ and $\texttt{snd}$, without caring about what it actually is: a product can be realized by an ordered pair, or by a black box that computes the $\texttt{fst}$ and $\texttt{snd}$ components on demand. Also, there is no induction principle for negative types. For example, all you know about a function is that it is a lambda term, so you cannot induct on its structure via the typing rules. In contrast, the structure of a positive type is enforced in the theory. The elimination form of positive types works by case analysis─the actual element in a sum is extracted by $\texttt{case}$ in the process of inference, and thus cannot be anything else after the inference.
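The contrast can be sketched as follows (hypothetical names of my own): a "negative" pair is given only by what `fst`/`snd` return, computed on demand with no stored pair anywhere, while a "positive" sum carries its tag as data and is consumed by case analysis on that tag.

```python
class LazyPair:
    """Negative: characterized by its fst/snd behavior, not by its structure."""
    def __init__(self, mk_fst, mk_snd):
        self._mk_fst, self._mk_snd = mk_fst, mk_snd
    def fst(self):
        return self._mk_fst()      # component computed on demand
    def snd(self):
        return self._mk_snd()

def case(s, on_inl, on_inr):
    """Positive: elimination inspects the tag, so the structure is enforced."""
    tag, payload = s
    return on_inl(payload) if tag == "inl" else on_inr(payload)

p = LazyPair(lambda: 1 + 1, lambda: "hi")   # a black box, not an ordered pair
assert (p.fst(), p.snd()) == (2, "hi")
assert case(("inl", 3), lambda x: x * 10, lambda x: -x) == 30
```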
A correspondence between STT and IPT. $Unit$ is $true.$ $Void$ is $false.$ Sum is disjunction. Product is conjunction. Function is implication. $A\ type$ is $A\ prop.$ $\exists e: A$ is $A\ true.$ $\exists b.\, a:A \vdash b:B$ is $A\le B.$ Thus $A\le B$ means that there is a function transforming a proof of $A$ into a proof of $B.$ Note that $\le$ is a pre-order; that is, it is reflexive and transitive. We use $A\supset B$ to denote the largest element $x$ satisfying $A\wedge x \le B$; i.e., $C \wedge A \le B$ implies $C \le A \supset B.$ If an ordered structure satisfies 1) $\forall A: A\le \top$; 2) $\forall A: \bot \le A$; 3) $A\wedge B$ (meet) is a glb of $A,\,B$; 4) $A\vee B$ (join) is a lub of $A,\,B$; 5) $\wedge$ is distributive over $\vee$; 6) $A \supset B$ exists for every $A,B$, then it is called a Heyting pre-algebra. (It is called a pre-algebra because $\le$ need not be antisymmetric.)
Negations. $\neg A$ is defined as $A \supset \bot$; thus from a value of $A$ we can get a contradiction. The fact $A\le \neg\neg A$ follows directly from the properties of $\supset.$ Its converse $\neg\neg A \le A$, however, does not hold in general (it takes some effort to check this). Similarly, while $A \vee \neg A \le \top$ is trivial, $\top \le A \vee \neg A$ cannot be proved in general without knowing something about $A.$
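A small finite counterexample makes both failures concrete. The sketch below encodes the three-element chain $0 \le \tfrac{1}{2} \le 1$ as $\{0, 1, 2\}$ and defines the implication by the Heyting adjunction: $y^x$ is the largest $z$ with $x \wedge z \le y.$ At the middle element, double negation overshoots and excluded middle falls short.

```python
H = [0, 1, 2]                      # the chain: 0 = bottom, 2 = top

def meet(x, y): return min(x, y)
def join(x, y): return max(x, y)

def imp(x, y):
    """y^x: the largest z such that meet(x, z) <= y."""
    return max(z for z in H if meet(x, z) <= y)

def neg(x):
    return imp(x, 0)               # negation is x ⊃ ⊥

half = 1                           # the middle element, "1/2"
assert neg(half) == 0              # ¬(1/2) = ⊥
assert neg(neg(half)) == 2         # ¬¬(1/2) = ⊤, so ¬¬A ≤ A fails
assert join(half, neg(half)) == 1  # (1/2) ∨ ¬(1/2) ≠ ⊤: excluded middle fails
```

This three-element chain is the smallest Heyting algebra that is not Boolean, which is why it is a convenient test case.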
By definition, $\neg A$ is the largest element inconsistent with $A.$ That is, $A\wedge\neg A \le \bot$, and $B\wedge\neg A \le \bot \implies B\le \neg A.$ The complement $\overline{A}$ of $A$ is defined as the smallest element $B$ such that $\top\le A\vee B.$ Hence $\top\le A\vee B$ implies $\overline{A}\le B.$ Note that i) $\neg A$ always exists while $\overline{A}$ need not; ii) it is reasonable to add the largest/smallest requirement to the definitions, for otherwise we could trivially choose $\neg A=\bot$ and $\overline{A}=\top$; iii) it is easy to check $\neg A \le \overline{A}$ (things in $\neg A$ are definitely not in $A$). The converse $\overline{A} \le \neg A$, however, does not hold in general (the negation does not fill up the whole space).
A Boolean algebra is also called a complemented Heyting algebra, in that $\overline{A}$ exists for every $A$, with i) $\top \le A\vee \neg A$ (which implies $A\vee \neg A=A\vee \overline{A}$), and ii) $\overline{\overline{A}} \le A$ (note that $A \le \overline{\overline{A}}$ follows by definition). Heyting pre-algebras generalize Boolean pre-algebras in the sense that a Heyting pre-algebra satisfying $\top \le A \vee \neg A$ (the law of the excluded middle), or equivalently $\neg\neg A \le A$ (the law of double negation), is a Boolean pre-algebra.
Heyting algebras. In a Boolean algebra, an exponential $y^x$ can be defined as $\overline{x}\vee y.$ In general, a Heyting algebra can be defined as a lattice with exponentials: $H$ is called a Heyting algebra if i) it is partially ordered; ii) it has finite meets and joins; and iii) for every $x, y$ the exponential $y^x$ exists, namely the largest element $z$ such that $x \wedge z \le y.$
Exercise. Show that every Heyting algebra is distributive.
We give hints for proving $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z).$ First, note that $z \wedge x \le y$ iff $z \le y^x.$ Hence, to prove the direction $x \wedge (y \vee z) \le (x \wedge y) \vee (x \wedge z)$, it suffices to show $y \vee z \le ((x \wedge y) \vee (x \wedge z))^x$, which amounts to showing that $u \le ((x \wedge y) \vee (x \wedge z))^x$ for $u=y, z.$ However, $u \le ((x \wedge y) \vee (x \wedge z))^x$ iff $u \wedge x \le (x \wedge y) \vee (x \wedge z).$ For $u=y, z$ this follows by definition. As to the other direction, note that $x \le y$ iff $\forall z.\,(y\le z$ $\implies x \le z).$ Hence, to prove $(x \wedge y) \vee (x \wedge z) \le x \wedge (y \vee z)$, it suffices to show that $(x \wedge y) \vee (x \wedge z) \le u$ implies $x \wedge (y \vee z) \le u$, which is equivalent to $y \vee z \le u^x.$ Note that $(x \wedge y) \vee (x \wedge z) \le u$ implies both $x \wedge y \le u$ and $x \wedge z \le u.$ Thus we have $y \le u^x$ and $z \le u^x$, that is, $y \vee z \le u^x.$
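On a finite Heyting algebra the identity from the exercise, and the adjunction $z \wedge x \le y$ iff $z \le y^x$ that drives the proof, can also be checked exhaustively. A sketch on the five-element chain (an arbitrary choice; any finite Heyting algebra would do):

```python
from itertools import product

H = range(5)                       # the chain 0 <= 1 <= 2 <= 3 <= 4

def meet(x, y): return min(x, y)
def join(x, y): return max(x, y)

def imp(x, y):
    """The exponential y^x, defined via the adjunction."""
    return max(z for z in H if meet(x, z) <= y)

for x, y, z in product(H, repeat=3):
    # distributivity: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
    assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
    # the adjunction used in the hints: z ∧ x ≤ y  iff  z ≤ y^x
    for w in H:
        assert (meet(w, x) <= y) == (w <= imp(x, y))
```

Of course, an exhaustive check on one finite algebra is evidence, not a proof; the hints above give the general argument.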
As an alternative, you can also use the completeness theorem for IPT. Completeness assures that anything that holds in every Heyting algebra is provable in IPT, and vice versa. Since the preorder is interpreted as entailment in IPT, you can transfer, e.g., $x \wedge (y \vee z) \le (x \wedge y) \vee (x \wedge z)$ to $X \wedge (Y \vee Z)\ true \vdash (X \wedge Y) \vee (X \wedge Z)\ true$, and then reason about the logical connectives to do the proof. You can further add proof terms to obtain an FP problem: $\Gamma, x: A \times (B + C) \vdash M: (A \times B) + (A \times C).$ Resolving this problem amounts to writing a program $M$ of type $(A \times B) + (A \times C)$, and it is not hard to see that $M = \texttt{case}(\texttt{snd}(x);\; z.\,\texttt{inl}(\langle \texttt{fst}(x), z\rangle);\; z.\,\texttt{inr}(\langle \texttt{fst}(x), z\rangle)).$ It is something of a psychological matter for people to transfer problems around the trinity when they are doing proofs.
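The proof term $M$ can be written directly as a program (with illustrative encodings of my own: pairs as tuples, sums as tagged pairs). It sends $A \times (B + C)$ to $(A \times B) + (A \times C)$, mirroring the $\texttt{case}$ on $\texttt{snd}(x)$ above.

```python
def distribute(p):
    a, s = p                       # a = fst(x), s = snd(x)
    tag, payload = s
    if tag == "inl":               # z.inl(<fst(x), z>)
        return ("inl", (a, payload))
    return ("inr", (a, payload))   # z.inr(<fst(x), z>)

assert distribute((True, ("inl", 3))) == ("inl", (True, 3))
assert distribute((True, ("inr", "c"))) == ("inr", (True, "c"))
```

Running the program on both kinds of input is exactly the case analysis that the proof term performs.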