## Abstract

Lean *eliminators* combine the notions of recursion and induction. Put another way, eliminators are used to define functions on an inductive type (such as \(\mathbb N\)). The *motive* describes what kind of function is being defined.

To understand motives, one needs to know about types-dependent-on-terms, a *threshold concept* (in the language of education) that opens doors to the world of type theory.

In this post, I’ll (1) compare threshold concepts in set theory and type theory and (2) scaffold the motive concept in the special case of eliminators on \(\mathbb N\) through the examples of induction, recursive definition of sequences of integers, and recursive definition of sequences of vectors of increasing length.

## Threshold concepts

One of the common sources of confusion for beginning undergraduate mathematicians is the difference between the set membership and subset relations. The problem is pronounced in situations where an object can, in some parts of an argument, act as a set member and, in other parts, as a subset.

A case in point is Cantor’s theorem on the impossibility of a surjection from a set \(A\) to its power set \(\mathcal P (A)\). Cantor’s proof assumes, for a contradiction, that there exists a surjection \(f : A \to \mathcal P(A)\). Let \(B = \{ x \in A : x\notin f(x)\}\). Then \(B\) is *simultaneously* a subset of \(A\) and an element of \(\mathcal P(A)\). As \(f\) is surjective, there exists \(c \in A\) such that \(f(c) = B\).

The contradiction arises by considering each of the cases \(c \in B\) and \(c \notin B\). Understanding *what is meant* by \(c \in B\), \(c \notin B\) and \(f(c) = B\) is crucial to appreciating Cantor’s proof.
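For readers who would like to see this argument formally, here is one way the proof might be rendered in Lean (a sketch of mine, not part of the original argument; the theorem name `cantor` and the use of `set A` for \(\mathcal P(A)\) are my choices, and it assumes `import tactic`):

```
import tactic

-- A sketch of Cantor's theorem: no function `f : A → set A` is surjective.
theorem cantor {A : Type*} (f : A → set A) : ¬ function.surjective f :=
begin
  intro hf,
  -- `B = {x | x ∉ f x}` is a subset of `A` and an element of `set A`
  obtain ⟨c, hc⟩ := hf {x | x ∉ f x},
  by_cases h : c ∈ f c,
  { have h' : c ∉ f c, { rw hc at h, exact h },
    exact h' h },
  { have h' : c ∈ f c, { rw hc, exact h },
    exact h h' },
end
```

The two `by_cases` branches correspond exactly to the cases \(c \in B\) and \(c \notin B\) discussed above.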

For a teacher, the challenge is that the set concept is so deeply embedded that it is almost impossible to understand the source of the student’s confusion.

> All things are difficult before they are easy

(variously attributed to Saadi Shirazi and Thomas Fuller)

This skill of unpacking and using set-theoretic definitions is an example of a *threshold concept* (Meyer and Land, 2003):

“A threshold concept can be considered as akin to a portal, opening up a new and previously inaccessible way of thinking about something. It represents a transformed way of understanding, or interpreting, or viewing something without which the learner cannot progress.”

Once the set concept is mastered, many proofs in undergraduate mathematics become a matter of unfolding definitions. Think about how this relates to the task of proving that if \(N\) and \(K\) are normal subgroups of \(G\), then so is \(NK\). I’ve seen students fall over this when their concepts of *group* and *normal subgroup* are founded not in (set-theoretic) definitions but in the manipulation of examples.

Mastery of a threshold concept is *irreversible*: “the change of perspective occasioned by acquisition of a threshold concept is unlikely to be forgotten, or will be unlearned only by considerable effort” (ibid.).

Learning to work with Lean confronted me with threshold concepts in type theory. Grappling with these concepts gave me a renewed appreciation of the analogous challenges facing students new to set theory.

In type theory, a term can simultaneously *have* a type and *be* a type. In most situations, only one of these aspects is significant. As in the proof of Cantor’s theorem, difficulties can arise in an argument where both aspects are relevant for the same object. In our later discussion of motives, we’ll view types dependent on terms through both these lenses.

### Types dependent on terms

Dependent type theory abounds with *types dependent on terms*, as described in Nederpelt and Geuvers (2014), Chapter 5. For data-driven examples of such objects, see my post on vectors and dependent function types. Types dependent on terms also arise naturally through the *propositions-as-types* paradigm. Consider, for example, the predicate \(P\) on \(\mathbb N\) defined by \(P(n) := 2n > 10\).

Type theoretically, \(P : \mathbb N \to \mathsf{Prop}\). That is, \(P\) is a function that takes a natural number and returns a proposition. In particular, for each natural number \(n\), the term \(P(n)\) *has* type \(\mathsf{Prop}\), written \(P(n) : \mathsf{Prop}\).

Moreover, in the proposition-as-types paradigm, \(P(n)\) *is* a type: the type of all proofs of \(P(n)\). Equally, to say \(h : P(n)\) is to say that the term \(h\) is a proof of \(P(n)\). This is the *proofs-as-terms* paradigm.

These ideas are made concrete in Lean:

```
def P (n : ℕ) : Prop := 2 * n > 10
example : Prop := P 6
example : P 6 := show 2 * 6 > 10, from dec_trivial
```

The first line above defines the predicate `P`, the second shows `P 6 : Prop`, i.e. that `P 6` has type `Prop`, equally that `P 6` is a proposition. The third line exhibits a term of type `P 6` which, by the proofs-as-terms paradigm, is a proof of `P 6`. Here, we employ a proof by `dec_trivial`, such proofs being the subject of a previous post on decidable propositions.

In practice, there is never (or rarely) any confusion as to whether `P n` is to be treated as a term or as a type. The Lean user can happily prove results ignorant of the technical distinction.

This should not come as a surprise. An English language speaker doesn’t need to understand the terms ‘subject’ and ‘object’ to understand the semantic difference between the uses of ‘dog’ in ‘the dog ate the fish’ and ‘the fish ate the dog’.

Moreover, it is questionable whether knowing the subject-object distinction adds any value in conversations on canine-piscine diets. Grammatical frameworks are more useful for the study *of* language, foreign language acquisition, and natural language processing.

In both the type-theoretic and English-language contexts, the distinctions mentioned above are threshold concepts. Mastering them opens up previously inaccessible knowledge.

It is worth acknowledging that mastery is rarely an instantaneous affair. Reading the definitions of ‘subject’ and ‘object’ and working with simple examples does not instil in a person the ability to identify the subject and object in any given circumstance. What is the subject of the previous sentence? What is the subject of the previous sentence?

The same holds for mathematical concepts. Being able to repeat a definition does not evidence mastery. Rather, like Tamino and Pamina in *The Magic Flute*, the mathematician’s understanding of a concept is tested and shaped by applying it in ever more strenuous ordeals.

Only when I had digested the types-dependent-on-terms concept was I able to understand the notion of motive as it appears in Chapter 7 (Inductive Types) of *Theorem Proving in Lean* (Avigad, de Moura, & Kong, 2021).

This notion underlies Lean’s primitives for dealing with recursion and induction. Of course, one *can* do recursion and induction in Lean without understanding motives. First, Lean + mathlib provide user-friendly interfaces such as pattern matching and the `induction` and `cases` tactics. Second, one can get by (as one does in mathematics and English language) by working from existing examples rather than having a ‘grammatical’ understanding.

Nevertheless, understanding motives provides a deeper level of understanding and permits one to wield confidently Lean’s eliminators.

In the remainder of this post, we’ll scaffold the notions of *types dependent on terms*, *induction*, *recursion*, *eliminators*, and *motives* in Lean, starting first with the familiar notion of induction on \(\mathbb N\).

## Informal induction on nat

Recall first the ordinary idea of induction on \(\mathbb N\). Informally, to prove that a statement \(P(n)\) holds for every natural number \(n\), it suffices to prove (0) that \(P(0)\) is true and (1) that for every natural number \(k\), if \(P(k)\) is true, then \(P(k+1)\) is true. For this reason, induction is likened to the domino effect. If (0) one can knock down domino \(0\) and (1) for every \(k\), knocking down domino \(k\) enables one to knock down domino \(k+1\), then one can knock down every domino.

Symbolically, given a predicate \(P\) on \(\mathbb N\), to prove \(\forall n, P(n)\) is to prove (0) \(P (0)\) and (1) \(\forall k, P(k) \to P(S(k))\).

Here, \(S\) is the ‘successor’ function. One should think of \(S(b)\) as the number after \(b\).

The assertion (0) \(P(0)\) is called the *base case*. Assertion (1) is the *induction step* and, when applying (1), the assertion \(P(k)\) is called the *induction hypothesis*.

As an example, we give an informal proof that \(\forall n, 0 + n = n\). The starting point is a recursive definition of addition. For every \(a\), we define \(a + 0 = a\). For every \(a\) and every \(b\), we define \(a + S(b) = S(a+b)\):

$$\begin{align*}
a + 0 &= a, \\
a + S (b) &= S(a + b).
\end{align*}$$

Let the predicate \(P\) be defined so that \(P(n)\) means \(0 + n = n\). By induction, to prove our goal, it suffices to prove \(P(0)\) and \(\forall k, P(k) \to P(S(k))\).

The base case, \(P(0)\), states \(0 + 0 = 0\), which is precisely our first law of addition, applied with \(a = 0\).

To prove \(\forall k, P(k) \to P(S(k))\), we assume \(k \in \mathbb N\), assume the induction hypothesis \(h : P(k)\), and seek to prove \(P(S(k))\). Now \(h : 0 + k = k\) and our goal is to prove \(0 + S(k) = S(k)\). By the second law of addition, the goal is rewritten as \(S(0 + k) = S(k)\). Rewriting with the induction hypothesis, the goal is \(S(k) = S(k)\), which is true by reflexivity.

## Formal induction on nat

The informal notion of induction is captured in Lean by the following definition. The attribute `[elab_as_eliminator]` here is needed so that Lean can properly infer the motive.

```
@[elab_as_eliminator, reducible]
def nat.ind (P : ℕ → Prop) (n : ℕ) (h₀ : P 0)
  (h₁ : ∀ (k : ℕ), P k → P k.succ) : P n :=
nat.rec_on n h₀ h₁
```

Above, `P : ℕ → Prop` is a predicate, `h₀ : P 0` the base case, and `h₁ : ∀ (k : ℕ), P k → P k.succ` the induction step. The definition provides a proof of `P n`, for a given `n : ℕ`. Here, `nat.rec_on` is a built-in Lean recursion principle (an *eliminator*), automatically generated from the definition of `nat`. We’ll return to this later. For now, we’ll use `nat.ind` to give a formal proof of the result in the previous section, that `∀ a, 0 + a = a`.

First, we record basic facts about addition whose proofs follow immediately from the definition of addition—a definition we’ll see later.

```
open nat
lemma add_zero (a : ℕ) : a + 0 = a := rfl
lemma add_succ (a b : ℕ) : a + succ b = succ (a + b) := rfl
```

For the result we have in mind, we’ll define the predicate `P` as follows:

`def P := λ k, 0 + k = k`

For instance, `P 0` is simply the statement `0 + 0 = 0`.

`example : (P 0) = (0 + 0 = 0) := rfl`

We *prove* this base case by a simple application of `add_zero`.

`lemma base : P 0 := add_zero 0`

It remains to prove the inductive step, `∀ k, P k → P k.succ`. I’ve commented the proof below with the context and goal at each step.

```
lemma ind_step : ∀ k, P k → P k.succ :=
begin
  unfold P,
  intros k ih, -- k : ℕ, ih : 0 + k = k, ⊢ 0 + (succ k) = succ k
  rw add_succ, -- ⊢ succ (0 + k) = succ k
  rw ih, -- ⊢ succ k = succ k
end
```

To complete the proof, we merely apply `nat.ind`.

`lemma zero_add (a : ℕ) : 0 + a = a := nat.ind P a base ind_step`

Of course, there are shorter proofs. Here’s one in which we use a lambda abstraction in place of `P`.

```
lemma zero_add (a : ℕ) : 0 + a = a :=
begin
  apply nat.ind (λ k, 0 + k = k),
  { refl, }, -- base case
  { intros k ih, rw [add_succ, ih], }, -- induction step
end
```

Finally, for the term-mode fans, we have the following:

```
lemma zero_add (a : ℕ) : 0 + a = a := nat.ind _ a rfl
(λ k ih, eq.trans (add_succ 0 k) (ih.symm ▸ rfl))
```

Note that Lean is able to infer `P` above, permitting us to replace it with the placeholder `_`. We can take advantage of inference in our tactic mode proof too:

```
lemma zero_add (a : ℕ) : 0 + a = a :=
begin
  apply nat.ind _ a,
  { refl, },
  { intros k ih, rw [add_succ, ih], },
end
```

A nice compromise is the `refine` tactic, via which we can give a proof term for the base case and a tactic proof for the induction step.

```
lemma zero_add (a : ℕ) : 0 + a = a :=
begin
  refine nat.ind _ a rfl _,
  { intros k ih, rw [add_succ, ih], },
end
```

## Recursively defined sequences of integers

Consider the sequence \((a_n)_{n=0}^\infty\) defined by \(a_0 := 6\) and \(a_{k+1} := 5 + 2 a_k\). This is a recursive definition of the sequence. One computes, for example, \(a_2\) by repeated application of the definition. Thus,

$$

a_2 = 5 + 2 a_1 = 5 + 2(5 + 2a_0) = 5 + 2(5 + 2 \times 6).

$$

Note that the recursive definition of a sequence \((a_n)_{n=0}^\infty\) must provide (0) the initial term \(a_0\) of the sequence and (1) for every natural number \(k\), an expression of \(a_{k+1}\) in terms of \(a_k\) and \(k\).

Another example is the sequence \((b_n)_{n=0}^\infty\) defined by \(b_0 := 0\) and \(b_{k+1} = k + b_k\).

To formalise this, we introduce a principle for defining sequences of integers in Lean. That is, a method for defining functions from `ℕ` to `ℤ`.

```
def nat.int_seq (n : ℕ) (a₀ : ℤ) (h : Π (k : ℕ) (ak : ℤ), ℤ)
: ℤ := nat.rec_on n a₀ h
```

In any given application of `nat.int_seq`, we must provide `a₀`, the initial term of the sequence, and `h`, the recursive part of the definition. More specifically, `h` must be a function that takes an index `k` and the value `ak` of the \(k\)-th term of the sequence and returns the value of the \(k+1\)-th term of the sequence.

Our sequence \((a_n)_{n=0}^\infty\) above is formalised as:

```
def seq1 (n : ℕ) : ℤ :=
nat.int_seq n (6 : ℤ) (λ k ak, 5 + 2 * ak)
```

From the definition of `seq1`, we extract a lemma:

`lemma seq1_succ (k : ℕ) : seq1 (succ k) = 5 + 2 * (seq1 k) := rfl`

Lean can compute values of the sequence:

`example : seq1 2 = 39 := dec_trivial`

As you might expect, we can prove, via induction, a general formula for the \(n\)-th term of the sequence. Make sure you have `import tactic` as the first line of your Lean file if you want to run the following.

```
lemma seq1_formula (n : ℕ) : seq1 n = 11 * 2 ^ n - 5 :=
begin
apply nat.ind (λ n, seq1 n = 11 * 2 ^ n - 5),
{ refl, },
{ intros k ih, rw [seq1_succ, ih], ring_exp, },
end
```

As before, the proof can be shortened.

```
lemma seq1_formula (n : ℕ) : seq1 n = 11 * 2 ^ n - 5 :=
nat.ind _ n rfl (λ k ih, by {rw [seq1_succ, ih], ring_exp })
```

The sequence \((b_n)_{n=0}^\infty\) given above has the following Lean definition:

```
def triangle (n : ℕ) : ℤ :=
nat.int_seq n 0 (λ k ak, k + ak)
```
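As with `seq1`, Lean can compute terms of `triangle`; since \(b_0, \dots, b_4\) are \(0, 0, 1, 3, 6\), we have, for example (an illustrative check of mine, not in the original):

`example : triangle 4 = 6 := dec_trivial`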

### Exercise

Fill in the `sorry` below. You may wish to prove an auxiliary lemma.

```
lemma triangle_formula (n : ℕ) :
2 * triangle (succ n) = n * (succ n) := sorry
```

## Sequences of vectors of increasing length

Ensure you have `import data.vector` at the top of your Lean file for this section. We write a recursion principle intended for defining sequences \((v_n)_{n=0}^\infty\) where for each natural number \(n\), the term \(v_n\) is a vector of natural numbers of length \(n+1\).

For reasons that will become apparent in the next section, we first define `natvec` so that `natvec k` is the type of vectors of natural numbers of length `k + 1`.

`def natvec (k : ℕ) := vector ℕ (succ k)`
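Note that `natvec` is itself a family of types dependent on a term: for each `k : ℕ`, the term `natvec k` *is* a type. For instance (an illustrative example of mine):

`example : natvec 4 = vector ℕ 5 := rfl`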

Our recursion principle is:

```
def nat.vec_seq (n : ℕ) (a₀ : natvec 0)
  (h : Π (k : ℕ) (ak : natvec k), natvec (succ k)) :
  natvec n := nat.rec_on n a₀ h
```

Here, `a₀` is the initial term in the sequence and `h` is the recursive part of the definition. It takes a natural number `k` and `ak : natvec k`, the previous term in the sequence, and produces a new term of type `natvec (succ k)`.

As an application of this principle, we define a sequence of vectors of triangular numbers.

```
def vseq_triangle (n : ℕ) : natvec n :=
nat.vec_seq n ⟨[0], rfl⟩ (λ k ak, vector.cons (k + ak.head) ak)
```

Here, for example, is the 5th term in the sequence.

`example : vseq_triangle 5 = ⟨[10, 6, 3, 1, 0, 0], _⟩ := rfl`

Compare the recursive parts of the definitions `nat.int_seq` and `nat.vec_seq`:

`h : Π (k : ℕ) (ak : ℤ), ℤ`

`h : Π (k : ℕ) (ak : natvec k), natvec (succ k)`

In the former case, each term of the sequence has type `ℤ`. This is evinced by the type of `h`: it takes the `k`-th term `ak : ℤ` and returns the `k+1`-th term, also of type `ℤ`.

In the latter case, the `k`-th term has type `natvec k`. Equally, it is a vector of natural numbers of length `k+1`. The function `h` returns the `k+1`-th term of the sequence, here a term of type `natvec (succ k)`.

Thus, each term in the sequence *has a different type*. For a deeper look at vectors, see my post on vectors and dependent function types.

If it seems unnatural (pardon the pun) for the `n`-th term of the sequence to have length `n + 1`, we can achieve much the same effect via a different recursion principle:

```
def nat.vec_seq_simple (n : ℕ)
  (h : Π (k : ℕ) (ak : vector ℕ k), vector ℕ (k+1)) :
  vector ℕ n :=
nat.rec_on n vector.nil h
```
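As a quick illustration of `nat.vec_seq_simple`, here is a hypothetical sequence `vseq_range` (my example, not in the original post) whose `n`-th term is the vector \([n-1, \dots, 1, 0]\):

```
def vseq_range (n : ℕ) : vector ℕ n :=
nat.vec_seq_simple n (λ k ak, vector.cons k ak)

example : vseq_range 4 = ⟨[3, 2, 1, 0], rfl⟩ := rfl
```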

The initial term of a sequence created by the above definition is fixed to be the nil vector. How then can we refer to the head of the ‘previous’ term in the sequence? One option is to write an auxiliary function that uses pattern matching on `k`:

```
def next_vec : Π (k : ℕ), vector ℕ k → vector ℕ (k+1)
| 0       _  := ⟨[0], rfl⟩
| m@(a+1) am := vector.cons (m + am.head) am

def vseq_triangle' (n : ℕ) : vector ℕ n :=
nat.vec_seq_simple n next_vec
```

**However**, this is ‘cheating’, as the equation compiler converts this expression into a proof term involving the built-in `nat.rec_on` (or, which is much the same, `nat.cases_on`). In the next section, we’ll see how to do this manually.

## Motives and general recursion over nat

The principles of induction and recursion introduced so far are all special cases of a built-in recursion/elimination principle, `nat.rec_on`. The command `#check @nat.rec_on` gives its type:

```
nat.rec_on : Π {C : ℕ → Sort u_1} (n : ℕ), C 0 →
(Π (n : ℕ), C n → C n.succ) → C n
```

This can look daunting at first blush. It’s a little easier to comprehend if we name the hypotheses:

```
universe u
@[elab_as_eliminator, reducible]
def nat.rec_on' {C : ℕ → Sort u} (n : ℕ) (h₀ : C 0)
  (h₁ : Π (k : ℕ), C k → C k.succ) : C n := nat.rec_on n h₀ h₁
```
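Taking the motive `C` to be the predicate `λ k, 0 + k = k` recovers our earlier induction principle; for example, the `zero_add` result can be proved directly with `nat.rec_on'` (a sketch of mine, assuming the `add_succ` lemma from the formal induction section is in scope):

```
example (n : ℕ) : 0 + n = n :=
nat.rec_on' n rfl (λ k ih, by rw [add_succ, ih])
```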

`nat.rec_on` is a principle for the definition of `ℕ`-indexed sequences \((a_n)_{n=0}^\infty\) where, for each natural number `n`, the type of the term \(a_n\) is `C n`. Compare this with the principle `nat.vec_seq` from the section on sequences of vectors of increasing length. The only difference is that we’ve generalised from `natvec n` to `C n`.

### Motives

The quantity `C` is called the *motive* for the elimination (Avigad et al., 2021). In each particular application, the motive determines what type of function is being constructed. We use `nat.rec_on` to define a function that gives, for each `n : ℕ`, a term of type `C n`; more formally, to define a dependent function of type `Π (n : ℕ), C n`.

| Elimination principle | Motive |
| --- | --- |
| `nat.ind` | `P`, a term of type `ℕ → Prop` |
| `nat.int_seq` | `λ (k : ℕ), ℤ` |
| `nat.vec_seq` | `natvec` |

Note that `C` is required to have type `ℕ → Sort u`. That is, `C` is a function that takes a natural number and returns a type (or, more precisely, a sort). Such a quantity is called a *type constructor*.

This is the stage at which we need to be clear about the fact that a term can both *have* a type and *be* a type. For a fixed `C : ℕ → Sort u` and for a given `n : ℕ`, the term `C n` *is* a type, but it also *has* a type, viz. `Sort u`.
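These two lenses can be seen concretely with a hypothetical type constructor `C` (an illustration of mine, not in the original post):

```
def C : ℕ → Type := λ n, vector ℕ (n + 1)

example : Type := C 3                 -- `C 3` *has* type `Type`
example : C 3 := ⟨[0, 1, 2, 3], rfl⟩  -- `C 3` *is* a type, inhabited by this term
```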

It is entirely possible for `C` to be a ‘constant’ function. This is the case for `nat.int_seq`, where `C` is the function `λ (k : ℕ), ℤ` that maps each `k` to `ℤ`.

### nat as an inductive type

The eliminator `nat.rec_on` isn’t defined in any line of Lean source code. Rather, Lean automatically creates an eliminator (indeed, several eliminators) for each inductive type. The natural number type has the following definition.

```
inductive nat
| zero : nat
| succ (k : nat) : nat
```

This simply means that every term `n : ℕ` is either `zero` or is `succ k`, for some `k : ℕ`. For this reason, each of the functions `zero` and `succ` is called a *constructor* of the `ℕ` type. Here, for example, is a term of type `ℕ`, constructed via two applications of `succ` and one of `zero`.

`example : ℕ := succ (succ zero)`

You will have noticed that this definition of `ℕ` closely mirrors the familiar notions of proof by induction and recursion. This is not a coincidence, as we’ll now see.

### Major and major premises

The eliminator `nat.rec_on` has two arguments, `h₀ : C 0` and `h₁ : Π (k : ℕ), C k → C k.succ`, called *minor premises*. Each minor premise corresponds to a constructor of the `ℕ` type. Recall that the purpose of `nat.rec_on` is to associate with each `n : ℕ` a term of type `C n`. The term `n` is called the *major premise*.

Let’s pause to consider *why* this information is precisely what is required to define a function of type `Π (n : ℕ), C n`.

Each `n : ℕ` is constructed either from (0) `nat.zero` or (1) `nat.succ`. Our job is to give a term of type `C n`. In case (0), this is equivalent to giving a term of type `C 0`, which is precisely the minor premise `h₀`. In case (1), the `succ` constructor takes one argument, itself of type `ℕ`. Let `k : ℕ` be this argument. Then `n = succ k`, and to give a term of type `C n` is to specify how a term of type `C (succ k)` is obtained from a term of type `C k`; more generally, to give a function `Π (k : ℕ), C k → C k.succ`. This is the minor premise `h₁`.

### Applications

In the section on sequences of vectors, we used pattern matching to define a sequence `vseq_triangle'` such that for each `n : ℕ`, `vseq_triangle' n` is the length-`n` vector \([t_{n-1}, t_{n-2}, \dots, t_1, t_0]\), where \(t_k\) is the \(k\)-th triangular number. We can do the same without pattern matching via two applications of `nat.rec_on`.

To keep things manageable, we write the second minor premise as a separate function, itself defined via `nat.rec_on`.

```
def next_vec' : Π (k : ℕ), vector ℕ k → vector ℕ (k+1) :=
λ k, nat.rec_on k (λ am, ⟨[0], rfl⟩)
  (λ m h am, vector.cons (succ m + am.head) am)
```

From this, we define `vseq_triangle'`, much as in our previous example.

```
def vseq_triangle' (n : ℕ) : vector ℕ n :=
nat.rec_on n vector.nil next_vec'
```

As promised, we can now give a definition of addition. First, we write a function `add_two` that adds two to its argument. Such a function must return `2` when its argument is `0`. Let `k : ℕ` and write `h` for `add_two k`. Informally, \(h = k + 2\). Then \((k+1) + 2 = (k + 2) + 1 = h + 1\). Formally, we require `add_two (succ k) = succ h`. This translates into the argument `λ k h, succ h` for the second minor premise:

`def add_two (n : ℕ) : ℕ := nat.rec_on n 2 (λ k h, succ h)`

As an application, we add two to `4`.

`example : add_two 4 = 6 := rfl`

The motive of this definition is `λ x, ℕ`, as can be seen from the following definition `add_two'` and a proof of its equivalence with `add_two`.

```
def add_two' (n : ℕ) : ℕ :=
@nat.rec_on (λ x, ℕ) n 2 (λ k h, succ h)
example (n : ℕ) : add_two n = add_two' n := rfl
```

The above argument generalises if we replace `2` with `m`, for a natural number `m`, to give a function `myadd` such that `myadd m` is the function that adds `m` to its argument.

`def myadd (m n : ℕ) : ℕ := nat.rec_on n m (λ k h, succ h)`

For example, we add 11 to 5.

`example : myadd 11 5 = 16 := rfl`

The definition of `nat.add` given in core Lean uses pattern matching rather than `nat.rec_on`. As a result, `nat.add` is not *definitionally* equal to `myadd`. However, the two functions are (extensionally) equal. We prove this by (you guessed it) induction.

```
lemma myadd_eq_add (m n : ℕ) : myadd m n = nat.add m n :=
begin
  apply nat.rec_on n,
  { refl, },
  { intros k ih,
    dsimp [myadd, nat.add] at *, rw ih, },
end
```
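Though `myadd` and `nat.add` are not definitionally equal as functions, applying both to particular numerals reduces each side to the same value, so `rfl` still succeeds (an illustrative check of mine, not in the original):

`example : myadd 11 5 = nat.add 11 5 := rfl`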

### Exercises

- What is the difference between a type constructor and a dependent function?
- In the definition of `next_vec'`, what is the motive in the application of `nat.rec_on`?
- What is the motive for the definition of `myadd_eq_add`?
- Adapting the ideas used to define `vseq_triangle`, fill in the `sorry`s below. You may need to define auxiliary functions.

```
def next_fib : Π (k : ℕ), vector ℕ k → vector ℕ (k+1) :=
sorry
def vfib (n : ℕ) : vector ℕ n :=
nat.rec_on n vector.nil next_fib
lemma vfib_formula (n : ℕ) :
(vfib (n+3)).head = (vfib (n+2)).head + (vfib (n+1)).head :=
sorry
```

- Using `nat.rec_on` and `myadd`, write the definition of a function `mymul : ℕ → ℕ → ℕ` such that the following results hold.

```
lemma mymul_zero (m : ℕ) : mymul m 0 = 0 := rfl
lemma mymul_succ (m n : ℕ) :
mymul m (succ n) = myadd (mymul m n) m := rfl
```

- Prove `mymul` is equivalent to `nat.mul` by filling in the `sorry` below.

```
lemma mymul_eq_mul (m n : ℕ) : mymul m n = nat.mul m n :=
sorry
```

## What’s next?

This post has focussed entirely on defining functions of type `Π (n : ℕ), C n` via the eliminator `nat.rec_on` and its special cases. As discussed, this eliminator is generated automatically by Lean from the definition of the inductive type `ℕ`.

Each inductive type `α` has its own eliminator(s), with minor premises tailored to the constructors of `α`. In a forthcoming post, we will look at inductive types and their eliminators in greater depth.

### Exercise

What might one mean by an eliminator for the type of binary trees with integer valued nodes?

## References

Avigad, J., de Moura, L., and Kong, S. (2021). ‘Theorem Proving in Lean’. Microsoft Research, accessed 30 May 2021.

Meyer, J. and Land, R. (2003). ‘Threshold concepts and troublesome knowledge: Linkages to ways of thinking and practising within the disciplines’, in Rust, C. (ed.), *Improving Student Learning: Improving Student Learning Theory and Practice – Ten Years On*. Oxford: Oxford Centre for Staff and Learning Development.

Nederpelt, R. and Geuvers, H. (2014). *Type Theory and Formal Proof*. Cambridge: Cambridge University Press.
