In the last post, I showed how to prove, using only elementary tools, that zero is not the successor of any natural number: the assumption O = succ(m) leads to a contradiction.

In this post, we’ll see that the successor function is injective. Writing S in place of succ, we’ll show that if S(m) = S(n), then m = n.

As before, we start with our own version of the natural numbers.

inductive mynat : Type
| O : mynat
| S : mynat → mynat

The strategy for our proof will be to find a predicate P on pairs of natural numbers such that on the assumption h : S(m) = S(n), the following criteria are satisfied:

From h₃, a proof of P (S m) (S n), we can easily prove the result m = n.

We can easily construct h₁, a proof of S(m) = S(m) → P (S m) (S m).

Why does this help? Substituting equation h into h₁ gives

h₂ : S(m) = S(n) → P (S m) (S n). But we know h, a proof of the antecedent of this implication, and hence we derive h₃ := h₂ h, the desired proof of P (S m) (S n). From this, we prove our initial target m = n.

I’m sure you will agree that the first condition above is satisfied if P is chosen so that P (S m) (S n) is the statement m = n.

Since this is a good candidate for P, we’ll give it a Lean definition.

def P : mynat → mynat → Prop
| (S a) (S b) := a = b
| _ _ := true
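Before moving on, we can sanity-check P against its defining equations. These hold definitionally, so rfl suffices (assuming mynat has been opened so that O and S are in scope):

example (a b : mynat) : P (S a) (S b) = (a = b) := rfl
example : P O (S O) = true := rfl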

The second pattern in the definition isn’t used in our proof, but Lean functions are total, so we must define P in all cases.

What remains is to prove the second criterion. By definition, P (S m) (S m) is simply m = m. This is true by reflexivity.

Those are all the ingredients. Time to assemble them into a Lean proof.

lemma succ_inj (m n : mynat) (h : S m = S n) : m = n :=
have h₁ : S(m) = S(m) → P (S m) (S m), from λ _, rfl,
have h₂ : S(m) = S(n) → P (S m) (S n), from h ▸ h₁,
have h₃ : P (S m) (S n), from h₂ h,
show m = n, from h₃
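With succ_inj in hand, injectivity can be iterated. For instance, here is a quick corollary (this example is ours, not from the original post):

example (m n : mynat) (h : S (S m) = S (S n)) : m = n :=
succ_inj m n (succ_inj (S m) (S n) h)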

The simplest type with a recursive constructor is the type of natural numbers. Here’s our own implementation. Note O is the capital letter O, not the numeral 0.

inductive mynat : Type
| O : mynat
| succ : mynat → mynat

The most transparent parts of this definition are the constructors. We know O : mynat and we know succ n : mynat, given n : mynat.

In particular, we have both O : mynat and O.succ : mynat.

open mynat
example : mynat := O -- 0 is a natural number
example : mynat := O.succ -- so is 1

What isn’t evident is that O and O.succ are distinct.

In this blog post, we’ll show an elementary method for proving this result, but we are duty-bound to mention the standard, albeit more mystical, Lean approach. For each inductive type T, Lean automatically generates a function T.no_confusion that can be used to prove distinctness. The internals of this function will be the subject of a later post.

example : O ≠ O.succ := λ h, mynat.no_confusion h

Rather than using the no_confusion sledgehammer, we’ll adopt a more straightforward approach. As above, we’ll use a proof by negation. Assume h : O = O.succ and try to derive false.

What data do we have? We have an equation O = O.succ. Thus, anything true for O is also true for O.succ.

The clever idea is to construct a predicate Q on mynat such that Q(O) is something we can easily prove but for which Q(O.succ) is the statement O ≠ succ(O). Thus, by substituting the assumption h : O = O.succ into the proof of Q(O), we would have a proof of Q(O.succ), equally a proof of O ≠ succ(O). But this contradicts h.

One point to note about the proof above is that when we derive Q(O.succ) (i.e. O ≠ succ(O)), we haven’t immediately finished the proof. That’s because we only proved Q(O.succ) on the assumption h : O = O.succ.

One approach is to define Q(y) by O = y → P(y), where P is the predicate defined so that P(y) is true when y is O and is false otherwise.

def P : mynat → Prop
| O := true
| (succ k) := false
def Q (y : mynat) : Prop := O = y → P y

Then Q(O) is O = O → P(O), which is O = O → true. This, clearly, is something we can prove.

lemma QO : Q(O) := λ h, trivial

On the other hand, Q(O.succ) is O = O.succ → P(O.succ), equally O = O.succ → false.

example : Q(O.succ) = (O ≠ O.succ) := rfl

It merely remains to finish the argument by substitution.

example : O ≠ succ(O) :=
assume h : O = succ(O),
have h₂ : O ≠ succ(O), from @eq.subst _ Q _ _ h QO,
show false, from h₂ h

We can easily generalise this argument.

lemma succ_ne_zero (m : mynat) : O ≠ succ(m) :=
assume h : O = succ(m),
have h₂ : O ≠ succ(m), from @eq.subst _ Q _ _ h QO,
show false, from h₂ h
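The generalised lemma now applies to the successor of any natural number. For example (an illustration of ours):

example : O ≠ succ (succ O) := succ_ne_zero (succ O)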

Note: To follow along with this post, create a new Lean file and write the following at the top of your Lean file. If you do not have a copy of Lean installed, you can follow along here (but it will run slower).

Groups are algebraic objects, and it is the study of these objects that forms an important part of abstract algebra, Group Theory. Before we see the definition of a group we must first familiarise ourselves with composition laws (also known as binary operations). Addition and multiplication on the integers \(\mathbb{Z}\) are examples of composition laws. They are essentially operations that take two elements of a set and combine them to make a third element.

Definition 1: Composition Law/Binary Operation

A composition law \(*\) on a set X is a rule assigning to \(a : X\) and \(b : X\) an element \(a * b : X\).

Exercise 1: Give another example of a composition law on the integers.

Is subtraction a composition law on the natural numbers \(\mathbb{N}\)? If we consider the natural numbers 3 and 7, we see that 3 − 7 = −4, which is not a natural number, so subtraction is not a composition law on \(\mathbb{N}\). Subtraction is, however, a composition law on the integers \(\mathbb{Z}\).
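As an aside, Lean makes subtraction on ℕ total anyway by truncating at zero; this truncated subtraction is a composition law in the formal sense, but it no longer agrees with integer subtraction. You can check this with #eval:

#eval (3 - 7 : ℕ) -- 0: subtraction on ℕ is truncated at zero
#eval (3 - 7 : ℤ) -- -4, as expected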

We are familiar with the following property of multiplication: \(a \times b = b \times a\). We generalize this notion with the following definition:

Definition 2: Commutativity

A composition law \(*\) on a set X is commutative if \(\forall a \,b : X\) we have that \(a * b = b * a\).

Note: a composition law is commutative if the order doesn’t matter.
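These familiar facts are available in Lean; commutativity of addition on the integers, for instance, is the mathlib lemma add_comm:

example (a b : ℤ) : a + b = b + a := add_comm a b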

Exercise 2:

Is addition on the natural numbers \(\mathbb{N}\) commutative?

Is subtraction on the integers \(\mathbb{Z}\) commutative?

We are also familiar with the fact that among the integers (a + b) + c = a + (b + c). As before, we generalize this with the following definition:

Definition 3: Associativity

A composition law \(*\) on a set X is associative if \(\forall a\, b\, c : X\) we have that \((a*b)*c=a*(b*c)\).

Example 1: Addition is associative on the integers \(\mathbb{Z}\) because \(\forall a\, b\, c : \mathbb{Z}\) we have that \((a+b)+c = a+(b+c)\).

Example 2: Subtraction is not associative on the integers. Examine the following counterexample: \((3-7)-5=-9\) but \(3-(7-5)=1\), and clearly \(-9 \neq 1\).
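Both of these examples can be checked in Lean; associativity is the mathlib lemma add_assoc, and the counterexample computation can be discharged by the norm_num tactic:

example (a b c : ℤ) : (a + b) + c = a + (b + c) := add_assoc a b c
example : ((3 - 7 : ℤ) - 5) ≠ (3 - (7 - 5)) := by norm_num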

Exercise 3: Is function composition associative?

Definition 4: Group

A pair \((G, *)\), where G is a non-empty set and \((*)\) is a composition law, is a group if:

\((*)\) is associative: \(\forall a\, b \,c : G \,, (a*b)*c=a*(b*c)\).

\((*)\) admits a neutral element: \(\exists 1_{G} : G,\, \forall a : G,\, a*1_{G}=1_{G}*a=a\).

Every element of G has an inverse element in G: \(\forall a : G, \exists a^{-1} : G, a * a^{-1} = a^{-1}*a=1_{G}\).

A group is called abelian if its composition law is commutative.

Here are the group axioms as they appear in mathlib:

#check (mul_assoc : ∀ a b c : G, a * b * c = a * (b * c))
#check (one_mul : ∀ a : G, 1 * a = a)
#check (mul_one : ∀ a : G, a * 1 = a)
#check (mul_left_inv : ∀ a : G, a⁻¹ * a = 1)
#check (mul_right_inv : ∀ a : G, a * a⁻¹ = 1)
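As a small illustration of these axioms in action, here is a sketch (it assumes the variables (G : Type*) [group G] declaration discussed at the end of this post):

example (a b : G) : a⁻¹ * (a * b) = b :=
begin
  rw ← mul_assoc, -- regroup to (a⁻¹ * a) * b
  rw mul_left_inv, -- using a⁻¹ * a = 1
  rw one_mul, -- using 1 * a = a
end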

Exercise 4:

Is \((\mathbb{Z}, +)\) (the set of integers endowed with addition) a group? If so, what is the neutral element?

Is \((\mathbb{Z}, \times)\) (the set of integers endowed with multiplication) a group? If so, what is the neutral element?

Example 1: Given that \((G, *)\) is a group and \(a : G\), show that \(a * a = a \leftrightarrow a = 1_{G}\) (i.e. if a * a = a then a is the neutral element of the group).

We first prove the → direction. Suppose \(a : G\) satisfies \(a * a = a\). Multiplying both sides on the right by \(a^{-1}\) gives \((a*a)*a^{-1}=a*a^{-1}\), from which we see that \(a*(a*a^{-1})=a*a^{-1}\). This reduces to \(a*1_{G}=1_{G}\), giving the required result \(a = 1_{G}\).

Conversely, suppose that \(a = 1_{G}\); then \(a * a = 1_{G} * a = a\).

example (a : G) : a * a = a ↔ a = 1 :=
begin
  split, -- splitting the ↔
  { -- proving the → direction
    intro h₁, -- implication introduction
    rw ← mul_right_inv a, -- using a * a⁻¹ = 1
    nth_rewrite 1 ← h₁, -- using the hypothesis a = a * a
    -- "nth_rewrite n" tells Lean to only rewrite the nth a
    rw mul_assoc, -- using associativity: a * b * c = a * (b * c)
    rw mul_right_inv, -- using a * a⁻¹ = 1
    rw mul_one, -- using a * 1 = a
  },
  { -- proving the ← direction
    intro h₂, -- implication introduction
    rw h₂, -- using a = 1
    rw mul_one, -- using a * 1 = a
  }
end

Note: You can also use the “group” tactic to instantly solve goals like this.

Exercise 5: Given \(b \, c : G\), show that the inverse of \((b * c)\) is \((c^{-1}*b^{-1})\) in Lean. (Hint: Think about what it means for an element to be an inverse).

theorem bc_inv (b c : G) : (b * c) * (c⁻¹ * b⁻¹) = 1 :=
begin
  rw mul_assoc, -- using associativity: a * b * c = a * (b * c)
  have h₁ : c * (c⁻¹ * b⁻¹) = b⁻¹, -- declaring a hypothesis that we must now prove
  { rw ← mul_assoc, -- using associativity: a * (b * c) = a * b * c
    rw mul_right_inv, -- using a * a⁻¹ = 1
    rw one_mul, -- using 1 * a = a
  },
  rw h₁, -- using the hypothesis we just proved
  rw mul_right_inv, -- using a * a⁻¹ = 1
end

Tip: You can click on the exercise to view a solution.

We will now prove that the inverse of an element of a group is unique. We will do this by supposing that there are two such inverses and then showing that they must be equal.

Theorem 1: Let \((G, *)\) be a group and \(a \, b \, c : G\) such that \(h_{1} : a * b = 1_{G}\) and \(h_{2} : a * c = 1_{G}\). Then \(b = c\).

Proof: Using \(h_{1}\) and \(h_{2}\) we see that \(a * b = a * c\) from which we can easily deduce that \(b = c\) as required.

QED

theorem unique_inv (a b c : G) (h₁ : a * b = 1) (h₂ : a * c = 1) : b = c :=
begin
  rw ← h₁ at h₂, -- h₂ becomes a * c = a * b
  rw mul_right_inj a at h₂, -- cancelling a: h₂ becomes c = b
  rw eq.symm h₂, -- rewriting with b = c closes the goal
end

Exercise 6: Show that the identity element of a group is unique in Lean.

theorem unique_id (a b c : G) (h₁ : a * b = a) (h₂ : a * c = a) : b = c :=
begin
  nth_rewrite 1 ← h₁ at h₂, -- h₂ becomes a * c = a * b
  rw mul_right_inj a at h₂, -- cancelling a: h₂ becomes c = b
  rw eq.symm h₂, -- rewriting with b = c closes the goal
end

Notation: There are a few common systems of notation that we use with groups.

                           Additive Notation     Multiplicative Notation
\(a * b\)                  \(a + b\)             \(ab\) or \(a \times b\) or \(a * b\)
Identity, \(id\)           \(0_{G}\) or \(0\)    \(1_{G}\) or \(1\)
Inverse of \(a\)           \(-a\)                \(a^{-1}\)
\(a * a\), \(a * a * a\)   \(2a\), \(3a\)        \(a^{2}\), \(a^{3}\)

You may have noticed that under our imports we had the line

variables (G : Type*) [group G]

This is essentially us telling Lean “Let G be a group in multiplicative notation”, and so now we can freely use G in our theorems, lemmas and examples, as we have done in this post.

Well-founded recursion generalises both strong induction and recursion. In this post we’ll:

Discuss the (informal) notion of a well-founded relation.

Show how to use well-founded relations (initially just with the < relation on ℕ) to define recursive functions and prove theorems

using the equation compiler and

by writing proof terms.

Explain why rfl proofs don’t work for functions defined by well-founded recursion.

Employ sneaky equation compiler tricks to simplify function writing.

Explain how to use the using_well_founded command

to work with custom relations via rel_tac and

to create decreasing proofs via dec_tac.

In subsequent posts, I plan to (1) use well-founded recursion to show that the quicksort algorithm sorts a list and (2) discuss the formal definition of well_founded.

The idea of well-founded recursion

In our post on recursors, we saw that the standard way to define a function in Lean is via structural recursion. For instance, to define a function f on ℕ, we must define f 0 and, for each n : ℕ, we must specify how f (n+1) depends on f n.

Not all functions are most naturally defined this way. Consider a log-like function \(\mathrm{lg} : \mathbb N \to \mathbb N\) defined so that

$$ \mathrm{lg}(n)= \begin{cases} 0, & \text{ if } n = 0, \\ 1 + \mathrm{lg}(n/2), & \text{if } 0 < n. \end{cases} $$

Thus, recalling that we round down when performing division on \(\mathbb N\), $$ \mathrm{lg}(7) = 1 + \mathrm{lg}(3) = 1 + (1 + \mathrm{lg}(1)) = 1 + (1 + (1 + \mathrm{lg}(0))) = 3. $$

This computation terminates because it depends on computing \(\mathrm{lg}(n)\) for each \(n\) in the finite decreasing chain \(7 > 3 > 1 > 0\).

More generally, every decreasing chain of natural numbers is finite. We say that the \(<\) relation on \(\mathbb N\) is well founded.

This isn’t (syntactically) the same as the definition of well founded given in Lean. We’ll come to the Lean definition in a later post. Moreover, as we’ll see in this post, the chain of equalities above cannot be proved by reflexivity.

Not all relations are well founded. In particular, the \(<\) relation on \(\mathbb Z\) is not well founded: you can easily find an infinitely long decreasing sequence of integers.

Lean has user-friendly mechanisms for:

Proving that a given relation is well founded.

Using well-founded relations to define functions.

As we explore these, make sure you have import tactic data.nat.parity tactic.induction at the top of your Lean file.

The equation compiler I

The equation compiler makes it (relatively) easy to write functions that depend on well-founded recursion.

So what exactly is the equation compiler? It’s the piece of software that takes declarations given using pattern matching, match expressions, etc. and compiles them down to Lean terms. For example, the function lg below is compiled to:

def lg._main._pack : Π (x : ℕ), (λ (x : ℕ), ℕ) x :=
λ (x : ℕ),
has_well_founded.wf.fix
(λ (x : ℕ),
x.cases_on
(id_rhs ((Π (y : ℕ), has_well_founded.r y 0 → ℕ) → ℕ)
(λ (F : Π (y : ℕ), has_well_founded.r y 0 → ℕ), 0))
(λ (n : ℕ),
id_rhs ((Π (y : ℕ), has_well_founded.r y (n + 1) → ℕ) → ℕ)
(λ (F : Π (y : ℕ), has_well_founded.r y (n + 1) → ℕ),
have this : (n + 1) / 2 < n + 1, from _,
1 + F ((n + 1) / 2) _))) x

You can see this first by typing #print lg, which shows:

And #print lg._main._pack gives the term at the top of this pop-out.

Here’s our function lg:

def lg : ℕ → ℕ
| 0 := 0
| (x + 1) :=
have (x + 1) / 2 < (x + 1), from nat.div_lt_self' x 0,
1 + lg ((x + 1)/2)

lg is defined so that lg 0 = 0 and lg (x + 1) = 1 + lg ((x + 1)/2) for every natural number x. But what’s happening with the have line?

We’re defining lg (x + 1) in terms of lg ((x + 1)/2). But this will only lead to a finite computation if \((x + 1)/2 < x + 1\). The have line provides Lean’s equation compiler with a proof of this fact.

If you remove the have line, Lean complains, ‘failed to prove recursive application is decreasing’. At the bottom of the error message, there’s some useful information:

nested exception message:
default_dec_tac failed
state:
lg : ℕ → ℕ,
x : ℕ
⊢ (x + 1) / 2 < x + 1

The last line shows what’s needed to prove that the recursive application is decreasing.

Now we can use the virtual machine to compute values of lg. For instance, #eval lg 32 gives 6 and #eval lg 270 gives 9. It looks like lg is related to the base-2 logarithm function. We’ll make this precise and prove part of the assertion later.

This was a simple example, but in other situations, we may need to give the equation compiler more information. For one thing, we may want to define a function on a type other than ℕ. In that case < (more precisely, nat.lt) will not be the correct relation. We’ll see later how to provide this information.

For now, let’s prove simple statements regarding lg. Our first goal is to prove lg 0 = 0. Were lg defined by structural recursion, this result could easily be proved by rfl, as in the example below:

def jane : ℕ → ℕ
| 0 := 2
| (x + 1) := 3 * jane x
example : jane 3 = 54 := rfl

But, for reasons that will be made clear in the next section, rfl doesn’t give a proof that lg 0 = 0. However, a rw will work.

lemma lg_zero : lg 0 = 0 := by { rw lg }
lemma lg_one : lg 1 = 1 := by { erw [lg, lg_zero] }

Above, erw is a more aggressive (more eager?) version of rw. We use it here so that Lean understands lg ((0 + 1) / 2) = lg 0.
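Continuing in the same style, we would expect the following to work (a sketch: as with lg_one, erw must also reduce the division (1 + 1) / 2 to 1):

lemma lg_two : lg 2 = 2 := by { erw [lg, lg_one] }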

Exercises

Write a function ex1 such that $$ \mathrm{ex1}(n)= \begin{cases} 1 ,& \text{if } n = 0, \\ n\times \mathrm{ex1}(n/3), & \text{otherwise}. \end{cases} $$

Prove that ex1 0 = 1 and that ex1 4 = 4.

Well-founded recursion by hand

The equation compiler does a marvellous job of hiding the details of the construction. In this section, we’ll uncover some of these details. Feel free to skip to the next section if this is not of interest to you.

The function lg_by_hand is extensionally equal to lg.
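Here is one way to write it, a sketch consistent with the surrounding discussion (which refers to the names myF and lg_by_hand):

def myF : Π (x : ℕ), (Π (y : ℕ), y < x → ℕ) → ℕ
| 0 _ := 0
| (x + 1) h := 1 + h ((x + 1) / 2) (nat.div_lt_self' x 0)

def lg_by_hand : ℕ → ℕ := well_founded.fix nat.lt_wf myF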

There are two parts to this definition. The actual computation is specified in myF. Before we go into the theory, compare this definition with that of lg above. The term h ((x + 1) / 2) (nat.div_lt_self' x 0) combines both the expression lg ((x + 1) / 2) and the inequality proved in the have line of lg.

The Lean function well_founded.fix takes 1) a proof hwf that a relation r is well founded (in this case, the theorem nat.lt_wf shows that < is well founded) and 2) a function F : Π x, (Π y, r y x → C y) → C x, where the motive C defines the type of the function being defined. It returns a function that takes each x to a term of type C x. In our case, myF takes the place of F. The motive is λ (a : ℕ), ℕ.

Crucially, the function returned by well_founded.fix (which we’ll henceforth abbreviate to fix) satisfies a ‘fixpoint equation’. Specifically,

fix hwf F x = F x (λ y h, fix hwf F y)

The theorem well_founded.fix_eq proves this assertion. In our case, we have:

lemma lg_by_hand_eq :
∀ x, lg_by_hand x = myF x (λ y h, lg_by_hand y) := well_founded.fix_eq _ _

This makes clear the sense in which myF x _ encodes the value of lg_by_hand x. We can use this theorem to compute lg_by_hand x for particular x.

Moreover, we can now see why rfl cannot prove lg 0 = 0 (or lg_by_hand 0 = 0). It’s because lg 0 isn’t definitionally equal to 0! What we can prove with rfl is the following:

Repeat the exercises concerning ex1 from the previous section, but using well_founded.fix to define your function. Take care not to re-use ex1 in your definitions!

Underhanded tricks

Our aim in writing lg is to give a function that behaves like the base-2 logarithm. But it’s out by 1. A better definition would be:

$$ \mathrm{lg_2}(n)= \begin{cases} 0, & \text{ if } n \le 1, \\ 1 + \mathrm{lg_2}(n/2), & \text{otherwise}. \end{cases} $$

In the definition of lg, the argument splits nicely into two cases: it’s either 0 or n + 1, for some natural number n. This matches the inductive definition of ℕ.

How do we use the equation compiler to give a definition of lg2? One option is to use three patterns:

def lg2 : ℕ → ℕ
| 0 := 0
| 1 := 0
| (n + 2) :=
have h : (n + 2) / 2 = n / 2 + 1 :=
nat.add_div_of_dvd_left (dvd_refl 2),
have n / 2 + 1 < n + 2 := h ▸ nat.div_lt_self' _ _,
1 + lg2 ((n + 2) / 2)

This would get tedious if the first case of our (mathematical) function \(\mathrm{lg_2}\) were \(\mathrm{lg_2}(n) = 0\) for \(n \le 100\). It would be impossible if the first case were \(\mathrm{lg_2}(n) = 0\) for \(\mathrm{even}(n)\).

Trick 1

Our first underhanded trick (not a technical term) is to use one pattern! Below, we use the pattern n to match any input.

def lg2' : ℕ → ℕ
| n := if h : n ≤ 1 then 0 else
have n / 2 < n, from
nat.div_lt_self (by linarith) (nat.le_refl 2),
1 + lg2' (n / 2)
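A quick sanity check with the virtual machine (values computed by hand from the recurrence):

#eval lg2' 1 -- 0
#eval lg2' 8 -- 3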

Trick 2

Everything we’ve done so far has been in term mode. Nothing in tactic mode. However, it’s fine to use tactic mode as long as the desired inequality is proved in term mode.

Fortunately, Lean permits (nested) switching between modes. The following examples illustrate the principle, though there is little to be gained by mode-switching in these simple cases.

In lg2'', we prove the inequality in term mode then switch to tactic mode. The exact tactic returns us to term mode. We provide the same term as before.

def lg2'' : ℕ → ℕ
| n := if h : n ≤ 1 then 0 else
have n / 2 < n,
from nat.div_lt_self (by linarith) (nat.le_refl 2),
by { exact 1 + lg2'' (n / 2) }

By contrast, lg2''' starts (after an initial pattern match and if statement) in tactic mode. We immediately switch back to term mode via exact.

def lg2''' : ℕ → ℕ
| n := if h : n ≤ 1 then 0 else
begin
exact have n / 2 < n,
from nat.div_lt_self (by linarith) (nat.le_refl 2),
1 + lg2''' (n / 2),
end

Trick 3: dec_tac

In fact, we can separate out the decreasing proof entirely, using the using_well_founded command and its dec_tac field.

def lg2_iv : ℕ → ℕ
| n := if h : n ≤ 1 then 0 else 1 + lg2_iv (n / 2)
using_well_founded
{ dec_tac :=
`[exact nat.div_lt_self (by linarith) (nat.le_refl 2)] }

dec_tac is a field of the structure well_founded_tactics. It should be a tactic—more precisely, a term of type tactic.unit. The odd-looking backtick-bracket notation creates such a term from an ordinary ‘interactive mode’ tactic.

Via this construction, we can (if desired) push almost an entire proof or definition into tactic mode.

Exercises

Using underhanded tricks, write a function ex2 given by $$ \mathrm{ex2}(n) = \begin{cases} 0, & \text{if } n \text{ is odd or } n = 0, \\ 1 + \mathrm{ex2}(n/2), & \text{otherwise}. \end{cases} $$

Prove that ex2 0 = 0 and that ex2 4 = 2.

Do the same, but using well_founded.fix to define your function (you can still use underhanded tricks, albeit indirectly).

Proofs by well-founded recursion

As you may have guessed by playing with lg, we expect:

$$\mathrm{lg}(n) = \lfloor\log_2(n)\rfloor + 1.$$

That is,

$$(n+1) < 2^{\mathrm{lg}(n+1)} \le 2(n+1),$$

for every natural number \(n\). In this section, we’ll prove the first of these inequalities. To start with, we’ll write the inequality as a predicate.

def lg_ineq (n : ℕ) : Prop := n + 1 < 2 ^ lg (n + 1)

Next we’ll give four proofs (!) of the desired result: one by hand and three using the equation compiler.

It transpires that we’ll need a couple of preliminary results:

lemma two_mul_succ_div_two {m : ℕ} : (2 * m + 1) / 2 = m :=
begin
rw [nat.succ_div, if_neg], norm_num,
rintros ⟨k, h⟩, exact nat.two_mul_ne_two_mul_add_one h.symm,
end
lemma two_mul_succ_succ {m : ℕ} : 2 * m + 1 + 1 = 2 * (m + 1) := by linarith

Proofs by hand

The ‘by hand’ proof uses well_founded.fix. The result we need is

lg_lemma_aux :
∀ (x : ℕ), (∀ (y : ℕ), y < x → lg_ineq y) → lg_ineq x

To prove this result, we can case split on x (considering whether we have a term of shape 0 or x + 1) and then split the second case according to whether x is even or odd. Recall that h ‘carries’ with it proofs of the result for all y < x.

lemma lg_lemma_aux (x : ℕ) (h : Π (y : ℕ), y < x → lg_ineq y) : lg_ineq x :=
begin
cases x,
{ rw [lg_ineq, lg_one], norm_num, }, -- base case
dsimp [lg_ineq] at h ⊢,
rcases nat.even_or_odd x with ⟨m, rfl⟩ | ⟨m, rfl⟩,
{ have h₄ : m < 2 * m + 1, by linarith,
specialize h m h₄, rw [nat.succ_eq_add_one, lg, pow_add],
rw two_mul_succ_succ, norm_num, exact h },
{ have h₄ : m < 2 * m + 1 + 1, by linarith,
specialize h m h₄, rw [lg, pow_add],
rw [two_mul_succ_succ, two_mul_succ_div_two], linarith },
end

Note the have h₄ terms above that are needed to justify that the recursive application is decreasing.

Equation compiler proof 1

Alternatively, we can use the equation compiler. Our first approach nests tactic blocks inside a term-mode proof, with the decreasing proofs outside the tactic blocks.

lemma lg_lemma2 : ∀ (x : ℕ), x + 1 < 2 ^ lg (x + 1)
| 0 := by { rw lg_one, norm_num, }
| (x + 1) := or.elim (nat.even_or_odd x)
( λ ⟨m, hm⟩,
have m < x + 1, by linarith, -- needed for wf recursion
begin
specialize lg_lemma2 m, rw [hm, lg, pow_add],
rw two_mul_succ_succ, norm_num, exact lg_lemma2,
end )
( λ ⟨m, hm⟩,
have m < x + 1, by linarith, -- needed for wf recursion
begin
specialize lg_lemma2 m, rw [hm, lg, pow_add],
rw [two_mul_succ_succ, two_mul_succ_div_two], linarith,
end )

If you aren’t fond of term mode, this proof may seem the most awkward as it uses or.elim and pattern-matching lambda abstraction.

Equation compiler proof 2

The next proof is a bit nicer. We put everything into tactic mode, popping out to term mode only for the decreasing proof and the actual application of recursion.

lemma lg_lemma2' : ∀ (x : ℕ), x + 1 < 2 ^ lg (x + 1)
| 0 := by { rw lg_one, norm_num, }
| (x + 1) :=
begin
cases (nat.even_or_odd x),
{ rcases h with ⟨m, hm⟩,
rw [hm, lg, pow_add],
rw two_mul_succ_succ, norm_num,
exact have m < x + 1, by linarith,
lg_lemma2' m, },
{ rcases h with ⟨m, hm⟩,
rw [hm, lg, pow_add],
rw [two_mul_succ_succ, two_mul_succ_div_two],
exact have m < x + 1, by linarith,
show _, by { specialize lg_lemma2' m, linarith }, }
end

Note the weird construction on the penultimate line. We’re in term-mode here but push back into tactic mode using by.

Equation compiler proof 3

My favourite proof has (almost) everything in tactic mode. The decreasing proof is postponed to the final line via using_well_founded and dec_tac.

lemma lg_lemma2'' : ∀ (x : ℕ), x + 1 < 2 ^ lg (x + 1)
| 0 := by { rw lg_one, norm_num, }
| (x + 1) :=
begin
cases (nat.even_or_odd x),
{ rcases h with ⟨m, hm⟩, specialize lg_lemma2'' m,
rw [hm, lg, pow_add],
rw two_mul_succ_succ, norm_num, exact lg_lemma2'', },
{ rcases h with ⟨m, hm⟩,
specialize lg_lemma2'' m, rw [hm, lg, pow_add],
rw [two_mul_succ_succ, two_mul_succ_div_two], linarith }
end
using_well_founded
{ dec_tac := `[exact show m < x + 1, by linarith] }

It’s worth observing that, in this case, the single tactic provided to dec_tac supplies the decreasing proof for both branches, replacing the separate have lines of the previous proofs.

Exercises

Prove the corresponding upper bound result, viz. \(2^{\mathrm{lg}(n+1)} \le 2(n+1)\), for each \(n \in \mathbb N\). This should be easy.

Determine and prove the correct lower bound result for the function lg2 defined previously.

Using other relations with rel_tac

The < relation on ℕ isn’t always the correct well-founded relation for a given application. Indeed, we may not be defining a function on ℕ!

A nice example is given in mathlib. For natural numbers n and k, min_fac_aux n k is an auxiliary function used in the definition of the minimal prime factor of n. It is a verified trial-division algorithm.

$$ m_f (n, k) = \begin{cases} n, & \text{if } n < k^2, \\ k, & \text{if } k \mid n, \\ m_f (n, k+2), & \text{otherwise}. \end{cases} $$

At first blush, it looks as though this defines a non-terminating function. The value of \(m_f(n, k)\) depends on the value of \(m_f(n, k + 2)\) but \(k + 2\) is greater than \(k\). Before we freak out, let’s try to calculate \(m_f(77, 3)\).
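Unfolding the definition by hand: neither 3 nor 5 divides 77 (and \(77 \ge 3^2\), \(77 \ge 5^2\)), while 7 does divide 77, so

$$ m_f(77, 3) = m_f(77, 5) = m_f(77, 7) = 7. $$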

In this example, the ‘\(k\) value’ increased by 2 per step and the calculation terminated when we reached a value \(k\) for which $k \mid 77$. However, we cannot guarantee that such a \(k\) will be reached. Here’s another example:

$$ m_f(11, 0) = m_f(11, 2) = m_f(11, 4) = 11 $$

The termination of the calculation above arises when we reach \(k\) such that \(n < k^2\).

Indeed, it’s clear that, no matter what the initial value of \(k\), by repeatedly adding \(2\) to \(k\), we will ultimately reach a situation where \(n < k^2\).

Using this observation, we’ll construct a well-founded relation.

Finding a well-founded relation

Our aim is to find a function \(f\) (which may depend on \(n\)) such that \(f(k +2) < f(k)\) for every \(k\). Intuitively, this shows that something is decreasing with every recursive application of \(m_f\).

This intuition is shored up by a theorem. Let f : α → ℕ be a function on α (called a measure in the context of well-founded relations). This function induces a relation ≺ on α defined so that a ≺ b means f a < f b. It’s a theorem (called measure_wf in Lean) that any such relation is well founded.

A first approach might be to try \(f(k) = 1 / k\). Surely \(1 / (k+2) < 1 / k\)? Actually no! As we’re dealing with natural number division, \(1 / k = 0\) for every \(k \ge 2\). Though the induced relation is well founded, it’s useless because we cannot prove that the recursive application is decreasing.

What about \(-k\)? If we were working with integers and not natural numbers, this would work as \(-(k + 2) < -k\). Unfortunately, \(-k\) is \(0\) for every natural number \(k\).

However, all is not lost! Let’s take \(f(k) = \sqrt n + 2 - k\). In our recursive application of \(m_f(n, k)\), we are guaranteed that \(n \ge k^2\). Thus, \(k \le \sqrt n\). From this, it follows that $$ f(k + 2) = \sqrt n - k < \sqrt n + 2 - k = f(k). $$

Putting this together, we have the following definition. Note the use of the ‘single pattern’ trick. We match every input argument with the single pattern k.

open nat
def min_fac_aux (n : ℕ) : ℕ → ℕ | k :=
if h : n < k * k then n else
if k ∣ n then k else
have sqrt n - k < sqrt n + 2 - k, from by -- needed for wf recursion
{ rw nat.sub_lt_sub_right_iff,
  { exact lt_trans (lt_add_one _) (lt_add_one _) },
  { rw nat.le_sqrt, exact le_of_not_gt h } },
min_fac_aux (k + 2)
using_well_founded {
  rel_tac := λ _ _,
    `[exact ⟨_, measure_wf (λ k, sqrt n + 2 - k)⟩]}

A new character, rel_tac, enters the stage. Here, as in most situations, we let Lean fill in the first two arguments by type inference. The last argument is a tactic whose purpose is to synthesize an instance of has_well_founded α. That is, it must (1) provide a relation r and (2) prove that r is well founded. As with dec_tac, we use the backtick-square bracket notation to produce a term of type tactic.unit.

In the example above, the tactic is:

exact ⟨_, measure_wf (λ k, sqrt n + 2 - k)⟩

We use a wildcard _ to let Lean fill in the relation. As intimated earlier,

measure_wf (λ k, sqrt n + 2 - k)

is a proof that the relation induced by the function λ k, sqrt n + 2 - k is well founded.

It’s worth noting that using_well_founded is always at work in the background. If the user doesn’t supply a dec_tac or rel_tac field, the command falls back on default tactics.

Note also that we can supply both dec_tac and rel_tac fields.

Finally, here is how mathlib defines the min_fac function using the auxiliary function above.

def min_fac : ℕ → ℕ
| 0 := 2
| 1 := 1
| (n+2) := if 2 ∣ n then 2 else min_fac_aux (n + 2) 3
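A few #eval sanity checks (values computed by hand from the recurrences above):

#eval min_fac 12 -- 2
#eval min_fac 77 -- 7
#eval min_fac 11 -- 11, since 11 is prime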

Proofs with a custom relation

So we have a min_fac_aux function. Great. But how do we know it does anything useful? We should prove, at the very least, that min_fac_aux n k is a factor of n.

As you might expect by now, a proof using this custom relation is not so different in structure from the construction of the function itself. We’ll extract the proof that the recursion is decreasing from the definition above and state it as a separate lemma.

open nat
lemma min_fac_lemma (n k : ℕ) (h : ¬ n < k * k) :
sqrt n - k < sqrt n + 2 - k :=
begin
rw nat.sub_lt_sub_right_iff,
{ exact lt_trans (lt_add_one _) (lt_add_one _) },
{ rw nat.le_sqrt, exact le_of_not_gt h },
end

Using this result, we complete our proof.

lemma min_fac_dvd (n : ℕ) : ∀ (k : ℕ), (min_fac_aux n k) ∣ n
| k := if h : n < k * k then by { rw min_fac_aux, simp [h] } else
if hk : k ∣ n then by { rw min_fac_aux, simp [h, hk] } else
have _ := min_fac_lemma n k h,
by { rw min_fac_aux, simp [h, hk], exact min_fac_dvd (k+2) }
using_well_founded { rel_tac := λ _ _,
`[exact ⟨_, measure_wf (λ k, sqrt n + 2 - k)⟩]}

Exercises

Read about the Fermat factorisation method. Write a function fermat_fac to implement this method. Your function should rely on an auxiliary function fermat_fac_aux, defined via well-founded recursion.

Prove that fermat_fac n is a factor of n, for every natural number n.

Note: To follow along with this post, create a new Lean file and write the following at the top of your Lean file. If you do not have a copy of Lean installed, you can follow along here (but it will run slower).

In this post we will consider the type of natural numbers \(\mathbb{N} = \{0,1,2,3,…\}\). We would like to examine the structure of its elements, and so naturally we must develop a notion of divisibility. We do this by defining the following relation among the elements of \(\mathbb{N}\).

Definition 1: Divisibility

Given \(a \, b : \mathbb{N}\), we say that “a divides b” (or “a is a divisor of b”) if \( \exists \, c : \mathbb{N}\) such that \(b=a*c\).

Notation: \(*\) denotes multiplication, ‘a divides b’ is written as \(a \mid b\), and is typed “a \mid b” in Lean.

Let us now prove that 1 divides every natural number.

Theorem 1: If \(n : \mathbb{N}\) then \(1 \mid n\)

Proof: Suppose that \(n : \mathbb{N}\). Using Definition 1, we see that \(1 \mid n\) means that \(\exists c : \mathbb{N}\) such that \(n=1\times c\). So it suffices to exhibit a natural number \(c\) for which \(n=1 * c\) holds; the obvious choice is \(c = n\), since \(n = 1 * n\).

QED

theorem one_dvd_n (n : ℕ) : 1 ∣ n :=
begin
use n,
--exhibiting our choice for c
rw one_mul,
-- telling Lean that 1 * n = n
end

Tip: Copy and paste the code into your Lean file and click at the end of each line to watch how the tactic state evolves.

As you can see in the above Lean proof, we use the command “use” to exhibit our choice of natural number to resolve the existential quantifier \(\exists\). To complete the proof, we have to remind Lean that we can rewrite \(1*n\) as \(n\).

Exercise 1: Prove that every natural number divides itself (i.e. if \(n : \mathbb{N}\) then \(n \mid n\)).

I mentioned earlier that we aim to examine the structure of the natural numbers. One interesting question we can ask is how many divisors (including 1 and itself) a number has. Inspect the following table (note that we have omitted 0 and 1):


Natural number    Number of divisors
2                 2
3                 2
4                 3
5                 2
6                 4
7                 2
8                 4
9                 3
10                4
11                2
12                6
13                2
14                4

Notice how the number of divisors dips to a minimum of 2 every now and then; this irregular pattern continues forever. The numbers with only 2 divisors are precisely those whose only divisors are 1 and themselves, namely the primes. Understanding the behaviour of the primes has troubled mathematicians since Euclid in Ancient Greece.

Definition 2: Prime

A natural number p is prime if \(2 \leq p\) and \(\forall d \mid p\) we have \(d = 1\) or \(d = p\).

Let’s see this in Lean:

def prime (p : ℕ) := 2 ≤ p ∧ ∀ d ∣ p, d = 1 ∨ d = p

So now when we write “prime p” in Lean we are saying “p is a prime number”.

Let’s prove that a prime cannot be zero. This may seem trivial, but it isn’t to Lean.

Theorem 2: If \(n : \mathbb{N}\) and \(h_{1} : \textrm{n is prime}\) then \(n \neq 0 \).

Proof: Let us suppose that \(h_{2} : n = 0\) and derive a contradiction (or false). By \(h_{1}\) we know n is prime; by the definition of prime we have the following two hypotheses: \(h_{3} : 2 \leq n\) and \(h_{4} : \forall d \mid n \quad\textrm{we have}\quad d = 1 \quad \textrm{or} \quad d = n\). Now rewriting (or substituting) \(h_{2} : n = 0\) into \(h_{3} : 2 \leq n\) yields \(h_{3} : 2 \leq 0\), which is a contradiction, as required.

QED

lemma prime_ne_zero (n : ℕ) (h₁ : prime n) : n ≠ 0 :=
begin
intro h₂,
-- In Lean n ≠ 0 is defined as n = 0 → false and so we use implication introduction
rw prime at h₁,
--unpacking the definition of prime
cases h₁ with h₃ h₄,
-- splitting the conjunction in the definition of prime
rw h₂ at h₃,
-- substituting n = 0 into the inequality
linarith,
-- the linarith tactic solves goals involving linear inequalities
end

The line “rw prime at h₁,” is not necessary for this proof; however, it does make it easier to make sense of the evolving tactic state. Comment the line out by typing “--” in front of it and see that Lean still accepts the proof.

Exercise 2: Prove that a prime cannot be 1 in Lean.

Lean eliminators combine the notions of recursion and induction. Put another way, eliminators are used to define functions on an inductive type (such as \(\mathbb N\)). The motive describes what kind of function is being defined.

To understand motives, one needs to know about types-dependent-on-terms, a threshold concept (in the language of education) that opens doors to the world of type theory.

In this post, I’ll (1) compare threshold concepts in set theory and type theory and (2) scaffold the motive concept in the special case of eliminators on \(\mathbb N\) through the examples of induction, recursive definition of sequences of integers, and recursive definition of sequences of vectors of increasing length.

Threshold concepts

One of the common sources of confusion for beginning undergraduate mathematicians is the difference between the set membership and subset relations. The problem is pronounced in situations where an object can, in some parts of an argument, act as a set member and, in other parts, as a subset.

A case in point is Cantor’s theorem on the impossibility of a surjection between a set \(A\) and its power set \(\mathcal P (A)\). Cantor’s proof assumes, for a contradiction, that there exists a surjection \(f : A \to \mathcal P(A)\). Let \(B = \{ x \in A : x\notin f(x)\}\). Then \(B\) is simultaneously a subset of \(A\) and an element of \(\mathcal P(A)\). As \(f\) is surjective, there exists \(c \in A\) such that \(f(c) = B\).

The contradiction arises by considering each of the cases \(c \in B\) and \(c \notin B\). Understanding what is meant by \(c \in B\), \(c \notin B\) and \(f(c) = B\) is crucial to appreciating Cantor’s proof.

For a teacher, the challenge is that the set concept is so deeply embedded that it is almost impossible to understand the source of the student’s confusion.

All things are difficult before they are easy

variously attributed to Saadi Shirazi and Thomas Fuller.

This skill of unpacking and using set-theoretic definitions is an example of a threshold concept (Meyer and Land, 2003):

“A threshold concept can be considered as akin to a portal, opening up a new and previously inaccessible way of thinking about something. It represents a transformed way of understanding, or interpreting, or viewing something without which the learner cannot progress.”

Once the set concept is mastered, many proofs in undergraduate mathematics become a matter of unfolding definitions. Think about how this relates to the task of proving that if \(N\) and \(K\) are normal subgroups of \(G\), then so is \(NK\). I’ve seen students fall over this when their concepts of group and normal subgroup are founded not in (set-theoretic) definitions but in the manipulation of examples.

Mastery of a threshold concept is irreversible: “the change of perspective occasioned by acquisition of a threshold concept is unlikely to be forgotten, or will be unlearned only by considerable effort” (ibid.).

Learning to work with Lean confronted me with threshold concepts in type theory. Grappling with these concepts gave me a renewed appreciation of the analogous challenges facing students new to set theory.

In type theory, a term can simultaneously have a type and be a type. In most situations, only one of these aspects is significant. As in the proof of Cantor’s theorem, difficulties can arise in an argument where both aspects are relevant for the same object. In our later discussion of motives, we’ll view types dependent on terms through both these lenses.

Types dependent on terms

Dependent type theory abounds with types dependent on terms, as described in Nederpelt and Geuvers (2014), Chapter 5. For data-driven examples of such objects, see my post on vectors and dependent function types. Types dependent on terms also arise naturally through the propositions-as-types paradigm. Consider, for example, the predicate \(P\) on \(\mathbb N\) defined by \(P(n) := 2n > 10\).

Type theoretically, \(P : \mathbb N \to \mathsf{Prop}\). That is, \(P\) is a function that takes a natural number and returns a proposition. In particular, for each natural number \(n\), the term \(P(n)\) has type \(\mathsf{Prop}\), written \(P(n) : \mathsf{Prop}\).

Moreover, in the proposition-as-types paradigm, \(P(n)\) is a type: the type of all proofs of \(P(n)\). Equally, to say \(h : P(n)\) is to say that the term \(h\) is a proof of \(P(n)\). This is the proofs-as-terms paradigm.

These ideas are made concrete in Lean:

def P (n : ℕ) : Prop := 2 * n > 10
example : Prop := P 6
example : P 6 := show 2 * 6 > 10, from dec_trivial

The first line above defines the predicate P, the second shows P 6 : Prop, i.e. that P 6 has type Prop, equally that P 6 is a proposition. The third line exhibits a term of type P 6 which, by the proofs-as-terms paradigm, is a proof of P 6. Here, we employ a proof by dec_trivial, such proofs being the subject of a previous post on decidable propositions.

In practice, there is never (or rarely) any confusion as to whether P n is to be treated as a term or as a type. The Lean user can happily prove results ignorant of the technical distinction.

This should not come as a surprise. An English language speaker doesn’t need to understand the terms ‘subject’ and ‘object’ to understand the semantic difference between the uses of ‘dog’ in ‘the dog ate the fish’ and ‘the fish ate the dog’.

Moreover, it is questionable whether knowing the subject-object distinction adds any value in conversations on canine-piscine diets. Grammatical frameworks are more useful for the study of language, foreign language acquisition, and natural language processing.

In both the type-theoretic and English-language contexts, the distinctions mentioned above are threshold concepts. Mastering them opens up previously inaccessible knowledge.

It is worth acknowledging that mastery is rarely an instantaneous affair. Reading the definitions of ‘subject’ and ‘object’ and working with simple examples does not instil in a person the ability to identify the subject and object in any given circumstance. What is the subject of the previous sentence? What is the subject of the previous sentence?

The same holds for mathematical concepts. Being able to repeat a definition does not evidence mastery. Rather, like Tamino and Pamina in The Magic Flute, the mathematician’s understanding of a concept is tested and shaped by applying it in ever more strenuous ordeals.

Only when I had digested the types dependent on terms concept was I able to understand the notion of motive as it appears in Chapter 7: Inductive Types of Theorem Proving in Lean (Avigad, de Moura, & Kong, 2021).

This notion underlies Lean’s primitives for dealing with recursion and induction. Of course, one can do recursion and induction in Lean without understanding motives. First, Lean + mathlib provide user-friendly interfaces such as pattern matching, and induction and cases tactics. Second, one can get by (as one does in mathematics and English language) by working from existing examples rather than having a ‘grammatical’ understanding.

Nevertheless, understanding motives provides a deeper level of understanding and permits one to wield Lean’s eliminators confidently.

In the remainder of this post, we’ll scaffold the notions of types dependent on terms, induction, recursion, eliminators, and motives in Lean, starting first with the familiar notion of induction on \(\mathbb N\).

Informal induction on nat

Recall first the ordinary idea of induction on \(\mathbb N\). Informally, to prove a statement \(P(n)\) holds for every natural number \(n\), it suffices to prove (0) that \(P(0)\) is true and (1) for every natural number \(k\), if \(P(k)\) is true, then \(P(k+1)\) is true. For this reason, induction is likened to the domino effect. If (0) one can knock down domino \(0\) and (1) for every \(k\), knocking down domino \(k\) allows one to knock down domino \(k+1\), then one can knock down every domino.

Symbolically, given a predicate \(P\) on \(\mathbb N\), to prove \(\forall n, P(n)\) is to prove (0) \(P (0)\) and (1) \(\forall k, P(k) \to P(S(k))\).

Here, \(S\) is the ‘successor’ function. One should think of \(S(b)\) as the number after \(b\).

The assertion (0) \(P(0)\) is called the base case. Assertion (1) is the induction step and, when applying (1), the assertion \(P(k)\) is called the induction hypothesis.

As an example, we give an informal proof that \(\forall n, 0 + n = n\). The starting point is a recursive definition of addition. For every \(a\), we define \(a + 0 = a\). For every \(a\) and every \(b\), we define \(a + S(b) = S(a+b)\):

$$\begin{align*} a + 0 &= a, \\ a + S (b) &= S(a + b). \end{align*} $$

Let the predicate \(P\) be defined so that \(P(n)\) means \(0 + n = n\). By induction, to prove our goal, it suffices to prove \(P(0)\) and \(\forall k, P(k) \to P(S(k))\).

The base case, \(P(0)\), states \(0 + 0 = 0\), which is precisely our first law of addition, applied with \(a = 0\).

To prove \(\forall k, P(k) \to P(S(k))\), we assume \(k \in \mathbb N\), assume the induction hypothesis \(h : P(k)\), and seek to prove \(P(S(k))\). Now \(h : 0 + k = k\) and our goal is to prove \(0 + S(k) = S(k)\). By the second law of addition, the goal is rewritten as \(S(0 + k) = S(k)\). Rewriting with the induction hypothesis, the goal is \(S(k) = S(k)\), which is true by reflexivity.

Formal induction on nat

The informal notion of induction is captured in Lean by the following definition. The attribute [elab_as_eliminator] here is needed so that Lean can properly infer the motive.

@[elab_as_eliminator, reducible]
def nat.ind (P : ℕ → Prop) (n : ℕ) (h₀ : P 0)
(h₁ : ∀ (k : ℕ), P k → P k.succ) : P n :=
nat.rec_on n h₀ h₁

Above, P : ℕ → Prop is a predicate, h₀ : P 0 the base case, and h₁ : ∀ (k : ℕ), P k → P k.succ the induction step. The definition provides a proof of P n, for a given n : ℕ. Here, nat.rec_on is a built-in Lean recursive principle (an eliminator), automatically generated from the definition of nat. We’ll return to this later. For now, we’ll use nat.ind to give a formal proof of the result in the previous section, that ∀ a, 0 + a = a.

First, we record basic facts about addition whose proofs follow immediately from the definition of addition—a definition we’ll see later.

open nat
lemma add_zero (a : ℕ) : a + 0 = a := rfl
lemma add_succ (a b : ℕ) : a + succ b = succ (a + b) := rfl

For the result we have in mind, we’ll define the predicate P as follows:

def P := λ k, 0 + k = k

For instance, P 0 is simply the statement 0 + 0 = 0.

example : (P 0) = (0 + 0 = 0) := rfl

We prove this base case by a simple application of add_zero.

lemma base : P 0 := add_zero 0

It remains to prove the inductive step, ∀ k, P k → P k.succ. I’ve commented the proof below with the context and goal at each step.

lemma ind_step : ∀ k, P k → P k.succ :=
begin
unfold P,
intros k ih, -- k : ℕ, ih : 0 + k = k, ⊢ 0 + (succ k) = succ k
rw add_succ, -- ⊢ succ (0 + k) = succ k
rw ih, -- ⊢ succ k = succ k
end

To complete the proof, we merely apply nat.ind.

lemma zero_add (a : ℕ) : 0 + a = a := nat.ind P a base ind_step

Of course, there are shorter proofs. Here’s one in which we use a lambda abstraction in place of P.

lemma zero_add (a : ℕ) : 0 + a = a :=
begin
apply nat.ind (λ k, 0 + k = k),
{ refl, }, -- base case
{ intros k ih, rw [add_succ, ih], }, -- induction step
end

Finally, for the term-mode fans, we have the following:

lemma zero_add (a : ℕ) : 0 + a = a := nat.ind _ a rfl
(λ k ih, eq.trans (add_succ 0 k) (ih.symm ▸ rfl))

Note that Lean is able to infer the type of P above, permitting us to replace P with the placeholder _. We can take advantage of inference in our tactic mode proof too:

lemma zero_add (a : ℕ) : 0 + a = a :=
begin
apply nat.ind _ a,
{ refl, },
{ intros k ih, rw [add_succ, ih], },
end

A nice compromise is the refine tactic via which we can give a proof term for the base case and a tactic proof for the induction step.

lemma zero_add (a : ℕ) : 0 + a = a :=
begin
refine nat.ind _ a rfl _,
{ intros k ih, rw [add_succ, ih], },
end

Recursively defined sequences of integers

Consider the sequence \((a_n)_{n=0}^\infty\) defined by \(a_0 := 6\) and \(a_{k+1} := 5 + 2 a_k\). This is a recursive definition of the sequence. One computes, for example, \(a_2\) by repeated application of the definition. Thus, $$ a_2 = 5 + 2 a_1 = 5 + 2(5 + 2a_0) = 5 + 2(5 + 2 \times 6). $$

Note the recursive definition of a sequence \((a_n)_{n=0}^\infty\) must provide (0) the initial term, \(a_0\) of the sequence and (1) for every natural number \(k\), an expression of \(a_{k+1}\) in terms of \(a_k\) and \(k\).

Another example is the sequence \((b_n)_{n=0}^\infty\) defined by \(b_0 := 0\) and \(b_{k+1} = k + b_k\).

To formalise this, we introduce a principle for defining sequences of integers in Lean. That is, a method for defining functions from ℕ to ℤ.
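Here is one way to write such a principle. This is a sketch: it simply mirrors, with a constant codomain ℤ, the nat.vec_seq principle that appears later in the post.

```lean
-- a recursion principle for ℤ-valued sequences indexed by ℕ
def nat.int_seq (n : ℕ) (a₀ : ℤ)
  (h : Π (k : ℕ) (ak : ℤ), ℤ) : ℤ :=
nat.rec_on n a₀ h
```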

In any given application of nat.int_seq, we must provide a₀, the initial term of the sequence, and h, the recursive part of the definition. More specifically, h must be a function that takes an index k and the value ak of the \(k\)-th term of the sequence and returns the value of the \(k+1\)-th term of the sequence.

Our sequence \((a_n)_{n=0}^\infty\) above is formalised as:
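A sketch of the formalisation applies nat.int_seq with initial term 6 and recursive step λ k ak, 5 + 2 * ak. The names seq1 and seq1_succ are assumptions on my part, chosen to match the proof that follows.

```lean
def seq1 (n : ℕ) : ℤ := nat.int_seq n 6 (λ k ak, 5 + 2 * ak)

-- the defining recurrence holds by definitional unfolding
lemma seq1_succ (n : ℕ) : seq1 (n + 1) = 5 + 2 * seq1 n := rfl
```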

As you might expect, we can prove, via induction, a general formula for the \(n\)-th term of the sequence. Make sure you have import tactic as the first line of your Lean file if you want to run the following.

lemma seq1_formula (n : ℕ) : seq1 n = 11 * 2 ^ n - 5 :=
begin
apply nat.ind (λ n, seq1 n = 11 * 2 ^ n - 5),
{ refl, },
{ intros k ih, rw [seq1_succ, ih], ring_exp, },
end

As before, the proof can be shortened.

lemma seq1_formula (n : ℕ) : seq1 n = 11 * 2 ^ n - 5 :=
nat.ind _ n rfl (λ k ih, by {rw [seq1_succ, ih], ring_exp })

The sequence \((b_n)_{n=0}^\infty\) given above has the following Lean definition:

def triangle (n : ℕ) : ℤ :=
nat.int_seq n 0 (λ k ak, k + ak)

Exercise

Fill in the sorry below. You may wish to prove an auxiliary lemma.

Ensure you have import data.vector at the top of your Lean file for this section. We write a recursion principle intended for defining sequences \((v_n)_{n=0}^\infty\) where for each natural number \(n\), the term \(v_n\) is a vector of natural numbers of length \(n+1\).

For reasons that will become apparent in the next section, we first define natvec so that natvec k is the type of vectors of natural numbers of length k + 1.

def natvec (k : ℕ) := vector ℕ (succ k)

Our recursion principle is:

def nat.vec_seq (n : ℕ) (a₀ : natvec 0)
(h : Π (k : ℕ) (ak : natvec k), natvec (succ k)) :
natvec n := nat.rec_on n a₀ h

Here, a₀ is the initial term in the sequence and h is the recursive part of the definition. It takes a natural number k and ak : natvec k, the previous term in the sequence, and produces a new term of type natvec (succ k).

As an application of this principle, we define a sequence of vectors of triangular numbers.

def vseq_triangle (n : ℕ) : natvec n :=
nat.vec_seq n ⟨[0], rfl⟩ (λ k ak, vector.cons (k + ak.head) ak)

Here, for example, is the 5th term in the sequence.
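One way to display it is to evaluate the underlying list; the entries are the first few triangular numbers in decreasing order:

```lean
#eval (vseq_triangle 5).to_list -- [10, 6, 3, 1, 0, 0]
```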

Compare the recursive parts of the definitions nat.int_seq and nat.vec_seq:

h : Π (k : ℕ) (ak : ℤ), ℤ

h : Π (k : ℕ) (ak : natvec k), natvec (succ k)

In the former case, each term of the sequence has type ℤ. This is evinced by the type of h: it takes the k-th term ak : ℤ and returns the (k+1)-th term, also of type ℤ.

In the latter case, the k-th term has type natvec k. Equally, it is a vector of natural numbers of length k + 1. The function h returns the (k+1)-th term of the sequence, here a term of type natvec (succ k).

If it seems unnatural (pardon the pun) for the n-th term of the sequence to have length n + 1, we can achieve much the same effect via a different recursion principle:

def nat.vec_seq_simple (n : ℕ) (h : Π (k : ℕ) (ak : vector ℕ k), vector ℕ (k+1)) : vector ℕ n :=
nat.rec_on n vector.nil h

The initial term of a sequence created by the above definition is fixed to be the nil vector. How then can we refer to the head of the ‘previous’ term in the sequence? One option is to write an auxiliary function that uses pattern matching on k:

def next_vec : Π (k : ℕ), vector ℕ k → vector ℕ (k+1)
| 0 _ := ⟨[0], rfl⟩
| m@(a+1) am := vector.cons (m + am.head) am
def vseq_triangle' (n : ℕ) : vector ℕ n :=
nat.vec_seq_simple n next_vec

However, this is ‘cheating’ as the equation compiler converts this expression into a proof term involving the built-in nat.rec_on (or, which is much the same, nat.cases_on). In the next section, we’ll see how to do this manually.

Motives and general recursion over nat

The principles of induction and recursion so far introduced are all special cases of a built-in recursion / elimination principle, nat.rec_on. The command #check @nat.rec_on gives its type:

nat.rec_on : Π {C : ℕ → Sort u_1} (n : ℕ), C 0 →
(Π (n : ℕ), C n → C n.succ) → C n

This can look daunting at first blush. It’s a little easier to comprehend if we name the hypotheses:

universe u
@[elab_as_eliminator, reducible]
def nat.rec_on' {C : ℕ → Sort u} (n : ℕ) (h₀ : C 0)
(h₁ : Π (k : ℕ), C k → C k.succ) : C n := nat.rec_on n h₀ h₁

nat.rec_on is a principle for the definition of ℕ-indexed sequences \((a_n)_{n=0}^\infty\) where for each natural number n, the type of the term \(a_n\) is C n. Compare this with the principle nat.vec_seq from the Section Sequences of vectors of increasing length. The only difference is that we’ve generalised from natvec n to C n.

Motives

The quantity C is called the motive for the elimination (Avigad et al., 2021). In each particular application, the motive determines what type of function is being constructed. We use nat.rec_on to define a function that gives, for each n : ℕ, a term of type C n; more formally, to define a dependent function of type Π (n : ℕ), C n.

Elimination principle    Motive
nat.ind                  P, a term of type ℕ → Prop
nat.int_seq              λ (k : ℕ), ℤ
nat.vec_seq              natvec

Note C is required to have type ℕ → Sort u. That is, C is a function that takes a natural number and returns a type (or, more precisely, a sort). Such a quantity is called a type constructor.

This is the stage at which we need to be clear about the fact that a term can both have a type and be a type. For a fixed C : ℕ → Sort u and for a given n : ℕ, the term C n is a type, but it also has a type, viz. Sort u.

It is entirely possible for C to be a ‘constant’ function. This is the case for nat.int_seq, where C is the function λ (k : ℕ), ℤ that maps each k to ℤ.

nat as an inductive type

The eliminator nat.rec_on isn’t defined in any line of Lean source code. Rather, Lean automatically creates an eliminator (indeed several eliminators) for each inductive type. The natural number type has the following definition.

inductive nat
| zero : nat
| succ (k : nat) : nat

This simply means that every term n : ℕ is either zero or is succ k, for some k : ℕ. For this reason, each of the functions zero and succ is called a constructor of the ℕ type. Here, for example, is a term of type ℕ, constructed via two applications of succ and one of zero.

example : ℕ := succ (succ zero)

You will have noticed that this definition of ℕ closely mirrors the familiar notions of proof by induction and recursion. This is not a coincidence, as we’ll now see.

Major and minor premises

The eliminator nat.rec_on has two arguments h₀ : C 0 and h₁ : Π (k : ℕ), C k → C k.succ called minor premises. Each minor premise corresponds to a constructor of the ℕ type. Recall the purpose of nat.rec_on is to associate with each n : ℕ a term of type C n. The term n is called the major premise.

Let’s pause to consider why this information is precisely what is required to define a function of type Π (n : ℕ), C n.

Each n : ℕ is constructed either from (0) nat.zero or (1) nat.succ. Our job is to give a term of type C n. In case (0), this is equivalent to giving a term of type C 0, which is precisely the minor premise h₀. In case (1), the succ constructor takes one argument, itself of type ℕ. Let k : ℕ be this argument. Then n = succ k and to define C n is to specify how C (succ k) depends on C k. More generally, to give a function Π (k : ℕ), C k → C k.succ. This is the minor premise h₁.

Applications

In the section on sequences of vectors, we used pattern matching to define a sequence vseq_triangle' such that for each n : ℕ, vseq_triangle' n is the length-n vector \([t_{n-1}, t_{n-2}, \dots, t_1, t_0]\), where \(t_k\) is the \(k\)-th triangular number. We can do the same without pattern matching via two applications of nat.rec_on.

To keep things manageable, we write the second minor premise as a separate function, itself defined via nat.rec_on.

def next_vec' : Π (k : ℕ), vector ℕ k → vector ℕ (k+1) :=
λ k, nat.rec_on k (λ am, ⟨[0], rfl⟩)
(λ m h am, vector.cons (m + am.head) am)

From this, we define vseq_triangle', much as in our previous example.

def vseq_triangle' (n : ℕ) : vector ℕ n :=
nat.rec_on n vector.nil next_vec'

As promised, we can now give a definition of addition. First, we write a function add_two that adds two to its argument. Such a function must return 2 when its argument is 0. Let k : ℕ and write h for add_two k. Informally, \(h = k + 2\). Then \((k+1) + 2 = (k + 2) + 1 = h + 1\). Formally, we require add_two (succ k) = succ h. This translates into the argument λ k h, succ h for the second minor premise:

def add_two (n : ℕ) : ℕ := nat.rec_on n 2 (λ k h, succ h)

As an application, we add two to 4.

example : add_two 4 = 6 := rfl

The motive of this definition is λ x, ℕ, as can be seen from the following definition add_two' and the proof of its equivalence with add_two.

def add_two' (n : ℕ) : ℕ :=
@nat.rec_on (λ x, ℕ) n 2 (λ k h, succ h)
example (n : ℕ) : add_two n = add_two' n := rfl

The above argument generalises if we replace 2 with m for a natural number m to give a function myadd such that myadd m is the function that adds m to its argument.

def myadd (m n : ℕ) : ℕ := nat.rec_on n m (λ k h, succ h)

For example, we add 11 to 5.

example : myadd 11 5 = 16 := rfl

The definition of nat.add given in core Lean uses pattern matching rather than nat.rec_on. As a result, nat.add is not definitionally equal to myadd. However, the two functions are (extensionally) equal. We prove this by (you guessed it) induction.

lemma myadd_eq_add (m n : ℕ) : myadd m n = nat.add m n :=
begin
apply nat.rec_on n,
{ refl, },
{ intros k ih,
dsimp [myadd, nat.add] at *, rw ih, },
end

Exercises

What is the difference between a type constructor and a dependent function?

In the definition of next_vec', what is the motive in the application of nat.rec_on?

What is the motive for the definition of myadd_eq_add?

Adapting the ideas used to define vseq_triangle, fill in the sorrys below. You may need to define auxiliary functions.

Using nat.rec_on and myadd, write the definition of a function mymul : ℕ → ℕ → ℕ such that the following results hold.

lemma mymul_zero (m : ℕ) : mymul m 0 = 0 := rfl
lemma mymul_succ (m n : ℕ) :
mymul m (succ n) = myadd (mymul m n) m := rfl

Prove mymul is equivalent to nat.mul by filling in the sorry below.

lemma mymul_eq_mul (m n : ℕ) : mymul m n = nat.mul m n :=
sorry

What’s next?

This post has focussed entirely on defining functions of type Π (n : ℕ), C n via the eliminator nat.rec_on and its special cases. As discussed, this eliminator is generated automatically by Lean from the definition of the inductive type ℕ.

Each inductive type α has its own eliminator(s), with minor premises tailored to the constructors of α. In a forthcoming post, we will look at inductive types and their eliminators in greater depth.

Exercise

What might one mean by an eliminator for the type of binary trees with integer valued nodes?

References

Avigad, J., de Moura, L., and Kong, S. (2021). ‘Theorem Proving in Lean’. Microsoft Research, accessed 30 May 2021.

Meyer, J. H. F. and Land, R. (2003). ‘Threshold Concepts and Troublesome Knowledge: Linkages to Ways of Thinking and Practising within the Disciplines’. In Improving Student Learning: Theory and Practice – Ten Years On. Oxford: OCSLD.

Nederpelt, R. and Geuvers, H. (2014). Type Theory and Formal Proof: An Introduction. Cambridge University Press.

Hi, I am Omar Harhara. I have just finished the 4th year of my integrated masters mathematics degree. In this post I will describe my experiences with Lean to date. I started learning to use Lean in September 2020 with the natural number game and Mathematics in Lean.

For my dissertation, I investigated Lean’s usefulness as a teaching tool, specifically in the context of an introductory proof course. This involved holding weekly “Proof Skills” sessions alongside James Arthur (who will also be blogging on this page). These sessions supplemented the delivery of MTH1001 – Mathematical Structures at the University of Exeter and focussed on developing proof comprehension and construction with the use of Lean. The structure of these sessions was based on student feedback. We opened the sessions with a “pen and paper” proof, with emphasis on Lean terminology (goals, types, local context, etc.), and then proceeded to exhibit the same proof and/or similar proofs in Lean once students were familiar with the concepts at hand.

We found that Lean offered students an alternative perspective on the proof at hand. Students often struggle with keeping track of what remains to be proved and knowing when a proof is complete. Lean answers these questions instantly, guiding students through the proof by showing the remaining goals in the tactic state. For this reason, I believe Lean shows great promise as a teaching tool. This summer I will be developing Lean learning resources here.

Ordinary mathematical functions have a domain and a codomain. In Lean, we write f : α → β to denote that f is a function with domain α and codomain β. More briefly, that f is a function from α to β.

The following is a simple example of a function from ℕ to ℤ that we’ve called double.

def double (n : ℕ) : ℤ := 2 * n

We can check that double has the expected type by issuing the command #check double. This displays double : ℕ → ℤ. Alternatively, we can prove that double has the expected type, as in the following example.

example : ℕ → ℤ := double

The odd-looking statement above says, “Here’s an example: I’m going to produce a term of type ℕ → ℤ. And double is just such a term”. Lean’s type-checking mechanism goes to work. It finds no errors and stays quiet. Were you to replace ℤ above with ℕ, you’d get an error message. Try it and see.

Lean permits a more general definition of function. The codomain of a so-called dependent function can depend on the type of its argument. Our aim in this post is to fill in the sorry below to define a dependent function.

def zero_vec (n : ℕ) : vector ℝ n := sorry

Important: if you’re following along at home, you’ll need to place import data.vector data.real.basic at the top of your Lean file.

The function zero_vec should map its input n to a length n zero vector of real numbers. The domain of the function is still ℕ, but the codomain vector ℝ n depends on the value n.

Lists and vectors

To start, we’ll define a function that produces a list of length n consisting of real zeros. Below, note that the codomain is list ℝ, the type of lists with real entries.

def zero_list (n : ℕ) : list ℝ := list.repeat 0 n

This is a simple application of one of the core Lean functions for working with lists. I’m sure you can guess what list.repeat does. To check our intuition, we have the following special case.

example : zero_list 5 = [0,0,0,0,0] := rfl

The rfl here is a proof by reflexivity, an (iterated) application of the definitions until the computer is left with two terms that are clearly equal.

What’s the difference between a vector of length n and a list of length n? The list type doesn’t “know” about the length of its terms. A term of type vector ℝ n consists of two items of information: (1) a list and (2) a proof that the list has length n. Here’s an example.

def myvec : vector ℝ 5 := ⟨[5,10,15,20,25], rfl⟩

The vector myvec consists of the list [5,10,15,20,25] and rfl, the proof that this list has length 5.

Using the same idea, we construct a vector of length 100, consisting of zeros.

example : vector ℝ 100 := ⟨zero_list 100, rfl⟩

So surely, it’s now a simple matter of replacing 100 with n to produce a vector of length n. But the following doesn’t work.

example (n : ℕ) : vector ℝ n := ⟨zero_list n, rfl⟩

Why not? A proof by rfl involves a finite computation. You can see this in action if you replace 100 above with a larger number. The proof takes longer and longer until you hit your computer’s limit. On my computer, using 5000 produces a “deep recursion” error.

When we replace 100 with n, we’re asking for something entirely different: a proof that works when n is arbitrary. This is not the same as fixing n and asking for a proof for that particular n. In much the same way, it’s a finite computation to determine, for any given even number \(n > 2\), whether \(n\) is a sum of two primes. However, it’s quite a different thing to do this for arbitrary \(n\).

Conveniently, core Lean has a lemma list.length_repeat that shows a list produced by repeating a single term n times has length n.
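Concretely, the lemma applies to our zero lists as follows. (This is a sketch; you can confirm the lemma’s exact statement in your own copy with #check list.length_repeat.)

example (n : ℕ) : (zero_list n).length = n := list.length_repeat 0 n

Since zero_list n is, by definition, list.repeat 0 n, the lemma gives precisely the length proof we need.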

Dependent functions

Using list.length_repeat to provide the proof, we can finally define our function.

def zero_vec (n : ℕ) : vector ℝ n :=
⟨zero_list n, list.length_repeat 0 n⟩

Issuing the #check zero_vec command produces:

zero_vec : Π (n : ℕ), vector ℝ n

The Pi binder Π is merely fancy notation for writing a dependent function type. It shouldn’t be confused with the use of Π in mathematics to denote a product. For the dependent function zero_vec, the domain is ℕ, but we label it with a name (here n) to be used as an argument for the codomain vector ℝ n. There’s nothing special about n. The following proves that zero_vec has type Π (m : ℕ), vector ℝ m.

example : Π (m : ℕ), vector ℝ m := zero_vec

In fact, all functions in Lean have a dependent function type. For ordinary functions, it just so happens that the codomain is a constant ‘function’ of the input value.

Consider again our function double.
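(If you don’t have the earlier definition of double to hand, the following variant, an assumption on my part rather than the original definition, will do for this example.)

def double (n : ℕ) : ℤ := 2 * (n : ℤ)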

example : Π (n : ℕ), ℤ := double

The above shows that double is a dependent function, albeit one in which the codomain ℤ does not depend on n.
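Indeed, the arrow notation ℕ → ℤ is shorthand for a Π-type whose codomain ignores the bound variable, so the two type expressions are interchangeable. As a quick sanity check (a sketch, but one Lean should accept by reflexivity):

example : (ℕ → ℤ) = (Π (n : ℕ), ℤ) := rfl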

Exercise

Generalise zero_vec to a function const_vec such that const_vec c n is a length n vector all of whose entries are c, for some real constant c. What is the type of const_vec?

Generalise further to a function const_vec' where const_vec' c n is a length n vector all of whose entries are c, for some c : α, for an arbitrary type α. What is the type of const_vec'?

(Hard) Write a function that adds two vectors of natural numbers of length n. Do this by filling in the sorry below.

def add_vec {n : ℕ} (v₁ v₂ : vector ℕ n) : vector ℕ n := sorry

Run a test on your definition by ensuring the following gives no error.

example : add_vec ⟨[0,1,2],rfl⟩ ⟨[5,6,7],rfl⟩ = ⟨[5,7,9], rfl⟩ := rfl

You’ll need experience writing recursive definitions in Lean.

(Harder) Having finished the exercise above, show that the nth entry of add_vec v₁ v₂ is the sum of the nth entries of v₁ and v₂. That is, fill in the sorry below:

lemma nth_add_nth : Π {n : ℕ} (v₁ v₂ : vector ℕ n) (a : fin n),
  v₁.nth a + v₂.nth a = (add_vec v₁ v₂).nth a := sorry

That is, it is decidable whether x = x, for any x : α, where α is an arbitrary Sort. We’ll use this to show that 5 ≠ 5 is decidable.

example : decidable (5 ≠ 5) := infer_instance

How did this work? Surely we can only prove that 5 = 5 is decidable, not 5 ≠ 5? But recall that we’ve already proved that ¬p is decidable if p is. Type class inference chains these two instances together.
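We can exhibit the two links in the chain separately; both instances are found by infer_instance.

-- equality of natural numbers is decidable
example : decidable (5 = 5) := infer_instance
-- and the negation of a decidable proposition is decidable
example : decidable (¬(5 = 5)) := infer_instance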

True and false propositions

Consider the following proof:

example : ite (5 = 5) true false := trivial

The term trivial is a proof of true. But ite (5 = 5) true false is definitionally equal to true. Hence trivial is an appropriate proof!
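We can ask Lean to confirm this definitional equality directly: evaluating the decidable instance for 5 = 5 reduces the if-then-else to its first branch, so reflexivity closes the goal.

example : ite (5 = 5) true false = true := rfl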

For the remainder of this post, we’ll use a stripped-down version of the natural numbers, mynat. We do this to hide the type class instances that Lean knows about for the ordinary natural number type. If you don’t want to use mynat, please just replace it everywhere below with nat.

Here’s our proof of decidable (a ≤ b).

instance decidable_mynat_le (a : mynat) : ∀ b : mynat,
decidable (a ≤ b) :=
begin
induction a with a ha,
{ intro b, apply is_true, apply mynat.zero_le, },
{ intro b,
induction b with b hb,
{ apply is_false, apply not_succ_le_zero, },
{ cases ha b with h h,
{ apply is_false, intro h₂,
apply h, exact le_of_succ_le_succ h₂, },
{ apply is_true, exact succ_le_succ_of_le h, }, } },
end

By contrast, 30 ≤ 4 is false, so an attempt to prove ite (30 ≤ 4) true false using trivial is rejected with the following error:

type mismatch, term
trivial
has type
true
but is expected to have type
ite (30 ≤ 4) true false

The previous examples were straightforward applications of single decidable instances. Of more interest, we have that (5 ≤ 6) ∧ (x = x) is a true decidable proposition for any x. We know this because the type class inference system allows Lean to derive a decision procedure for (5 ≤ 6) ∧ (x = x) by chaining together the decidability of 5 ≤ 6 and that of x = x.
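To see this chaining at work, we can ask type class inference for the combined instance directly. Behind the scenes, the instance making p ∧ q decidable whenever p and q are glues together the decidability of the two conjuncts.

example (x : ℕ) : decidable ((5 ≤ 6) ∧ (x = x)) := infer_instance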

As an exercise, come up with an example in which the second pattern of of_as_true is matched. You’ll end up proving a false statement, hence at least one of your hypotheses must be false.

Lean uses the notation dec_trivial as a shorthand for of_as_true trivial.

example : (5 ≠ 7) ∧ (10 ≤ 20) := dec_trivial

Exercise

Referring to the definition of mynat, fill in the sorry below: