Tactics: More Basic Tactics

Overview

This chapter introduces several additional proof strategies and tactics that allow us to begin proving more interesting properties of functional programs. We will see:
  • how to use auxiliary lemmas in both "forward-style" and "backward-style" proofs;
  • how to reason about data constructors (in particular, how to use the fact that they are injective and disjoint);
  • how to strengthen an induction hypothesis (and when such strengthening is required); and
  • more details on how to reason by case analysis.

Set Warnings "-notation-overridden,-parsing".
From DMFP Require Export Poly.

The apply Tactic

We often encounter situations where the goal to be proved is exactly the same as some hypothesis in the context or some previously proved lemma.

Theorem silly1 : ∀ (n m o p : nat),
     n = m →
     [n;o] = [n;p] →
     [n;o] = [m;p].
Proof.
  intros n m o p eq1 eq2.
  rewrite <- eq1.
Here, we could finish with "rewrite eq2. reflexivity." as we have done several times before. We can achieve the same effect in a single step by using the apply tactic instead:

  apply eq2. Qed.
silly1 is the first theorem we've seen that has multiple hypotheses. How should we parse the following proposition?

     n = m →
     [n;o] = [n;p] →
     [n;o] = [m;p].
The answer is that implication is right associative, that is, we assume that parentheses go on the right, as in:

     n = m →
     ([n;o] = [n;p] →
      [n;o] = [m;p]).
If we were to read this proposition aloud, we could naively read it as:
    if it is the case that [n = m], 
    then if it is the case that [[n;o] = [n;p]]
         then it is the case that [[n;o] = [m;p]].
But we can read it more naturally as:
    if it is the case that [n = m] 
                       and [[n;o] = [n;p]],
    then it is the case that [[n;o] = [m;p]].
That is, we've translated nested implications to a conjunction, with the word "and". We'll talk more about conjunctions in Logic.
The apply tactic also works with conditional hypotheses and lemmas: if the statement being applied is an implication, then the premises of this implication will be added to the list of subgoals needing to be proved. If there are no premises (i.e., the statement being applied isn't an implication), then you're done!

Theorem silly2 : ∀ (n m o p : nat),
     n = m →
     (∀ (q r : nat), q = r → [q;o] = [r;p]) →
     [n;o] = [m;p].
Proof.
  intros n m o p eq1 eq2.
  apply eq2. apply eq1. Qed.
Typically, when we use apply H, the statement H will begin with a ∀ that binds some universal variables. When Coq matches the current goal against the conclusion of H, it will try to find appropriate values for these variables. For example, when we do apply eq2 in the following proof, the universal variable q in eq2 gets instantiated with n and r gets instantiated with m.

Theorem silly2a : ∀ (n m : nat),
     (n,n) = (m,m) →
     (∀ (q r : nat), (q,q) = (r,r) → [q] = [r]) →
     [n] = [m].
Proof.
  intros n m eq1 eq2.
  apply eq2. apply eq1. Qed.

Exercise: 2 stars, standard, optional (silly_ex)

Complete the following proof without using simpl.

Theorem silly_ex :
     (∀ n, evenb n = true → oddb (S n) = true) →
     evenb 3 = true →
     oddb 4 = true.
Proof.
  (* FILL IN HERE *) Admitted.
To use the apply tactic, the (conclusion of the) theorem, lemma, hypothesis, or other fact being applied must match the goal exactly -- for example, apply will not work if the left and right sides of the equality are swapped.

Theorem silly3_firsttry : ∀ (n : nat),
     true = eqb n 5 →
     eqb (S (S n)) 7 = true.
Proof.
  intros n H.
  simpl.
Here we cannot use apply directly, but we can use the symmetry tactic, which switches the left and right sides of an equality in the goal.

  symmetry.
  simpl. (* (This simpl is optional, since apply will perform
            simplification first, if needed.) *)

  apply H. Qed.

Exercise: 3 stars, standard (apply_exercise1)

(Hint: You can use apply with previously defined lemmas, not just hypotheses in the context. Remember that Search is your friend.)

Theorem rev_exercise1 : ∀ (l l' : list nat),
     l = rev l' →
     l' = rev l.
Proof.
  (* FILL IN HERE *) Admitted.
To sum up, one can say apply H when H is a hypothesis in the context or a previously proven lemma of the form ∀ x1 ..., H1 → H2 → ... → Hn and the current goal has the shape Hn (for some instantiation of each of the xi). After running apply H in such a state, you will have a subgoal for each of H1 through Hn-1. In the special case where H isn't an implication, your proof will be completed.
The informal analogue of the apply tactic is phrasing like "by H, it suffices to show Hn by proving each of H1, ..., Hn-1", followed by a proof of those subsidiary parts. In the case where H isn't an implication, you can try using language like "we are done by H" or "which H proves exactly".
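For instance, here is a small sketch (the statement is invented for illustration, using andb from Basics) in which applying a two-premise hypothesis leaves one subgoal per premise:

Example apply_two_subgoals : ∀ (b c : bool),
  (b = true → c = true → andb b c = true) →
  b = true →
  c = true →
  andb b c = true.
Proof.
  intros b c H Hb Hc.
  apply H. (* two subgoals: b = true and c = true *)
  - apply Hb.
  - apply Hc.
Qed.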

The apply ... with ... Tactic

The following silly example uses two rewrites in a row to get from [a,b] to [e,f].

Example trans_eq_example : ∀ (a b c d e f : nat),
     [a;b] = [c;d] →
     [c;d] = [e;f] →
     [a;b] = [e;f].
Proof.
  intros a b c d e f eq1 eq2.
  rewrite → eq1. rewrite → eq2. reflexivity. Qed.
Since this is a common pattern, we might like to pull it out as a lemma recording, once and for all, the fact that equality is transitive.

Theorem trans_eq : ∀ (X:Type) (n m o : X),
  n = m → m = o → n = o.
Proof.
  intros X n m o eq1 eq2. rewrite → eq1. rewrite → eq2.
  reflexivity. Qed.
Now, we should be able to use trans_eq to prove the above example. However, to do this we need a slight refinement of the apply tactic.

Example trans_eq_example' : ∀ (a b c d e f : nat),
     [a;b] = [c;d] →
     [c;d] = [e;f] →
     [a;b] = [e;f].
Proof.
  intros a b c d e f eq1 eq2.
If we simply tell Coq apply trans_eq at this point, it can tell (by matching the goal against the conclusion of the lemma) that it should instantiate X with [nat], n with [a;b], and o with [e;f]. However, the matching process doesn't determine an instantiation for m: we have to supply one explicitly by adding with (m:=[c;d]) to the invocation of apply.

  apply trans_eq with (m:=[c;d]).
  apply eq1. apply eq2. Qed.
Actually, we usually don't have to include the name m in the with clause; Coq is often smart enough to figure out which instantiation we're giving. We could instead write: apply trans_eq with [c;d].
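For instance, the example above can be redone in that style (a sketch, with a fresh name for the otherwise identical statement):

Example trans_eq_example'' : ∀ (a b c d e f : nat),
     [a;b] = [c;d] →
     [c;d] = [e;f] →
     [a;b] = [e;f].
Proof.
  intros a b c d e f eq1 eq2.
  apply trans_eq with [c;d]. (* no "m:=" needed *)
  apply eq1. apply eq2. Qed.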

Exercise: 3 stars, standard, optional (apply_with_exercise)

Example trans_eq_exercise : ∀ (n m o p : nat),
     m = (minustwo o) →
     (n + p) = m →
     (n + p) = (minustwo o).
Proof.
  (* FILL IN HERE *) Admitted.
The informal analogue of apply H with (x:=m) is to say "by H with m for x" or "which we can see by H on m".

The injection and discriminate Tactics

Recall the definition of natural numbers:
     Inductive nat : Type :=
       | O
       | S (n : nat).
This definition asserts that every number has one of two forms: either it is the constructor O or it is built by applying the constructor S to another number. But there is more here than meets the eye: implicit in the definition are two more facts:
  • The constructor S is injective, or one-to-one. That is, if S n = S m, it must be that n = m.
  • The constructors O and S are disjoint. That is, O is not equal to S n for any n.
Similar principles apply to all inductively defined types: all constructors are injective, and the values built from distinct constructors are never equal. For lists, the cons constructor is injective and nil is different from every non-empty list. For booleans, true and false are different. (Since true and false take no arguments, their injectivity is neither here nor there.) And so on.
For example, we can prove the injectivity of S by using the pred function defined in Basics.v.

Theorem S_injective : ∀ (n m : nat),
  S n = S m →
  n = m.
Proof.
  intros n m H1.
  assert (H2: n = pred (S n)). { reflexivity. }
  rewrite H2. rewrite H1. reflexivity.
Qed.
This technique can be generalized to any constructor by writing the equivalent of pred -- i.e., writing a function that "undoes" one application of the constructor. As a more convenient alternative, Coq provides a tactic called injection that allows us to exploit the injectivity of any constructor. Here is an alternate proof of the above theorem using injection:

Theorem S_injective' : ∀ (n m : nat),
  S n = S m →
  n = m.
Proof.
  intros n m H.
By writing injection H as Hnm at this point, we are asking Coq to generate all equations that it can infer from H using the injectivity of constructors (in the present example, the equation n = m). Each such equation is added as a hypothesis (with the name Hnm in this case) into the context.

  injection H as Hnm. apply Hnm.
Qed.
Here's a more interesting example that shows how injection can derive multiple equations at once.

Theorem injection_ex1 : ∀ (n m o : nat),
  [n; m] = [o; o] →
  [n] = [m].
Proof.
  intros n m o H.
  (* WORKED IN CLASS *)
  injection H as H1 H2.
  rewrite H1. rewrite H2. reflexivity.
Qed.
Alternatively, if you just say injection H with no as clause, then all the equations will be turned into hypotheses at the beginning of the goal.

Theorem injection_ex2 : ∀ (n m o : nat),
  [n; m] = [o; o] →
  [n] = [m].
Proof.
  intros n m o H.
  injection H.
  (* WORKED IN CLASS *)
  intros H1 H2. rewrite H1. rewrite H2. reflexivity.
Qed.

Exercise: 3 stars, standard (injection_ex3)

Example injection_ex3 : ∀ (X : Type) (x y z : X) (l j : list X),
  x :: y :: l = z :: j →
  j = z :: l →
  x = y.
Proof.
  (* FILL IN HERE *) Admitted.
So much for injectivity of constructors. What about disjointness?
The principle of disjointness says that two terms beginning with different constructors (like O and S, or true and false) can never be equal. This means that any time we find ourselves in a context where we've assumed that two such terms are equal, we are justified in concluding anything we want, since the assumption is nonsensical.
The discriminate tactic embodies this principle: It is used on a hypothesis involving an equality between different constructors (e.g., S n = O), and it solves the current goal immediately. Here is an example:

Theorem eqb_0_l : ∀ n,
   eqb 0 n = true → n = 0.
Proof.
  intros n.
We can proceed by case analysis on n. The first case is trivial.

  destruct n as [| n'] eqn:E.
  - (* n = 0 *)
    intros H. reflexivity.
However, the second one doesn't look so simple: assuming eqb 0 (S n') = true, we must show S n' = 0! The way forward is to observe that the assumption itself is nonsensical:

  - (* n = S n' *)
    simpl.
If we use discriminate on this hypothesis, Coq confirms that the subgoal we are working on is impossible and removes it from further consideration.

    intros H. discriminate H.
Qed.
This is an instance of a logical principle known as the principle of explosion, which asserts that a contradictory hypothesis entails anything (even false things!).

Theorem discriminate_ex1 : ∀ (n : nat),
  S n = O →
  2 + 2 = 5.
Proof.
  intros n contra. discriminate contra. Qed.

Theorem discriminate_ex2 : ∀ (n m : nat),
  false = true →
  [n] = [m].
Proof.
  intros n m contra. discriminate contra. Qed.
If you find the principle of explosion confusing, remember that these proofs are not showing that the conclusion of the statement holds. Rather, they are showing that, if the nonsensical situation described by the premise did somehow arise, then the nonsensical conclusion would also follow, because we'd be living in an inconsistent universe where every statement is true. We'll explore the principle of explosion in more detail in the next chapter, Logic.

Exercise: 1 star, standard (discriminate_ex3)

Example discriminate_ex3 :
   ∀ (X : Type) (x y z : X) (l j : list X),
    x :: y :: l = [] →
    x = z.
Proof.
  (* FILL IN HERE *) Admitted.
The injectivity of constructors allows us to reason that ∀ (n m : nat), S n = S m → n = m. The converse of this implication is an instance of a more general fact about both constructors and functions, which we will find convenient in a few places below:

Theorem f_equal : ∀ (A B : Type) (f: A → B) (x y: A),
  x = y → f x = f y.
Proof. intros A B f x y eq. rewrite eq. reflexivity. Qed.

Theorem eq_implies_succ_equal : ∀ (n m : nat),
    n = m → S n = S m.
Proof. intros n m H. apply f_equal. apply H. Qed.
There is also a tactic named f_equal that can prove such theorems. Given a goal of the form f a1 ... an = g b1 ... bn, the tactic f_equal will produce subgoals of the form f = g, a1 = b1, ..., an = bn. At the same time, any of these subgoals that are simple enough (e.g., immediately provable by reflexivity) will be automatically discharged by f_equal.
Note that the f_equal tactic isn't necessary---one can always use the lemma.

Theorem eq_implies_succ_equal' : ∀ (n m : nat),
    n = m → S n = S m.
Proof. intros n m H. f_equal. apply H. Qed.

Using Tactics on Hypotheses

By default, most tactics work on the goal formula and leave the context unchanged. However, many tactics also have a variant that performs a similar operation on a statement in the context.
For example, the tactic simpl in H performs simplification in the hypothesis named H in the context.

Theorem S_inj : ∀ (n m : nat) (b : bool),
     eqb (S n) (S m) = b →
     eqb n m = b.
Proof.
  intros n m b H. simpl in H. apply H. Qed.
Similarly, apply L in H matches some conditional statement L (of the form L1 → L2, say) against a hypothesis H in the context. However, unlike ordinary apply (which rewrites a goal matching L2 into a subgoal L1), apply L in H matches H against L1 and, if successful, replaces it with L2.
In other words, apply L in H gives us a form of "forward reasoning": from L1 → L2 and a hypothesis matching L1, it produces a hypothesis matching L2. By contrast, apply L is "backward reasoning": it says that if we know L1 → L2 and we are trying to prove L2, it suffices to prove L1.
Here is a variant of a proof from above, using forward reasoning throughout instead of backward reasoning.

Theorem silly3' : ∀ (n : nat),
  (eqb n 5 = true → eqb (S (S n)) 7 = true) →
  true = eqb n 5 →
  true = eqb (S (S n)) 7.
Proof.
  intros n eq H.
  symmetry in H. apply eq in H. symmetry in H.
  apply H. Qed.
Forward reasoning starts from what is given (premises, previously proven theorems) and iteratively draws conclusions from them until the goal is reached. Backward reasoning starts from the goal, and iteratively reasons about what would imply the goal, until premises or previously proven theorems are reached. If you've seen informal proofs before (for example, in a math class), they probably used forward reasoning. They might have even told you that backward reasoning was wrong or not allowed! In general, idiomatic use of Coq tends to favor backward reasoning, but in some situations the forward style can be easier to think about.
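For comparison, here is a sketch of the same statement proved again in the backward style, using only tactics we have already seen:

Theorem silly3'' : ∀ (n : nat),
  (eqb n 5 = true → eqb (S (S n)) 7 = true) →
  true = eqb n 5 →
  true = eqb (S (S n)) 7.
Proof.
  intros n eq H.
  symmetry. apply eq. symmetry. apply H.
Qed.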

Exercise: 3 stars, standard, recommended (plus_n_n_injective)

Practice using "in" variants in this proof. (Hint: use plus_n_Sm.)

Theorem plus_n_n_injective : ∀ n m,
     n + n = m + m →
     n = m.
Proof.
  intros n. induction n as [| n'].
  (* FILL IN HERE *) Admitted.

Exercise: 2 stars, standard (eq_base_true)

Lemma eq_base_true : ∀ (b1 b2 : base),
    eq_base b1 b2 = true → b1 = b2.
Proof.
  (* FILL IN HERE *) Admitted.

Varying the Induction Hypothesis

Sometimes it is important to control the exact form of the induction hypothesis when carrying out inductive proofs in Coq. In particular, we need to be careful about which of the assumptions we move (using intros) from the goal to the context before invoking the induction tactic. For example, suppose we want to show that the double function is injective -- i.e., that it maps different arguments to different results:
    Theorem double_injective: ∀ n m,
      double n = double m → n = m.
The way we start this proof is a bit delicate: if we begin with
      intros n. induction n.
all is well. But if we begin it with
      intros n m. induction n.
we get stuck in the middle of the inductive case...

Theorem double_injective_FAILED : ∀ n m,
     double n = double m →
     n = m.
Proof.
  intros n m. induction n as [| n'].
  - (* n = O *) simpl. intros eq. destruct m as [| m'].
    + (* m = O *) reflexivity.
    + (* m = S m' *) discriminate eq.
  - (* n = S n' *) intros eq. destruct m as [| m'].
    + (* m = O *) discriminate eq.
    + (* m = S m' *) apply f_equal.
At this point, the induction hypothesis, IHn', does not give us n' = m' -- there is an extra S in the way -- so the goal is not provable.

      Abort.
What went wrong?
The problem is that, at the point we invoke the induction hypothesis, we have already introduced m into the context -- intuitively, we have told Coq, "Let's consider some particular n and m..." and we now have to prove that, if double n = double m for these particular n and m, then n = m.
The next tactic, induction n, says to Coq: We are going to show the goal by induction on n. That is, we are going to prove, for all n, that the proposition
  • P n = "if double n = double m, then n = m"
holds, by showing
  • P O
    (i.e., "if double O = double m then O = m") and
  • P n → P (S n)
    (i.e., "if double n = double m then n = m" implies "if double (S n) = double m then S n = m").
If we look closely at the second statement, it is saying something rather strange: it says that, for a particular m, if we know
  • "if double n = double m then n = m"
then we can prove
  • "if double (S n) = double m then S n = m".
To see why this is strange, let's think of a particular m -- say, 5. The statement is then saying that, if we know
  • Q = "if double n = 10 then n = 5"
then we can prove
  • R = "if double (S n) = 10 then S n = 5".
But knowing Q doesn't give us any help at all with proving R! (If we tried to prove R from Q, we would start with something like "Suppose double (S n) = 10..." but then we'd be stuck: knowing that double (S n) is 10 tells us nothing about whether double n is 10, so Q is useless.)
Trying to carry out this proof by induction on n when m is already in the context doesn't work because we are then trying to prove a relation involving every n but just a single m.
The successful proof of double_injective leaves m in the goal statement at the point where the induction tactic is invoked on n:

Theorem double_injective : ∀ n m,
     double n = double m →
     n = m.
Proof.
  intros n. induction n as [| n'].
  - (* n = O *) simpl. intros m eq. destruct m as [| m'].
    + (* m = O *) reflexivity.
    + (* m = S m' *) discriminate eq.

  - (* n = S n' *) simpl.
Notice that both the goal and the induction hypothesis are different this time: the goal asks us to prove something more general (i.e., to prove the statement for every m), but the IH is correspondingly more flexible, allowing us to choose any m we like when we apply the IH.

    intros m eq.
Now we've chosen a particular m and introduced the assumption that double n = double m. Since we are doing a case analysis on n, we also need a case analysis on m to keep the two "in sync."

    destruct m as [| m'].
    + (* m = O *) simpl.
The 0 case is trivial:

      discriminate eq.

    + (* m = S m' *)
      apply f_equal.
At this point, since we are in the second branch of the destruct m, the m' mentioned in the context is the predecessor of the m we started out talking about. Since we are also in the S branch of the induction, this is perfect: if we instantiate the generic m in the IH with the current m' (this instantiation is performed automatically by the apply in the next step), then IHn' gives us exactly what we need to finish the proof.

      apply IHn'. injection eq as goal. apply goal. Qed.
What you should take away from all this is that we need to be careful about using induction to try to prove something too specific: To prove a property of n and m by induction on n, it is sometimes important to leave m generic.
In an informal proof, you should simply be careful about what is given and what isn't. If you're deliberately delaying introducing a variable, it's good to be explicit. For example, in an informal analogue of the above, we might say, "by induction on n, leaving m general".
The following exercise requires the same pattern.

Exercise: 2 stars, standard (eqb_true)

Theorem eqb_true : ∀ n m,
    eqb n m = true → n = m.
Proof.
  (* FILL IN HERE *) Admitted.
The strategy of doing fewer intros before an induction to obtain a more general IH doesn't always work by itself; sometimes some rearrangement of quantified variables is needed. Suppose, for example, that we wanted to prove double_injective by induction on m instead of n.

Theorem double_injective_take2_FAILED : ∀ n m,
     double n = double m →
     n = m.
Proof.
  intros n m. induction m as [| m'].
  - (* m = O *) simpl. intros eq. destruct n as [| n'].
    + (* n = O *) reflexivity.
    + (* n = S n' *) discriminate eq.
  - (* m = S m' *) intros eq. destruct n as [| n'].
    + (* n = O *) discriminate eq.
    + (* n = S n' *) apply f_equal.
        (* Stuck again here, just like before. *)
Abort.
The problem is that, to do induction on m, we must first introduce n. (If we simply say induction m without introducing anything first, Coq will automatically introduce n for us!)
What can we do about this? One possibility is to rewrite the statement of the lemma so that m is quantified before n. This works, but it's not nice: We don't want to have to twist the statements of lemmas to fit the needs of a particular strategy for proving them! Rather we want to state them in the clearest and most natural way.
What we can do instead is to first introduce all the quantified variables and then re-generalize one or more of them, selectively taking variables out of the context and putting them back at the beginning of the goal. The generalize dependent tactic does this.

Theorem double_injective_take2 : ∀ n m,
     double n = double m →
     n = m.
Proof.
  intros n m.
  (* n and m are both in the context *)
  generalize dependent n.
  (* Now n is back in the goal and we can do induction on
     m and get a sufficiently general IH. *)

  induction m as [| m'].
  - (* m = O *) simpl. intros n eq. destruct n as [| n'].
    + (* n = O *) reflexivity.
    + (* n = S n' *) discriminate eq.
  - (* m = S m' *) intros n eq. destruct n as [| n'].
    + (* n = O *) discriminate eq.
    + (* n = S n' *) simpl. apply f_equal.
      apply IHm'. injection eq as goal. apply goal. Qed.
Let's look at an informal proof of this theorem. Note that the proposition we prove by induction leaves n quantified, corresponding to the use of generalize dependent in our formal proof.
Theorem: For any nats n and m, if double n = double m, then n = m.
Proof: Let m be a nat. We prove by induction on m that, for any n, if double n = double m then n = m.
  • First, suppose m = 0, and suppose n is a number such that double n = double m. We must show that n = 0.
    Since m = 0, by the definition of double we have double n = 0. There are two cases to consider for n. If n = 0 we are done, since m = 0 = n, as required. Otherwise, if n = S n' for some n', we derive a contradiction: by the definition of double, we can calculate double n = S (S (double n')), but this contradicts the assumption that double n = 0.
  • Second, suppose m = S m' and that n is again a number such that double n = double m. We must show that n = S m', with the induction hypothesis that for every number s, if double s = double m' then s = m'.
    By the fact that m = S m' and the definition of double, we have double n = S (S (double m')). There are two cases to consider for n.
    If n = 0, then by definition double n = 0, a contradiction.
    Thus, we may assume that n = S n' for some n', and again by the definition of double we have S (S (double n')) = S (S (double m')), which implies, by the injectivity of constructors, that double n' = double m'. Instantiating the induction hypothesis with n' thus allows us to conclude that n' = m', and it follows immediately that S n' = S m'. Since S n' = n and S m' = m, this is just what we wanted to show.

Exercise: 3 stars, standard, recommended (gen_dep_practice)

Prove this by induction on l.

Theorem nth_error_after_last: ∀ (n : nat) (X : Type) (l : list X),
     length l = n →
     nth_error l n = None.
Proof.
  (* FILL IN HERE *) Admitted.

Unfolding Definitions

It sometimes happens that we need to manually unfold a Definition so that we can manipulate its right-hand side. For example, if we define...

Definition square n := n × n.
... and try to prove a simple fact about square...

Lemma square_mult : ∀ n m, square (n × m) = square n × square m.
Proof.
  intros n m.
  simpl.
... we get stuck: simpl doesn't simplify anything at this point, and since we haven't proved any other facts about square, there is nothing we can apply or rewrite with.
To make progress, we can manually unfold the definition of square:

  unfold square.
Now we have plenty to work with: both sides of the equality are expressions involving multiplication, and we have lots of facts about multiplication at our disposal. In particular, we know that it is commutative and associative, and from these facts it is not hard to finish the proof.

  rewrite mult_assoc.
  assert (H : n × m × n = n × n × m).
  { rewrite mult_comm. apply mult_assoc. }
  rewrite H. rewrite mult_assoc. reflexivity.
Qed.
At this point, a deeper discussion of unfolding and simplification is in order.
You may already have observed that tactics like simpl, reflexivity, and apply will often unfold the definitions of functions automatically when this allows them to make progress. For example, if we define foo m to be the constant 5...

Definition foo (x: nat) := 5.
then the simpl in the following proof (or the reflexivity, if we omit the simpl) will unfold foo m to (fun x ⇒ 5) m and then further simplify this expression to just 5.

Fact silly_fact_1 : ∀ m, foo m + 1 = foo (m + 1) + 1.
Proof.
  intros m.
  simpl.
  reflexivity.
Qed.
However, this automatic unfolding is rather conservative. For example, if we define a slightly more complicated function involving a pattern match...

Definition bar x :=
  match x with
  | O ⇒ 5
  | S _ ⇒ 5
  end.
...then the analogous proof will get stuck:

Fact silly_fact_2_FAILED : ∀ m, bar m + 1 = bar (m + 1) + 1.
Proof.
  intros m.
  simpl. (* Does nothing! *)
Abort.
The reason that simpl doesn't make progress here is that it notices that, after tentatively unfolding bar m, it is left with a match whose scrutinee, m, is a variable, so the match cannot be simplified further. (It is not smart enough to notice that the two branches of the match are identical.) So it gives up on unfolding bar m and leaves it alone. Similarly, tentatively unfolding bar (m+1) leaves a match whose scrutinee is a function application (that, itself, cannot be simplified, even after unfolding the definition of +), so simpl leaves it alone.
At this point, there are two ways to make progress. One is to use destruct m to break the proof into two cases, each focusing on a more concrete choice of m (O vs S _). In each case, the match inside of bar can now make progress, and the proof is easy to complete.

Fact silly_fact_2 : ∀ m, bar m + 1 = bar (m + 1) + 1.
Proof.
  intros m.
  destruct m.
  - simpl. reflexivity.
  - simpl. reflexivity.
Qed.
This approach works, but it depends on our recognizing that the match hidden inside bar is what was preventing us from making progress.
A more straightforward way to make progress is to explicitly tell Coq to unfold bar.

Fact silly_fact_2' : ∀ m, bar m + 1 = bar (m + 1) + 1.
Proof.
  intros m.
  unfold bar.
Now it is apparent that we are stuck on the match expressions on both sides of the =, and we can use destruct to finish the proof without thinking too hard.

  destruct m.
  - reflexivity.
  - reflexivity.
Qed.
The unfold tactic is particularly useful for working with higher order functions, which often get stuck on matches you can't see. Recall delete_add_edit from Poly. Here are our versions, with a bonus definition, copy_edit:

Definition delete_edit (src : strand) : list edit :=
  map (fun _ ⇒ delete) src.

Definition add_edit (tgt : strand) : list edit :=
  map (fun b ⇒ add b) tgt.

Definition delete_add_edit (src : strand) (tgt : strand) : list edit :=
  delete_edit src ++ add_edit tgt.

Definition copy_edit (src: strand) : list edit :=
  map (fun _ ⇒ copy) src.
Let's prove each of them valid.

Exercise: 1 star, standard (delete_edit_valid)

Lemma delete_edit_valid : ∀ (src : strand),
    valid_edit src [] (delete_edit src).
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 1 star, standard (add_edit_valid)

Lemma add_edit_valid : ∀ (tgt : strand),
    valid_edit [] tgt (add_edit tgt).
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 1 star, standard (copy_edit_valid)

Lemma copy_edit_valid : ∀ src,
    valid_edit src src (copy_edit src).
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 3 stars, standard (delete_add_valid)

This one is the hardest. You ought to be able to simply combine delete_edit_valid and add_edit_valid, but it doesn't quite work. (Try it and see!) What helper lemma do you need to prove about how delete_edit and ++ interact? Your lemma might need to state a property more general than what you need just for you to be able to prove it! You might look to lemmas we proved in Poly for inspiration.
Before you get help, spend at least twenty minutes in front of a whiteboard, without your computer, trying to figure it out. Hint: your lemma should mention delete_edit.

Lemma delete_add_valid : ∀ (src tgt : strand),
    valid_edit src tgt (delete_add_edit src tgt).
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 3 stars, standard (costs)

Prove lemmas characterizing the costs of each of our edit functions.

Lemma add_cost : ∀ tgt,
    total_cost (add_edit tgt) = length tgt.
Proof.
  (* FILL IN HERE *) Admitted.

Lemma delete_cost : ∀ src,
    total_cost (delete_edit src) = length src.
Proof.
  (* FILL IN HERE *) Admitted.

Lemma copy_cost_0 : ∀ (src : strand),
    total_cost (copy_edit src) = 0.
Proof.
  (* FILL IN HERE *) Admitted.
With those definitions under our belt, we can write a more interesting edit function: substitute when possible, only adding or deleting when one list runs out.

Fixpoint naive_sub_edit (src : strand) (tgt : strand) : list edit :=
  match (src, tgt) with
  | ([], tgt) ⇒ add_edit tgt
  | (src, []) ⇒ delete_edit src
  | (_::src', b2::tgt') ⇒
    substitute b2 :: naive_sub_edit src' tgt'
  end.

Exercise: 2 stars, standard (naive_sub_edit_valid)

Lemma naive_sub_edit_valid : ∀ src tgt,
    valid_edit src tgt (naive_sub_edit src tgt).
Proof.
  (* FILL IN HERE *) Admitted.

Using destruct on Compound Expressions

We have seen many examples where destruct is used to perform case analysis of the value of some variable. But sometimes we need to reason by cases on the result of some expression. We can also do this with destruct.
Here are some examples:

Definition sillyfun (n : nat) : bool :=
  if eqb n 3 then false
  else if eqb n 5 then false
       else false.

Theorem sillyfun_false : ∀ (n : nat),
    sillyfun n = false.
Proof.
  intros n. unfold sillyfun.
  destruct (eqb n 3).
  - (* eqb n 3 = true *) reflexivity.
  - (* eqb n 3 = false *) destruct (eqb n 5).
    + (* eqb n 5 = true *) reflexivity.
    + (* eqb n 5 = false *) reflexivity. Qed.
After unfolding sillyfun in the above proof, we find that we are stuck on if (eqb n 3) then ... else .... But either n is equal to 3 or it isn't, so we can use destruct (eqb n 3) to let us reason about the two cases.
In general, the destruct tactic can be used to perform case analysis of the results of arbitrary computations. If e is an expression whose type is some inductively defined type T, then, for each constructor c of T, destruct e generates a subgoal in which all occurrences of e (in the goal and in the context) are replaced by c.
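For instance, here is a tiny sketch (invented for illustration) in which the destructed expression is a function application rather than a variable:

Example destruct_compound : ∀ (n m : nat),
  andb (eqb n m) false = false.
Proof.
  intros n m.
  destruct (eqb n m). (* one subgoal with eqb n m replaced by true, one by false *)
  - reflexivity.
  - reflexivity.
Qed.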
More practically, we can use these case analyses to reason about the complex decisions made in programs. Here's a function that computes the minimum of three arguments:

Definition min3 (n1 n2 n3 : nat) : nat :=
  if leb n1 n2
  then if leb n1 n3
       then n1
       else n3
  else if leb n2 n3
       then n2
       else n3.

Exercise: 2 stars, standard (argmin3_min3_eqs)

Relate min3 to argmin3.
Lemma argmin3_min3_eqs : ∀ {A:Type} (cost : A → nat) (o1 o2 o3 : A) (n1 n2 n3 : nat),
    cost o1 = n1 →
    cost o2 = n2 →
    cost o3 = n3 →
    cost (argmin3 cost o1 o2 o3) = min3 n1 n2 n3.
Proof.
  (* FILL IN HERE *) Admitted.

Corollary argmin3_min3 : ∀ {A:Type} (cost : A → nat) (o1 o2 o3 : A),
    cost (argmin3 cost o1 o2 o3) = min3 (cost o1) (cost o2) (cost o3).
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 2 stars, standard (sub_edit)

Define a function sub_edit that generates an edit similarly to naive_sub_edit, but without the naivete. (If you're not sure what we mean... do question (3) in the informal work!)

Fixpoint sub_edit (src : strand) (tgt : strand) : list edit (* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
(* Do not modify the following line: *)
Definition manual_grade_for_sub_edit : option (nat×string) := None.

Exercise: 2 stars, standard (sub_edit_valid)

Prove that sub_edit produces valid edits.
Lemma sub_edit_valid : ∀ src tgt,
    valid_edit src tgt (sub_edit src tgt).
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 3 stars, standard, optional (combine_split)

Here is an implementation of the split function mentioned in chapter Poly:

Fixpoint split {X Y : Type} (l : list (X×Y))
  : (list X) × (list Y) :=
  match l with
  | [] ⇒ ([], [])
  | (x, y) :: t ⇒
    match split t with
    | (lx, ly) ⇒ (x :: lx, y :: ly)
    end
  end.
Prove that split and combine are inverses in the following sense:

Theorem combine_split : ∀ X Y (l : list (X × Y)) l1 l2,
    split l = (l1, l2) →
    combine l1 l2 = l.
Proof.
  (* FILL IN HERE *) Admitted.
However, destructing compound expressions requires a bit of care, as such destructs can sometimes erase information we need to complete a proof. For example, suppose we define a function sillyfun1 like this:

Definition sillyfun1 (n : nat) : bool :=
  if eqb n 3 then true
  else if eqb n 5 then true
       else false.
Now suppose that we want to convince Coq of the (rather obvious) fact that sillyfun1 n yields true only when n is odd. By analogy with the proofs we did with sillyfun above, it is natural to start the proof like this:

Theorem sillyfun1_odd_FAILED : ∀ (n : nat),
    sillyfun1 n = true →
    oddb n = true.
Proof.
  intros n eq. unfold sillyfun1 in eq.
  destruct (eqb n 3).
  (* stuck... *)
Abort.
We get stuck at this point because the context does not contain enough information to prove the goal! The problem is that the substitution performed by destruct is too brutal -- it threw away every occurrence of eqb n 3, but we need to keep some memory of this expression and how it was destructed, because we need to be able to reason that, since eqb n 3 = true in this branch of the case analysis, it must be that n = 3, from which it follows that n is odd.
What we would really like is to substitute away all existing occurrences of eqb n 3, but at the same time add an equation to the context that records which case we are in. Recall that the eqn: qualifier allows us to introduce such an equation, giving it a name that we choose.

Theorem sillyfun1_odd : ∀ (n : nat),
    sillyfun1 n = true →
    oddb n = true.
Proof.
  intros n eq. unfold sillyfun1 in eq.
  destruct (eqb n 3) eqn:Heqe3.
    (* Now we have the same state as at the point where we got
     stuck above, except that the context contains an extra
     equality assumption, which is exactly what we need to
     make progress. *)

  - (* e3 = true *) apply eqb_true in Heqe3.
    rewrite → Heqe3. reflexivity.
  - (* e3 = false *)
        (* When we come to the second equality test in the body
        of the function we are reasoning about, we can use
        eqn: again in the same way, allowing us to finish the
        proof. *)

        destruct (eqb n 5) eqn:Heqe5.
    + (* e5 = true *)
      apply eqb_true in Heqe5.
      rewrite → Heqe5. reflexivity.
    + (* e5 = false *) discriminate eq. Qed.

Exercise: 2 stars, standard (destruct_eqn_practice)

Theorem bool_fn_applied_thrice :
   ∀ (f : bool → bool) (b : bool),
    f (f (f b)) = f b.
Proof.
  (* FILL IN HERE *) Admitted.

Review

We've now seen many of Coq's most fundamental tactics. We'll introduce a few more in the coming chapters. But basically we've got what we need to get work done.
Here are the ones we've seen:
  • intros: move hypotheses/variables from goal to context
  • reflexivity: finish the proof (when the goal looks like e = e)
  • apply: prove goal using a hypothesis, lemma, or constructor
  • apply... in H: apply a hypothesis, lemma, or constructor to a hypothesis in the context (forward reasoning)
  • apply... with...: explicitly specify values for variables that cannot be determined by pattern matching
  • simpl: simplify computations in the goal
  • simpl in H: ... or a hypothesis
  • rewrite: use an equality hypothesis (or lemma) to rewrite the goal
  • rewrite ... in H: ... or a hypothesis
  • symmetry: changes a goal of the form t=u into u=t
  • symmetry in H: changes a hypothesis of the form t=u into u=t
  • unfold: replace a defined constant by its right-hand side in the goal
  • unfold... in H: ... or a hypothesis
  • destruct... as...: case analysis on values of inductively defined types
  • destruct... eqn:...: specify the name of an equation to be added to the context, recording the result of the case analysis
  • induction... as...: induction on values of inductively defined types
  • injection: reason by injectivity of constructors
  • discriminate: reason by disjointness of constructors
  • assert (H: e) (or assert (e) as H): introduce a "local lemma" e and call it H
  • generalize dependent x: move the variable x (and anything else that depends on it) from the context back to an explicit hypothesis in the goal formula

Levenshtein's edit distance

We've looked a bunch of different edit functions: it's time to look at Vladimir Levenshtein's!
The key idea is to consider four possible edits at each juncture and choose the cheapest one. Suppose we're trying to edit b1 :: src into b2 :: tgt. (When one is empty, things are simpler.)
  • If b1 and b2 are the same, we can copy, continuing with src and tgt.
  • If b1 and b2 are different, we can substitute b2, continuing with src and tgt.
  • We can delete b1, continuing with src and b2 :: tgt.
  • We can add b2, continuing with b1 :: src and tgt.
Here's a first cut at a definition:

Fail Fixpoint levenshtein (src tgt : strand) : list edit :=
  match (src,tgt) with
  | ([], _) ⇒ add_edit tgt
  | (_, []) ⇒ delete_edit src
  | (b1::src', b2::tgt') ⇒
    let copy_sub_edits := (if eq_base b1 b2 then copy else substitute b2) :: levenshtein src' tgt' in
    let delete_edits := delete :: levenshtein src' tgt in
    let add_edits := add b2 :: levenshtein src tgt' in
    
    argmin3 total_cost copy_sub_edits delete_edits add_edits
  end.
What went wrong? Coq can't see that this function terminates: sometimes src shrinks and sometimes tgt shrinks, so neither argument decreases on every recursive call.
There are many ways to solve this problem, but the easiest is to define an "inner loop". The let fix inner tgt := ... in inner tgt notation says that we're going to define a recursive function inside of levenshtein. It takes one argument, tgt, which happens to share its name with the outer function's argument.
Looking after the in, we can see that we immediately apply inner to our original argument tgt.
The net effect is that in every call to levenshtein, the src argument gets smaller. If src stays the same, we can call inner, where tgt gets smaller.

Fixpoint levenshtein (src tgt : strand) : list edit :=
  let fix inner tgt :=
      match (src,tgt) with
      | ([], _) ⇒ add_edit tgt
      | (_, []) ⇒ delete_edit src
      | (b1::src', b2::tgt') ⇒
        let copy_sub_edits := (if eq_base b1 b2 then copy else substitute b2)
                                :: levenshtein src' tgt' in
        let delete_edits := delete :: levenshtein src' tgt in
        let add_edits := add b2 :: inner tgt' in
        
        argmin3 total_cost copy_sub_edits delete_edits add_edits
      end
  in
  inner tgt.
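If the trick still seems mysterious, here is a smaller sketch of the same pattern (not part of the development; a merge of two lists of nats, using leb from earlier chapters). Again, sometimes the first list shrinks and sometimes the second, yet the termination checker is satisfied:

Fixpoint merge (l1 l2 : list nat) : list nat :=
  let fix inner l2 :=
      match (l1, l2) with
      | ([], _) ⇒ l2
      | (_, []) ⇒ l1
      | (x :: l1', y :: l2') ⇒
        if leb x y
        then x :: merge l1' l2 (* outer call: l1 shrinks *)
        else y :: inner l2' (* inner call: l2 shrinks, l1 stays fixed *)
      end
  in
  inner l2.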

Exercise: 1 star, standard (levenshtein__sub_edit)

Later on, in IndProp2, we'll prove that levenshtein produces optimal edits, i.e., you can't do better. That might not be obvious, though!
Give an example of a pair of DNA strands where levenshtein does much better than sub_edit. (Our testing script asks for twice as good, but you can find sources and targets that make levenshtein arbitrarily better.)
Definition levenshtein__sub_edit_src : strand
  (* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
Definition levenshtein__sub_edit_tgt : strand
  (* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.

Example levenshtein__sub_edit_cost :
  leb (2 × (total_cost (levenshtein levenshtein__sub_edit_src levenshtein__sub_edit_tgt)))
           (total_cost (sub_edit levenshtein__sub_edit_src levenshtein__sub_edit_tgt)) = true.
(* FILL IN HERE *) Admitted.
(* 
Proof. simpl. reflexivity. Qed.
*)

Exercise: 2 stars, standard (argmin3_valid)

Lemma argmin3_valid : ∀ (src tgt : strand) (cost : list edit → nat) (edits1 edits2 edits3 : list edit),
    valid_edit src tgt edits1 →
    valid_edit src tgt edits2 →
    valid_edit src tgt edits3 →
    valid_edit src tgt (argmin3 cost edits1 edits2 edits3).
Proof.
  (* FILL IN HERE *) Admitted.
We're finally equipped to prove that Levenshtein's edit distance generates valid edits! You'll want to use Search to find relevant lemmas from Poly and Tactics.

Exercise: 3 stars, standard (levenshtein_valid)

Lemma levenshtein_valid : ∀ (src tgt : strand),
    valid_edit src tgt (levenshtein src tgt).
Proof.
  (* FILL IN HERE *) Admitted.

Levenshtein edit distance: computing only the cost

We originally motivated edit distance as a way of measuring how similar or different two sequences (in our case, of DNA) are.
The function we wrote, levenshtein, doesn't directly compute cost, though---it computes the edits and then we can compute the cost from those edits. We haven't particularly talked about efficiency in this course, but it is worth remarking that generating a list of edits consumes both space and time, the two most common axes of efficiency. (There are others, though: energy efficiency and code size to name two.) We could save space and time by merely computing the cost, rather than computing all the edits.

Exercise: 3 stars, standard (levenshtein_distance)

Write a function that computes the cost of an edit without actually computing the edit. Your function levenshtein_distance should be structured like levenshtein but should instead just return the cost without explicitly working with any edits.
Note: simply calling total_cost (levenshtein src tgt) doesn't get any points (and makes the following theorem trivial). Do the work!
Fixpoint levenshtein_distance (src tgt : strand) : nat (* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
(* Do not modify the following line: *)
Definition manual_grade_for_levenshtein_distance : option (nat×string) := None.

Exercise: 3 stars, standard (levenshtein_distance_correct)

Unfortunately, it's not enough just to define the function. It'd be nice to know that levenshtein_distance produces the right cost, i.e., that it agrees with levenshtein.
Lemma levenshtein_distance_correct : ∀ src tgt,
    total_cost (levenshtein src tgt) = levenshtein_distance src tgt.
Proof.
  (* FILL IN HERE *) Admitted.

Additional Exercises

Exercise: 3 stars, standard (eqb_sym)

Theorem eqb_sym : ∀ (n m : nat),
    eqb n m = eqb m n.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 3 stars, standard, optional (eqb_trans)

Theorem eqb_trans : ∀ n m p,
    eqb n m = true →
    eqb m p = true →
    eqb n p = true.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 3 stars, advanced (split_combine)

We proved, in an exercise above, that for all lists of pairs, combine is the inverse of split. How would you formalize the statement that split is the inverse of combine? When is this property true?
Complete the definition of split_combine_statement below with a property that states that split is the inverse of combine. Then, prove that the property holds. (Be sure to leave your induction hypothesis general by not doing intros on more things than necessary. Hint: what property do you need of l1 and l2 for split (combine l1 l2) = (l1,l2) to be true?)

Definition split_combine_statement : Prop
(* (": Prop" means that we are giving a name to a
     logical proposition here.) *)

(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.

Theorem split_combine : split_combine_statement.
Proof.
  (* FILL IN HERE *) Admitted.
(* Do not modify the following line: *)
Definition manual_grade_for_split_combine : option (nat×string) := None.

Exercise: 3 stars, advanced (filter_exercise)

This one is a bit challenging. Pay attention to the form of your induction hypothesis.

Theorem filter_exercise : ∀ (X : Type) (test : X → bool)
                                 (x : X) (l lf : list X),
     filter test l = x :: lf →
    test x = true.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 4 stars, advanced, optional (forall_exists_challenge)

Define two recursive Fixpoints, forallb and existsb. The first checks whether every element in a list satisfies a given predicate:
      forallb oddb [1;3;5;7;9] = true

      forallb negb [false;false] = true

      forallb evenb [0;2;4;5] = false

      forallb (eqb 5) [] = true
The second checks whether there exists an element in the list that satisfies a given predicate:
      existsb (eqb 5) [0;2;3;6] = false

      existsb (andb true) [true;true;false] = true

      existsb oddb [1;0;0;0;0;3] = true

      existsb evenb [] = false
Next, define a nonrecursive version of existsb -- call it existsb' -- using forallb and negb.
Finally, prove a theorem existsb_existsb' stating that existsb' and existsb have the same behavior.

(* FILL IN HERE *)

(* Mon Apr 6 09:27:40 PDT 2020 *)