theory Solutions
imports First_Steps "Recipes/Timing"
begin
chapter \<open>Solutions to Most Exercises\label{ch:solutions}\<close>
text \<open>\solution{fun:revsum}\<close>
ML %grayML\<open>fun rev_sum
((p as Const (@{const_name plus}, _)) $ t $ u) = p $ u $ rev_sum t
| rev_sum t = t\<close>
text \<open>
An alternative solution using the function @{ML_ind mk_binop in HOLogic} is:
\<close>
ML %grayML\<open>fun rev_sum t =
let
  fun dest_sum (Const (@{const_name plus}, _) $ u $ u') = u' :: dest_sum u
    | dest_sum u = [u]
in
  foldl1 (HOLogic.mk_binop @{const_name plus}) (dest_sum t)
end\<close>
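text \<open>
  Both versions produce a reversed sum. For example, with the second
  definition in scope, the following should print \<open>3 + 2 + 1\<close>:
\<close>
ML %grayML\<open>pwriteln (pretty_term @{context} (rev_sum @{term "(1::nat) + 2 + 3"}))\<close>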
text \<open>\solution{fun:makesum}\<close>
ML %grayML\<open>fun make_sum t1 t2 =
HOLogic.mk_nat (HOLogic.dest_nat t1 + HOLogic.dest_nat t2)\<close>
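text \<open>
  Note that \<open>HOLogic.dest_nat\<close> only deals with natural numbers built from
  \<open>0\<close> and \<open>Suc\<close>, not with numerals. A small test case for the solution is
  therefore, for example:
\<close>
ML %grayML\<open>pwriteln (pretty_term @{context} (make_sum @{term "Suc 0"} @{term "Suc (Suc 0)"}))\<close>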
text \<open>\solution{fun:killqnt}\<close>
ML %linenosgray\<open>val quantifiers = [@{const_name All}, @{const_name Ex}]

fun kill_trivial_quantifiers trm =
let
  fun aux t =
    case t of
      Const (s1, T1) $ Abs (x, T2, t2) =>
        if member (op =) quantifiers s1 andalso not (loose_bvar1 (t2, 0))
        then incr_boundvars ~1 (aux t2)
        else Const (s1, T1) $ Abs (x, T2, aux t2)
    | t1 $ t2 => aux t1 $ aux t2
    | Abs (s, T, t') => Abs (s, T, aux t')
    | _ => t
in
  aux trm
end\<close>
text \<open>
The function \<open>aux\<close> traverses the term and first checks whether a subterm is
an application of a constant to an abstraction. If the constant stands for
one of the listed quantifiers (see Line 1) and the bound variable does not
occur as a loose bound variable in the body, then we delete the quantifier.
In this case we also have to decrease all remaining loose de Bruijn indices
by \<open>1\<close> (using \<open>incr_boundvars ~1\<close>) to account for the deleted quantifier.
An example is as follows:
@{ML_response [display,gray]
\<open>@{prop "\<forall>x y z. P x = P z"}
|> kill_trivial_quantifiers
|> pretty_term @{context}
|> pwriteln\<close>
\<open>\<forall>x z. P x = P z\<close>}
\<close>
text \<open>\solution{fun:makelist}\<close>
ML %grayML\<open>fun mk_rev_upto i =
  1 upto i
  |> map (HOLogic.mk_number @{typ int})
  |> HOLogic.mk_list @{typ int}
  |> curry (op $) @{term "rev :: int list \<Rightarrow> int list"}\<close>
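text \<open>
  For example, the following call should print the term \<open>rev [1, 2, 3]\<close>:
\<close>
ML %grayML\<open>pwriteln (pretty_term @{context} (mk_rev_upto 3))\<close>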
text \<open>\solution{ex:debruijn}\<close>
ML %grayML\<open>fun P n = @{term "P::nat \<Rightarrow> bool"} $ (HOLogic.mk_number @{typ "nat"} n)

fun rhs 1 = P 1
  | rhs n = HOLogic.mk_conj (P n, rhs (n - 1))

fun lhs 1 n = HOLogic.mk_imp (HOLogic.mk_eq (P 1, P n), rhs n)
  | lhs m n = HOLogic.mk_conj (HOLogic.mk_imp
      (HOLogic.mk_eq (P (m - 1), P m), rhs n), lhs (m - 1) n)

fun de_bruijn n =
  HOLogic.mk_Trueprop (HOLogic.mk_imp (lhs n n, rhs n))\<close>
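text \<open>
  To see what the generated de Bruijn formulae look like, we can print, for
  example, the instance for \<open>n = 2\<close>:
\<close>
ML %grayML\<open>pwriteln (pretty_term @{context} (de_bruijn 2))\<close>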
text \<open>\solution{ex:scancmts}\<close>
ML %grayML\<open>val any = Scan.one (Symbol.not_eof)

val scan_cmt =
let
  val begin_cmt = Scan.this_string "(*"
  val end_cmt = Scan.this_string "*)"
in
  begin_cmt |-- Scan.repeat (Scan.unless end_cmt any) --| end_cmt
  >> (enclose "(**" "**)" o implode)
end

val parser = Scan.repeat (scan_cmt || any)

val scan_all =
  Scan.finite Symbol.stopper parser >> implode #> fst\<close>
text \<open>
By using \<open>#> fst\<close> in the last line, the function
@{ML scan_all} returns a string, instead of the pair a parser would
normally return. For example:
@{ML_matchresult [display,gray]
\<open>let
val input1 = (Symbol.explode "foo bar")
val input2 = (Symbol.explode "foo (*test*) bar (*test*)")
in
(scan_all input1, scan_all input2)
end\<close>
\<open>("foo bar", "foo (**test**) bar (**test**)")\<close>}
\<close>
text \<open>\solution{ex:contextfree}\<close>
ML %grayML\<open>datatype expr =
   Number of int
 | Mult of expr * expr
 | Add of expr * expr

fun parse_basic xs =
  (Parse.nat >> Number
   || Parse.$$$ "(" |-- parse_expr --| Parse.$$$ ")") xs
and parse_factor xs =
  (parse_basic --| Parse.$$$ "*" -- parse_factor >> Mult
   || parse_basic) xs
and parse_expr xs =
  (parse_factor --| Parse.$$$ "+" -- parse_expr >> Add
   || parse_factor) xs\<close>
text \<open>\solution{ex:dyckhoff}\<close>
text \<open>
The axiom rule can be implemented with the function @{ML assume_tac}. The other
rules correspond to the theorems:
\begin{center}
\begin{tabular}{cc}
\begin{tabular}{rl}
$\wedge_R$ & @{thm [source] conjI}\\
$\vee_{R_1}$ & @{thm [source] disjI1}\\
$\vee_{R_2}$ & @{thm [source] disjI2}\\
$\longrightarrow_R$ & @{thm [source] impI}\\
$=_R$ & @{thm [source] iffI}\\
\end{tabular}
&
\begin{tabular}{rl}
$False$ & @{thm [source] FalseE}\\
$\wedge_L$ & @{thm [source] conjE}\\
$\vee_L$ & @{thm [source] disjE}\\
$=_L$ & @{thm [source] iffE}
\end{tabular}
\end{tabular}
\end{center}
For the remaining rules, which deal with implications on the left, we need to
prove the following lemmas.
\<close>
lemma impE1:
shows "\<lbrakk>A \<longrightarrow> B; A; B \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
by iprover
lemma impE2:
shows "\<lbrakk>(C \<and> D) \<longrightarrow> B; C \<longrightarrow> (D \<longrightarrow>B) \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
and "\<lbrakk>(C \<or> D) \<longrightarrow> B; \<lbrakk>C \<longrightarrow> B; D \<longrightarrow> B\<rbrakk> \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
and "\<lbrakk>(C \<longrightarrow> D) \<longrightarrow> B; D \<longrightarrow> B \<Longrightarrow> C \<longrightarrow> D; B \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
and "\<lbrakk>(C = D) \<longrightarrow> B; (C \<longrightarrow> D) \<longrightarrow> ((D \<longrightarrow> C) \<longrightarrow> B) \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
by iprover+
text \<open>
Now the tactic which applies a single rule can be implemented
as follows.
\<close>
ML %linenosgray\<open>fun apply_tac ctxt =
let
  val intros = @{thms conjI disjI1 disjI2 impI iffI}
  val elims = @{thms FalseE conjE disjE iffE impE2}
in
  assume_tac ctxt
  ORELSE' resolve_tac ctxt intros
  ORELSE' eresolve_tac ctxt elims
  ORELSE' (eresolve_tac ctxt [@{thm impE1}] THEN' assume_tac ctxt)
end\<close>
text \<open>
In Line 9 we apply the rule @{thm [source] impE1} in conjunction
with @{ML assume_tac} in order to reduce the number of possibilities that
need to be explored. You can use the tactic as follows.
\<close>
lemma
shows "((((P \<longrightarrow> Q) \<longrightarrow> P) \<longrightarrow> P) \<longrightarrow> Q) \<longrightarrow> Q"
apply(tactic \<open>(DEPTH_SOLVE o apply_tac @{context}) 1\<close>)
done
text \<open>
We can use the tactic to automatically prove or disprove the
de Bruijn formulae from Exercise \ref{ex:debruijn}.
\<close>
ML %grayML\<open>fun de_bruijn_prove ctxt n =
let
  val goal = HOLogic.mk_Trueprop (HOLogic.mk_imp (lhs n n, rhs n))
in
  Goal.prove ctxt ["P"] [] goal
    (fn _ => (DEPTH_SOLVE o apply_tac ctxt) 1)
end\<close>
text \<open>
You can use this function to prove de Bruijn formulae.
\<close>
ML %grayML\<open>de_bruijn_prove @{context} 3\<close>
text \<open>\solution{ex:addsimproc}\<close>
ML %grayML\<open>fun dest_sum term =
  case term of
    (@{term "(+):: nat \<Rightarrow> nat \<Rightarrow> nat"} $ t1 $ t2) =>
      (snd (HOLogic.dest_number t1), snd (HOLogic.dest_number t2))
  | _ => raise TERM ("dest_sum", [term])

fun get_sum_thm ctxt t (n1, n2) =
let
  val sum = HOLogic.mk_number @{typ "nat"} (n1 + n2)
  val goal = Logic.mk_equals (t, sum)
in
  Goal.prove ctxt [] [] goal (K (Arith_Data.arith_tac ctxt 1))
end

fun add_sp_aux ctxt t =
let
  val t' = Thm.term_of t
in
  SOME (get_sum_thm ctxt t' (dest_sum t'))
  handle TERM _ => NONE
end\<close>
text \<open>The setup for the simproc is\<close>
simproc_setup %gray add_sp ("t1 + t2") = \<open>K add_sp_aux\<close>
text \<open>and a test case is the lemma\<close>
lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) ((4 + 1)::nat)"
apply(tactic \<open>simp_tac (put_simpset HOL_basic_ss @{context} addsimprocs [@{simproc add_sp}]) 1\<close>)
txt \<open>
where the simproc produces the goal state
\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}\bigskip
\<close>(*<*)oops(*>*)
text \<open>\solution{ex:addconversion}\<close>
text \<open>
The following code assumes the function @{ML dest_sum} from the previous
exercise.
\<close>
ML %grayML\<open>fun add_simple_conv ctxt ctrm =
let
  val trm = Thm.term_of ctrm
in
  case trm of
    @{term "(+)::nat \<Rightarrow> nat \<Rightarrow> nat"} $ _ $ _ =>
      get_sum_thm ctxt trm (dest_sum trm)
  | _ => Conv.all_conv ctrm
end

val add_conv = Conv.bottom_conv add_simple_conv

fun add_tac ctxt = CONVERSION (add_conv ctxt)\<close>
text \<open>
A test case for this conversion is as follows
\<close>
lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) ((4 + 1)::nat)"
apply(tactic \<open>add_tac @{context} 1\<close>)?
txt \<open>
where it produces the goal state
\begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage}\bigskip
\<close>(*<*)oops(*>*)
text \<open>\solution{ex:compare}\<close>
text \<open>
We use the timing function @{ML timing_wrapper} from Recipe~\ref{rec:timing}.
To measure any difference between the simproc and the conversion, we will
mechanically create terms involving additions and then set up a goal to be
simplified. We have to be careful to set up the goal so that
other parts of the simplifier do not interfere. For this we construct an
unprovable goal which, after simplification, we are going to ``prove'' with
the help of ``\isacommand{sorry}'', that is the tactic @{ML Skip_Proof.cheat_tac}.

For constructing test cases, we first define a function that returns a
complete binary tree whose leaves are numbers and whose nodes are
additions.
\<close>
ML %grayML\<open>fun term_tree n =
let
  val count = Unsynchronized.ref 0;

  fun term_tree_aux n =
    case n of
      0 => (count := !count + 1; HOLogic.mk_number @{typ nat} (!count))
    | _ => Const (@{const_name "plus"}, @{typ "nat \<Rightarrow> nat \<Rightarrow> nat"})
             $ (term_tree_aux (n - 1)) $ (term_tree_aux (n - 1))
in
  term_tree_aux n
end\<close>
text \<open>
For example, this function generates:
@{ML_response [display,gray]
\<open>pwriteln (pretty_term @{context} (term_tree 2))\<close>
\<open>1 + 2 + (3 + 4)\<close>}
The next function constructs a goal of the form \<open>P \<dots>\<close> with a term
produced by @{ML term_tree} filled in.
\<close>
ML %grayML\<open>fun goal n = HOLogic.mk_Trueprop (@{term "P::nat\<Rightarrow> bool"} $ (term_tree n))\<close>
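text \<open>
  For example, \<open>goal 2\<close> should yield the proposition \<open>P (1 + 2 + (3 + 4))\<close>:
\<close>
ML %grayML\<open>pwriteln (pretty_term @{context} (goal 2))\<close>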
text \<open>
Note that the goal needs to be wrapped in a @{term "Trueprop"}. Next we define
two tactics, \<open>c_tac\<close> and \<open>s_tac\<close>, for the conversion and simproc,
respectively. The idea is to first apply the conversion (respectively simproc) and
then prove the remaining goal using @{ML \<open>cheat_tac\<close> in Skip_Proof}.
\<close>
ML %grayML\<open>local
  fun mk_tac ctxt tac =
    timing_wrapper (EVERY1 [tac, Skip_Proof.cheat_tac ctxt])
in
fun c_tac ctxt = mk_tac ctxt (add_tac ctxt)
fun s_tac ctxt = mk_tac ctxt (simp_tac
  (put_simpset HOL_basic_ss ctxt addsimprocs [@{simproc add_sp}]))
end\<close>
text \<open>
This is all we need to let the conversion run against the simproc:
\<close>
ML %grayML\<open>val _ = Goal.prove @{context} [] [] (goal 8)
  (fn {context, ...} => c_tac context)

val _ = Goal.prove @{context} [] [] (goal 8)
  (fn {context, ...} => s_tac context)\<close>
text \<open>
If you do the exercise, you can see that both ways of simplifying additions
perform rather similarly, with perhaps a slight advantage for the
simproc. This means that the simplifier, even though it is much more complicated
than conversions, is quite efficient for the tasks it is designed for. It usually
does not make sense to implement general-purpose rewriting using
conversions. Conversions only have clear advantages in special situations:
for example if you need to have control over innermost or outermost
rewriting, or when rewriting rules lead to non-termination.
\<close>
end