--- a/CookBook/Tactical.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Tactical.thy Thu Mar 12 08:11:02 2009 +0100
@@ -1,6 +1,5 @@
theory Tactical
imports Base FirstSteps
-uses "infix_conv.ML"
begin
chapter {* Tactical Reasoning\label{chp:tactical} *}
@@ -11,7 +10,7 @@
considerably the burden of manual reasoning, for example, when introducing
new definitions. These proof procedures are centred around refining a goal
state using tactics. This is similar to the \isacommand{apply}-style
- reasoning at the user level, where goals are modified in a sequence of proof
+ reasoning at the user-level, where goals are modified in a sequence of proof
steps until all of them are solved. However, there are also more structured
operations available on the ML-level that help with the handling of
variables and assumptions.
@@ -66,7 +65,7 @@
and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
"Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic
tactics and tactic combinators; see also Chapters 3 and 4 in the old
- Isabelle Reference Manual.
+ Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation Manual.
\end{readmore}
Note that in the code above we use antiquotations for referencing the theorems. Many theorems
@@ -101,7 +100,7 @@
text {*
By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the
- user level of Isabelle the tactic @{ML foo_tac} or
+ user-level of Isabelle the tactic @{ML foo_tac} or
any other function that returns a tactic.
The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
@@ -289,8 +288,8 @@
printed by the Isabelle system and by @{ML my_print_tac}. The latter shows
the goal state as represented internally (highlighted boxes). This
illustrates that every goal state in Isabelle is represented by a theorem:
- when we start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
- @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when we finish the proof the
+ when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
+ @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof the
theorem is @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}.\label{fig:goalstates}}
\end{figure}
*}
@@ -328,8 +327,9 @@
text {*
Let us start with the tactic @{ML print_tac}, which is quite useful for low-level
- debugging of tactics. It just prints out a message and the current goal state.
- Processing the proof
+ debugging of tactics. It just prints out a message and the current goal state
+ (unlike @{ML my_print_tac}, it prints the goal state as the user would see it).
+ For example, processing the proof
*}
lemma shows "False \<Longrightarrow> True"
@@ -386,7 +386,7 @@
text {*
Note the number in each tactic call. Also as mentioned in the previous section, most
- basic tactics take such an argument; it addresses the subgoal they are analysing.
+ basic tactics take such a number as argument; it addresses the subgoal they are analysing.
In the proof below, we first split up the conjunction in the second subgoal by
focusing on this subgoal first.
*}
@@ -478,21 +478,10 @@
takes an additional number as argument that makes explicit which premise
should be instantiated.
- To improve readability of the theorems we produce below, we shall use
- the following function
-*}
-
-ML{*fun no_vars ctxt thm =
-let
- val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt
-in
- thm'
-end*}
-
-text {*
- that transform the schematic variables of a theorem into free variables.
- Using this function for the first @{ML RS}-expression above would produce
- the more readable result:
+ To improve readability of the theorems we produce below, we shall use the
+ function @{ML no_vars} from Section~\ref{sec:printing}, which transforms
+ schematic variables into free ones. Using this function for the first @{ML
+ RS}-expression above produces the more readable result:
@{ML_response_fake [display,gray]
"no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
@@ -662,17 +651,20 @@
also described in \isccite{sec:results}.
\end{readmore}
- A similar but less powerful function than @{ML SUBPROOF} is @{ML SUBGOAL}.
- It allows you to inspect a given subgoal. With this you can implement
- a tactic that applies a rule according to the topmost logic connective in the
- subgoal (to illustrate this we only analyse a few connectives). The code
- of the tactic is as follows.\label{tac:selecttac}
+ Similar but less powerful functions than @{ML SUBPROOF} are @{ML SUBGOAL}
+  and @{ML CSUBGOAL}. They allow you to inspect a given subgoal (the former
+  presents the subgoal as a @{ML_type term}, while the latter presents it as a
+  @{ML_type cterm}). With this you can implement a tactic that applies a rule according
+ to the topmost logic connective in the subgoal (to illustrate this we only
+ analyse a few connectives). The code of the tactic is as
+ follows.\label{tac:selecttac}
+
*}
-ML %linenosgray{*fun select_tac (t,i) =
+ML %linenosgray{*fun select_tac (t, i) =
case t of
- @{term "Trueprop"} $ t' => select_tac (t',i)
- | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t',i)
+ @{term "Trueprop"} $ t' => select_tac (t', i)
+ | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i)
| @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
| @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
| @{term "Not"} $ _ => rtac @{thm notI} i
@@ -689,10 +681,12 @@
construct the patterns, the pattern in Line 8 cannot be constructed in this
way. The reason is that an antiquotation would fix the type of the
quantified variable. So you really have to construct the pattern using the
- basic term-constructors. This is not necessary in other cases, because their type
- is always fixed to function types involving only the type @{typ bool}. For the
- final pattern, we chose to just return @{ML all_tac}. Consequently,
- @{ML select_tac} never fails.
+ basic term-constructors. This is not necessary in other cases, because their
+ type is always fixed to function types involving only the type @{typ
+ bool}. (See Section \ref{sec:terms_types_manually} about constructing terms
+ manually.) For the catch-all pattern, we chose to just return @{ML all_tac}.
+ Consequently, @{ML select_tac} never fails.
+
Let us now see how to apply this tactic. Consider the four goals:
*}
@@ -727,8 +721,7 @@
the first goal. To do this can result in quite messy code. In contrast,
the ``reverse application'' is easy to implement.
- The tactic @{ML "CSUBGOAL"} is similar to @{ML SUBGOAL} except that it takes
- a @{ML_type cterm} instead of a @{ML_type term}. Of course, this example is
+ Of course, this example is
contrived: there are much simpler methods available in Isabelle for
implementing a proof procedure analysing a goal according to its topmost
connective. These simpler methods use tactic combinators, which we will
@@ -922,7 +915,7 @@
@{ML REPEAT1} is similar, but runs the tactic at least once (failing if
this is not possible).
- If you are after the ``primed'' version of @{ML repeat_xmp} then you
+ If you are after the ``primed'' version of @{ML repeat_xmp}, then you
need to implement it as
*}
@@ -1013,14 +1006,416 @@
*}
-section {* Rewriting and Simplifier Tactics *}
+section {* Simplifier Tactics *}
+
+text {*
+  A lot of convenience in the reasoning with Isabelle derives from its
+  powerful simplifier. The power of the simplifier is a strength and a weakness at
+  the same time, because you can easily make the simplifier run into a loop and its
+  behaviour can be difficult to predict. There is also a multitude
+  of options that you can configure to control the behaviour of the simplifier.
+  We describe some of them in this and the next section.
+
+  There are the following five main tactics behind
+  the simplifier (in parentheses are their user-level counterparts):
+
+ \begin{isabelle}
+ \begin{center}
+ \begin{tabular}{l@ {\hspace{2cm}}l}
+ @{ML simp_tac} & @{text "(simp (no_asm))"} \\
+ @{ML asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
+ @{ML full_simp_tac} & @{text "(simp (no_asm_use))"} \\
+ @{ML asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
+ @{ML asm_full_simp_tac} & @{text "(simp)"}
+ \end{tabular}
+ \end{center}
+ \end{isabelle}
+
+  All of the tactics take a simpset and an integer as argument (the latter as usual
+  to specify the goal to be analysed). So the proof
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(simp)
+done
+
+text {*
+ corresponds on the ML-level to the tactic
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
+done
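+
+text {*
+  The difference between these tactics shows up in goals where an assumption is
+  needed as a rewrite rule: @{ML simp_tac} ignores the assumptions of a goal,
+  while @{ML asm_full_simp_tac} uses them. For example (a small sketch):
+*}
+
+lemma "x = y \<Longrightarrow> x + 1 = y + (1::nat)"
+apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
+done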
+
+text {*
+  If the simplifier cannot make any progress, then it leaves the goal unchanged,
+  i.e.~it does not raise any error message. That means if you use it to unfold a
+  definition of a constant and this constant is not present in the goal state,
+  you can still safely apply the simplifier.
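+
+  For example, in the following proof the added simplification rule
+  @{thm [source] Diff_Int} (a fact about sets) does not apply to the goal
+  (a small sketch):
+*}
+
+lemma shows "P \<or> Q \<Longrightarrow> Q \<or> P"
+apply(tactic {* simp_tac (@{simpset} addsimps [@{thm Diff_Int}]) 1 *})
+txt{* The tactic does not fail, but leaves the goal unchanged:
+
+  \begin{minipage}{\textwidth}
+  @{subgoals [display]}
+  \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*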
+
+ When using the simplifier, the crucial information you have to provide is
+ the simpset. If this information is not handled with care, then the
+ simplifier can easily run into a loop. Therefore a good rule of thumb is to
+ use simpsets that are as minimal as possible. It might be surprising that a
+ simpset is more complex than just a simple collection of theorems used as
+ simplification rules. One reason for the complexity is that the simplifier
+ must be able to rewrite inside terms and should also be able to rewrite
+  according to rules that have preconditions.
+
+
+  The rewriting inside terms requires congruence rules, which
+  are meta-equalities typically of the form
+
+ \begin{isabelle}
+ \begin{center}
+ \mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
+ {@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
+ \end{center}
+ \end{isabelle}
+
+ with @{text "constr"} being a term-constructor, like @{const "If"} or @{const "Let"}.
+  Every simpset contains only
+  one congruence rule for each term-constructor, which however can be
+ overwritten. The user can declare lemmas to be congruence rules using the
+ attribute @{text "[cong]"}. In HOL, the user usually states these lemmas as
+ equations, which are then internally transformed into meta-equations.
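+
+  For example, the HOL lemma @{thm [source] if_cong} is such a congruence
+  rule for @{const If}; declaring it with the attribute @{text "[cong]"}
+  allows the simplifier to rewrite the branches of an if-statement under
+  the assumption that the condition holds (or, for the else-branch, does
+  not hold):
+
+  @{thm [display] if_cong[no_vars]}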
+
+
+  The rewriting with rules involving preconditions requires what in
+  Isabelle is called a subgoaler, a solver and a looper. These can be arbitrary
+  tactics that can be installed in a simpset and which are called at
+  various stages during simplification. However, simpsets also include
+  simprocs, which can produce rewrite rules on demand (see next
+  section). Other components are split-rules, which can simplify, for example,
+  the ``then'' and ``else'' branches of if-statements under the corresponding
+  preconditions.
+
+
+ \begin{readmore}
+ For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
+ and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in
+ @{ML_file "HOL/Tools/simpdata.ML"}. Generic splitters are implemented in
+ @{ML_file "Provers/splitter.ML"}.
+ \end{readmore}
+
+ \begin{readmore}
+ FIXME: Find the right place Discrimination nets are implemented
+ in @{ML_file "Pure/net.ML"}.
+ \end{readmore}
+
+ The most common combinators to modify simpsets are
+
+ \begin{isabelle}
+ \begin{tabular}{ll}
+ @{ML addsimps} & @{ML delsimps}\\
+ @{ML addcongs} & @{ML delcongs}\\
+ @{ML addsimprocs} & @{ML delsimprocs}\\
+ \end{tabular}
+ \end{isabelle}
+
+  Splitters are added and deleted in the same manner with @{ML addsplits}
+  and @{ML delsplits}.
+*}
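+
+text {*
+  For example, a simpset that splits if-statements during simplification can
+  be built as follows (a small sketch; @{thm [source] split_if} is the split
+  rule for if-statements in HOL):
+*}
+
+ML{*val if_split_ss = HOL_basic_ss addsplits [@{thm split_if}]*}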
+
+text_raw {*
+\begin{figure}[tp]
+\begin{isabelle}*}
+ML{*fun print_ss ctxt ss =
+let
+ val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss
+
+ fun name_thm (nm, thm) =
+ " " ^ nm ^ ": " ^ (str_of_thm ctxt thm)
+ fun name_ctrm (nm, ctrm) =
+ " " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm)
+
+ val s1 = ["Simplification rules:"]
+ val s2 = map name_thm simps
+ val s3 = ["Congruences rules:"]
+ val s4 = map name_thm congs
+ val s5 = ["Simproc patterns:"]
+ val s6 = map name_ctrm procs
+in
+ (s1 @ s2 @ s3 @ s4 @ s5 @ s6)
+ |> separate "\n"
+ |> implode
+ |> warning
+end*}
+text_raw {*
+\end{isabelle}
+\caption{The function @{ML MetaSimplifier.dest_ss} returns a record containing
+ all printable information stored in a simpset. We are here only interested in the
+  simplification rules, congruence rules and simprocs.\label{fig:printss}}
+\end{figure} *}
+
+text {*
+  To see how simpsets work, consider the function in Figure~\ref{fig:printss}, which
+  prints out some parts of a simpset. If you use it to print out the components of the
+  empty simpset, in ML @{ML empty_ss}, the most primitive simpset,
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} empty_ss"
+"Simplification rules:
+Congruence rules:
+Simproc patterns:"}
+
+ you can see it contains nothing. This simpset is usually not useful, except as a
+ building block to build bigger simpsets. For example you can add to @{ML empty_ss}
+ the simplification rule @{thm [source] Diff_Int} as follows:
+*}
+
+ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
+
+text {*
+  Printing out the components of this simpset then gives:
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} ss1"
+"Simplification rules:
+ ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruence rules:
+Simproc patterns:"}
+
+  (The name @{text "??.unknown"} is printed because the theorem produced by
+  the @{ML RS}-composition is not stored under a name in the theorem database.)
+
+ Adding also the congruence rule @{thm [source] UN_cong}
+*}
+
+ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}] *}
text {*
- @{ML rewrite_goals_tac}
+ gives
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} ss2"
+"Simplification rules:
+ ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruence rules:
+ UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x
+Simproc patterns:"}
+
+  Notice that we had to add these lemmas as meta-equations: @{ML empty_ss}
+  expects simplification and congruence rules in this form. However, even
+  after adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
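+
+  The transformation of a HOL equation into a meta-equation is done by
+  resolving it with the lemma @{thm [source] eq_reflection}, as in the code
+  above. For example:
+
+  @{ML_response_fake [display,gray]
+  "no_vars @{context} (@{thm Diff_Int} RS @{thm eq_reflection})"
+  "A - B \<inter> C \<equiv> A - B \<union> (A - C)"}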
+
+ In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While
+ printing out the components of this simpset
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} HOL_basic_ss"
+"Simplification rules:
+Congruence rules:
+Simproc patterns:"}
+
+ also produces ``nothing'', the printout is misleading. In fact
+  @{ML HOL_basic_ss} is set up so that it can solve goals of the
+ form @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]};
+ and also resolve with assumptions. For example:
+*}
+
+lemma
+ "True" and "t = t" and "t \<equiv> t" and "False \<Longrightarrow> Foo" and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
+apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *})
+done
+
+text {*
+  This behaviour is not due to the simplification rules, but due to how the
+  subgoaler, solver and looper are set up. @{ML HOL_basic_ss} is usually a good
+  starting point for building your own simpsets, because of the low danger of
+  causing loops via interacting simplification rules.
+
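+  For example, a little simpset for rewriting if-statements can be built on
+  top of @{ML HOL_basic_ss} as follows (a small sketch; @{thm [source] if_True}
+  and @{thm [source] if_False} are the standard simplification rules in HOL
+  for if-statements):
+*}
+
+ML{*val if_ss = HOL_basic_ss addsimps [@{thm if_True}, @{thm if_False}]*}
+
+lemma "(if True then 1 else 2) = (1::nat)"
+apply(tactic {* simp_tac if_ss 1 *})
+done
+
+text {*
+  The final goal @{text "1 = 1"} is solved by reflexivity, which, as mentioned
+  above, @{ML HOL_basic_ss} can deal with.
+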
+  The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing
+ already many useful simplification and congruence rules for the logical
+ connectives in HOL.
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} HOL_ss"
+"Simplification rules:
+ Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
+ HOL.the_eq_trivial: THE x. x = y \<equiv> y
+ HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y
+ \<dots>
+Congruence rules:
+ HOL.simp_implies: \<dots>
+ \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q')
+ op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q'
+Simproc patterns:
+ \<dots>"}
+
- @{ML ObjectLogic.full_atomize_tac}
-
- @{ML ObjectLogic.rulify_tac}
+  The simplifier is often used to unfold definitions in a proof. For this
+  there is the tactic @{ML rewrite_goals_tac}. Suppose for example the
+  definition
+*}
+
+definition "MyTrue \<equiv> True"
+
+lemma shows "MyTrue \<Longrightarrow> True \<and> True"
+apply(rule conjI)
+apply(tactic {* rewrite_goals_tac [@{thm MyTrue_def}] *})
+txt{* then the tactic produces the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ As you can see, the tactic unfolds the definitions in all subgoals.
+*}
+
+
+text_raw {*
+\begin{figure}[tp]
+\begin{isabelle} *}
+types prm = "(nat \<times> nat) list"
+consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet> _" [80,80] 80)
+
+primrec (perm_nat)
+ "[]\<bullet>c = c"
+ "(ab#pi)\<bullet>c = (if (pi\<bullet>c)=fst ab then snd ab
+ else (if (pi\<bullet>c)=snd ab then fst ab else (pi\<bullet>c)))"
+
+primrec (perm_prod)
+ "pi\<bullet>(x, y) = (pi\<bullet>x, pi\<bullet>y)"
+
+primrec (perm_list)
+ "pi\<bullet>[] = []"
+ "pi\<bullet>(x#xs) = (pi\<bullet>x)#(pi\<bullet>xs)"
+
+lemma perm_append[simp]:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
+by (induct pi\<^isub>1) (auto)
+
+lemma perm_eq[simp]:
+ fixes c::"nat" and pi::"prm"
+ shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
+by (induct pi) (auto)
+
+lemma perm_rev[simp]:
+ fixes c::"nat" and pi::"prm"
+ shows "pi\<bullet>((rev pi)\<bullet>c) = c"
+by (induct pi arbitrary: c) (auto)
+
+lemma perm_compose:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)"
+by (induct pi\<^isub>2) (auto)
+text_raw {*
+\end{isabelle}\medskip
+\caption{A simple theory about permutations over @{typ nat}. The point is that the
+ lemma @{thm [source] perm_compose} cannot be directly added to the simplifier, as
+ it would cause the simplifier to loop. It can still be used as a simplification
+ rule if the permutation is sufficiently protected.\label{fig:perms}
+ (FIXME: Uses old primrec.)}
+\end{figure} *}
+
+
+text {*
+  The simplifier is often used in order to bring terms into a normal form.
+  Unfortunately, the situation often arises that the corresponding
+  simplification rules will cause the simplifier to run into an infinite
+  loop. Consider for example the simple theory about permutations over natural
+  numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to
+  push permutations as far inside a term as possible, where they might
+  disappear by lemma @{thm [source] perm_rev}. However, to fully normalise all
+  instances, it would be desirable to also add the lemma @{thm [source]
+  perm_compose} to the simplifier for pushing permutations over other
+  permutations. Unfortunately, the right-hand side of this lemma is again an
+  instance of the left-hand side and so causes an infinite loop. There seems
+  to be no easy way to reformulate this rule and so one ends up with clunky
+  proofs like:
+*}
+
+lemma
+ fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(simp)
+apply(rule trans)
+apply(rule perm_compose)
+apply(simp)
+done
+
+text {*
+ It is however possible to create a single simplifier tactic that solves
+ such proofs. The trick is to introduce an auxiliary constant for permutations
+  and to split the simplification into two phases (below actually three). Let
+  us assume the auxiliary constant is
+*}
+
+definition
+ perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)
+where
+ "pi \<bullet>aux c \<equiv> pi \<bullet> c"
+
+text {* Now the two lemmas *}
+
+lemma perm_aux_expand:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)"
+unfolding perm_aux_def by (rule refl)
+
+lemma perm_compose_aux:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)"
+unfolding perm_aux_def by (rule perm_compose)
+
+text {*
+  are simple consequences of the definition and @{thm [source] perm_compose}.
+  More importantly, the lemma @{thm [source] perm_compose_aux} can be safely
+  added to the simplifier, because now the right-hand side is no longer an
+  instance of the left-hand side. In a sense it freezes all redexes of
+  permutation compositions after one step. In this way, we can split the
+  simplification of permutations into three phases without the user noticing
+  anything about the auxiliary constant. We first freeze any instance of
+  permutation compositions in the term using lemma @{thm [source] "perm_aux_expand"}
+  (Line 9); then simplify all other permutations, including pushing
+  permutations over other permutations by rule @{thm [source] perm_compose_aux}
+  (Line 10); and finally ``unfreeze'' all instances of permutation compositions
+  by unfolding the definition of the auxiliary constant.
+*}
+
+ML %linenosgray{*val perm_simp_tac =
+let
+ val thms1 = [@{thm perm_aux_expand}]
+ val thms2 = [@{thm perm_append}, @{thm perm_eq}, @{thm perm_rev},
+ @{thm perm_compose_aux}] @ @{thms perm_prod.simps} @
+ @{thms perm_list.simps} @ @{thms perm_nat.simps}
+ val thms3 = [@{thm perm_aux_def}]
+in
+ simp_tac (HOL_basic_ss addsimps thms1)
+ THEN' simp_tac (HOL_basic_ss addsimps thms2)
+ THEN' simp_tac (HOL_basic_ss addsimps thms3)
+end*}
+
+text {*
+  For all three phases we have to build simpsets by adding the specific lemmas.
+  Since @{ML HOL_basic_ss} is sufficient for our purposes here, we add the
+  lemmas to it in order to obtain the desired results. Now we can solve the
+  following lemma
+*}
+
+lemma
+ fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(tactic {* perm_simp_tac 1 *})
+done
+
+
+text {*
+ in one step. This tactic can deal with most instances of normalising permutations;
+ in order to solve all cases we have to deal with corner-cases such as the
+ lemma being an exact instance of the permutation composition lemma. This can
+  often be done more easily by implementing a simproc or a conversion. Both will be
+ explained in the next two chapters.
+
+ (FIXME: Is it interesting to say something about @{term "op =simp=>"}?)
+
+ (FIXME: What are the second components of the congruence rules---something to
+ do with weak congruence constants?)
+
+ (FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)
+
+ (FIXME: @{ML ObjectLogic.full_atomize_tac},
+ @{ML ObjectLogic.rulify_tac})
*}
@@ -1349,7 +1744,7 @@
"*** Exception- CTERM (\"no conversion\", []) raised"}
A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it
- produces an equation between a term and its beta-normal form. For example
+ produces a meta-equation between a term and its beta-normal form. For example
@{ML_response_fake [display,gray]
"let
@@ -1362,7 +1757,7 @@
"((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"},
- since the pretty printer for @{ML_type cterm}s already beta-normalises terms.
+ since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
But by the way how we constructed the term (using the function
@{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s),
we can be sure the left-hand side must contain beta-redexes. Indeed
@@ -1374,9 +1769,9 @@
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
+ val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
in
- #prop (rep_thm
- (Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)))
+ #prop (rep_thm thm)
end"
"Const (\"==\",\<dots>) $
(Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
@@ -1401,15 +1796,15 @@
The main point of conversions is that they can be used for rewriting
@{ML_type cterm}s. To do this you can use the function @{ML
- "Conv.rewr_conv"} which expects a meta-equation as an argument. Suppose we
- want to rewrite a @{ML_type cterm} according to the (meta)equation:
+ "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we
+ want to rewrite a @{ML_type cterm} according to the meta-equation:
*}
lemma true_conj1: "True \<and> P \<equiv> P" by simp
text {*
- You can see how this function works with the example
- @{term "True \<and> (Foo \<longrightarrow> Bar)"}:
+ You can see how this function works in the example rewriting
+ the @{ML_type cterm} @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
@{ML_response_fake [display,gray]
"let
@@ -1420,7 +1815,8 @@
"True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
Note, however, that the function @{ML Conv.rewr_conv} only rewrites the
- outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match the
+ outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match
+ exactly the
left-hand side of the theorem, then @{ML Conv.rewr_conv} raises
the exception @{ML CTERM}.
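+
+  For example (a small sketch; the exact exception value below is an
+  assumption on our part, the point is that @{ML CTERM} is raised):
+
+  @{ML_response_fake [display,gray]
+  "Conv.rewr_conv @{thm true_conj1} @{cterm \"True \<or> P\"}"
+  "*** Exception- CTERM (\"rewr_conv\", \<dots>) raised"}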
@@ -1458,7 +1854,7 @@
Here the conversion of @{thm [source] true_conj1} only applies
in the first case, but fails in the second. The whole conversion
- does not fail, however, because the combinator @{ML else_conv} will then
+ does not fail, however, because the combinator @{ML Conv.else_conv} will then
try out @{ML Conv.all_conv}, which always succeeds.
Apart from the function @{ML beta_conversion in Thm}, which is able to fully
@@ -1479,9 +1875,10 @@
"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
The reason for this behaviour is that @{text "(op \<or>)"} expects two
- arguments. So the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
+ arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
conversion is then applied to @{text "t2"} which in the example above
- stands for @{term "True \<and> Q"}.
+ stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies
+ the conversion to the first argument of an application.
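+
+  For example, with @{ML Conv.fun_conv} and @{ML Conv.arg_conv} you can
+  rewrite inside the first argument of a binary application (a small sketch):
+
+  @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})
+  val ctrm = @{cterm \"(True \<and> Q) \<or> P\"}
+in
+  Conv.fun_conv conv ctrm
+end"
+  "(True \<and> Q) \<or> P \<equiv> Q \<or> P"}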
The function @{ML Conv.abs_conv} applies a conversion under an abstraction.
For example:
@@ -1495,13 +1892,14 @@
end"
"\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
- The conversion that goes under an application is
- @{ML Conv.combination_conv}. It expects two conversions as arguments,
- each of which is applied to the corresponding ``branch'' of the application.
+ Note that this conversion needs a context as an argument. The conversion that
+ goes under an application is @{ML Conv.combination_conv}. It expects two conversions
+ as arguments, each of which is applied to the corresponding ``branch'' of the
+ application.
- We can now apply all these functions in a
- conversion that recursively descends a term and applies a conversion in all
- possible positions.
+ We can now apply all these functions in a conversion that recursively
+ descends a term and applies a ``@{thm [source] true_conj1}''-conversion
+ in all possible positions.
*}
ML %linenosgray{*fun all_true1_conv ctxt ctrm =
@@ -1510,17 +1908,17 @@
(Conv.arg_conv (all_true1_conv ctxt) then_conv
Conv.rewr_conv @{thm true_conj1}) ctrm
| _ $ _ => Conv.combination_conv
- (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
+ (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
| Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm
| _ => Conv.all_conv ctrm*}
text {*
- This functions descends under applications (Line 6 and 7) and abstractions
- (Line 8); and ``fires'' if the outer-most constant is an @{text "True \<and> \<dots>"}
- (Lines 3-5); otherwise it leaves the term unchanged (Line 9). In Line 2
+  This function ``fires'' if the term is of the form @{text "True \<and> \<dots>"};
+  it descends under applications (Lines 6 and 7) and abstractions
+ (Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2
we need to transform the @{ML_type cterm} into a @{ML_type term} in order
to be able to pattern-match the term. To see this
- conversion in action, consider the following example
+ conversion in action, consider the following example:
@{ML_response_fake [display,gray]
"let
@@ -1531,8 +1929,6 @@
end"
"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
- where the conversion is applied ``deep'' inside the term.
-
To see how much control over rewriting you have by using conversions, let us
make the task a bit more complicated by rewriting according to the rule
@{text true_conj1}, but only in the first arguments of @{term If}s. Then
@@ -1544,7 +1940,7 @@
Const (@{const_name If}, _) $ _ =>
Conv.arg_conv (all_true1_conv ctxt) ctrm
| _ $ _ => Conv.combination_conv
- (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
+ (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
| Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm
| _ => Conv.all_conv ctrm *}
@@ -1555,7 +1951,7 @@
"let
val ctxt = @{context}
val ctrm =
- @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
+ @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
in
if_true1_conv ctxt ctrm
end"
@@ -1581,15 +1977,15 @@
Finally, conversions can also be turned into tactics and then applied to
goal states. This can be done with the help of the function @{ML CONVERSION},
- and also some predefined conversion combinator which traverse a goal
+ and also some predefined conversion combinators that traverse a goal
state. The combinators for the goal state are: @{ML Conv.params_conv} for
- going under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
- Q"}); the function @{ML Conv.prems_conv} for applying the conversion to all
- premises of a goal, and @{ML Conv.concl_conv} for applying the conversion to
+ converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
+ Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all
+ premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to
the conclusion of a goal.
Assume we want to apply @{ML all_true1_conv} only in the conclusion
- of the goal, and @{ML if_true1_conv} should only be applied to the premises.
+ of the goal, and @{ML if_true1_conv} should only apply to the premises.
Here is a tactic doing exactly that:
*}
@@ -1599,8 +1995,8 @@
in
CONVERSION
(Conv.params_conv ~1 (fn ctxt =>
- (Conv.prems_conv ~1 (if_true1_conv ctxt)) then_conv
- Conv.concl_conv ~1 (all_true1_conv ctxt)) ctxt) i
+ (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
+ Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt) i
end)*}
text {*
@@ -1628,16 +2024,33 @@
and simprocs; the advantage of conversions, however, is that you never have
to worry about non-termination.
+
+  There is also the combinator @{ML Conv.try_conv}: it applies a given
+  conversion and, in case of failure, behaves like @{ML Conv.all_conv},
+  that is it leaves the term unchanged.
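+
+  For example (a small sketch):
+
+  @{ML_response_fake [display,gray]
+  "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
+  "True \<or> P \<equiv> True \<or> P"}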
+
+ \begin{exercise}\label{ex:addconversion}
+ Write a tactic that does the same as the simproc in exercise
+  \ref{ex:addsimproc}, but is based on conversions. That means replace terms
+ of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make
+ the same assumptions as in \ref{ex:addsimproc}.
+ \end{exercise}
+
+ \begin{exercise}
+  Compare which way (either Exercise~\ref{ex:addsimproc} or \ref{ex:addconversion}) of
+ rewriting such terms is faster. For this you might have to construct quite
+ large terms. Also see Recipe \ref{rec:timing} for information about timing.
+ \end{exercise}
+
\begin{readmore}
See @{ML_file "Pure/conv.ML"} for more information about conversion combinators.
Further conversions are defined in @{ML_file "Pure/thm.ML"},
@{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}.
\end{readmore}
+
*}
-section {* Structured Proofs *}
+
+section {* Structured Proofs (TBD) *}
text {* TBD *}