CookBook/Tactical.thy
changeset 149 253ea99c1441
parent 148 84d1392186d3
child 150 cb39c41548bd
child 169 d3fcc1a0272c
--- a/CookBook/Tactical.thy	Wed Feb 25 22:13:41 2009 +0000
+++ b/CookBook/Tactical.thy	Thu Feb 26 12:16:24 2009 +0000
@@ -311,7 +311,7 @@
   "(C)"}. Since the goal @{term C} can potentially be an implication, there is
   a ``protector'' wrapped around it (in the form of an outermost constant @{text
   "Const (\"prop\", bool \<Rightarrow> bool)"}; however, this constant
-  is invisible in the figure). This prevents that premises of @{text C} are
+  is invisible in the figure). This wrapper prevents the premises of @{text C} from being
   mis-interpreted as open subgoals. While tactics can operate on the subgoals
   (the @{text "A\<^isub>i"} above), they are expected to leave the conclusion
   @{term C} intact, with the exception of possibly instantiating schematic
@@ -453,7 +453,7 @@
 (*<*)oops(*>*)
 
 text {*
-  where the application of Rule @{text bspec} generates two subgoals involving the
+  where the application of rule @{text bspec} generates two subgoals involving the
   meta-variable @{text "?x"}. Now, if you are not careful, tactics 
   applied to the first subgoal might instantiate this meta-variable in such a 
   way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
@@ -576,7 +576,7 @@
   \end{tabular}
   \end{quote}
 
-  Note in the actual output the brown colour of the variables @{term x} and 
+  Notice in the actual output the brown colour of the variables @{term x} and 
   @{term y}. Although they are parameters in the original goal, they are fixed inside
   the subproof. By convention these fixed variables are printed in brown colour.
   Similarly the schematic variable @{term z}. The assumption, or premise, 
@@ -727,12 +727,13 @@
   the first goal. Doing this can result in quite messy code. In contrast, 
   the ``reverse application'' is easy to implement.
 
-  (FIXME: mention @{ML "CSUBGOAL"})
+  The tactic @{ML "CSUBGOAL"} is similar to @{ML SUBGOAL} except that it takes
+  a @{ML_type cterm} instead of a @{ML_type term}.  Of course, this example is
+  contrived: there are much simpler methods available in Isabelle for
+  implementing a proof procedure analysing a goal according to its topmost
+  connective. These simpler methods use tactic combinators, which we will
+  explain in the next section.
 
-  Of course, this example is contrived: there are much simpler methods available in 
-  Isabelle for implementing a proof procedure analysing a goal according to its topmost
-  connective. These simpler methods use tactic combinators, which we will explain 
-  in the next section.
 *}
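+
+text {*
+  As a minimal sketch (the name @{text "print_cgoal_tac"} is made up for this
+  illustration), the function given to @{ML CSUBGOAL} receives the subgoal as
+  a @{ML_type cterm}, which we convert back into a @{ML_type term} merely for
+  printing it:
+*}
+
+ML{*(* prints the subgoal and leaves the goal state unchanged *)
+val print_cgoal_tac = CSUBGOAL (fn (goal, _) =>
+  (writeln (Syntax.string_of_term @{context} (Thm.term_of goal)); all_tac))*}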
 
 section {* Tactic Combinators *}
@@ -1027,10 +1028,10 @@
 
 text {*
   In Isabelle you can also implement custom simplification procedures, called
-  \emph{simprocs}. Simprocs can be triggered on a specified term-pattern and
-  rewrite a term according to a theorem. They are useful in cases where
-  a rewriting rule must be produced on ``demand'' or when rewriting by
-  simplification is too unpredictable and potentially loops.
+  \emph{simprocs}. Simprocs can be triggered by the simplifier on a specified
+  term-pattern and rewrite a term according to a theorem. They are useful in
+  cases where a rewriting rule must be produced on ``demand'' or when
+  rewriting by simplification is too unpredictable and potentially loops.
 
   To see how simprocs work, let us first write a simproc that just prints out
   the pattern which triggers it and otherwise does nothing. For this
@@ -1052,7 +1053,7 @@
   returns @{ML NONE} (standing for an optional @{ML_type thm}) since at the
   moment we are \emph{not} interested in actually rewriting anything. We want
   the simproc to be triggered by the pattern @{term "Suc n"}. This can be
-  done by adding the simproc to the current simproc as follows
+  done by adding the simproc to the current simpset as follows
 *}
 
 simproc_setup fail_sp ("Suc n") = {* K fail_sp_aux *}
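+
+text {*
+  As a minimal sketch of how the simproc is now invoked (the lemma below is
+  only for illustration), any call of the simplifier on a goal containing
+  @{term "Suc n"} will trigger @{text fail_sp} and print out the matched redex:
+*}
+
+lemma shows "Suc 0 = 1"
+  apply(simp)
+(*<*)oops(*>*)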
@@ -1114,7 +1115,8 @@
 text {*
   Here the redex is given as a @{ML_type term}, instead of a @{ML_type cterm}
   (therefore we print it out using the function @{ML string_of_term in Syntax}).
-  We can turn this function into a proper simproc using
+  We can turn this function into a proper simproc using the function 
+  @{ML Simplifier.simproc_i}:
 *}
 
 
@@ -1130,7 +1132,7 @@
   Here the pattern is given as a @{ML_type term} (instead of a @{ML_type cterm}).
   The function also takes a list of patterns that can trigger the simproc.
   Now the simproc is set up and can be explicitly added using
-  {ML addsimprocs} to a simpset whenerver
+  @{ML addsimprocs} to a simpset whenever
   needed. 
 
   Simprocs are applied from inside to outside and from left to right. You can
@@ -1138,7 +1140,7 @@
 *}
 
 lemma shows "Suc (Suc 0) = (Suc 1)"
-  apply(tactic {* simp_tac (HOL_ss addsimprocs [fail_sp']) 1*})
+  apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail_sp']) 1*})
 (*<*)oops(*>*)
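+
+text {*
+  As a small aside (the name @{text fail_ss} is made up for this illustration),
+  such a simpset can of course also be built once in ML and then reused in
+  several tactics:
+*}
+
+ML{*(* a simpset containing only the simproc fail_sp' *)
+val fail_ss = HOL_basic_ss addsimprocs [fail_sp']*}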
 
 text {*
@@ -1190,7 +1192,7 @@
 *}
 
 lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"
-  apply(tactic {* simp_tac (HOL_ss addsimprocs [plus_one_sp]) 1*})
+  apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one_sp]) 1*})
 txt{*
   where the simproc produces the goal state
 
@@ -1208,7 +1210,6 @@
   Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
   with the number @{text n} increased by one. First we implement a function that
   takes a term and produces the corresponding integer value.
-
 *}
 
 ML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
@@ -1412,7 +1413,7 @@
 
   @{ML_response_fake [display,gray]
 "let 
-  val ctrm =  @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
+  val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
 in
   Conv.rewr_conv @{thm true_conj1} ctrm
 end"
@@ -1439,7 +1440,8 @@
   "(\<lambda>x. x \<and> False) True \<equiv> False"}
 
   where we first beta-reduce the term and then rewrite according to
-  @{thm [source] true_conj1}. 
+  @{thm [source] true_conj1}. (Recall the problem with the pretty-printer
+  normalising all terms.)
 
   The conversion combinator @{ML else_conv} tries out the 
   first one, and if it does not apply, tries the second. For example 
@@ -1459,11 +1461,11 @@
   does not fail, however, because the combinator @{ML else_conv} will then 
   try out @{ML Conv.all_conv}, which always succeeds.
 
-  Apart from the function @{ML beta_conversion in Thm}, which able to fully
-  beta-normalise a term, the restriction of conversions so far is that they
+  Apart from the function @{ML beta_conversion in Thm}, which is able to fully
+  beta-normalise a term, the conversions so far are restricted in that they
   only apply to the outer-most level of a @{ML_type cterm}. In what follows we
   will lift this restriction. The combinator @{ML Conv.arg_conv} will apply
-  the conversion on the first argument of an application, that is the term
+  the conversion to the argument of an application, that is, the term
   must be of the form @{ML "t1 $ t2" for t1 t2} and the conversion is applied
   to @{text t2}.  For example
 
@@ -1477,7 +1479,7 @@
 "P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
 
   The reason for this behaviour is that @{text "(op \<or>)"} expects two
-  arguments.  So the term must be of the form @{text "Const \<dots> $ t1 $ t2"}. The
+  arguments.  So the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
   conversion is then applied to @{text "t2"}, which in the example above
   stands for @{term "True \<and> Q"}.
 
@@ -1515,8 +1517,10 @@
 text {*
   This function descends under applications (Lines 6 and 7) and abstractions 
   (Line 8); it ``fires'' if the outer-most constant is @{text "True \<and> \<dots>"}
-  (Lines 3-5); otherwise it leaves the term unchanged (Line 9). To see this 
-  conversion in action, consider the following example.
+  (Lines 3-5); otherwise it leaves the term unchanged (Line 9). In Line 2
+  we need to transform the @{ML_type cterm} into a @{ML_type term} in order
+  to be able to pattern-match the term. To see this 
+  conversion in action, consider the following example
 
 @{ML_response_fake [display,gray]
 "let
@@ -1527,9 +1531,11 @@
 end"
   "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
 
-  To see how much control you have about rewriting via conversions, let us 
+  where the conversion is applied ``deep'' inside the term.
+
+  To see how much control you have over rewriting by using conversions, let us 
   make the task a bit more complicated by rewriting according to the rule
-  @{text true_conj1}, but only in the first arguments of @{term If}s. The 
+  @{text true_conj1}, but only in the first arguments of @{term If}s. Then 
   the conversion should be as follows.
 *}
 
@@ -1543,7 +1549,7 @@
   | _ => Conv.all_conv ctrm *}
 
 text {*
-  Here is an application of this conversion:
+  Here is an example of this conversion:
 
   @{ML_response_fake [display,gray]
 "let
@@ -1560,7 +1566,7 @@
 text {*
   So far we have only applied conversions to @{ML_type cterm}s. Conversions can, however, 
   also work on theorems using the function @{ML "Conv.fconv_rule"}. As an example, 
-  consider @{ML all_true1_conv} and the lemma:
+  consider the conversion @{ML all_true1_conv} and the lemma:
 *}
 
 lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp
@@ -1583,7 +1589,7 @@
   the conclusion of a goal.
 
   Assume we want to apply @{ML all_true1_conv} only in the conclusion
-  of the goal, and @{ML if_true1_conv} should only be applied in the premises.
+  of the goal, and @{ML if_true1_conv} should only be applied to the premises.
   Here is a tactic doing exactly that:
 *}
 
@@ -1611,9 +1617,7 @@
 apply(tactic {* true1_tac 1 *})
 txt {* where the tactic yields the goal state
 
-       \begin{minipage}{\textwidth}
-       @{subgoals [display]} 
-       \end{minipage} *}
+       @{subgoals [display]}*}
 (*<*)oops(*>*)
 
 text {*