ProgTutorial/Tactical.thy
changeset 412 73f716b9201a
parent 411 24fc00319c4a
child 413 95461cf6fd07
--- a/ProgTutorial/Tactical.thy	Fri Dec 04 23:45:32 2009 +0100
+++ b/ProgTutorial/Tactical.thy	Sun Dec 06 14:26:14 2009 +0100
@@ -1410,6 +1410,12 @@
 
   \footnote{\bf FIXME: show rewriting of cterms}
 
+  There is one restriction you have to keep in mind when using the simplifier:
+  it can only deal with rewriting rules that are higher-order patterns
+  (see Section \ref{sec:univ} on unification). If a rule is not a pattern,
+  then you have to use simprocs or conversions, which we shall describe in
+  the next section.
+
   When using the simplifier, the crucial information you have to provide is
   the simpset. If this information is not handled with care, then, as
   mentioned above, the simplifier can easily run into a loop. Therefore a good
@@ -2114,9 +2120,9 @@
 text {*
   Conversions are a thin layer on top of Isabelle's inference kernel, and can
   be viewed as a controllable, bare-bones version of Isabelle's simplifier.
-  The purpose of conversions is to manipulate on @{ML_type cterm}s. However,
+  The purpose of conversions is to manipulate @{ML_type cterm}s. However,
   we will also show in this section how conversions can be applied to theorems
-  and used as tactics. The type of conversions is
+  and to goal states. The type of conversions is
 *}
 
 ML{*type conv = cterm -> thm*}
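+
+text {*
+  As a minimal illustration (a sketch that is not part of the original text),
+  the conversion @{ML all_conv in Conv} maps a @{ML_type cterm} to an
+  instance of the meta-reflexivity theorem:
+
+  @{ML_response_fake [display,gray]
+  "Conv.all_conv @{cterm \"a \<or> b\"}"
+  "a \<or> b \<equiv> a \<or> b"}
+*}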
@@ -2352,7 +2358,7 @@
   @{ML bottom_conv in More_Conv} first traverses the term and
   on the ``way down'' applies the conversion, whereas @{ML top_conv in
   More_Conv} applies the conversion on the ``way up''. To see the difference, 
-  assume the following two meta-equations.
+  assume the following two meta-equations
 *}
 
 lemma conj_assoc:
@@ -2362,9 +2368,10 @@
 by simp_all
 
 text {*
-  and the @{ML_type cterm} @{text "(a \<and> (b \<and> c)) \<and> d"}. The reader should
+  and the @{ML_type cterm} @{text "(a \<and> (b \<and> c)) \<and> d"}. Here you should
   pause for a moment to be convinced that rewriting top-down and bottom-up 
-  according to the two meta-equations produces two results. 
+  according to the two meta-equations produces two different results. The
+  two results are calculated below. 
 
   @{ML_response_fake [display, gray]
   "let
@@ -2379,7 +2386,7 @@
   "(\"(a \<and> (b \<and> c)) \<and> d \<equiv> a \<and> (b \<and> (c \<and> d))\",
  \"(a \<and> (b \<and> c)) \<and> d \<equiv> (a \<and> b) \<and> (c \<and> d)\")"}
 
-  To see how much control you have about rewriting by using conversions, let us 
+  To see how much control you have over rewriting with conversions, let us 
   make the task a bit more complicated by rewriting according to the rule
   @{text true_conj1}, but only in the first arguments of @{term If}s. Then 
   the conversion should be as follows.
@@ -2395,8 +2402,8 @@
 
 text {*
   In the first function we only treat the top-most redex and also only the
-  success case and otherwise return @{ML no_conv in Conv}.  To apply this
-  conversion inside a term, we use in the last line the combinator @{ML_ind
+  success case. In the default case we return @{ML no_conv in Conv}.  To apply this
+  ``simple'' conversion inside a term, in the last line we use the combinator @{ML_ind
   top_sweep_conv in More_Conv}, which traverses the term starting from the
   root and applies it to all the redexes on the way, but stops in each branch as
   soon as it finds one redex. Here is an example of this conversion:
@@ -2416,14 +2423,14 @@
 text {*
   So far we only applied conversions to @{ML_type cterm}s. Conversions can, however, 
   also work on theorems using the function @{ML_ind  fconv_rule in Conv}. As an example, 
-  consider the conversion @{ML true_conj1_conv} and the lemma:
+  consider again the conversion @{ML true_conj1_conv} and the lemma:
 *}
 
 lemma foo_test: 
   shows "P \<or> (True \<and> \<not>P)" by simp
 
 text {*
-  Using the conversion @{ML true_conj1_conv} you can transform this theorem into a 
+  Using the conversion, you can transform this theorem into a 
   new theorem as follows
 
   @{ML_response_fake [display,gray]
@@ -2435,22 +2442,21 @@
 end" 
   "?P \<or> \<not> ?P"}
 
-  Finally, conversions can also be turned into tactics and then applied to
-  goal states. This can be done with the help of the function 
-  @{ML_ind CONVERSION in Tactical},
-  and also some predefined conversion combinators that traverse a goal
+  Finally, Isabelle provides the function @{ML_ind CONVERSION in Tactical} 
+  for turning conversions into tactics. For this purpose, there are some predefined 
+  conversion combinators (see the sketch after the list below) that traverse a goal
   state and can selectively apply the conversion. The combinators for 
   the goal state are: 
 
   \begin{itemize}
   \item @{ML_ind params_conv in Conv} for converting under parameters
-  (i.e.~where goals are of the form @{text "\<And>x. P x \<Longrightarrow> Q x"})
-
-  \item @{ML_ind prems_conv in Conv} for applying a conversion to all
-  premises of a goal, and
+  (i.e.~where a goal state is of the form @{text "\<And>x. P x \<Longrightarrow> Q x"})
+
+  \item @{ML_ind prems_conv in Conv} for applying a conversion to the
+  premises of a goal state, and
 
   \item @{ML_ind concl_conv in Conv} for applying a conversion to the
-  conclusion of a goal.
+  conclusion of a goal state.
   \end{itemize}
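+
+  As a rough sketch (not part of the original text; @{text "conv"} stands for an
+  arbitrary conversion and @{text "concl_conv_tac"} is just a made-up name), a
+  tactic that applies a given conversion under the parameters and only in the
+  conclusion of a goal could be put together as follows:
+*}
+
+ML{*(* apply conv only in the conclusion, under any parameters of the goal *)
+fun concl_conv_tac conv ctxt =
+  CONVERSION (Conv.params_conv ~1 (fn _ => Conv.concl_conv ~1 conv) ctxt)*}
+
+text {*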
 
   Assume we want to apply @{ML true_conj1_conv} only in the conclusion
@@ -2488,9 +2494,9 @@
   As you can see, the premises are rewritten according to @{ML if_true1_conv}, while
   the conclusion is rewritten according to @{ML true_conj1_conv}. If we only have one
   conversion that should be uniformly applied to the whole goal state, we
-  can also use @{ML_ind top_conv in More_Conv}.
-
-  Conversions can also be helpful for constructing meta-equality theorems.
+  can simplify @{ML true1_tac} using @{ML_ind top_conv in More_Conv}.
+
+  Conversions are also helpful for constructing meta-equality theorems.
   Such theorems are often definitions. As an example consider the following
   two ways of defining the identity function in Isabelle. 
 *}
@@ -2503,9 +2509,10 @@
 
 text {*
   Although both definitions define the same function, the theorems @{thm
-  [source] id1_def} and @{thm [source] id2_def} are different. However it is
-  easy to transform one theorem into the other. The function @{ML_ind abs_def
-  in Drule} can automatically transform theorem @{thm [source] id1_def}
+  [source] id1_def} and @{thm [source] id2_def} are different meta-equations. However, it is
+  easy to transform one into the other. The function @{ML_ind abs_def
+  in Drule} from the structure @{ML_struct Drule} can automatically transform 
+  the theorem @{thm [source] id1_def}
   into @{thm [source] id2_def} by abstracting all variables on the 
   left-hand side in the right-hand side.
 
@@ -2544,7 +2551,7 @@
   assumption in @{ML unabs_def} is that the right-hand side is an
   abstraction. We compute the possibly empty list of abstracted variables in
   Line 4 using the function @{ML_ind strip_abs_vars in Term}. For this we have to
-  transform the @{ML_type cterm} into a @{ML_type term}. In Line 5 we
+  first transform the @{ML_type cterm} into a @{ML_type term}. In Line 5 we
   import these variables into the context @{text "ctxt'"}, in order to
   export them again in Line 15.  In Line 8 we certify the list of variables,
   which in turn we apply to the @{ML_text lhs} of the definition using the