polished
author Christian Urban <urbanc@in.tum.de>
Thu, 26 Feb 2009 12:16:24 +0000
changeset 149 253ea99c1441
parent 148 84d1392186d3
child 150 cb39c41548bd
child 169 d3fcc1a0272c
polished
CookBook/FirstSteps.thy
CookBook/Parsing.thy
CookBook/Tactical.thy
cookbook.pdf
--- a/CookBook/FirstSteps.thy	Wed Feb 25 22:13:41 2009 +0000
+++ b/CookBook/FirstSteps.thy	Thu Feb 26 12:16:24 2009 +0000
@@ -141,9 +141,9 @@
    Syntax.string_of_term ctxt (term_of t)*}
 
 text {*
-  The function @{ML term_of} extracts the @{ML_type term} from a @{ML_type cterm}.
-  If there are more than one @{ML_type cterm}s to be printed, you can use the 
-  function @{ML commas} to separate them.
+  In this example the function @{ML term_of} extracts the @{ML_type term} from
+  a @{ML_type cterm}. If there is more than one @{ML_type cterm} to be
+  printed, you can use the function @{ML commas} to separate them.
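+
+  For example (a small illustration of @{ML commas} on plain strings, not
+  tied to @{ML_type cterm}s):
+
+  @{ML_response [display,gray] "commas [\"1\", \"2\", \"3\"]" "\"1, 2, 3\""}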
 *} 
 
 ML{*fun str_of_cterms ctxt ts =  
@@ -214,7 +214,7 @@
   common when dealing with theories (for example by adding a definition, followed by
   lemmas and so on). The reverse application allows you to read what happens in 
   a top-down manner. This kind of coding should also be familiar, 
-  if you used to Haskell's do-notation. Writing the function @{ML inc_by_five} using 
+  if you have been exposed to Haskell's do-notation. Writing the function @{ML inc_by_five} using 
   the reverse application is much clearer than writing
 *}
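+
+text {*
+  For illustration, a small function in this reverse-application style might
+  look as follows (only a sketch with an invented name, not the actual
+  @{ML inc_by_five}):
+*}
+
+ML{*(* sketch: the steps read top-down, much like do-notation *)
+fun inc_by_five_sketch x =
+  x |> (fn n => n + 1)
+    |> (fn n => n + 4)*}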
 
@@ -389,11 +389,11 @@
 (?P \<and> ?Q \<and> ?R) = (?Q \<and> ?P \<and> ?R)
 ((?P \<and> ?Q) \<and> ?R) = (?P \<and> ?Q \<and> ?R)"}  
 
-  You can also refer to the current simpset. To illustrate this we use the
+  You can also refer to the current simpset. To illustrate this we implement the
   function that extracts the theorem names stored in a simpset.
 *}
 
-ML{*fun get_thm_names simpset =
+ML{*fun get_thm_names_from_ss simpset =
 let
   val ({rules,...}, _) = MetaSimplifier.rep_ss simpset
 in
@@ -405,13 +405,13 @@
   information about the simpset. The rules of a simpset are stored in a
   \emph{discrimination net} (a data structure for fast indexing). From this
   net you can extract the entries using the function @{ML Net.entries}.
-  You can now use @{ML get_thm_names} to obtain all names of theorems stored
+  You can now use @{ML get_thm_names_from_ss} to obtain all names of theorems stored
   in the current simpset---this simpset can be referred to using the antiquotation
-  @{text "@{simpset}"}.\footnote
-  {FIXME: you cannot cleanly match against lists with ommited parts}
+  @{text "@{simpset}"}.
 
   @{ML_response_fake [display,gray] 
-"get_thm_names @{simpset}" "[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
+  "get_thm_names_from_ss @{simpset}" 
+  "[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
 
   \begin{readmore}
   The infrastructure for simpsets is implemented in @{ML_file "Pure/meta_simplifier.ML"}
@@ -431,7 +431,7 @@
   packages. Antiquotations solve this problem, since they are ``linked''
   statically at compile-time.  However, this static linkage also limits their
   usefulness in cases where data needs to be built up dynamically. In the
-  course of this introduction, you will learn more about these antiquotations:
+  course of this chapter, you will learn more about these antiquotations:
   they can simplify Isabelle programming since one can directly access all
   kinds of logical elements from the ML-level.
 
@@ -446,7 +446,7 @@
   @{ML_response [display,gray] 
 "@{term \"(a::nat) + b = c\"}" 
 "Const (\"op =\", \<dots>) $ 
-   (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
+  (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
 
   This will show the term @{term "(a::nat) + b = c"}, but printed using the internal
   representation of this term. This internal representation corresponds to the 
@@ -456,7 +456,7 @@
   variables are represented by the constructor @{ML Bound}. The index in @{ML Bound} refers to
   the number of Abstractions (@{ML Abs}) we have to skip until we hit the @{ML Abs} that
   binds the corresponding variable. However, in Isabelle the names of bound variables are 
-  kept at abstractions for printing purposes, and so should be treated only as comments. 
+  kept at abstractions for printing purposes, and so should be treated only as ``comments''. 
   Application in Isabelle is realised with the term-constructor @{ML $}.
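+
+  For instance, a simple abstraction is represented with these constructors
+  as follows (a small sketch):
+
+  @{ML_response [display,gray] 
+  "@{term \"\<lambda>x::nat. x\"}" "Abs (\"x\", \<dots>, Bound 0)"}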
 
   \begin{readmore}
@@ -492,7 +492,7 @@
 
 @{ML_response [display,gray] "(@{term \"P x\"}, @{prop \"P x\"})" 
 "(Free (\"P\", \<dots>) $ Free (\"x\", \<dots>),
-  Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"}
+ Const (\"Trueprop\", \<dots>) $ (Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)))"}
  
   where a coercion is inserted in the second component and 
 
@@ -617,7 +617,7 @@
   \end{exercise}
 
   There are a few subtle issues with constants. They usually crop up when
-  pattern matching terms or types, or constructing them. While it is perfectly ok
+  pattern matching terms or types, or when constructing them. While it is perfectly ok
   to write the function @{text is_true} as follows
 *}
 
@@ -651,7 +651,8 @@
 
 @{ML_response [display,gray] "is_all @{term \"\<forall>x::nat. P x\"}" "true"}
 
-  matches correctly (the first wildcard in the pattern matches any type).
+  matches correctly (the first wildcard in the pattern matches any type and the
+  second any term).
 
   However there is still a problem: consider the similar function that
   attempts to pick out @{text "Nil"}-terms:
@@ -666,7 +667,7 @@
   @{ML_response [display,gray] "is_nil @{term \"Nil\"}" "false"}
 
   The problem is that on the ML-level the name of a constant is more
-  subtle as you might expect. The function @{ML is_all} worked correctly,
+  subtle than you might expect. The function @{ML is_all} worked correctly,
   because @{term "All"} is such a fundamental constant, which can be referenced
   by @{ML "Const (\"All\", some_type)" for some_type}. However, if you look at
 
@@ -732,7 +733,7 @@
   the @{ML "@{theory}"} antiquotation to get hold of the current theory.
   For example you can write:
 
-  @{ML_response_fake [display,gray] "cterm_of @{theory} @{term \"a + b = c\"}" "a + b = c"}
+  @{ML_response_fake [display,gray] "cterm_of @{theory} @{term \"(a::nat) + b = c\"}" "a + b = c"}
 
   This can also be written with an antiquotation:
 
@@ -770,15 +771,15 @@
 
   To calculate the type, this function traverses the whole term and will
   detect any typing inconsistency. For example, changing the type of the variable 
-  from @{typ "nat"} to @{typ "int"} will result in the error message: 
+  @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message: 
 
   @{ML_response_fake [display,gray] 
   "type_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" 
   "*** Exception- TYPE (\"type_of: type mismatch in application\" \<dots>"}
 
   Since the complete traversal might sometimes be too costly and
-  not necessary, there is also the function @{ML fastype_of}, which 
-  returns the type of a term.
+  not necessary, there is the function @{ML fastype_of}, which 
+  also returns the type of a term.
 
   @{ML_response [display,gray] 
   "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::nat\"})" "bool"}
@@ -789,7 +790,7 @@
    @{ML_response [display,gray] 
   "fastype_of (@{term \"f::nat \<Rightarrow> bool\"} $ @{term \"x::int\"})" "bool"}
 
-  where no error is raised.
+  where no error is detected.
 
   Sometimes it is a bit inconvenient to construct a term with 
   complete typing annotations, especially in cases where the typing 
@@ -809,7 +810,7 @@
   Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"}
 
   Instead of giving explicitly the type for the constant @{text "plus"} and the free 
-  variable @{text "x"}, the type-inference filled in the missing information.
+  variable @{text "x"}, the type-inference fills in the missing information.
 
   \begin{readmore}
   See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
@@ -842,7 +843,6 @@
 
 @{ML_response_fake [display,gray]
 "let
-
   val assm1 = @{cprop \"\<And>(x::nat). P x \<Longrightarrow> Q x\"}
   val assm2 = @{cprop \"(P::nat\<Rightarrow>bool) t\"}
 
@@ -916,9 +916,9 @@
 
 text {* 
   where the function @{ML "Thm.rule_attribute"} expects a function taking a 
-  context (we ignore it in the code above) and a theorem (@{text thm}) and 
-  returns another theorem (namely @{text thm} resolved with the rule 
-  @{thm sym[no_vars]}). The function @{ML "Thm.rule_attribute"} then returns 
+  context (which we ignore in the code above) and a theorem (@{text thm}), and 
+  returns another theorem (namely @{text thm} resolved with the lemma 
+  @{thm [source] sym}: @{thm sym[no_vars]}). The function @{ML "Thm.rule_attribute"} then returns 
   an attribute (of type @{ML_type "attribute"}).
 
   Before we can use the attribute, we need to set it up. This can be done
@@ -931,10 +931,11 @@
 *}
 
 text {*
-  The attribute does not expect any further arguments (like @{text "[OF \<dots>]"} that 
-  can take a list of theorems as argument). Therefore we use the function
-  @{ML Attrib.no_args}. Later on we will also consider attributes taking further
-  arguments. An example for the attribute @{text "[my_sym]"} is the proof
+  The attribute does not expect any further arguments (unlike @{text "[OF
+  \<dots>]"}, for example, which can take a list of theorems as argument). Therefore
+  we use the function @{ML Attrib.no_args}. Later on we will also consider
+  attributes taking further arguments. An example for the attribute @{text
+  "[my_sym]"} is the proof
 *} 
 
 lemma test[my_sym]: "2 = Suc (Suc 0)" by simp
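+
+text {*
+  For reference, an attribute of this shape can be written roughly like this
+  (only a sketch with an invented name; the theory defines its own version
+  above):
+*}
+
+ML{*(* sketch: resolve a theorem with the lemma sym *)
+val my_sym_sketch = Thm.rule_attribute (fn _ => fn thm => thm RS @{thm sym})*}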
--- a/CookBook/Parsing.thy	Wed Feb 25 22:13:41 2009 +0000
+++ b/CookBook/Parsing.thy	Thu Feb 26 12:16:24 2009 +0000
@@ -45,7 +45,7 @@
 
   @{ML_response [display,gray] "($$ \"w\") (explode \"world\")" "(\"w\", [\"o\", \"r\", \"l\", \"d\"])"}
 
-  This function will either succeed (as in the two examples above) or raise the exception 
+  The function @{ML "$$"} will either succeed (as in the two examples above) or raise the exception 
   @{text "FAIL"} if no string can be consumed. For example trying to parse
 
   @{ML_response_fake [display,gray] "($$ \"x\") (explode \"world\")" 
@@ -318,14 +318,6 @@
   where the single-character strings in the parsed output are transformed
   back into one string.
  
-  \begin{exercise}\label{ex:scancmts}
-  Write a parser that parses an input string so that any comment enclosed
-  inside @{text "(*\<dots>*)"} is replaced by a the same comment but enclosed inside
-  @{text "(**\<dots>**)"} in the output string. To enclose a string, you can use the
-  function @{ML "enclose s1 s2 s" for s1 s2 s} which produces the string @{ML
-  "s1 ^ s ^ s2" for s1 s2 s}.
-  \end{exercise}
-
   The function @{ML Scan.ahead} parses some input, but leaves the original
   input unchanged. For example:
 
@@ -342,6 +334,14 @@
 "((\"h\", \"e\"), (1, [\"l\", \"l\", \"o\"]))"}
 
   (FIXME: In which situations is this useful? Give examples.) 
+
+  \begin{exercise}\label{ex:scancmts}
+  Write a parser that parses an input string so that any comment enclosed
+  inside @{text "(*\<dots>*)"} is replaced by the same comment but enclosed inside
+  @{text "(**\<dots>**)"} in the output string. To enclose a string, you can use the
+  function @{ML "enclose s1 s2 s" for s1 s2 s} which produces the string @{ML
+  "s1 ^ s ^ s2" for s1 s2 s}.
+  \end{exercise}
 *}
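+
+text {*
+  For the exercise it may help to see @{ML enclose} on its own (just the
+  library function, not a solution):
+
+  @{ML_response [display,gray] "enclose \"(**\" \"**)\" \"foo\"" "\"(**foo**)\""}
+*}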
 
 section {* Parsing Theory Syntax *}
@@ -354,7 +354,7 @@
 ML{*type 'a parser = OuterLex.token list -> 'a * OuterLex.token list*}
 
 text {*
-  This reason for using token parsers is that theory syntax, as well as the
+  The reason for using token parsers is that theory syntax, as well as the
   parsers for the arguments of proof methods, use the type @{ML_type
   OuterLex.token} (which is identical to the type @{ML_type
   OuterParse.token}).  However, there are also handy parsers for
@@ -576,7 +576,7 @@
   "YXML.parse \"\\^E\\^Ftoken\\^Efoo\\^E\\^F\\^E\""
   "XML.Elem (\"token\", [], [XML.Text \"foo\"])"}
 
-  This function returns an XML-tree. You can see better what is going on if
+  The result of the decoding is an XML-tree. You can see better what is going on if
   you replace @{ML Position.none} by @{ML "Position.line 42"}, say:
 
 @{ML_response [display,gray]
@@ -587,8 +587,8 @@
 end"
 "XML.Elem (\"token\", [(\"line\", \"42\"), (\"end_line\", \"42\")], [XML.Text \"foo\"])"}
   
-  The positional information is stored so that code called later on will be
-  able to give more precise error messages. 
+  The positional information is stored as part of an XML-tree so that code 
+  called later on will be able to give more precise error messages. 
 
   \begin{readmore}
   The functions to do with input and output of XML and YXML are defined 
@@ -692,12 +692,12 @@
 *}
 
 text {*
-  Whenever types are given, they are stored in the @{ML SOME}s. Since types
-  are part of the inner syntax they are strings with some encoded information 
-  (see previous section). 
-  If a syntax translation is present for a variable, then it is
-  stored in the @{ML Mixfix} datastructure; no syntax translation is
-  indicated by @{ML NoSyn}.
+  Whenever types are given, they are stored in the @{ML SOME}s. The types are
+  not yet given to the variable: this must be done by type inference later
+  on. Since types are part of the inner syntax they are strings with some
+  encoded information (see previous section). If a syntax translation is
+  present for a variable, then it is stored in the @{ML Mixfix} datastructure;
+  no syntax translation is indicated by @{ML NoSyn}.
 
   \begin{readmore}
   The datastructure for syntax annotations is defined in @{ML_file "Pure/Syntax/mixfix.ML"}.
@@ -929,7 +929,9 @@
   and then ``open up'' a proof in order to prove the proposition (for example
   \isacommand{lemma}) or prove some other properties (for example
   \isacommand{function}). To achieve this kind of behaviour, you have to use the kind
-  indicator @{ML thy_goal in OuterKeyword}.
+  indicator @{ML thy_goal in OuterKeyword}.  Note, however, that once you change the 
+  ``kind'' of a command from @{ML thy_decl in OuterKeyword} to @{ML thy_goal in OuterKeyword}, 
+  the keyword file needs to be re-created.
 
   Below we change \isacommand{foobar} so that it takes a proposition as
   argument and then starts a proof in order to prove it. Therefore in Line 13, 
@@ -981,8 +983,7 @@
   \isacommand{done}
   \end{isabelle}
 
-  However, once you change the ``kind'' of a command from @{ML thy_decl in OuterKeyword}
-  to @{ML thy_goal in OuterKeyword} then the keyword file needs to be re-created.
+ 
   
   (FIXME What do @{ML "Toplevel.theory"} 
   @{ML "Toplevel.print"} 
--- a/CookBook/Tactical.thy	Wed Feb 25 22:13:41 2009 +0000
+++ b/CookBook/Tactical.thy	Thu Feb 26 12:16:24 2009 +0000
@@ -311,7 +311,7 @@
   "(C)"}. Since the goal @{term C} can potentially be an implication, there is
   a ``protector'' wrapped around it (in the form of an outermost constant @{text
   "Const (\"prop\", bool \<Rightarrow> bool)"}; however this constant
-  is invisible in the figure). This prevents that premises of @{text C} are
+  is invisible in the figure). This wrapper prevents the premises of @{text C} from being
   mis-interpreted as open subgoals. While tactics can operate on the subgoals
   (the @{text "A\<^isub>i"} above), they are expected to leave the conclusion
   @{term C} intact, with the exception of possibly instantiating schematic
@@ -453,7 +453,7 @@
 (*<*)oops(*>*)
 
 text {*
-  where the application of Rule @{text bspec} generates two subgoals involving the
+  where the application of rule @{text bspec} generates two subgoals involving the
   meta-variable @{text "?x"}. Now, if you are not careful, tactics 
   applied to the first subgoal might instantiate this meta-variable in such a 
   way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"}
@@ -576,7 +576,7 @@
   \end{tabular}
   \end{quote}
 
-  Note in the actual output the brown colour of the variables @{term x} and 
+  Notice in the actual output the brown colour of the variables @{term x} and 
   @{term y}. Although they are parameters in the original goal, they are fixed inside
   the subproof. By convention these fixed variables are printed in brown colour.
   Similarly the schematic variable @{term z}. The assumption, or premise, 
@@ -727,12 +727,13 @@
   the first goal. To do this can result in quite messy code. In contrast, 
   the ``reverse application'' is easy to implement.
 
-  (FIXME: mention @{ML "CSUBGOAL"})
+  The tactic @{ML "CSUBGOAL"} is similar to @{ML SUBGOAL} except that it takes
+  a @{ML_type cterm} instead of a @{ML_type term}.  Of course, this example is
+  contrived: there are much simpler methods available in Isabelle for
+  implementing a proof procedure analysing a goal according to its topmost
+  connective. These simpler methods use tactic combinators, which we will
+  explain in the next section.
 
-  Of course, this example is contrived: there are much simpler methods available in 
-  Isabelle for implementing a proof procedure analysing a goal according to its topmost
-  connective. These simpler methods use tactic combinators, which we will explain 
-  in the next section.
 *}
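+
+text {*
+  As a small, hypothetical sketch of the @{ML SUBGOAL} interface (the
+  function name below is invented): a tactic that merely prints the subgoal
+  it is applied to and otherwise leaves the goal state unchanged.
+*}
+
+ML{*fun print_goal_tac ctxt = 
+  SUBGOAL (fn (goal, _) => 
+    (tracing (Syntax.string_of_term ctxt goal); all_tac))*}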
 
 section {* Tactic Combinators *}
@@ -1027,10 +1028,10 @@
 
 text {*
   In Isabelle you can also implement custom simplification procedures, called
-  \emph{simprocs}. Simprocs can be triggered on a specified term-pattern and
-  rewrite a term according to a theorem. They are useful in cases where
-  a rewriting rule must be produced on ``demand'' or when rewriting by
-  simplification is too unpredictable and potentially loops.
+  \emph{simprocs}. Simprocs can be triggered by the simplifier on a specified
+  term-pattern and rewrite a term according to a theorem. They are useful in
+  cases where a rewriting rule must be produced on ``demand'' or when
+  rewriting by simplification is too unpredictable and potentially loops.
 
   To see how simprocs work, let us first write a simproc that just prints out
   the pattern which triggers it and otherwise does nothing. For this
@@ -1052,7 +1053,7 @@
   returns @{ML NONE} (standing for an optional @{ML_type thm}) since at the
   moment we are \emph{not} interested in actually rewriting anything. We want
   the simproc to be triggered by the pattern @{term "Suc n"}. This can be
-  done by adding the simproc to the current simproc as follows
+  done by adding the simproc to the current simpset as follows
 *}
 
 simproc_setup fail_sp ("Suc n") = {* K fail_sp_aux *}
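+
+text {*
+  To see the simproc fire, any goal containing a @{term "Suc n"}-pattern
+  will do; for example (a throw-away sketch):
+*}
+
+lemma shows "Suc 0 = 1"
+  apply(simp)
+(*<*)oops(*>*)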
@@ -1114,7 +1115,8 @@
 text {*
   Here the redex is given as a @{ML_type term}, instead of a @{ML_type cterm}
   (therefore we print it out using the function @{ML string_of_term in Syntax}).
-  We can turn this function into a proper simproc using
+  We can turn this function into a proper simproc using the function 
+  @{ML Simplifier.simproc_i}:
 *}
 
 
@@ -1130,7 +1132,7 @@
   Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
   The function also takes a list of patterns that can trigger the simproc.
   Now the simproc is set up and can be explicitly added using
-  {ML addsimprocs} to a simpset whenerver
+  @{ML addsimprocs} to a simpset whenever
   needed. 
 
   Simprocs are applied from inside to outside and from left to right. You can
@@ -1138,7 +1140,7 @@
 *}
 
 lemma shows "Suc (Suc 0) = (Suc 1)"
-  apply(tactic {* simp_tac (HOL_ss addsimprocs [fail_sp']) 1*})
+  apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail_sp']) 1*})
 (*<*)oops(*>*)
 
 text {*
@@ -1190,7 +1192,7 @@
 *}
 
 lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"
-  apply(tactic {* simp_tac (HOL_ss addsimprocs [plus_one_sp]) 1*})
+  apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one_sp]) 1*})
 txt{*
   where the simproc produces the goal state
 
@@ -1208,7 +1210,6 @@
   Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
   with the number @{text n} increased by one. First we implement a function that
   takes a term and produces the corresponding integer value.
-
 *}
 
 ML{*fun dest_suc_trm ((Const (@{const_name "Suc"}, _)) $ t) = 1 + dest_suc_trm t
@@ -1412,7 +1413,7 @@
 
   @{ML_response_fake [display,gray]
 "let 
-  val ctrm =  @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
+  val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"}
 in
   Conv.rewr_conv @{thm true_conj1} ctrm
 end"
@@ -1439,7 +1440,8 @@
   "(\<lambda>x. x \<and> False) True \<equiv> False"}
 
   where we first beta-reduce the term and then rewrite according to
-  @{thm [source] true_conj1}. 
+  @{thm [source] true_conj1}. (Recall the problem with the pretty-printer
+  normalising all terms.)
 
   The conversion combinator @{ML else_conv} tries out the 
   first one, and if it does not apply, tries the second. For example 
@@ -1459,11 +1461,11 @@
   does not fail, however, because the combinator @{ML else_conv} will then 
   try out @{ML Conv.all_conv}, which always succeeds.
 
-  Apart from the function @{ML beta_conversion in Thm}, which able to fully
-  beta-normalise a term, the restriction of conversions so far is that they
+  Apart from the function @{ML beta_conversion in Thm}, which is able to fully
+  beta-normalise a term, the conversions so far are restricted in that they
   only apply to the outer-most level of a @{ML_type cterm}. In what follows we
   will lift this restriction. The combinator @{ML Conv.arg_conv} will apply
-  the conversion on the first argument of an application, that is the term
+  the conversion to the first argument of an application, that is the term
   must be of the form @{ML "t1 $ t2" for t1 t2} and the conversion is applied
   to @{text t2}.  For example
 
@@ -1477,7 +1479,7 @@
 "P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
 
   The reason for this behaviour is that @{text "(op \<or>)"} expects two
-  arguments.  So the term must be of the form @{text "Const \<dots> $ t1 $ t2"}. The
+  arguments.  So the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
   conversion is then applied to @{text "t2"} which in the example above
   stands for @{term "True \<and> Q"}.
 
@@ -1515,8 +1517,10 @@
 text {*
   This function descends under applications (Lines 6 and 7) and abstractions 
   (Line 8); and ``fires'' if the outer-most constant is an @{text "True \<and> \<dots>"}
-  (Lines 3-5); otherwise it leaves the term unchanged (Line 9). To see this 
-  conversion in action, consider the following example.
+  (Lines 3-5); otherwise it leaves the term unchanged (Line 9). In Line 2
+  we need to transform the @{ML_type cterm} into a @{ML_type term} in order
+  to be able to pattern-match the term. To see this 
+  conversion in action, consider the following example
 
 @{ML_response_fake [display,gray]
 "let
@@ -1527,9 +1531,11 @@
 end"
   "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
 
-  To see how much control you have about rewriting via conversions, let us 
+  where the conversion is applied ``deep'' inside the term.
+
+  To see how much control you have over rewriting by using conversions, let us 
   make the task a bit more complicated by rewriting according to the rule
-  @{text true_conj1}, but only in the first arguments of @{term If}s. The 
+  @{text true_conj1}, but only in the first arguments of @{term If}s. Then 
   the conversion should be as follows.
 *}
 
@@ -1543,7 +1549,7 @@
   | _ => Conv.all_conv ctrm *}
 
 text {*
-  Here is an application of this conversion:
+  Here is an example for this conversion:
 
   @{ML_response_fake [display,gray]
 "let
@@ -1560,7 +1566,7 @@
 text {*
   So far we only applied conversions to @{ML_type cterm}s. Conversions can, however, 
   also work on theorems using the function @{ML "Conv.fconv_rule"}. As an example, 
-  consider @{ML all_true1_conv} and the lemma:
+  consider the conversion @{ML all_true1_conv} and the lemma:
 *}
 
 lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp
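+
+text {*
+  As a warm-up, here is a sketch composed by hand from the combinators
+  introduced above (rather than using @{ML all_true1_conv}):
+
+  @{ML_response_fake [display,gray]
+  "Conv.fconv_rule 
+  (Conv.arg_conv (Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}))) @{thm foo_test}"
+  "?P \<or> \<not> ?P"}
+*}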
@@ -1583,7 +1589,7 @@
   the conclusion of a goal.
 
   Assume we want to apply @{ML all_true1_conv} only in the conclusion
-  of the goal, and @{ML if_true1_conv} should only be applied in the premises.
+  of the goal, and @{ML if_true1_conv} should only be applied to the premises.
   Here is a tactic doing exactly that:
 *}
 
@@ -1611,9 +1617,7 @@
 apply(tactic {* true1_tac 1 *})
 txt {* where the tactic yields the goal state
 
-       \begin{minipage}{\textwidth}
-       @{subgoals [display]} 
-       \end{minipage} *}
+       @{subgoals [display]}*}
 (*<*)oops(*>*)
 
 text {*
Binary file cookbook.pdf has changed