polished
author Christian Urban <urbanc@in.tum.de>
Sun, 08 Feb 2009 08:45:25 +0000
changeset 104 5dcad9348e4d
parent 103 fe10da5354a3
child 105 f49dc7e96235
polished
CookBook/FirstSteps.thy
CookBook/Parsing.thy
CookBook/Readme.thy
CookBook/Tactical.thy
cookbook.pdf
--- a/CookBook/FirstSteps.thy	Sat Feb 07 14:21:33 2009 +0000
+++ b/CookBook/FirstSteps.thy	Sun Feb 08 08:45:25 2009 +0000
@@ -116,8 +116,9 @@
 
   @{ML_response_fake [display,gray] "if 0=1 then 1 else (error \"foo\")" "\"foo\""}
 
-  See leter on in Section~\ref{sec:printing} for information about printing 
-  out data of type @{ML_type term}, @{ML_type cterm} and @{ML_type thm}.
+  Section~\ref{sec:printing} will give more information about printing 
+  the main data structures of Isabelle, namely @{ML_type term}, @{ML_type cterm} 
+  and @{ML_type thm}.
 *}
 
 
@@ -173,7 +174,7 @@
   current simpset.  We get hold of the current simpset with the antiquotation 
   @{text "@{simpset}"}.  The function @{ML rep_ss in MetaSimplifier} returns a record
   containing all information about the simpset. The rules of a simpset are
-  stored in a \emph{discrimination net} (a datastructure for fast
+  stored in a \emph{discrimination net} (a data structure for fast
   indexing). From this net we can extract the entries using the function @{ML
   Net.entries}.
 
@@ -244,7 +245,7 @@
 
   Hint: The third term is already quite big, and the pretty printer
   may omit parts of it by default. If you want to see all of it, you
-  can use the following ML function to set the limit to a value high 
+  can use the following ML-function to set the limit to a value high 
   enough:
 
   @{ML [display,gray] "print_depth 50"}
@@ -297,7 +298,7 @@
 
 text {*
   The reason is that you cannot pass the arguments @{term P}, @{term Q} and 
-  @{term "tau"} into an antiquotation. For example the following does \emph{not} work:
+  @{term "tau"} into an antiquotation. For example the following does \emph{not} work.
 *}
 
 ML{*fun make_wrong_imp P Q tau = @{prop "\<And>x. P x \<Longrightarrow> Q x"} *}
@@ -344,14 +345,16 @@
 
   (FIXME: Is it useful to explain @{text "@{const_syntax}"}?)
 
-  Similarly, you occasionally need to construct types manually. For example 
-  the function returning a function type is as follows:
+  Although types of terms can to some extent be inferred, there are many
+  situations where you need to construct types manually, especially  
+  when defining constants. For example, the function returning a function 
+  type is as follows:
 
 *} 
 
 ML{*fun make_fun_type tau1 tau2 = Type ("fun",[tau1,tau2]) *}
 
-text {* This can be equally written as *}
+text {* This can be equally written as: *}
 
 ML{*fun make_fun_type tau1 tau2 = tau1 --> tau2 *}
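+
+text {*
+  For example, with either of the two definitions above, the application
+
+  @{ML [display,gray] "make_fun_type @{typ nat} @{typ bool}"}
+
+  should construct the function type @{text "nat \<Rightarrow> bool"} (the argument 
+  types are obtained here with the @{text "@{typ \<dots>}"} antiquotation).
+*}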
 
@@ -483,7 +486,6 @@
   
   val Qt = implies_elim Pt_implies_Qt (assume assm2);
 in
-
   Qt 
   |> implies_intr assm2
   |> implies_intr assm1
@@ -520,20 +522,20 @@
 
 section {* Theorem Attributes *}
 
-section {* Printing Terms, CTerms and Theorems\label{sec:printing} *}
+section {* Printing Terms and Theorems\label{sec:printing} *}
 
 text {* 
-  During development, you will occationally feel the need to inspect terms, cterms 
-  or theorems. Isabelle contains elaborate pretty-printing functions for that, but 
-  for quick-and-dirty solutions they are way too unwieldy. A simple way to transform 
+  During development, you often want to inspect terms, cterms 
+  or theorems. Isabelle contains elaborate pretty-printing functions for this, 
+  but for quick-and-dirty solutions they are far too unwieldy. A simple way to transform 
   a term into a string is to use the function @{ML Syntax.string_of_term}.
 
   @{ML_response_fake [display,gray]
   "Syntax.string_of_term @{context} @{term \"1::nat\"}"
   "\"\\^E\\^Fterm\\^E\\^E\\^Fconst\\^Fname=HOL.one_class.one\\^E1\\^E\\^F\\^E\\^E\\^F\\^E\""}
 
-  This produces a string, though with printing directions encoded in it. The string
-  can be properly printed, when enclosed in a @{ML warning}.
+  This produces a string with some printing directions encoded in it. The string
+  can be properly printed by using @{ML warning}, for example.
 
   @{ML_response_fake [display,gray]
   "warning (Syntax.string_of_term @{context} @{term \"1::nat\"})"
@@ -608,10 +610,10 @@
   \begin{readmore}
   The most frequently used combinator are defined in the files @{ML_file "Pure/library.ML"}
   and @{ML_file "Pure/General/basics.ML"}. Also \isccite{sec:ML-linear-trans} 
-  contains further information about them.
+  contains further information about combinators.
   \end{readmore}
 
-  The simplest combinator is @{ML I}, which is just the identity function.
+  The simplest combinator is @{ML I}, which is just the identity function defined as
 *}
 
 ML{*fun I x = x*}
@@ -623,7 +625,7 @@
 text {*
   @{ML K} ``wraps'' a function around the argument @{text "x"}. However, this 
   function ignores its argument. As a result, @{ML K} defines a constant function 
-  returning @{text x}.
+  always returning @{text x}.
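+  For example, the expression
+
+  @{ML [display,gray] "(K 1) 2"}
+
+  evaluates to @{text "1"}, no matter what the second argument is.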
 
   The next combinator is reverse application, @{ML "|>"}, defined as: 
 *}
@@ -641,7 +643,7 @@
     |> (fn x => x + 4)*}
 
 text {*
-  which increments the argument @{text x} by 5. It does this by first incrementing 
+  which increments its argument @{text x} by 5. It does this by first incrementing 
   the argument by 1 (Line 2); then storing the result in a pair (Line 3); taking 
   the first component of the pair (Line 4) and finally incrementing the first 
   component by 4 (Line 5). This kind of cascading manipulations of values is quite
--- a/CookBook/Parsing.thy	Sat Feb 07 14:21:33 2009 +0000
+++ b/CookBook/Parsing.thy	Sun Feb 08 08:45:25 2009 +0000
@@ -131,7 +131,7 @@
 
   The parser @{ML "Scan.optional p x" for p x} returns the result of the parser 
   @{text "p"}, if it succeeds; otherwise it returns 
-  the default value @{text "x"}. For example
+  the default value @{text "x"}. For example:
 
 @{ML_response [display,gray]
 "let 
@@ -158,7 +158,7 @@
   The function @{ML "(op !!)"} helps to produce appropriate error messages
   during parsing. For example if you want to parse that @{text p} is immediately 
   followed by @{text q}, or start a completely different parser @{text r},
-  you might write
+  you might write:
 
   @{ML [display,gray] "(p -- q) || r" for p q r}
 
@@ -171,8 +171,8 @@
   circumstance this will be the wrong parser for the input ``p-followed-by-q''
   and therefore will also fail. The error message is then caused by the
   failure of @{text r}, not by the absence of @{text q} in the input. This
-  kind of situation can be avoided when using the function @{ML "(op
-  !!)"}. This function aborts the whole process of parsing in case of a
+  kind of situation can be avoided when using the function @{ML "(op !!)"}. 
+  This function aborts the whole process of parsing in case of a
   failure and prints an error message. For example if you invoke the parser
 
   
@@ -190,7 +190,7 @@
                                "Exception ABORT raised"}
 
   then the parsing aborts and the error message @{text "foo"} is printed out. In order to
-  see the error message, we need to prefix the parser with the function 
+  see the error message properly, we need to prefix the parser with the function 
   @{ML "Scan.error"}. For example:
 
   @{ML_response_fake [display,gray] "Scan.error (!! (fn _ => \"foo\") ($$ \"h\"))"
@@ -293,7 +293,7 @@
  ([\"f\", \"o\", \"o\"], [\"*\", \"o\", \"o\", \"o\"]))"}
 
   After parsing is done, you nearly always want to apply a function on the parsed 
-  items. To do this the function @{ML "(p >> f)" for p f} can be employed, which runs 
+  items. One way to do this is to use the function @{ML "(p >> f)" for p f}, which runs 
   first the parser @{text p} and upon successful completion applies the 
   function @{text f} to the result. For example
 
@@ -305,14 +305,14 @@
 end"
 "((\"hh\", \"ee\"), [\"l\", \"l\", \"o\"])"}
 
-  doubles the two parsed input strings. Or
+  doubles the two parsed input strings; or
 
   @{ML_response [display,gray] 
 "let 
-   val p = Scan.repeat (Scan.one Symbol.not_eof) >> implode
+   val p = Scan.repeat (Scan.one Symbol.not_eof)
    val input = (explode \"foo bar foo\") 
 in
-   Scan.finite Symbol.stopper p input
+   Scan.finite Symbol.stopper (p >> implode) input
 end" 
 "(\"foo bar foo\",[])"}
 
@@ -344,10 +344,9 @@
 text {*
   Most of the time, however, Isabelle developers have to deal with parsing
   tokens, not strings.  This is because the parsers for the theory syntax, as
-  well as the parsers for the argument syntax of proof methods and attributes
-  use the type @{ML_type OuterLex.token} (which is identical to the type
-  @{ML_type OuterParse.token}).  There are also parsers for ML-expressions and
-  ML-files, which can be sometimes handy.
+  well as the parsers for the arguments of proof methods, use the type @{ML_type OuterLex.token} 
+  (which is identical to the type @{ML_type OuterParse.token}).  There are also handy 
+  parsers for ML-expressions and ML-files.
 
   \begin{readmore}
   The parser functions for the theory syntax are contained in the structure
@@ -362,8 +361,8 @@
 *}  
 
 text {*
-  With the examples below, you can generate a token list out of a string using
-  the function @{ML "OuterSyntax.scan"}, which is given @{ML "Position.none"}
+  The first example shows how to generate a token list out of a string using
+  the function @{ML "OuterSyntax.scan"}. Below it is given @{ML "Position.none"}
   as argument since, at the moment, we are not interested in generating
   precise error messages. The following code
 
@@ -408,7 +407,7 @@
  Token (\<dots>,(Keyword, \"for\"),\<dots>)]"}
 
   you obtain a list consisting of only a command and two keyword tokens.
-  If you want to see which keywords and commands are currently known, type in
+  If you want to see which keywords and commands are currently known to Isabelle, type in
   the following code (you might have to adjust the @{ML print_depth} in order to
   see the complete list):
 
@@ -500,6 +499,7 @@
   the function @{ML OuterParse.type_ident}.
   \end{exercise}
 
+  (FIXME: or give parser for numbers)
 
 *}
 
@@ -700,6 +700,7 @@
   the log file for the theory @{text "Command.thy"}, which contains the new
   \isacommand{foobar}-command. If you target other logics besides HOL, such
   as Nominal or ZF, then you need to adapt the log files appropriately.
+  
   @{text Pure} and @{text HOL} are usually compiled during the installation of
   Isabelle. So log files for them should be already available. If not, then
   they can be conveniently compiled with the help of the build-script from the Isabelle
@@ -820,7 +821,7 @@
   commands are expected to parse some arguments, for example a proposition,
   and then ``open up'' a proof in order to prove the proposition (for example
   \isacommand{lemma}) or prove some other properties (for example
-  \isacommand{function}). To achieve this kind of behaviour, we have to use the kind
+  \isacommand{function}). To achieve this kind of behaviour, you have to use the kind
   indicator @{ML thy_goal in OuterKeyword}.
 
   Below we change \isacommand{foobar} so that it takes a proposition as
@@ -875,8 +876,6 @@
   \isacommand{done}
   \end{isabelle}
 
-  Similarly for the other function composition combinators.
-
   
   (FIXME What do @{ML "Toplevel.theory"} 
   @{ML "Toplevel.print"} 
--- a/CookBook/Readme.thy	Sat Feb 07 14:21:33 2009 +0000
+++ b/CookBook/Readme.thy	Sun Feb 08 08:45:25 2009 +0000
@@ -140,8 +140,6 @@
   @{text "@{ML \"\\\"foo\\\" ^ \\\"bar\\\"\"}"} \;\;produces only\;\; @{text "foobar"}
   \end{center}
   
-
-
   \item Functions and value bindings cannot be defined inside antiquotations; they need
   to be included inside \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}
   environments. In this way they are also checked by the compiler. Some \LaTeX-hack in 
--- a/CookBook/Tactical.thy	Sat Feb 07 14:21:33 2009 +0000
+++ b/CookBook/Tactical.thy	Sun Feb 08 08:45:25 2009 +0000
@@ -100,8 +100,8 @@
 done
 
 text {*
-  By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} in the apply-step,
-  you can call from the user level of Isabelle the tactic @{ML foo_tac} or 
+  By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can, from the 
+  user level of Isabelle, call the tactic @{ML foo_tac} or 
   any other function that returns a tactic.
 
   The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
@@ -196,10 +196,10 @@
   instead.
   \end{readmore}
 
-  For a beginner it might be surprising that tactics, which transform
+  It might be surprising that tactics, which transform
   one proof state to the next, are functions from theorems to theorem 
   (sequences). The surprise resolves by knowing that every 
-  proof state is indeed a theorem. To shed more light on this,
+  goal state is indeed a theorem. To shed more light on this,
   let us modify the code of @{ML all_tac} to obtain the following
   tactic
 *}
@@ -216,7 +216,7 @@
   in Section~\ref{sec:printing}) and then behaves like @{ML all_tac}. We
   now can inspect every proof state in the follwing proof. On the left-hand
   side we show the goal state as shown by the system; on the right-hand
-  side the print out from our @{ML my_print_tac}.
+  side the print out from @{ML my_print_tac}.
 *}
 
 lemma shows "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"
@@ -275,9 +275,12 @@
   @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> (C)"}
 
   where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are the  
-  subgoals. Since the goal @{term C} can potentially be an implication, 
-  there is a protector (invisible in the print out above) wrapped around 
-  it. This prevents that premises are misinterpreted as open subgoals. 
+  subgoals. So in the first step the goal state is always of the form
+  @{text "C \<Longrightarrow> (C)"}. Since the goal @{term C} can potentially be an implication, 
+  there is a ``protector'' wrapped around it (in the form of an outermost constant 
+  @{text "Const (\"prop\", prop \<Rightarrow> prop)"} applied to the conclusion;
+  however, this constant is invisible in the print out above). This 
+  prevents premises from being misinterpreted as open subgoals. 
   While tactics can operate on the subgoals (the @{text "A\<^isub>i"} above), they 
   are expected to leave the conclusion @{term C} intact, with the 
   exception of possibly instantiating schematic variables. 
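+
+  To see the initial form mentioned above concretely, one can, as a small 
+  illustration, apply the function @{ML "Goal.init"} (which sets up an 
+  initial goal state from a @{ML_type cterm}) to a proposition:
+
+  @{ML_response_fake [display,gray] 
+  "Goal.init @{cprop \"A \<and> B\"}" 
+  "A \<and> B \<Longrightarrow> A \<and> B"}
+
+  where the conclusion carries the protector, even though this is not 
+  visible in the output.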
@@ -297,23 +300,30 @@
 text {*
   Similarly, @{ML rtac}, @{ML dtac}, @{ML etac} and @{ML ftac} correspond
   to @{text rule}, @{text drule}, @{text erule} and @{text frule}, 
-  respectively. Below are three examples with the resulting goal state. How
-  they work should therefore be self-explanatory.  
+  respectively. Each of them takes a theorem as argument. Below are three 
+  examples with the resulting goal state. How
+  they work should be self-explanatory.  
 *}
 
 lemma shows "P \<and> Q"
 apply(tactic {* rtac @{thm conjI} 1 *})
-txt{*@{subgoals [display]}*}
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
 (*<*)oops(*>*)
 
 lemma shows "P \<and> Q \<Longrightarrow> False"
 apply(tactic {* etac @{thm conjE} 1 *})
-txt{*@{subgoals [display]}*}
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
 (*<*)oops(*>*)
 
 lemma shows "False \<and> True \<Longrightarrow> False"
 apply(tactic {* dtac @{thm conjunct2} 1 *})
-txt{*@{subgoals [display]}*}
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]}
+     \end{minipage}*}
 (*<*)oops(*>*)
 
 text {*
@@ -323,7 +333,9 @@
 
 lemma shows "Foo" and "P \<and> Q"
 apply(tactic {* rtac @{thm conjI} 2 *})
-txt {*@{subgoals [display]}*}
+txt {*\begin{minipage}{\textwidth}
+      @{subgoals [display]}
+      \end{minipage}*}
 (*<*)oops(*>*)
 
 text {* 
@@ -343,7 +355,9 @@
 lemma shows "C \<longrightarrow> (A \<and> B)" and "(A \<longrightarrow> B) \<and> C"
 apply(tactic {* resolve_tac_xmp 1 *})
 apply(tactic {* resolve_tac_xmp 2 *})
-txt{* @{subgoals [display]} *}
+txt{*\begin{minipage}{\textwidth}
+     @{subgoals [display]} 
+     \end{minipage}*}
 (*<*)oops(*>*)
 
 text {* 
@@ -359,15 +373,12 @@
 (*<*)oops(*>*)
 
 text {*
-  Also for useful for debugging purposes are the tactics @{ML all_tac} and
-  @{ML no_tac}. The former always succeeds (but does not change the goal state), and
-  the latter always fails.
-
+  
   (FIXME explain RS MRS)
 
-  Often proofs involve elaborate operations on assumptions and variables
-  @{text "\<And>"}-quantified in the goal state. To do such operations on the ML-level 
-  using the basic tactics, is very unwieldy and brittle. Some convenience and
+  Often proofs involve elaborate operations on assumptions and 
+  @{text "\<And>"}-quantified variables. To do such operations on the ML-level 
+  using the basic tactics is very unwieldy and brittle. Some convenience and
   safety is provided by the tactic @{ML SUBPROOF}. This tactic fixes the parameters 
   and binds the various components of a proof state into a record. 
 *}
@@ -426,15 +437,15 @@
   Note in the actual output the brown colour of the variables @{term x} and 
   @{term y}. Although parameters in the original goal, they are fixed inside
   the subproof. Similarly the schematic variable @{term z}. The assumption 
-  is bound once as @{ML_type cterm} to the record-variable @{text asms} and 
-  another time as @{ML_type thm} to @{text prems}.
+  @{prop "A x y"} is bound once as @{ML_type cterm} to the record-variable 
+  @{text asms} and another time as @{ML_type thm} to @{text prems}.
 
   Notice also that we had to append @{text "?"} to \isacommand{apply}. The 
   reason is that @{ML SUBPROOF} normally expects that the subgoal is solved completely.
   Since in the function @{ML sp_tac} we returned the tactic @{ML no_tac}, the subproof
   obviously fails. The question-mark allows us to recover from this failure
   in a graceful manner so that the warning messages are not overwritten
-  by the error message.
+  by an error message.
 
   If we continue the proof script by applying the @{text impI}-rule
 *}
@@ -458,7 +469,7 @@
 (*<*)oops(*>*)
 
 text {*
-  where we now also have @{term "B y x"} as assumption.
+  where we now also have @{term "B y x"} as an assumption.
 
   One convenience of @{ML SUBPROOF} is that we can apply assumption
   using the usual tactics, because the parameter @{text prems} 
@@ -466,23 +477,42 @@
   implement a tactic that almost behaves like @{ML atac}:
 *}
 
+ML{*val atac' = SUBPROOF (fn {prems, ...} => resolve_tac prems 1)*}
 lemma shows "\<And>x y. \<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
-apply(tactic 
-       {* SUBPROOF (fn {prems, ...} => resolve_tac prems 1) @{context} 1 *})
+apply(tactic {* atac' @{context} 1 *})
 txt{* yields
       @{subgoals [display]} *}
 (*<*)oops(*>*)
 
+text {*
+  The restriction in this tactic is that it cannot instantiate any
+  schematic variables. This might be seen as a defect, but is actually
+  an advantage in the situations for which @{ML SUBPROOF} was designed: 
+  the reason is that instantiation of schematic variables can affect 
+  several goals and can render them unprovable. @{ML SUBPROOF} is meant 
+  to avoid this.
+
+  Notice that @{ML atac'} calls @{ML resolve_tac} with the subgoal
+  number @{text "1"} and also the ``outer'' call to @{ML SUBPROOF} in 
+  the \isacommand{apply}-step uses @{text "1"}. Another advantage 
+  of @{ML SUBPROOF} is that the addressing inside it is completely 
+  local to the subproof. It is therefore possible to also apply 
+  @{ML atac'} to the second goal:
+*}
+
+lemma shows "True" and "\<And>x y. \<lbrakk>B x y; A x y; C x y\<rbrakk> \<Longrightarrow> A x y"
+apply(tactic {* atac' @{context} 2 *})
+txt{* This gives:
+      @{subgoals [display]} *}
+(*<*)oops(*>*)
+
 
 text {*
-  The restriction in this tactic is that it cannot instantiate any
-  schematic variables. This might be seen as a defect, but is actually
-  an advantage in the situations for which @{ML SUBPROOF} was designed: 
-  the reason is that instantiation of schematic variables can potentially 
-  has affect several goals and can render them unprovable.  
-
-  A similar but less powerful function is @{ML SUBGOAL}. It allows you to 
-  inspect a subgoal specified by a number. 
+  A function similar to, but less powerful than, @{ML SUBPROOF} is @{ML SUBGOAL}. 
+  It allows you to inspect a subgoal specified by a number. With this we can 
+  implement a little tactic that applies a rule corresponding to the 
+  topmost connective of the goal. The tactic should only apply ``safe'' rules 
+  (that is, rules which do not render the goal unprovable). For this we can write:
 *}
 
 ML %linenumbers{*fun select_tac (t,i) =
@@ -492,12 +522,13 @@
    | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
    | @{term "Not"} $ _ => rtac @{thm notI} i
    | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i
-   | _ => no_tac*}
+   | _ => all_tac*}
 
-lemma shows "A \<and> B" "A \<longrightarrow> B" "\<forall>x. C x"
+lemma shows "A \<and> B" "A \<longrightarrow> B" "\<forall>x. C x" "D \<Longrightarrow> E"
+apply(tactic {* SUBGOAL select_tac 4 *})
+apply(tactic {* SUBGOAL select_tac 3 *})
+apply(tactic {* SUBGOAL select_tac 2 *})
 apply(tactic {* SUBGOAL select_tac 1 *})
-apply(tactic {* SUBGOAL select_tac 3 *})
-apply(tactic {* SUBGOAL select_tac 4 *})
 txt{* @{subgoals [display]} *}
 (*<*)oops(*>*)
 
@@ -506,6 +537,8 @@
   to implement a proof procedure like the one above. They will be explained
   in the next section.
 
+  (Notice that we addressed the goals in reverse order.)
+
   A variant of @{ML SUBGOAL} is @{ML CSUBGOAL} which allows access to the goal
   as @{ML_type cterm} instead of a @{ML_type term}.
 *}
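+
+text {*
+  For example, a hypothetical little tactic that merely prints the current 
+  subgoal (as a string produced from the @{ML_type cterm}) and otherwise 
+  leaves the goal state unchanged might be sketched as follows:
+*}
+
+ML{*(* a sketch: print the subgoal addressed by the given number, 
+   then behave like all_tac *)
+fun print_cgoal_tac ctxt = 
+  CSUBGOAL (fn (goal, _) => 
+    (warning (Syntax.string_of_term ctxt (Thm.term_of goal)); all_tac))*}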
Binary file cookbook.pdf has changed