merged
authorChristian Urban <urbanc@in.tum.de>
Mon, 23 Mar 2009 18:35:58 +0100
changeset 204 3857d987576a
parent 203 abdac57dfd9a (diff)
parent 202 16ec70218d26 (current diff)
child 205 f8d4393d6fdd
merged
ProgTutorial/FirstSteps.thy
progtutorial.pdf
--- a/ProgTutorial/FirstSteps.thy	Mon Mar 23 14:20:14 2009 +0100
+++ b/ProgTutorial/FirstSteps.thy	Mon Mar 23 18:35:58 2009 +0100
@@ -20,7 +20,7 @@
   \end{center}
 
   We also generally assume you are working with HOL. The given examples might
-  need to be adapted slightly if you work in a different logic.
+  need to be adapted if you work in a different logic.
 *}
 
 section {* Including ML-Code *}
@@ -41,8 +41,17 @@
   Like normal Isabelle proof scripts, \isacommand{ML}-commands can be
   evaluated by using the advance and undo buttons of your Isabelle
   environment. The code inside the \isacommand{ML}-command can also contain
-  value and function bindings, and even those can be undone when the proof
-  script is retracted. As mentioned in the Introduction, we will drop the
+  value and function bindings, for example
+*}
+
+ML %gray {* 
+  val r = ref 0
+  fun f n = n + 1 
+*}
+
+text {*
+  and even those can be undone when the proof
+  script is retracted.  As mentioned in the Introduction, we will drop the
   \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} scaffolding whenever we
   show code. The lines prefixed with @{text [quotes] ">"} are not part of the
   code, rather they indicate what the response is when the code is evaluated.
@@ -123,12 +132,12 @@
 *}
 
 (*
+ML {* set Toplevel.debug *}
+
 ML {*
 fun dodgy_fun () = (raise (ERROR "bar"); 1) 
 *}
 
-ML {* set Toplevel.debug *}
-
 ML {* fun f1 () = dodgy_fun () *}
 
 ML {* f1 () *}
@@ -171,7 +180,7 @@
   into a @{ML_type cterm} using the function @{ML crep_thm}. 
 *}
 
-ML{*fun str_of_thm_raw ctxt thm =
+ML{*fun str_of_thm ctxt thm =
   str_of_cterm ctxt (#prop (crep_thm thm))*}
 
 text {*
@@ -179,7 +188,7 @@
   @{text "?Q"} and so on. 
 
   @{ML_response_fake [display, gray]
-  "warning (str_of_thm_raw @{context} @{thm conjI})"
+  "warning (str_of_thm @{context} @{thm conjI})"
   "\<lbrakk>?P; ?Q\<rbrakk> \<Longrightarrow> ?P \<and> ?Q"}
 
   In order to improve the readability of theorems we convert
@@ -194,14 +203,14 @@
   thm'
 end
 
-fun str_of_thm ctxt thm =
+fun str_of_thm_no_vars ctxt thm =
   str_of_cterm ctxt (#prop (crep_thm (no_vars ctxt thm)))*}
 
 text {* 
-  Theorem @{thm [source] conjI} looks now as follows
+  Theorem @{thm [source] conjI} now looks as follows:
 
   @{ML_response_fake [display, gray]
-  "warning (str_of_thm @{context} @{thm conjI})"
+  "warning (str_of_thm_no_vars @{context} @{thm conjI})"
   "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> P \<and> Q"}
   
   Again the function @{ML commas} helps with printing more than one theorem. 
@@ -210,8 +219,8 @@
 ML{*fun str_of_thms ctxt thms =  
   commas (map (str_of_thm ctxt) thms)
 
-fun str_of_thms_raw ctxt thms =  
-  commas (map (str_of_thm_raw ctxt) thms)*}
+fun str_of_thms_no_vars ctxt thms =  
+  commas (map (str_of_thm_no_vars ctxt) thms) *}
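+
+text {*
+  For example, applied to the theorems @{thm [source] conjI} and @{thm [source] conjunct1}
+  (picked here only for illustration), a call might look as follows:
+
+  @{ML_response_fake [display, gray]
+  "warning (str_of_thms_no_vars @{context} [@{thm conjI}, @{thm conjunct1}])"
+  "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> P \<and> Q, P \<and> Q \<Longrightarrow> P"}
+*}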
 
 section {* Combinators\label{sec:combinators} *}
 
@@ -287,18 +296,18 @@
   the waterfall fashion might be the following code:
 *}
 
-ML %linenosgray{*fun apply_fresh_args pred ctxt =
-  pred |> fastype_of
-       |> binder_types 
-       |> map (pair "z")
-       |> Variable.variant_frees ctxt [pred]
-       |> map Free  
-       |> (curry list_comb) pred *}
+ML %linenosgray{*fun apply_fresh_args f ctxt =
+    f |> fastype_of
+      |> binder_types 
+      |> map (pair "z")
+      |> Variable.variant_frees ctxt [f]
+      |> map Free  
+      |> (curry list_comb) f *}
 
 text {*
-  This code extracts the argument types of a given
-  predicate and then generates for each argument type a distinct variable; finally it
-  applies the generated variables to the predicate. For example
+  This code extracts the argument types of a given function @{text "f"} and then generates 
+  for each argument type a distinct variable; finally it applies the generated 
+  variables to the function. For example:
 
   @{ML_response_fake [display,gray]
 "apply_fresh_args @{term \"P::nat \<Rightarrow> int \<Rightarrow> unit \<Rightarrow> bool\"} @{context} 
@@ -308,14 +317,14 @@
 
   You can read off this behaviour from how @{ML apply_fresh_args} is
   coded: in Line 2, the function @{ML fastype_of} calculates the type of the
-  predicate; @{ML binder_types} in the next line produces the list of argument
+  function; @{ML binder_types} in the next line produces the list of argument
   types (in the case above the list @{text "[nat, int, unit]"}); Line 4 
   pairs up each type with the string  @{text "z"}; the
   function @{ML variant_frees in Variable} generates for each @{text "z"} a
-  unique name avoiding the given @{text pred}; the list of name-type pairs is turned
+  unique name avoiding the given @{text f}; the list of name-type pairs is turned
   into a list of variable terms in Line 6, which in the last line is applied
-  by the function @{ML list_comb} to the predicate. We have to use the
-  function @{ML curry}, because @{ML list_comb} expects the predicate and the
+  by the function @{ML list_comb} to the function. In this last step we have to 
+  use the function @{ML curry}, because @{ML list_comb} expects the function and the
   variables list as a pair.
   
   The combinator @{ML "#>"} is the reverse function composition. It can be
@@ -480,9 +489,9 @@
 
 text {*
   The function @{ML dest_ss in MetaSimplifier} returns a record containing all
-  information stored in the simpset. We are only interested in the names of the
+  information stored in the simpset, but we are only interested in the names of the
   simp-rules. So now you can feed in the current simpset into this function. 
-  The simpset can be referred to using the antiquotation @{text "@{simpset}"}.
+  The current simpset can be referred to using the antiquotation @{text "@{simpset}"}.
 
   @{ML_response_fake [display,gray] 
   "get_thm_names_from_ss @{simpset}" 
@@ -511,7 +520,18 @@
     ((@{binding "TrueConj"}, NoSyn), 
      (Attrib.empty_binding, @{term "True \<and> True"})) *}
 
-text {*
+text {* 
+  If you now query the definition you obtain:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "TrueConj_def"}\\
+  @{text "> "}@{thm TrueConj_def}
+  \end{isabelle}
+
+  (FIXME give a better example why bindings are important; maybe
+  give a pointer to \isacommand{local\_setup})
+
   While antiquotations have many applications, they were originally introduced
   in order to avoid explicit bindings of theorems such as:
 *}
@@ -555,7 +575,19 @@
   Terms are described in detail in \isccite{sec:terms}. Their
   definition and many useful operations are implemented in @{ML_file "Pure/term.ML"}.
   \end{readmore}
+  
+  Constructing terms via antiquotations has the advantage that only typable
+  terms can be constructed. For example
 
+  @{ML_response_fake_both [display,gray]
+  "@{term \"(x::nat) x\"}"
+  "Type unification failed \<dots>"}
+
+  raises a typing error, while it is perfectly ok to construct the term
+  
+  @{ML [display,gray] "Free (\"x\", @{typ nat}) $ Free (\"x\", @{typ nat})"}
+
+  with the raw ML-constructors.
   Sometimes the internal representation of terms can be surprisingly different
   from what you see at the user-level, because the layers of
   parsing/type-checking/pretty printing can be quite elaborate. 
@@ -593,7 +625,8 @@
 
   where it is not (since it is already constructed by a meta-implication). 
 
-  Types can be constructed using the antiquotation @{text "@{typ \<dots>}"}. For example:
+  As already seen above, types can be constructed using the antiquotation 
+  @{text "@{typ \<dots>}"}. For example:
 
   @{ML_response_fake [display,gray] "@{typ \"bool \<Rightarrow> nat\"}" "bool \<Rightarrow> nat"}
 
@@ -631,8 +664,8 @@
 ML{*fun make_wrong_imp P Q = @{prop "\<And>(x::nat). P x \<Longrightarrow> Q x"} *}
 
 text {*
-  To see this apply @{text "@{term S}"} and @{text "@{term T}"} 
-  to both functions. With @{ML make_imp} we obtain the intended term involving 
+  To see this, apply @{text "@{term S}"} and @{text "@{term T}"} 
+  to both functions. With @{ML make_imp} you obtain the intended term involving 
   the given arguments
 
   @{ML_response [display,gray] "make_imp @{term S} @{term T}" 
@@ -640,7 +673,7 @@
   Abs (\"x\", Type (\"nat\",[]),
     Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"}
 
-  whereas with @{ML make_wrong_imp} we obtain a term involving the @{term "P"} 
+  whereas with @{ML make_wrong_imp} you obtain a term involving the @{term "P"} 
   and @{text "Q"} from the antiquotation.
 
   @{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}" 
@@ -650,7 +683,7 @@
                 (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"}
 
   There are a number of handy functions that are frequently used for 
-  constructing terms. One is the function @{ML list_comb} which takes a term
+  constructing terms. One is the function @{ML list_comb}, which takes a term
   and a list of terms as arguments, and produces as output the term
   list applied to the term. For example
 
@@ -658,15 +691,47 @@
 "list_comb (@{term \"P::nat\"}, [@{term \"True\"}, @{term \"False\"}])"
 "Free (\"P\", \"nat\") $ Const (\"True\", \"bool\") $ Const (\"False\", \"bool\")"}
 
-  (FIXME: handy functions for constructing terms:  @{ML lambda}, 
-  @{ML subst_free})
-*}
+  Another handy function is @{ML lambda}, which abstracts a variable 
+  in a term. For example
+  
+  @{ML_response_fake [display,gray]
+  "lambda @{term \"x::nat\"} @{term \"(P::nat\<Rightarrow>bool) x\"}"
+  "Abs (\"x\", \"nat\", Free (\"P\", \"bool \<Rightarrow> bool\") $ Bound 0)"}
+
+  In the example, @{ML lambda} replaces the abstracted variable by a de Bruijn index 
+  (i.e.~@{ML "Bound 0"}) and builds an abstraction. It also records the type of the abstracted
+  variable and for printing purposes also its name.  Note that because of the
+  typing annotation on @{text "P"}, the variable @{text "x"} in @{text "P x"}
+  is of the same type as the abstracted variable. If it is of different type,
+  as in
+
+  @{ML_response_fake [display,gray]
+  "lambda @{term \"x::nat\"} @{term \"(P::bool\<Rightarrow>bool) x\"}"
+  "Abs (\"x\", \"nat\", Free (\"P\", \"bool \<Rightarrow> bool\") $ Free (\"x\", \"bool\"))"}
 
-ML {* lambda @{term "x::nat"} @{term "P x"}*}
+  then the variable @{text "Free (\"x\", \"bool\")"} is \emph{not} abstracted. 
+  This is a fundamental principle
+  of Church-style typing, where variables with the same name are still distinct if they 
+  have different types.
 
+  There is also the function @{ML subst_free} with which subterms can 
+  be replaced by other terms. In the example below, we replace in  
+  @{term "(f::nat\<Rightarrow>nat\<Rightarrow>nat) 0 x"} 
+  the subterm @{term "(f::nat\<Rightarrow>nat\<Rightarrow>nat) 0"} by @{term y}, and @{term x} by @{term True}:
 
-text {*
-  As can be seen this function does not take the typing annotation into account.
+  @{ML_response_fake [display,gray]
+"subst_free [(@{term \"(f::nat\<Rightarrow>nat\<Rightarrow>nat) 0\"}, @{term \"y::nat\<Rightarrow>nat\"}),
+            (@{term \"x::nat\"}, @{term \"True\"})] 
+  @{term \"((f::nat\<Rightarrow>nat\<Rightarrow>nat) 0) x\"}"
+  "Free (\"y\", \"nat \<Rightarrow> nat\") $ Const (\"True\", \"bool\")"}
+
+  As can be seen, @{ML subst_free} does not take typability into account.
+  However, it does take alpha-equivalence into account:
+
+  @{ML_response_fake [display, gray]
+  "subst_free [(@{term \"(\<lambda>y::nat. y)\"}, @{term \"x::nat\"})] 
+  @{term \"(\<lambda>x::nat. x)\"}"
+  "Free (\"x\", \"nat\")"}
 
   \begin{readmore}
   There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file "Pure/logic.ML"} and
@@ -674,9 +739,6 @@
   and types easier.\end{readmore}
 
   Have a look at these files and try to solve the following two exercises:
-*}
-
-text {*
 
   \begin{exercise}\label{fun:revsum}
   Write a function @{text "rev_sum : term -> term"} that takes a
@@ -1010,28 +1072,36 @@
   @{ML_file "Pure/thm.ML"}. 
   \end{readmore}
 
-  (FIXME: how to add case-names to goal states - maybe in the next section)
+  (FIXME: handy functions working on theorems; how to add case-names to goal 
+  states - maybe in the next section)
 *}
 
 section {* Theorem Attributes *}
 
 text {*
-  Theorem attributes are @{text "[simp]"}, @{text "[OF \<dots>]"}, @{text
-  "[symmetric]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
+  Theorem attributes are @{text "[symmetric]"}, @{text "[THEN \<dots>]"}, @{text
+  "[simp]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
   annotated to theorems, but functions that do further processing once a
-  theorem is proven. In particular, it is not possible to find out
+  theorem is proved. In particular, it is not possible to find out
   what are all theorems that have a given attribute in common, unless of course
   the function behind the attribute stores the theorems in a retrievable 
   data structure. 
 
-  If you want to print out all currently known attributes a theorem 
-  can have, you can use the function:
+  If you want to print out all currently known attributes a theorem can have, 
+  you can use the Isabelle command
 
-  @{ML_response_fake [display,gray] "Attrib.print_attributes @{theory}" 
-"COMP:  direct composition with rules (no lifting)
-HOL.dest:  declaration of Classical destruction rule
-HOL.elim:  declaration of Classical elimination rule 
-\<dots>"}
+  \begin{isabelle}
+  \isacommand{print\_attributes}\\
+  @{text "> COMP:  direct composition with rules (no lifting)"}\\
+  @{text "> HOL.dest:  declaration of Classical destruction rule"}\\
+  @{text "> HOL.elim:  declaration of Classical elimination rule"}\\
+  @{text "> \<dots>"}
+  \end{isabelle}
+  
+  Theorem attributes fall roughly into two categories: the first category manipulates
+  the proved theorem (like @{text "[symmetric]"} and @{text "[THEN \<dots>]"}), and the second
+  stores the proved theorem somewhere as data (like @{text "[simp]"}, which adds
+  the theorem to the current simpset).
 
   To explain how to write your own attribute, let us start with an extremely simple 
   version of the attribute @{text "[symmetric]"}. The purpose of this attribute is
@@ -1049,15 +1119,15 @@
   an attribute.
 
   Before we can use the attribute, we need to set it up. This can be done
-  using the function @{ML Attrib.setup} as follows.
+  using the Isabelle command \isacommand{attribute\_setup} as follows:
 *}
 
-setup %gray {* Attrib.setup @{binding "my_sym"} 
-  (Scan.succeed my_symmetric) "applying the sym rule"*}
+attribute_setup %gray my_sym = {* Scan.succeed my_symmetric *} 
+  "applying the sym rule"
 
 text {*
-  The attribute does not expect any further arguments (unlike @{text "[OF
-  \<dots>]"}, for example, which can take a list of theorems as argument). Therefore
+  The attribute does not expect any further arguments (unlike @{text "[THEN
+  \<dots>]"}, for example). Therefore
   we use the parser @{ML Scan.succeed}. Later on we will also consider
   attributes taking further arguments. An example for the attribute @{text
   "[my_sym]"} is the proof
@@ -1066,29 +1136,92 @@
 lemma test[my_sym]: "2 = Suc (Suc 0)" by simp
 
 text {*
-  which stores the theorem @{thm test} under the name @{thm [source] test}. We 
-  can also use the attribute when referring to this theorem.
+  which stores the theorem @{thm test} under the name @{thm [source] test}. You
+  can see this if you query the lemma: 
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test"}\\
+  @{text "> "}~@{thm test}
+  \end{isabelle}
+
+  We can also use the attribute when referring to this theorem:
 
   \begin{isabelle}
   \isacommand{thm}~@{text "test[my_sym]"}\\
   @{text "> "}~@{thm test[my_sym]}
   \end{isabelle}
 
+  As an example of a slightly more complicated theorem attribute, we implement 
+  our version of @{text "[THEN \<dots>]"}. This attribute will take a list of theorems
+  as argument and resolve the proved theorem with this list (one theorem 
+  after another). The code for this attribute is
+*}
+
+ML{*fun MY_THEN thms = 
+  Thm.rule_attribute (fn _ => fn thm => foldl ((op RS) o swap) thm thms)*}
+
+text {* 
+  where @{ML swap} swaps the components of a pair (@{ML RS} is explained
+  later on in Section~\ref{sec:simpletacs}). The setup of this theorem
+  attribute uses the parser @{ML Attrib.thms}, which parses a list of
+  theorems. 
+*}
+
+attribute_setup %gray MY_THEN = {* Attrib.thms >> MY_THEN *} 
+  "resolving the list of theorems with the proved theorem"
+
+text {* 
+  You can, for example, use this theorem attribute to turn an equation into a 
+  meta-equation:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN eq_reflection]"}\\
+  @{text "> "}~@{thm test[MY_THEN eq_reflection]}
+  \end{isabelle}
+
+  If you need the symmetric version as a meta-equation, you can write
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN sym eq_reflection]"}\\
+  @{text "> "}~@{thm test[MY_THEN sym eq_reflection]}
+  \end{isabelle}
+
+  It is also possible to combine different theorem attributes, as in:
+  
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[my_sym, MY_THEN eq_reflection]"}\\
+  @{text "> "}~@{thm test[my_sym, MY_THEN eq_reflection]}
+  \end{isabelle}
+  
+  However, this also exposes a weakness of the concept
+  of theorem attributes: since theorem attributes can be
+  arbitrary functions, they do not in general commute. If you try
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN eq_reflection, my_sym]"}\\
+  @{text "> "}~@{text "exception THM 1 raised: RSN: no unifiers"}
+  \end{isabelle}
+
+  you get an exception indicating that the theorem @{thm [source] sym}
+  does not resolve with meta-equations. 
+
   The purpose of @{ML Thm.rule_attribute} is to directly manipulate theorems.
-  Another usage of attributes is to add and delete theorems from stored data.
-  For example the attribute @{text "[simp]"} adds or deletes a theorem from the
+  Another usage of theorem attributes is to add and delete theorems from stored data.
+  For example the theorem attribute @{text "[simp]"} adds or deletes a theorem from the
   current simpset. For these applications, you can use @{ML Thm.declaration_attribute}. 
   To illustrate this function, let us introduce a reference containing a list
   of theorems.
 *}
 
-ML{*val my_thms = ref ([]:thm list)*}
+ML{*val my_thms = ref ([] : thm list)*}
 
 text {* 
   A word of warning: such references must not be used in any code that
   is meant to be more than just for testing purposes! Here it is only used 
   to illustrate matters. We will show later how to store data properly without 
-  using references. The function @{ML Thm.declaration_attribute} expects us to 
+  using references.
+ 
+  The function @{ML Thm.declaration_attribute} expects us to 
   provide two functions that add and delete theorems from this list. 
   For this we use the two functions:
 *}
@@ -1104,12 +1237,13 @@
   here it is sufficient that they just return the context unchanged. They change
   however the reference @{ML my_thms}, whereby the function @{ML Thm.add_thm}
   adds a theorem if it is not already included in the list, and @{ML
-  Thm.del_thm} deletes one. Both functions use the predicate @{ML
-  Thm.eq_thm_prop} which compares theorems according to their proved
-  propositions (modulo alpha-equivalence).
+  Thm.del_thm} deletes one (both functions use the predicate @{ML
+  Thm.eq_thm_prop}, which compares theorems according to their proved
+  propositions modulo alpha-equivalence).
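+
+  For instance (using the theorem @{thm [source] TrueI} merely for illustration), a 
+  sketch of such a comparison is
+
+  @{ML_response_fake [display,gray]
+  "Thm.eq_thm_prop (@{thm TrueI}, @{thm TrueI})"
+  "true"}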
 
 
-  You can turn both functions into attributes using
+  You can turn functions @{ML my_thms_add} and @{ML my_thms_del} into 
+  attributes with the code
 *}
 
 ML{*val my_add = Thm.declaration_attribute my_thms_add
@@ -1119,17 +1253,20 @@
   and set up the attributes as follows
 *}
 
-setup %gray {* Attrib.setup @{binding "my_thms"}
-  (Attrib.add_del my_add my_del) "maintaining a list of my_thms" *}
+attribute_setup %gray my_thms = {* Attrib.add_del my_add my_del *} 
+  "maintaining a list of my_thms" 
 
 text {*
-  Now if you prove the lemma attaching the attribute @{text "[my_thms]"}
+  The parser @{ML Attrib.add_del} is a predefined parser for 
+  adding and deleting lemmas. Now if you prove the next lemma 
+  and attach the attribute 
+  @{text "[my_thms]"}
 *}
 
 lemma trueI_2[my_thms]: "True" by simp
 
 text {*
-  then you can see the lemma is added to the initially empty list.
+  then you can see it is added to the initially empty list.
 
   @{ML_response_fake [display,gray]
   "!my_thms" "[\"True\"]"}
@@ -1141,8 +1278,8 @@
 
 text {*
   The @{text "add"} is the default operation and does not need
-  to be given. This declaration will cause the theorem list to be 
-  updated as follows.
+  to be explicitly given. These two declarations will cause the 
+  theorem list to be updated as follows:
 
   @{ML_response_fake [display,gray]
   "!my_thms"
@@ -1167,10 +1304,11 @@
 
   However, as said at the beginning of this example, using references for storing theorems is
   \emph{not} the received way of doing such things. The received way is to 
-  start a ``data slot'' in a context by using the functor @{text GenericDataFun}:
+  start a ``data slot'', below called @{text MyThmsData}, by using the functor 
+  @{text GenericDataFun}:
 *}
 
-ML {*structure Data = GenericDataFun
+ML {*structure MyThmsData = GenericDataFun
  (type T = thm list
   val empty = []
   val extend = I
@@ -1181,19 +1319,71 @@
   @{ML my_thms_del} to:
 *}
 
-ML{*val thm_add = Data.map o Thm.add_thm
-val thm_del = Data.map o Thm.del_thm*}
+ML{*val thm_add = MyThmsData.map o Thm.add_thm
+val thm_del = MyThmsData.map o Thm.del_thm*}
+
+text {*
+  where @{ML MyThmsData.map} updates the data appropriately. The
+  corresponding theorem attributes are
+*}
+
+ML{*val add = Thm.declaration_attribute thm_add
+val del = Thm.declaration_attribute thm_del *}
+
+text {*
+  and the setup is as follows
+*}
+
+attribute_setup %gray my_thms2 = {* Attrib.add_del add del *} 
+  "properly maintaining a list of my_thms"
 
 text {*
-  where @{ML Data.map} updates the data appropriately in the context. Since storing
-  theorems in a special list is such a common task, there is the functor @{text NamedThmsFun},
-  which does most of the work for you. To obtain such a named theorem lists, you just
-  declare 
+  Initially, the data slot is empty
+
+  @{ML_response_fake [display,gray]
+  "MyThmsData.get (Context.Proof @{context})"
+  "[]"}
+
+  but if you prove
+*}
+
+lemma three[my_thms2]: "3 = Suc (Suc (Suc 0))" by simp
+
+text {*
+  the lemma is now included
+
+  @{ML_response_fake [display,gray]
+  "MyThmsData.get (Context.Proof @{context})"
+  "[\"3 = Suc (Suc (Suc 0))\"]"}
+
+  With @{text my_thms2} you can also nicely see why it is important to 
+  store data in a ``data slot'' and \emph{not} in a reference. Backtrack
+  to the point just before the lemma @{thm [source] three} is proved and 
+  check the content of @{ML_struct "MyThmsData"}: it is now again 
+  empty. The addition has been properly retracted. Now consider the proof:
+*}
+
+lemma four[my_thms]: "4 = Suc (Suc (Suc (Suc 0)))" by simp
+
+text {*
+  Checking the content of @{ML my_thms} gives
+
+  @{ML_response_fake [display,gray]
+  "!my_thms"
+  "[\"4 = Suc (Suc (Suc (Suc 0)))\", \"True\"]"}
+
+  as expected, but if we backtrack before the lemma @{thm [source] four}, the
+  content of @{ML my_thms} is unchanged. The backtracking mechanism
+  of Isabelle is completely oblivious about what to do with references!
+
+  Since storing theorems in a special list is such a common task, there is the
+  functor @{text NamedThmsFun}, which does most of the work for you. To obtain
+  such a named theorem list, you just declare
 *}
 
 ML{*structure FooRules = NamedThmsFun 
  (val name = "foo" 
-  val description = "Rules for foo"); *}
+  val description = "Rules for foo") *}
 
 text {*
   and set up the @{ML_struct FooRules} with the command
--- a/ProgTutorial/Intro.thy	Mon Mar 23 14:20:14 2009 +0100
+++ b/ProgTutorial/Intro.thy	Mon Mar 23 18:35:58 2009 +0100
@@ -144,6 +144,9 @@
 
   \item {\bf Alexander Krauss} wrote the first version of the ``first-steps'' 
   chapter and also contributed the material on @{text NamedThmsFun}.
+
+  \item {\bf Christian Sternagel} proofread the tutorial and made 
+  comments. 
   \end{itemize}
 
   Please let me know of any omissions. Responsibility for any remaining
--- a/ProgTutorial/Package/Ind_Code.thy	Mon Mar 23 14:20:14 2009 +0100
+++ b/ProgTutorial/Package/Ind_Code.thy	Mon Mar 23 18:35:58 2009 +0100
@@ -97,7 +97,7 @@
   val arg =  ((@{binding "MyTrue"}, NoSyn), @{term True})
   val (def, lthy') = make_defs arg lthy 
 in
-  warning (str_of_thm lthy' def); lthy'
+  warning (str_of_thm_no_vars lthy' def); lthy'
 end *}
 
 text {*
@@ -218,7 +218,7 @@
   val arg_tyss = [[@{typ "nat"}], [@{typ "nat"}]]
   val (defs, lthy') = definitions rules preds prednames syns arg_tyss lthy
 in
-  warning (str_of_thms lthy' defs); lthy'
+  warning (str_of_thms_no_vars lthy' defs); lthy'
 end *}
 
 text {*
@@ -413,7 +413,7 @@
   val intro = 
     prove_induction lthy defs srules cnewpreds ((pred, newpred), arg_tys)
 in
-  warning (str_of_thm_raw lthy intro); lthy
+  warning (str_of_thm lthy intro); lthy
 end *} 
 
 text {*
@@ -477,7 +477,7 @@
   val tyss = [[@{typ "nat"}], [@{typ "nat"}]]
   val ind_thms = inductions rules defs preds tyss lthy
 in
-  warning (str_of_thms_raw lthy ind_thms); lthy
+  warning (str_of_thms lthy ind_thms); lthy
 end *}
 
 
@@ -530,7 +530,7 @@
   val ctrms = [@{cterm \"a::nat\"}, @{cterm \"b::nat\"}, @{cterm \"c::nat\"}]
   val new_thm = all_elims ctrms @{thm all_elims_test}
 in
-  warning (str_of_thm @{context} new_thm)
+  warning (str_of_thm_no_vars @{context} new_thm)
 end"
   "P a b c"}
 
@@ -538,7 +538,7 @@
   For example 
 
   @{ML_response_fake [display, gray]
-"warning (str_of_thm @{context} 
+"warning (str_of_thm_no_vars @{context} 
             (imp_elims @{thms imp_elims_test'} @{thm imp_elims_test}))"
   "C"}
 *}
@@ -571,7 +571,7 @@
 ML {*
 fun chop_print_tac2 ctxt prems =
 let
-  val _ = warning (commas (map (str_of_thm ctxt) prems))
+  val _ = warning (commas (map (str_of_thm_no_vars ctxt) prems))
 in
    all_tac
 end
--- a/ProgTutorial/Parsing.thy	Mon Mar 23 14:20:14 2009 +0100
+++ b/ProgTutorial/Parsing.thy	Mon Mar 23 18:35:58 2009 +0100
@@ -16,7 +16,7 @@
   parsing combinators. These combinators are a well-established technique for parsing, 
   which has, for example, been described in Paulson's classic ML-book \cite{paulson-ml2}.
   Isabelle developers are usually concerned with writing these outer syntax parsers, 
-  either for new definitional packages or for calling tactics with specific arguments. 
+  either for new definitional packages or for calling methods with specific arguments. 
 
   \begin{readmore}
   The library 
@@ -298,7 +298,7 @@
 
 @{ML_response [display,gray]
 "let 
-  fun double (x,y) = (x ^ x, y ^ y) 
+  fun double (x, y) = (x ^ x, y ^ y) 
 in
   (($$ \"h\") -- ($$ \"e\") >> double) (explode \"hello\")
 end"
@@ -318,6 +318,8 @@
   where the single-character strings in the parsed output are transformed
   back into one string.
  
+  (FIXME:  move to an earlier place)
+
   The function @{ML Scan.ahead} parses some input, but leaves the original
   input unchanged. For example:
 
@@ -492,8 +494,9 @@
 text {*
 
   The function @{ML "OuterParse.!!!"} can be used to force termination of the
-  parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section), 
-  except that the error message is fixed to be @{text [quotes] "Outer syntax error"}
+  parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section). 
+  The difference is that the error message of @{ML "OuterParse.!!!"} is fixed to be 
+  @{text [quotes] "Outer syntax error"}
   with a relatively precise description of the failure. For example:
 
 @{ML_response_fake [display,gray]
@@ -554,6 +557,12 @@
 
 *}
 
+section {* Context Parser (TBD) *}
+
+text {*
+  Used for example in \isacommand{attribute\_setup} and \isacommand{method\_setup}.
+*}
+
 section {* Parsing Inner Syntax *}
 
 text {*
@@ -711,6 +720,29 @@
   \end{readmore}
 *}
 
+text_raw {*
+  \begin{exercise}
+  If you look closely at how @{ML SpecParse.where_alt_specs} is implemented
+  in file @{ML_file "Pure/Isar/spec_parse.ML"} and compare it to our @{ML "spec_parser"}, 
+  you will notice an additional ``tail'' (Lines 8 to 10):
+  \begin{isabelle}
+*}
+ML %linenosgray{*val spec_parser' = 
+     OuterParse.fixes -- 
+     Scan.optional
+     (OuterParse.$$$ "where" |-- 
+        OuterParse.!!! 
+          (OuterParse.enum1 "|" 
+             ((SpecParse.opt_thm_name ":" -- OuterParse.prop) --| 
+                  Scan.option (Scan.ahead (OuterParse.name || 
+                  OuterParse.$$$ "[") -- 
+                  OuterParse.!!! (OuterParse.$$$ "|"))))) [] *}
+text_raw {*
+  \end{isabelle}
+  What is the purpose of this additional ``tail''?
+  \end{exercise}
+*}
+
 section {* New Commands and Keyword Files\label{sec:newcommand} *}
 
 text {*
@@ -968,7 +1000,6 @@
   @{ML Toplevel.local_theory} do?)
 
   (FIXME read a name and show how to store theorems)
-
 *}
 
 section {* Methods *}
@@ -982,7 +1013,7 @@
 print_methods
 
 text {*
-  An example of a very simple method is the following code.
+  An example of a very simple method is:
 *}
 
 method_setup %gray foobar_meth = 
@@ -1007,12 +1038,20 @@
   \end{minipage} *}
 (*<*)oops(*>*)
 
+
+(*
+ML {* SIMPLE_METHOD *}
+ML {* METHOD *}
+ML {* K (SIMPLE_METHOD ((etac @{thm conjE} THEN' rtac @{thm conjI}) 1)) *}
+ML {* Scan.succeed  *}
+*)
+
 text {*
   (FIXME: explain a version of rule-tac)
 *}
 
 (*<*)
-
+(* THIS IS AN OLD VERSION OF THE PARSING CHAPTER BY JEREMY DAWSON *)
 chapter {* Parsing *}
 
 text {*
--- a/ProgTutorial/Tactical.thy	Mon Mar 23 14:20:14 2009 +0100
+++ b/ProgTutorial/Tactical.thy	Mon Mar 23 18:35:58 2009 +0100
@@ -209,7 +209,7 @@
 
 ML{*fun my_print_tac ctxt thm =
 let
-  val _ = warning (str_of_thm ctxt thm)
+  val _ = warning (str_of_thm_no_vars ctxt thm)
 in 
   Seq.single thm
 end*}
@@ -325,7 +325,7 @@
 
 *}
 
-section {* Simple Tactics *}
+section {* Simple Tactics\label{sec:simpletacs} *}
 
 text {*
   Let us start with explaining the simple tactic @{ML print_tac}, which is quite useful 
@@ -546,7 +546,7 @@
   val str_of_params = str_of_cterms context params
   val str_of_asms = str_of_cterms context asms
   val str_of_concl = str_of_cterm context concl
-  val str_of_prems = str_of_thms context prems   
+  val str_of_prems = str_of_thms_no_vars context prems   
   val str_of_schms = str_of_cterms context (snd schematics)    
  
   val _ = (warning ("params: " ^ str_of_params);
@@ -1143,7 +1143,7 @@
   val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss
 
   fun name_thm (nm, thm) =
-      "  " ^ nm ^ ": " ^ (str_of_thm ctxt thm)
+      "  " ^ nm ^ ": " ^ (str_of_thm_no_vars ctxt thm)
   fun name_ctrm (nm, ctrm) =
       "  " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm)
 
Binary file progtutorial.pdf has changed