ProgTutorial/Advanced.thy
changeset 490 b8a654eabdf0
parent 488 780100cd4060
child 491 94216a7fc1fc
--- a/ProgTutorial/Advanced.thy	Mon Nov 07 16:13:26 2011 +0000
+++ b/ProgTutorial/Advanced.thy	Wed Nov 09 10:16:23 2011 +0000
@@ -65,20 +65,20 @@
   functions acting on contexts or local theories (which will be discussed in
   the next sections). There is a tendency in the Isabelle development to prefer
   ``non-global'' operations, because they have some advantages, as we will also
-  discuss later. However, some basic management of theories will very likely
-  never go away.
+  discuss later. However, some basic understanding of theories is still necessary
+  for effective Isabelle programming.
 
-  In the context of ML-programming, the most important command with theories
-  is \isacommand{setup}. In the previous chapters we used it, for
-  example, to make a theorem attribute known to Isabelle or to register a theorem
-  under a name. What happens behind the scenes is that \isacommand{setup}
-  expects a function of type @{ML_type "theory -> theory"}: the input theory
-  is the current theory and the output the theory where the attribute has been
-  registered or the theorem has been stored.  This is a fundamental principle
-  in Isabelle. A similar situation arises with declaring constants. The
-  function that declares a constant on the ML-level is @{ML_ind declare_const
-  in Sign} in the structure @{ML_struct Sign}. To see how \isacommand{setup}
-  works, consider the following code: 
+  An important command with theories is \isacommand{setup}. In the
+  previous chapters we used it to make, for example, a theorem attribute known
+  to Isabelle or to register a theorem under a name. What happens behind the
+  scenes is that \isacommand{setup} expects a function of type @{ML_type
+  "theory -> theory"}: the input theory is the current theory and the output
+  the theory where the attribute has been registered or the theorem has been
+  stored.  This is a fundamental principle in Isabelle. A similar situation
+  arises with declaring constants. The function that declares a constant on
+  the ML-level is @{ML_ind declare_const in Sign} in the structure @{ML_struct
+  Sign}. To see how \isacommand{setup} works, consider the following code:
+
 *}  
 
 ML{*let
@@ -197,23 +197,48 @@
 text {*
   Contexts are arguably more important than theories, even though they only 
   contain ``short-term memory data''. The reason is that a vast number of
-  functions in Isabelle depend one way or the other on contexts. Even such
-  mundane operation like printing out a term make essential use of contexts.
-  For this consider the following contrived proof which only purpose is to 
-  fix two variables 
+  functions in Isabelle depend in one way or another on contexts. Even such
+  mundane operations as printing out a term make essential use of contexts.
+  For this consider the following contrived proof whose only purpose is to 
+  fix two variables:
 *}
 
 lemma "True"
 proof -
-  txt_raw {*\mbox{}\\[-6mm]*} 
-  ML_prf {* Variable.dest_fixes @{context} *} 
-  txt_raw {*\mbox{}\\[-6mm]*}
+  txt {*\mbox{}\\[-7mm]*} ML_prf {* Variable.dest_fixes @{context} *} txt {*\mbox{}\\[-7mm]*}
   fix x y  
-  txt_raw {*\mbox{}\\[-6mm]*}
+  txt {*\mbox{}\\[-7mm]*}
   ML_prf {* Variable.dest_fixes @{context} *} 
-  txt_raw {*\mbox{}\\[-6mm]*}
+  txt {*\mbox{}\\[-7mm]*}
 oops
 
+text {*
+  The interesting point in this proof is that we injected ML-code before and after
+  the variables are fixed. For this remember that ML-code inside a proof
+  needs to be enclosed in \isacommand{ML\_prf}~@{text "\<verbopen> \<dots> \<verbclose>"},
+  not \isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"}. The function 
+  @{ML_ind dest_fixes in Variable} from the structure @{ML_struct Variable} takes 
+  a context and returns all its currently fixed variables (names). That 
+  means a context has a data slot containing information about fixed variables.
+  The ML-antiquotation @{text "@{context}"} points to the context that is
+  active at that point of the theory. Consequently, in the first call to 
+  @{ML dest_fixes in Variable} this data slot is empty; in the second it is 
+  filled with @{text x} and @{text y}. What is also interesting is that contexts
+  can be stacked. For this consider the following proof fragment:
+*}
+
+lemma "True"
+proof -
+  fix x y
+  { fix z w
+  txt {*\mbox{}\\[-7mm]*}
+  ML_prf {* Variable.dest_fixes @{context} *} 
+  txt {*\mbox{}\\[-7mm]*}
+  }
+  txt {*\mbox{}\\[-7mm]*}
+  ML_prf {* Variable.dest_fixes @{context} *} 
+  txt {*\mbox{}\\[-7mm]*}
+oops
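+
+text {*
+  Fixed variables do not have to come from a proof: on the ML-level they can
+  also be introduced with the function @{ML_ind add_fixes in Variable}. A
+  minimal sketch is the following code, which fixes two variables in the
+  current context and then inspects the result:
+*}
+
+ML{*let
+  val ctxt = @{context}
+  (* programmatically fix two variables in the context *)
+  val (_, ctxt') = Variable.add_fixes ["x", "y"] ctxt
+in
+  (* the new context records x and y as fixed; the original one does not *)
+  (Variable.dest_fixes ctxt, Variable.dest_fixes ctxt')
+end*}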
 
 (*
 ML{*Proof_Context.debug := true*}