ProgTutorial/Package/Ind_Code.thy
changeset 215 8d1a344a621e
parent 212 ac01ddb285f6
child 217 75154f4d4e2f
--- a/ProgTutorial/Package/Ind_Code.thy	Mon Mar 30 09:33:50 2009 +0100
+++ b/ProgTutorial/Package/Ind_Code.thy	Tue Mar 31 15:48:53 2009 +0100
@@ -5,9 +5,10 @@
 section {* The Gory Details\label{sec:code} *} 
 
 text {*
-  As mentioned before the code falls roughly into three parts: code that deals 
-  with the definitions, the induction principles and the introduction rules, respectively. 
-  In addition there are some administrative functions that string everything together. 
+  As mentioned before the code falls roughly into three parts: code that deals
+  with the definitions, with the induction principles and the introduction
+  rules, respectively. In addition there are some administrative functions
+  that string everything together.
 *}
 
 subsection {* Definitions *}
@@ -206,8 +207,8 @@
 subsection {* Induction Principles *}
 
 text {*
-  Recall that the proof of the induction principle 
-  for @{text "even"} was:
+  Recall that the manual proof for the induction principle 
+  of @{text "even"} was:
 *}
 
 lemma manual_ind_prin_even: 
@@ -236,7 +237,7 @@
 text {*
   This helper function instantiates the @{text "?x"} in the theorem 
   @{thm spec} with a given @{ML_type cterm}. We call this helper function
-  in the following tactic, called @{text inst_spec_tac}.
+  in the following tactic, called @{text inst_spec_tac}\label{fun:instspectac}.
 *}
 
 ML{*fun inst_spec_tac ctrms = 
@@ -273,12 +274,13 @@
           assume_tac]*}
 
 text {*
-  We have to give it as arguments the definitions, the premise (this premise
-  is @{text "even n"} in lemma @{thm [source] manual_ind_prin_even}) and the
-  instantiations. Compare this tactic with the manual proof for the lemma @{thm
-  [source] manual_ind_prin_even}: as you can see there is almost a one-to-one
-  correspondence between the \isacommand{apply}-script and the @{ML
-  ind_tac}. Two testcases for this tactic are:
+  We have to give it as arguments the definitions, the premise (a list of
+  formulae) and the instantiations. The premise is @{text "even n"} in lemma
+  @{thm [source] manual_ind_prin_even}; in our code it will always be a list
+  consisting of a single formula. Compare this tactic with the manual proof
+  for the lemma @{thm [source] manual_ind_prin_even}: as you can see there is
+  almost a one-to-one correspondence between the \isacommand{apply}-script and
+  the @{ML ind_tac}. Two testcases for this tactic are:
 *}
 
 lemma automatic_ind_prin_even:
@@ -309,11 +311,15 @@
   \end{isabelle}
 
   The variables @{text "z"}, @{text "P"} and @{text "Q"} are schematic
-  variables (since they are not quantified in the lemma). These schematic
-  variables are needed so that they can be instantiated by the user. 
-  It will take a bit of work to generate these schematic variables when
-  constructing the goals for the induction principles.  In general we have 
-  to construct for each predicate @{text "pred"} a goal of the form
+  variables (since they are not quantified in the lemma). These 
+  variables must be schematic, otherwise they cannot be instantiated 
+  by the user. To generate these schematic variables we use a common trick
+  in Isabelle programming: we first declare them as \emph{free}, 
+  \emph{but fixed}, and then use the infrastructure to turn them into 
+  schematic variables.
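+
+  A minimal sketch of this trick (the variable name @{text "x"}, the type
+  @{typ nat} and the trivial goal below are of our own choosing and serve
+  only as an illustration) is the following:
+
+  @{ML_response_fake [display,gray]
+"let
+  (* declare x as free, but fixed *)
+  val ([x], ctxt') = Variable.variant_fixes [\"x\"] @{context}
+  val trm = Free (x, @{typ nat})
+  val goal = HOLogic.mk_Trueprop (HOLogic.mk_eq (trm, trm))
+  val thm = Goal.prove ctxt' [] [] goal (fn _ => rtac @{thm refl} 1)
+in
+  (* exporting to the original context generalises x to ?x *)
+  singleton (ProofContext.export ctxt' @{context}) thm
+end"
+"?x = ?x"}
+
+  Inside @{text "ctxt'"} the variable @{text "x"} is a free, but fixed,
+  variable; exporting the theorem back to the original context turns it
+  into the schematic @{text "?x"}.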
+
+  In general we have to construct for each predicate @{text "pred"} a goal 
+  of the form
 
   @{text [display] 
   "pred ?zs \<Longrightarrow> rules[preds := ?Ps] \<Longrightarrow> ?P ?zs"}
@@ -321,9 +327,7 @@
   where the predicates @{text preds} are replaced in @{text rules} by new 
   distinct variables @{text "?Ps"}. We also need to generate fresh arguments 
   @{text "?zs"} for the predicate  @{text "pred"} and the @{text "?P"} in 
-  the conclusion. As just mentioned, the crucial point is that the  
-  @{text "?Ps"} and @{text "?zs"} need to be schematic variables that can 
-  be instantiated by the user.
+  the conclusion. 
 
   We generate these goals in two steps. The first function, named @{text prove_ind}, 
   expects that the introduction rules are already appropriately substituted. The argument
@@ -352,7 +356,7 @@
 text {* 
   In Line 3 we produce names @{text "zs"} for each type in the 
   argument type list. Line 4 makes these names unique and declares them as 
-  \emph{free but fixed} variables in the local theory @{text "lthy'"}. 
+  free, but fixed, variables in the local theory @{text "lthy'"}. 
   That means they are not schematic variables (yet).
   In Line 5 we construct the terms corresponding to these variables. 
   The variables are applied to the predicate in Line 7 (this corresponds
@@ -514,6 +518,10 @@
 end"
   "P a b c"}
 
+  Note the difference with @{ML inst_spec_tac} from Page~\pageref{fun:instspectac}:
+  @{ML inst_spec_tac} is a tactic which operates on a goal state; in contrast
+  @{ML all_elims} operates on theorems. 
+
   Similarly, the function @{ML imp_elims} eliminates preconditions from implications. 
   For example we can eliminate the preconditions @{text "A"} and @{text "B"} from
   @{thm [source] imp_elims_test}:
@@ -543,7 +551,7 @@
 
 text {*
   The function in Line 2 ``rulifies'' the lemma. This will turn out to 
-  be important later on. Applying this tactic 
+  be important later on. Applying this tactic in our proof of @{text "fresh_Lem"}
 *}
 
 (*<*)
@@ -553,30 +561,29 @@
 apply(tactic {* expand_tac @{thms fresh_def} *})
 
 txt {*
-  we end up in the goal state
+  gives us the goal state
 
   \begin{isabelle}
   @{subgoals [display]}
   \end{isabelle}
 
-  As you can see, there are parameters (namely @{text "a"}, @{text "b"} 
-  and @{text "t"}) which come from the introduction rule and parameters
-  (in the case above only @{text "fresh"}) which come from the universal
-  quantification in the definition @{term "fresh a (App t s)"}.
-  Similarly, there are assumptions
-  that come from the premises of the rule and assumptions from the
-  definition of the predicate. We need to treat these 
-  parameters and assumptions differently. In the code below
-  we will therefore separate them into @{text "params1"} and @{text params2},
-  respectively @{text "prems1"} and @{text "prems2"}. To do this 
-  separation, it is best to open a subproof with the tactic 
-  @{ML SUBPROOF}, since this tactic provides us
-  with the parameters (as list of @{ML_type cterm}s) and the assumptions
-  (as list of @{ML_type thm}s called @{text prems}). The problem we have 
-  to overcome with @{ML SUBPROOF} is, however, that this tactic always 
-  expects us to completely discharge the goal (see Section~\ref{sec:simpletacs}). 
-  This is inconvenient for our gradual explanation of the proof here. To 
-  circumvent this inconvenience we use the following modified tactic: 
+  As you can see, there are parameters (namely @{text "a"}, @{text "b"} and
+  @{text "t"}) which come from the introduction rule and parameters (in the
+  case above only @{text "fresh"}) which come from the universal
+  quantification in the definition @{term "fresh a (App t s)"}.  Similarly,
+  there are assumptions that come from the premises of the rule (namely the
+  first two) and assumptions from the definition of the predicate (assumptions
+  three to six). We need to treat these parameters and assumptions
+  differently. In the code below we will therefore separate them into @{text
+  "params1"} and @{text params2}, respectively @{text "prems1"} and @{text
+  "prems2"}. To do this separation, it is best to open a subproof with the
+  tactic @{ML SUBPROOF}, since this tactic provides us with the parameters (as
+  list of @{ML_type cterm}s) and the assumptions (as list of @{ML_type thm}s). 
+  The problem we have to overcome with @{ML SUBPROOF}
+  is, however, that this tactic always expects us to completely discharge the
+  goal (see Section~\ref{sec:simpletacs}). This is inconvenient for our
+  gradual explanation of the proof here. To circumvent this inconvenience we
+  use the following modified tactic:
 *}
 (*<*)oops(*>*)
 ML{*fun SUBPROOF_test tac ctxt = (SUBPROOF tac ctxt 1) ORELSE all_tac*}
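+
+text {*
+  A quick way to see the effect of this wrapper (our own sanity check, not
+  part of the package code) is to call it with a tactic that never solves 
+  the focused goal: @{ML SUBPROOF} then fails, and the @{ML all_tac} branch 
+  leaves the goal state unchanged.
+*}
+
+lemma "True"
+apply(tactic {* SUBPROOF_test (K no_tac) @{context} *})
+txt {* the goal @{prop "True"} is left untouched *}
+(*<*)oops(*>*)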
@@ -699,7 +706,7 @@
 
 text {* 
   The argument @{text i} corresponds to the number of the 
-  introduction we want to prove. We will later on lat it range
+  introduction we want to prove. We will later on let it range
   from @{text 0} to the number of @{text "rules - 1"}.
   Below we apply this function with @{text 3}, since 
   we are proving the fourth introduction rule. 
@@ -722,9 +729,9 @@
   @{text "2."}~@{prop "fresh a t"}
   \end{isabelle}
 
-  As expected there are two subgoals, where the first comes from a
+  As expected there are two subgoals, where the first comes from the
   non-recursive premise of the introduction rule and the second comes 
-  from a recursive premise. The first goal can be solved immediately 
+  from the recursive one. The first goal can be solved immediately 
   by @{text "prems1"}. The second needs more work. It can be solved 
   with the other premise in @{text "prems1"}, namely
 
@@ -773,7 +780,7 @@
 
 text {*
   Note that the tactic is now @{ML SUBPROOF}, not @{ML SUBPROOF_test}. 
-  The full proof of the introduction rule now as follows:
+  The full proof of the introduction rule is as follows:
 *}
 
 lemma fresh_Lam:
@@ -783,7 +790,7 @@
 done
 
 text {* 
-  Phew!  Unfortunately, not everything is done yet. If you look closely
+  Phew!  ...Unfortunately, not everything is done yet. If you look closely
   at the general principle outlined for the introduction rules in 
   Section~\ref{sec:nutshell}, we have  not yet dealt with the case where 
   recursive premises have preconditions. The introduction rule
@@ -889,8 +896,8 @@
           prove_intro_tac i preds rules ctxt]*}
 
 text {*
-  Lines 2 to 4 correspond to the function @{ML expand_tac}. Some testcases
-  for this tactic are:
+  Lines 2 to 4 in this tactic correspond to the function @{ML expand_tac}. 
+  Some testcases for this tactic are:
 *}
 
 lemma even0_intro:
@@ -907,7 +914,7 @@
   intro_tac @{thms fresh_def} fresh_rules [fresh_pred] 1 @{context} *})
 
 text {*
-  The second function sets up in Line 4 the goals (this is easy
+  The second function sets up in Line 4 the goals to be proved (this is easy
   for the introduction rules since they are exactly the rules 
   given by the user) and iterates @{ML intro_tac} over all 
   introduction rules.
@@ -930,12 +937,34 @@
   code that strings everything together.
 *}
 
-subsection {* Main Function *}
+subsection {* Administrative Functions *}
+
+text {* 
+  We have produced various theorems (definitions, induction principles and
+  introduction rules), but apart from the definitions, we have not yet 
+  registered them with the theorem database. This is what the function 
+  @{ML LocalTheory.note} does. 
+
+  For convenience, we use the following three wrappers for this function:
+*}
 
-text {* main internal function *}
+ML{*fun reg_many qname ((name, attrs), thms) = 
+  LocalTheory.note Thm.theoremK 
+    ((Binding.qualify false qname name, attrs), thms) 
+
+fun reg_single1 qname ((name, attrs), thm) = 
+  reg_many qname ((name, attrs), [thm]) 
 
-ML {* LocalTheory.notes *}
+fun reg_single2 name attrs (qname, thm) = 
+  reg_many (Binding.name_of qname) ((name, attrs), [thm]) *}
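+
+text {*
+  As a small illustration (the qualifier @{text "test"} and the theorem
+  @{thm [source] conjI} below are placeholders of our own choosing, not part
+  of the package), the following partial application is a function that still
+  expects a local theory; applying it to one would register the given
+  theorems under the name @{text "test.intros"}:
+*}
+
+ML{*val reg_example = reg_many "test" ((@{binding "intros"}, []), [@{thm conjI}])*}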
 
+text {*
+  The function that ``holds everything together'' is @{text "add_inductive"}. 
+  Its arguments are the specification of the predicates @{text "pred_specs"} 
+  and the introduction rules @{text "rule_specs"}.   
+*}
 
 ML %linenosgray{*fun add_inductive pred_specs rule_specs lthy =
 let
@@ -943,32 +972,74 @@
   val pred_specs' = map fst pred_specs
   val prednames = map fst pred_specs'
   val preds = map (fn (p, ty) => Free (Binding.name_of p, ty)) pred_specs'
+  val tyss = map (binder_types o fastype_of) preds   
 
-  val tyss = map (binder_types o fastype_of) preds   
-  val (attrs, rules) = split_list rule_specs    
+  val (namesattrs, rules) = split_list rule_specs    
 
   val (defs, lthy') = defns rules preds prednames syns tyss lthy      
-  val ind_rules = inds rules defs preds tyss lthy' 	
+  val ind_prin = inds rules defs preds tyss lthy' 	
   val intro_rules = intros rules preds defs lthy'
 
   val mut_name = space_implode "_" (map Binding.name_of prednames)
-  val case_names = map (Binding.name_of o fst) attrs
+  val case_names = map (Binding.name_of o fst) namesattrs
 in
-    lthy' 
-    |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
-        ((Binding.qualify false mut_name a, atts), [([th], [])])) (rule_specs ~~ intro_rules)) 
-    |-> (fn intross => LocalTheory.note Thm.theoremK
-         ((Binding.qualify false mut_name (@{binding "intros"}), []), maps snd intross)) 
-    |>> snd 
-    ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
-         ((Binding.qualify false (Binding.name_of R) (@{binding "induct"}),
-          [Attrib.internal (K (RuleCases.case_names case_names)),
-           Attrib.internal (K (RuleCases.consumes 1)),
-           Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
-          (pred_specs ~~ ind_rules)) #>> maps snd) 
-    |> snd
+  lthy' |> reg_many mut_name ((@{binding "intros"}, []), intro_rules) 
+        ||>> fold_map (reg_single1 mut_name) (namesattrs ~~ intro_rules)  
+        ||>> fold_map (reg_single2 @{binding "induct"} 
+              [Attrib.internal (K (RuleCases.case_names case_names)),
+               Attrib.internal (K (RuleCases.consumes 1)),
+               Attrib.internal (K (Induct.induct_pred ""))]) 
+             (prednames ~~ ind_prin) 
+        |> snd
 end*}
 
+text {*
+  In Line 3 the function extracts the syntax annotations from the predicates. 
+  Lines 4 to 6 extract the names of the predicates and generate
+  the variable terms (with types) corresponding to the predicates. 
+  Line 7 produces the argument types for each predicate. 
+
+  Line 9 extracts the introduction rules from the specifications
+  and also stores in @{text namesattrs} the names and attributes the
+  user may have attached to these rules.
+
+  Line 11 produces the definitions and also registers them
+  in the local theory @{text "lthy'"}. The next two lines produce
+  the induction principles and the introduction rules (all of them
+  as theorems). Both need the local theory @{text lthy'} in which
+  the definitions have been registered.
+
+  Line 15 produces the name that is used to register the introduction
+  rules. It is customary to collect all introduction rules under 
+  @{text "string.intros"}, where @{text "string"} stands for the 
+  @{text [quotes] "_"}-separated list of predicate names (for example
+  @{text "even_odd"}). Also by convention, the case names in induction 
+  proofs correspond to the names of the introduction rules. These
+  are generated in Line 16.
+
+  Line 18 now adds to @{text "lthy'"} all the introduction rules 
+  under the name @{text "mut_name.intros"} (see previous paragraph).
+  Line 19 further adds every introduction rule under its own name
+  (given by the user).\footnote{FIXME: what happens if the user did not give
+  any name.} Line 20 registers the induction principles. For this we have
+  to use some specific attributes. The first @{ML "case_names" in RuleCases} 
+  corresponds to the case names that are used by Isar to reference the proof
+  obligations in the induction. The second @{ML "consumes 1" in RuleCases}
+  indicates that the first premise of the induction principle (namely
+  the predicate over which the induction proceeds) is eliminated. 
+
+  (FIXME: What does @{ML Induct.induct_pred} do?)
+
+  (FIXME: why the mut-name?)
+
+  (FIXME: What does @{ML Binding.qualify} do?)
+
+
+  This completes all the code and fits in with the ``front end'' described
+  in Section~\ref{sec:interface}.
+*}
+
+
 ML{*fun add_inductive_cmd pred_specs rule_specs lthy =
 let
   val ((pred_specs', rule_specs'), _) = 
@@ -993,6 +1064,9 @@
               "define inductive predicates"
                  OuterKeyword.thy_decl specification*}
 
+
+section {* Extensions *}
+
 text {*
   Things to include at the end:
 
@@ -1016,4 +1090,15 @@
 | EvenS: "Odd n \<Longrightarrow> Even (Suc n)"
 | OddS: "Even n \<Longrightarrow> Odd (Suc n)"
 
+thm Even0
+thm EvenS
+thm OddS
+
+thm Even_Odd.intros
+thm Even.induct
+thm Odd.induct
+
+thm Even_def
+thm Odd_def
+
 end