diff -r de49d5780f57 -r 74f0a06f751f ProgTutorial/Package/Ind_Code.thy
--- a/ProgTutorial/Package/Ind_Code.thy	Thu Aug 20 14:19:39 2009 +0200
+++ b/ProgTutorial/Package/Ind_Code.thy	Thu Aug 20 22:30:20 2009 +0200
@@ -21,7 +21,7 @@
   and then ``register'' the definition inside a local theory. To do the
   latter, we use the following wrapper for the function
-  @{ML_ind [index] define in LocalTheory}. The wrapper takes a predicate name, a syntax
+  @{ML_ind define in LocalTheory}. The wrapper takes a predicate name, a syntax
   annotation and a term representing the right-hand side of the definition.
 *}
@@ -35,12 +35,12 @@
 text {*
   It returns the definition (as a theorem) and the local theory in which the
-  definition has been made. In Line 4, @{ML_ind [index] internalK in Thm} is a flag
-  attached to the theorem (other possible flags are @{ML_ind [index] definitionK in Thm}
-  and @{ML_ind [index] axiomK in Thm}). These flags just classify theorems and have no
+  definition has been made. In Line 4, @{ML_ind internalK in Thm} is a flag
+  attached to the theorem (other possible flags are @{ML_ind definitionK in Thm}
+  and @{ML_ind axiomK in Thm}). These flags just classify theorems and have no
   significant meaning, except for tools that, for example, find theorems in
   the theorem database.\footnote{FIXME: put in the section about theorems.} We
-  also use @{ML_ind [index] empty_binding in Attrib} in Line 3, since the definitions for
+  also use @{ML_ind empty_binding in Attrib} in Line 3, since the definitions for
   our inductive predicates are not meant to be seen by the user and therefore
   do not need to have any theorem attributes. A testcase for this function is
 *}
@@ -173,9 +173,9 @@
   meta-quanti\-fications. In Line 4, we transform these introduction rules
   into the object logic (since definitions cannot be stated with
   meta-connectives). To do this transformation we have to obtain the theory
-  behind the local theory using the function @{ML_ind [index] theory_of in ProofContext}
+  behind the local theory using the function @{ML_ind theory_of in ProofContext}
   (Line 3); with this theory we can use the function
-  @{ML_ind [index] atomize_term in ObjectLogic} to make the transformation (Line 4). The call
+  @{ML_ind atomize_term in ObjectLogic} to make the transformation (Line 4). The call
   to @{ML defn_aux} in Line 5 produces all right-hand sides of the
   definitions. The actual definitions are then made in Line 7. The result of
   the function is a list of theorems and a local theory (the theorems are
@@ -379,7 +379,7 @@
   Isabelle's goal mechanism will fail.\footnote{FIXME: check with
   Stefan...is this so?}
-  In Line 11 we set up the goal to be proved using the function @{ML_ind [index]
+  In Line 11 we set up the goal to be proved using the function @{ML_ind
   prove in Goal}; in the next line we call the tactic for proving the
   induction principle. As mentioned before, this tactic expects the
   definitions, the premise and the (certified) predicates with which the
@@ -447,7 +447,7 @@
   the new predicates. In Line 8, we construct the types of these new
   predicates using the given argument types. Next we turn them into terms
   and subsequently certify them (Lines 9 and 10). We can now produce the
   substituted introduction rules
-  (Line 11) using the function @{ML_ind [index] subst_free}. Lines 14 and 15 just iterate
+  (Line 11) using the function @{ML_ind subst_free}. Lines 14 and 15 just iterate
   the proofs for all predicates. From this we obtain a list of theorems.
   Finally we need to export the fixed variables @{text "Ps"} to obtain the
   schematic variables @{text "?Ps"}
@@ -593,12 +593,12 @@
   differently. In the code below we will therefore separate them into
   @{text "params1"} and @{text params2}, respectively @{text "prems1"} and
   @{text "prems2"}. To do this separation, it is best to open a subproof with the
-  tactic @{ML_ind [index] SUBPROOF}, since this tactic provides us with the parameters (as
+  tactic @{ML_ind SUBPROOF}, since this tactic provides us with the parameters (as
   list of @{ML_type cterm}s) and the assumptions (as list of @{ML_type thm}s).
   The problem with @{ML SUBPROOF}, however, is that it always expects us to
   completely discharge the goal (see Section~\ref{sec:simpletacs}). This is
   a bit inconvenient for our gradual explanation of the proof here. Therefore
-  we first use the function @{ML_ind [index] FOCUS in Subgoal}, which does the same
+  we first use the function @{ML_ind FOCUS in Subgoal}, which does the same
   as @{ML SUBPROOF} but does not require us to completely discharge the goal.
 *}
@@ -632,7 +632,7 @@
   going in our example, we will print out these values using the printing
   function in Figure~\ref{fig:chopprint}. Since @{ML FOCUS in Subgoal} will
   supply us with the @{text "params"} and @{text "prems"} as lists, we can
-  separate them using the function @{ML_ind [index] chop}.
+  separate them using the function @{ML_ind chop}.
 *}
 ML %linenosgray{*fun chop_test_tac preds rules =
@@ -699,7 +699,7 @@
   To use this premise with @{ML rtac}, we need to instantiate its
   quantifiers (with @{text params1}) and transform it into rule
-  format (using @{ML_ind [index] rulify in ObjectLogic}). So we can modify the
+  format (using @{ML_ind rulify in ObjectLogic}). So we can modify the
   code as follows:
 *}
@@ -863,7 +863,7 @@
   In Line 4 we use the @{text prems} from the @{ML SUBPROOF} and resolve
   them with @{text prem}. In the simple cases, that is where the @{text prem}
   comes from a non-recursive premise of the rule, @{text prems} will be
-  just the empty list and the function @{ML_ind [index] MRS} does nothing. Similarly, in the
+  just the empty list and the function @{ML_ind MRS} does nothing. Similarly, in the
   cases where the recursive premises of the rule do not have preconditions.
   In case there are preconditions, then Line 4 discharges them. After that
   we can proceed as before, i.e., check whether the outermost
@@ -942,7 +942,7 @@
 end*}
 text {*
-  The iteration is done with the function @{ML_ind [index] map_index} since we
+  The iteration is done with the function @{ML_ind map_index} since we
   need the introduction rule together with its number (counted from
   @{text 0}). This completes the code for the functions deriving the
   reasoning infrastructure. It remains to implement some administrative
@@ -955,7 +955,7 @@
   We have produced various theorems (definitions, induction principles and
   introduction rules), but apart from the definitions, we have not yet
   registered them with the theorem database. This is what the function
-  @{ML_ind [index] note in LocalTheory} does.
+  @{ML_ind note in LocalTheory} does.
   For convenience, we use the following
@@ -1037,7 +1037,7 @@
   Line 20 further adds every introduction rule under its own name (given by
   the user).\footnote{FIXME: what happens if the user did not give any name.}
   Line 21 registers the induction principles. For this we have
-  to use some specific attributes. The first @{ML_ind [index] case_names in RuleCases}
+  to use some specific attributes. The first @{ML_ind case_names in RuleCases}
   corresponds to the case names that are used by Isar to reference the proof
   obligations in the induction. The second @{ML "consumes 1" in RuleCases}
   indicates that the first premise of the induction principle (namely
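
For orientation, the wrapper described in the first two hunks above (it takes a predicate name, a syntax annotation and a term for the right-hand side, uses Attrib.empty_binding in Line 3 and the Thm.internalK flag in Line 4) might look roughly as follows. This is only a sketch against the 2009-era LocalTheory interface; the function name make_defn and the exact result pattern of LocalTheory.define are assumptions for illustration, not text quoted from the changeset.

ML %linenosgray{*fun make_defn ((predname, mx), trm) lthy =
let
  val arg = ((predname, mx), (Attrib.empty_binding, trm))
  val ((_, (_, thm)), lthy') = LocalTheory.define Thm.internalK arg lthy
in
  (thm, lthy')
end
(* Sketch only: wraps LocalTheory.define, attaches no attributes
   (Attrib.empty_binding, Line 3), flags the resulting theorem as internal
   (Thm.internalK, Line 4), and returns the definition together with the
   updated local theory.  The name make_defn and the result pattern are
   assumptions, not taken from the changeset above. *)*}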
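Two list combinators referenced in the later hunks, chop and map_index, perform the splitting into params1/params2 and prems1/prems2 and the numbering of the introduction rules, respectively. Below is a minimal toy illustration, assuming the standard Isabelle library signatures chop: int -> 'a list -> 'a list * 'a list and map_index: (int * 'a -> 'b) -> 'a list -> 'b list; the example values are invented.

ML{*(* Toy values for illustration only; not from the changeset. *)
val (xs, ys) = chop 2 [1, 2, 3, 4, 5]
  (* xs = [1, 2], ys = [3, 4, 5]: split a list at a given index, as done
     with the params and prems inside the subproofs above *)

val numbered = map_index (fn (i, rule) => (i, rule)) ["r1", "r2", "r3"]
  (* [(0, "r1"), (1, "r2"), (2, "r3")]: each rule paired with its number,
     counted from 0 *)*}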