--- a/ProgTutorial/Package/Ind_Code.thy Sat May 30 11:12:46 2009 +0200
+++ b/ProgTutorial/Package/Ind_Code.thy Sat May 30 17:40:20 2009 +0200
@@ -21,7 +21,7 @@
and then ``register'' the definition inside a local theory.
To do the latter, we use the following wrapper for the function
- @{ML LocalTheory.define}. The wrapper takes a predicate name, a syntax
+ @{ML [index] define in LocalTheory}. The wrapper takes a predicate name, a syntax
annotation and a term representing the right-hand side of the definition.
*}
@@ -35,12 +35,12 @@
text {*
It returns the definition (as a theorem) and the local theory in which the
- definition has been made. In Line 4, @{ML internalK in Thm} is a flag
- attached to the theorem (others possibile flags are @{ML definitionK in Thm}
- and @{ML axiomK in Thm}). These flags just classify theorems and have no
+ definition has been made. In Line 4, @{ML [index] internalK in Thm} is a flag
+ attached to the theorem (other possible flags are @{ML [index] definitionK in Thm}
+ and @{ML [index] axiomK in Thm}). These flags just classify theorems and have no
significant meaning, except for tools that, for example, find theorems in
the theorem database.\footnote{FIXME: put in the section about theorems.} We
- also use @{ML empty_binding in Attrib} in Line 3, since the definitions for
+ also use @{ML [index] empty_binding in Attrib} in Line 3, since the definitions for
our inductive predicates are not meant to be seen by the user and therefore
do not need to have any theorem attributes. A testcase for this function is
*}
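
(Aside, illustration only and not part of this changeset, and separate from the testcase the paragraph refers to: the three kind flags are ordinary string constants in the Thm structure, so they can be bound directly in an ML block, as in the sketch below.)

ML{*(* illustrative sketch only: the kind flags are plain strings used to
   classify theorems *)
val kind_flags = [Thm.internalK, Thm.definitionK, Thm.axiomK]*}
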
@@ -170,7 +170,7 @@
into the object logic (since definitions cannot be stated with
meta-connectives). To do this transformation we have to obtain the theory
behind the local theory (Line 3); with this theory we can use the function
- @{ML ObjectLogic.atomize_term} to make the transformation (Line 4). The call
+ @{ML [index] atomize_term in ObjectLogic} to make the transformation (Line 4). The call
to @{ML defn_aux} in Line 5 produces all right-hand sides of the
definitions. The actual definitions are then made in Line 7. The result of
the function is a list of theorems and a local theory (the theorems are
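
A minimal sketch of the transformation just described (illustration only, not part of this changeset); it relies only on the fact, stated above, that the function takes a theory and a term:

ML{*(* illustrative sketch only: atomising a meta-level implication yields
   the corresponding object-level formula, here an HOL implication *)
val atomized = ObjectLogic.atomize_term @{theory} @{prop "x = y ==> y = x"}*}
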
@@ -437,7 +437,7 @@
the new predicates. In Line 8, we construct the types of these new predicates
using the given argument types. Next we turn them into terms and subsequently
certify them (Line 9 and 10). We can now produce the substituted introduction rules
- (Line 11) using the function @{ML subst_free}. Line 14 and 15 just iterate
+ (Line 11) using the function @{ML [index] subst_free}. Lines 14 and 15 just iterate
the proofs for all predicates.
From this we obtain a list of theorems. Finally we need to export the
fixed variables @{text "Ps"} to obtain the schematic variables @{text "?Ps"}
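
The following sketch (illustration only, not part of this changeset) shows the basic behaviour of subst_free that the step above relies on: every occurrence of the first component of a pair is replaced by the second.

ML{*(* illustrative sketch only: replace the free variable P by Q in a term *)
let
  val P = Free ("P", @{typ "nat => bool"})
  val Q = Free ("Q", @{typ "nat => bool"})
in
  subst_free [(P, Q)] (P $ @{term "0::nat"})
end*}
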
@@ -579,7 +579,7 @@
differently. In the code below we will therefore separate them into @{text
"params1"} and @{text params2}, respectively @{text "prems1"} and @{text
"prems2"}. To do this separation, it is best to open a subproof with the
- tactic @{ML SUBPROOF}, since this tactic provides us with the parameters (as
+ tactic @{ML [index] SUBPROOF}, since this tactic provides us with the parameters (as
list of @{ML_type cterm}s) and the assumptions (as list of @{ML_type thm}s).
The problem we have to overcome with @{ML SUBPROOF}
is, however, that this tactic always expects us to completely discharge the
@@ -627,7 +627,7 @@
going in our example, we will print out these values using the printing
function in Figure~\ref{fig:chopprint}. Since the tactic @{ML SUBPROOF} will
supply us the @{text "params"} and @{text "prems"} as lists, we can
- separate them using the function @{ML chop}.
+ separate them using the function @{ML [index] chop}.
*}
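
For readers who have not seen chop before, a one-line sketch (illustration only, not part of this changeset) of what it computes:

ML{*(* illustrative sketch only: chop n splits a list after the first n elements *)
val (xs, ys) = chop 2 [1, 2, 3, 4, 5]   (* ([1, 2], [3, 4, 5]) *)*}
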
ML{*fun chop_test_tac preds rules =
@@ -691,7 +691,7 @@
To use this premise with @{ML rtac}, we need to instantiate its
quantifiers (with @{text params1}) and transform it into rule
- format (using @{ML "ObjectLogic.rulify"}. So we can modify the
+ format (using @{ML [index] rulify in ObjectLogic}). So we can modify the
subproof as follows:
*}
@@ -854,7 +854,7 @@
In Line 4 we use the @{text prems} from the @{ML SUBPROOF} and resolve
them with @{text prem}. In the simple cases, that is where the @{text prem}
comes from a non-recursive premise of the rule, @{text prems} will be
- just the empty list and the function @{ML MRS} does nothing. Similarly, in the
+ just the empty list and the function @{ML [index] MRS} does nothing. The same holds in the
cases where the recursive premises of the rule do not have preconditions.
In case there are preconditions, then Line 4 discharges them. After
that we can proceed as before, i.e., check whether the outermost
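
A small sketch (illustration only, not part of this changeset) of the MRS behaviour used above, with standard HOL theorems; in particular, resolving with an empty list leaves the rule unchanged:

ML{*(* illustrative sketch only: MRS resolves a list of theorems against the
   premises of a rule; an empty list leaves the rule as it is *)
val unchanged = [] MRS @{thm conjI}               (* ?P ==> ?Q ==> ?P & ?Q *)
val discharged = [@{thm TrueI}] MRS @{thm conjI}  (* ?Q ==> True & ?Q *)*}
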
@@ -932,7 +932,7 @@
end*}
text {*
- The iteration is done with the function @{ML map_index} since we
+ The iteration is done with the function @{ML [index] map_index} since we
need the introduction rule together with its number (counted from
@{text 0}). This completes the code for the functions deriving the
reasoning infrastructure. It remains to implement some administrative
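
A quick sketch (illustration only, not part of this changeset) of map_index, which supplies each list element together with its position:

ML{*(* illustrative sketch only: the supplied function receives (index, element)
   pairs, with indices counted from 0 *)
val numbered = map_index (fn (i, s) => string_of_int i ^ ": " ^ s) ["a", "b", "c"]
(* ["0: a", "1: b", "2: c"] *)*}
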
@@ -945,7 +945,7 @@
We have produced various theorems (definitions, induction principles and
introduction rules), but apart from the definitions, we have not yet
registered them with the theorem database. This is what the function
- @{ML LocalTheory.note} does.
+ @{ML [index] note in LocalTheory} does.
For convenience, we use the following
@@ -1027,7 +1027,7 @@
Line 20 additionally adds every introduction rule under its own name
(given by the user).\footnote{FIXME: what happens if the user did not give
any name.} Line 21 registers the induction principles. For this we have
- to use some specific attributes. The first @{ML "case_names" in RuleCases}
+ to use some specific attributes. The first @{ML [index] case_names in RuleCases}
corresponds to the case names that are used by Isar to reference the proof
obligations in the induction. The second @{ML "consumes 1" in RuleCases}
indicates that the first premise of the induction principle (namely
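
A sketch (illustration only, not part of this changeset) of how these two attributes are typically constructed in ML before being attached to the induction principles; the case names "base" and "step" are made up for the example:

ML{*(* illustrative sketch only: building the two attributes mentioned above;
   Attrib.internal wraps them so they can be handed over when the theorem
   is registered *)
val induct_attrs =
  [Attrib.internal (K (RuleCases.case_names ["base", "step"])),
   Attrib.internal (K (RuleCases.consumes 1))]*}
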