Added new chapter about writing packages.
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CookBook/Package/Ind_Examples.thy Fri Oct 10 17:13:21 2008 +0200
@@ -0,0 +1,274 @@
+theory Ind_Examples
+imports Main
+begin
+
+section{* Examples of inductive definitions *}
+
+text {*
+\label{sec:ind-examples}
+In this section, we will give three examples showing how to define inductive
+predicates by hand and prove characteristic properties such as introduction
+rules and an induction rule. From these examples, we will then figure out a
+general method for defining inductive predicates, which will be described in
+\S\ref{sec:ind-general-method}. This description will serve as a basis for the
+actual implementation to be developed in \S\ref{sec:ind-interface} -- \S\ref{sec:ind-proofs}.
+It should be noted that our aim in this section is not to write proofs that
+are as beautiful as possible, but ones that are as close as possible to the ML code
+producing the proofs, which we will develop later.
+As a first example, we consider the \emph{transitive closure} @{text "trcl R"}
+of a relation @{text R}. It is characterized by the following
+two introduction rules
+\[
+\begin{array}{l}
+@{term "trcl R x x"} \\
+@{term [mode=no_brackets] "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"}
+\end{array}
+\]
+Note that the @{text trcl} predicate has two different kinds of parameters: the
+first parameter @{text R} stays \emph{fixed} throughout the definition, whereas
+the second and third parameters change in the ``recursive call''.
+Since an inductively defined predicate is the \emph{least} predicate closed under
+a collection of introduction rules, we define the predicate @{text "trcl R x y"} in
+such a way that it holds if and only if @{text "P x y"} holds for every predicate
+@{text P} closed under the above rules. This gives rise to a definition containing
+a universal quantifier over the predicate @{text P}:
+*}
+
+definition "trcl \<equiv> \<lambda>R x y.
+ \<forall>P. (\<forall>x. P x x) \<longrightarrow> (\<forall>x y z. R x y \<longrightarrow> P y z \<longrightarrow> P x z) \<longrightarrow> P x y"
+
+text {*
+\noindent
+Since the predicate @{term "trcl R x y"} yields an element of the type @{text bool}
+of object-level truth values, the meta-level implications @{text "\<Longrightarrow>"} in the
+above introduction rules have to be converted to object-level implications
+@{text "\<longrightarrow>"}. Moreover, we use object-level universal quantifiers @{text "\<forall>"}
+rather than meta-level universal quantifiers @{text "\<And>"} for quantifying over
+the variable parameters of the introduction rules. Isabelle already offers some
+infrastructure for converting between meta-level and object-level connectives,
+which we will use later on.
+
+With this definition of the transitive closure, the proof of the (weak) induction
+theorem is almost immediate. It suffices to convert all the meta-level connectives
+in the induction rule to object-level connectives using the @{text atomize} proof
+method, expand the definition of @{text trcl}, eliminate the universal quantifier
+contained in it, and then solve the goal by assumption.
+*}
+
+lemma trcl_induct:
+ assumes trcl: "trcl R x y"
+ shows "(\<And>x. P x x) \<Longrightarrow> (\<And>x y z. R x y \<Longrightarrow> P y z \<Longrightarrow> P x z) \<Longrightarrow> P x y"
+ apply (atomize (full))
+ apply (cut_tac trcl)
+ apply (unfold trcl_def)
+ apply (drule spec [where x=P])
+ apply assumption
+ done
+(*<*)
+lemma "trcl R x x"
+ apply (unfold trcl_def)
+ apply (rule allI impI)+
+(*>*)
+txt {*
+\noindent
+The above induction rule is \emph{weak} in the sense that the induction step may
+only be proved using the assumptions @{term "R x y"} and @{term "P y z"}, but not
+using the additional assumption \mbox{@{term "trcl R y z"}}. A stronger induction rule
+containing this additional assumption can be derived from the weaker one with the
+help of the introduction rules for @{text trcl}.
+
+We now turn to the proofs of the introduction rules, which are slightly more complicated.
+In order to prove the first introduction rule, we again unfold the definition and
+then apply the introduction rules for @{text "\<forall>"} and @{text "\<longrightarrow>"} as often as possible.
+We then end up in a proof state of the following form:
+@{subgoals [display]}
+The two assumptions correspond to the introduction rules, where @{term "trcl R"} has been
+replaced by @{term "P"}. Thus, all we have to do is to eliminate the universal quantifier
+in front of the first assumption, and then solve the goal by assumption:
+*}
+(*<*)oops(*>*)
+lemma trcl_base: "trcl R x x"
+ apply (unfold trcl_def)
+ apply (rule allI impI)+
+ apply (drule spec)
+ apply assumption
+ done
+(*<*)
+lemma "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
+ apply (unfold trcl_def)
+ apply (rule allI impI)+
+(*>*)
+txt {*
+\noindent
+Since the second introduction rule has premises, its proof is not as easy as the previous
+one. After unfolding the definition and applying the introduction rules for @{text "\<forall>"}
+and @{text "\<longrightarrow>"}, we get the proof state
+@{subgoals [display]}
+The third and fourth assumptions correspond to the first and second introduction rules,
+respectively, whereas the first and second assumptions correspond to the premises of
+the introduction rule. Since we want to prove the second introduction rule, we apply
+the fourth assumption to the goal @{term "P x z"}. In order for the assumption to
+be applicable, we have to eliminate the universal quantifiers and turn the object-level
+implications into meta-level ones. This can be accomplished using the @{text rule_format}
+attribute. Applying the assumption produces two new subgoals, which can be solved using
+the first and second assumption. The second assumption again involves a quantifier and
+implications that have to be eliminated before it can be applied. To avoid problems
+with higher-order unification, it is advisable to provide an instantiation for the
+universally quantified predicate variable in the assumption.
+*}
+(*<*)oops(*>*)
+lemma trcl_step: "R x y \<Longrightarrow> trcl R y z \<Longrightarrow> trcl R x z"
+ apply (unfold trcl_def)
+ apply (rule allI impI)+
+ proof -
+ case goal1
+ show ?case
+ apply (rule goal1(4) [rule_format])
+ apply (rule goal1(1))
+ apply (rule goal1(2) [THEN spec [where x=P], THEN mp, THEN mp,
+ OF goal1(3-4)])
+ done
+ qed
+
+text {*
+\noindent
+This method of defining inductive predicates easily generalizes to mutually inductive
+predicates, like the predicates @{text even} and @{text odd} characterized by the
+following introduction rules:
+\[
+\begin{array}{l}
+@{term "even (0::nat)"} \\
+@{term "odd m \<Longrightarrow> even (Suc m)"} \\
+@{term "even m \<Longrightarrow> odd (Suc m)"}
+\end{array}
+\]
+Since the predicates are mutually inductive, each of the definitions contains two
+quantifiers over the predicates @{text P} and @{text Q}.
+*}
+
+definition "even \<equiv> \<lambda>n.
+ \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m)) \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> P n"
+
+definition "odd \<equiv> \<lambda>n.
+ \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m)) \<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q n"
+
+text {*
+\noindent
+For proving the induction rule, we use exactly the same technique as in the transitive
+closure example:
+*}
+
+lemma even_induct:
+ assumes even: "even n"
+ shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
+ apply (atomize (full))
+ apply (cut_tac even)
+ apply (unfold even_def)
+ apply (drule spec [where x=P])
+ apply (drule spec [where x=Q])
+ apply assumption
+ done
+
+text {*
+\noindent
+A similar induction rule with @{term "Q n"} as its conclusion can be proved for
+the @{text odd} predicate; a sketch is shown below.
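+*}
+
+lemma odd_induct:
+  assumes odd: "odd n"
+  shows "P 0 \<Longrightarrow> (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> Q n"
+  apply (atomize (full))
+  apply (cut_tac odd)
+  apply (unfold odd_def)
+  apply (drule spec [where x=P])
+  apply (drule spec [where x=Q])
+  apply assumption
+  done
+
+text {*
+\noindent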
+The proofs of the introduction rules are also very similar to the ones in the
+previous example. We only show the proof of the second introduction rule;
+the proof of the third introduction rule is almost identical,
+and the proof of the first rule is trivial.
+*}
+
+lemma evenS: "odd m \<Longrightarrow> even (Suc m)"
+ apply (unfold odd_def even_def)
+ apply (rule allI impI)+
+ proof -
+ case goal1
+ show ?case
+ apply (rule goal1(3) [rule_format])
+ apply (rule goal1(1) [THEN spec [where x=P], THEN spec [where x=Q],
+ THEN mp, THEN mp, THEN mp, OF goal1(2-4)])
+ done
+ qed
+(*<*)
+lemma even0: "even 0"
+ apply (unfold even_def)
+ apply (rule allI impI)+
+ apply assumption
+ done
+
+lemma oddS: "even m \<Longrightarrow> odd (Suc m)"
+ apply (unfold odd_def even_def)
+ apply (rule allI impI)+
+ proof -
+ case goal1
+ show ?case
+ apply (rule goal1(4) [rule_format])
+ apply (rule goal1(1) [THEN spec [where x=P], THEN spec [where x=Q],
+ THEN mp, THEN mp, THEN mp, OF goal1(2-4)])
+ done
+ qed
+(*>*)
+text {*
+\noindent
+As a final example, we will consider the definition of the accessible
+part of a relation @{text R} characterized by the introduction rule
+\[
+@{term "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"}
+\]
+whose premise involves a universal quantifier and an implication. The
+definition of @{text accpart} is as follows:
+*}
+
+definition "accpart \<equiv> \<lambda>R x. \<forall>P. (\<forall>x. (\<forall>y. R y x \<longrightarrow> P y) \<longrightarrow> P x) \<longrightarrow> P x"
+
+text {*
+\noindent
+The proof of the induction theorem is again straightforward:
+*}
+
+lemma accpart_induct:
+ assumes acc: "accpart R x"
+ shows "(\<And>x. (\<And>y. R y x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x"
+ apply (atomize (full))
+ apply (cut_tac acc)
+ apply (unfold accpart_def)
+ apply (drule spec [where x=P])
+ apply assumption
+ done
+(*<*)
+lemma accpartI: "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
+ apply (unfold accpart_def)
+ apply (rule allI impI)+
+(*>*)
+txt {*
+\noindent
+Proving the introduction rule is a little more complicated, due to the quantifier
+and the implication in the premise. We first convert the meta-level universal quantifier
+and implication to their object-level counterparts. Unfolding the definition of
+@{text accpart} and applying the introduction rules for @{text "\<forall>"} and @{text "\<longrightarrow>"}
+yields the following proof state:
+@{subgoals [display]}
+Applying the second assumption produces a proof state with the new local assumption
+@{term "R y x"}, which will then be used to solve the goal @{term "P y"} using the
+first assumption.
+*}
+(*<*)oops(*>*)
+lemma accpartI: "(\<And>y. R y x \<Longrightarrow> accpart R y) \<Longrightarrow> accpart R x"
+ apply (unfold accpart_def)
+ apply (rule allI impI)+
+ proof -
+ case goal1
+ note goal1' = this
+ show ?case
+ apply (rule goal1'(2) [rule_format])
+ proof -
+ case goal1
+ show ?case
+ apply (rule goal1'(1) [OF goal1, THEN spec [where x=P],
+ THEN mp, OF goal1'(2)])
+ done
+ qed
+ qed
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CookBook/Package/Ind_General_Scheme.thy Fri Oct 10 17:13:21 2008 +0200
@@ -0,0 +1,71 @@
+theory Ind_General_Scheme
+imports Main
+begin
+
+section{* The general construction principle *}
+
+text {*
+\label{sec:ind-general-method}
+Before we start with the implementation, it is useful to describe the general
+form of inductive definitions that our package should accept. We closely follow
+the notation for inductive definitions introduced by Schwichtenberg
+\cite{Schwichtenberg-MLCF} for the Minlog system.
+Let $R_1,\ldots,R_n$ be mutually inductive predicates and $\vec{p}$ be
+parameters. Then the introduction rules for $R_1,\ldots,R_n$ may have
+the form
+\[
+\bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ R_{k_{ij}}~\vec{p}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow R_{l_i}~\vec{p}~\vec{t}_i
+\qquad \mbox{for\ } i=1,\ldots,r
+\]
+where $\vec{A}_i$ and $\vec{B}_{ij}$ are formulae not containing $R_1,\ldots,R_n$.
+Note that by not allowing the inductive predicates to occur in $\vec{B}_{ij}$, we make sure
+that all occurrences of the predicates in the premises of the introduction rules are
+\emph{strictly positive}. This condition guarantees the existence of predicates
+that are closed under the introduction rules shown above. The inductive predicates
+$R_1,\ldots,R_n$ can then be defined as follows:
+\[
+\begin{array}{l@ {\qquad}l}
+ R_i \equiv \lambda\vec{p}~\vec{z}_i.~\forall P_1 \ldots P_n.~K_1 \longrightarrow \cdots \longrightarrow K_r \longrightarrow P_i~\vec{z}_i &
+ \mbox{for\ } i=1,\ldots,n \\[1.5ex]
+ \mbox{where} \\
+ K_i \equiv \forall\vec{x}_i.~\vec{A}_i \longrightarrow \left(\forall\vec{y}_{ij}.~\vec{B}_{ij} \longrightarrow
+ P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \longrightarrow P_{l_i}~\vec{t}_i &
+ \mbox{for\ } i=1,\ldots,r
+\end{array}
+\]
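+For example, instantiating this scheme with $n = 1$, $r = 2$ and $\vec{p} = R$
+for the transitive closure from \S\ref{sec:ind-examples} yields exactly the
+definition of @{text trcl} given there:
+\[
+\mathit{trcl} \equiv \lambda R~x~y.~\forall P.~(\forall x.~P~x~x) \longrightarrow
+  (\forall x~y~z.~R~x~y \longrightarrow P~y~z \longrightarrow P~x~z) \longrightarrow P~x~y
+\]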
+The (weak) induction rules for the inductive predicates $R_1,\ldots,R_n$ are
+\[
+\begin{array}{l@ {\qquad}l}
+ R_i~\vec{p}~\vec{z}_i \Longrightarrow I_1 \Longrightarrow \cdots \Longrightarrow I_r \Longrightarrow P_i~\vec{z}_i &
+ \mbox{for\ } i=1,\ldots,n \\[1.5ex]
+ \mbox{where} \\
+ I_i \equiv \bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow P_{l_i}~\vec{t}_i &
+ \mbox{for\ } i=1,\ldots,r
+\end{array}
+\]
+Since $K_i$ and $I_i$ are equivalent modulo conversion between meta-level and object-level
+connectives, it is clear that the proof of the induction theorem is straightforward. We will
+therefore focus on the proof of the introduction rules. When proving the introduction rule
+shown above, we start by unfolding the definition of $R_1,\ldots,R_n$, which yields
+\[
+\bigwedge\vec{x}_i.~\vec{A}_i \Longrightarrow \left(\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{k_{ij}}~\vec{s}_{ij}\right)_{j=1,\ldots,m_i} \Longrightarrow \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{l_i}~\vec{t}_i
+\]
+where $\vec{K}$ abbreviates $K_1,\ldots,K_r$. Applying the introduction rules for
+$\forall$ and $\longrightarrow$ yields a proof state in which we have to prove $P_{l_i}~\vec{t}_i$
+from the additional assumptions $\vec{K}$. When using $K_{l_i}$ (converted to meta-logic format)
+to prove $P_{l_i}~\vec{t}_i$, we get subgoals $\vec{A}_i$ that are trivially solvable by assumption,
+as well as subgoals of the form
+\[
+\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow P_{k_{ij}}~\vec{s}_{ij} \qquad \mbox{for\ } j=1,\ldots,m_i
+\]
+that can be solved using the assumptions
+\[
+\bigwedge\vec{y}_{ij}.~\vec{B}_{ij} \Longrightarrow
+ \forall P_1 \ldots P_n.~\vec{K} \longrightarrow P_{k_{ij}}~\vec{s}_{ij} \qquad \mbox{and} \qquad \vec{K}
+\]
+*}
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CookBook/Package/Ind_Interface.thy Fri Oct 10 17:13:21 2008 +0200
@@ -0,0 +1,449 @@
+theory Ind_Interface
+imports Base Simple_Inductive_Package
+begin
+
+(*<*)
+ML {*
+structure SIP = SimpleInductivePackage
+*}
+(*>*)
+
+section{* The interface *}
+
+text {*
+\label{sec:ind-interface}
+In order to add a new inductive predicate to a theory with the help of our package, the user
+must \emph{invoke} it. For every package, there are essentially two different ways of invoking
+it, which we will refer to as \emph{external} and \emph{internal}. By external
+invocation we mean that the package is called from within a theory document. In this case,
+the type of the inductive predicate, as well as its introduction rules, are given as strings
+by the user. Before the package can actually make the definition, the type and introduction
+rules have to be parsed. In contrast, internal invocation means that the package is called
+by some other package. For example, the function definition package \cite{Krauss-IJCAR06}
+calls the inductive definition package to define the graph of the function. However, it is
+not a good idea for the function definition package to pass the introduction rules for the
+function graph to the inductive definition package as strings. In this case, it is better
+to directly pass the rules to the package as a list of terms, which is more robust than
+handling strings that lack the additional structure of terms. These two ways of
+invoking the package are reflected in its ML programming interface, which consists of two
+functions:
+@{ML_chunk [display] SIMPLE_INDUCTIVE_PACKAGE}
+The function for external invocation of the package is called @{ML_open add_inductive (SIP)},
+whereas the one for internal invocation is called @{ML_open add_inductive_i (SIP)}. Both
+of these functions take as arguments the names and types of the inductive predicates, the
+names and types of their parameters, the actual introduction rules and a \emph{local theory}.
+They return a local theory containing the definition, together with a tuple containing
+the introduction and induction rules, which are stored in the local theory, too.
+
+In contrast to an ordinary theory, which simply consists of a type signature, as
+well as tables for constants, axioms and theorems, a local theory also contains
+additional context information, such as locally fixed variables and local assumptions
+that may be used by the package. The type @{ML_type local_theory} is identical to the
+type of \emph{proof contexts} @{ML_type "Proof.context"}, although not every proof context
+constitutes a valid local theory.
+
+Note that @{ML_open add_inductive_i (SIP)} expects the types
+of the predicates and parameters to be specified using the datatype @{ML_type typ} of Isabelle's
+logical framework, whereas @{ML_open add_inductive (SIP)}
+expects them to be given as optional strings. If no string is
+given for a particular predicate or parameter, this means that the type should be
+inferred by the package. Additional \emph{mixfix syntax} may be associated with
+the predicates and parameters as well. Note that @{ML_open add_inductive_i (SIP)} does not
+allow mixfix syntax to be associated with parameters, since such syntax can only be used
+for parsing. The names of the predicates, parameters and rules are represented by the
+type @{ML_type Name.binding}. Strings can be turned into elements of the type
+@{ML_type Name.binding} using the function
+@{ML [display] "Name.binding : string -> Name.binding"}
+
+Each introduction rule is given as a tuple containing its name, a list of \emph{attributes}
+and a logical formula. Note that the type @{ML_type Attrib.binding} used in the list of
+introduction rules is just a shorthand for the type @{ML_type "Name.binding * Attrib.src list"}.
+The function @{ML_open add_inductive_i (SIP)} expects the formula to be specified using the datatype
+@{ML_type term}, whereas @{ML_open add_inductive (SIP)} expects it to be given as a string.
+An attribute specifies additional actions and transformations that should be applied to
+a theorem, such as storing it in the rule databases used by automatic tactics
+like the simplifier. The code of the package, which will be described in the following
+section, will mostly treat attributes as a black box and just forward them to other
+functions for storing theorems in local theories.
+
+The implementation of the function @{ML_open add_inductive (SIP)} for external invocation
+of the package is quite simple. Essentially, it just parses the introduction rules
+and then passes them on to @{ML_open add_inductive_i (SIP)}:
+@{ML_chunk [display] add_inductive}
+For parsing and type checking the introduction rules, we use the function
+@{ML_open [display] "Specification.read_specification:
+ (Name.binding * string option * mixfix) list -> (*{variables}*)
+ (Attrib.binding * string list) list list -> (*{rules}*)
+ local_theory ->
+ (((Name.binding * typ) * mixfix) list *
+ (Attrib.binding * term list) list) *
+ local_theory"}
+
+During parsing, both predicates and parameters are treated as variables, so
+the lists \verb!preds_syn! and \verb!params_syn! are just appended
+before being passed to @{ML_open read_specification (Specification)}. Note that the format
+for rules supported by @{ML_open read_specification (Specification)} is more general than
+what is required for our package. It allows several rules to be associated
+with one name, and the list of rules can be partitioned into several
+sublists. In order for the list \verb!intro_srcs! of introduction rules
+to be acceptable as an input for @{ML_open read_specification (Specification)}, we first
+have to turn it into a list of singleton lists. This transformation
+has to be reversed later on by applying the function
+@{ML [display] "the_single: 'a list -> 'a"}
+to the list \verb!specs! containing the parsed introduction rules.
+The function @{ML_open read_specification (Specification)} also returns the list \verb!vars!
+of predicates and parameters that contains the inferred types as well.
+This list has to be chopped into the two lists \verb!preds_syn'! and
+\verb!params_syn'! for predicates and parameters, respectively.
+All variables occurring in a rule but not in the list of variables passed to
+@{ML_open read_specification (Specification)} will be bound by a meta-level universal
+quantifier.
+Finally, @{ML_open read_specification (Specification)} also returns another local theory,
+but we can safely discard it. As an example, let us look at how we can use this
+function to parse the introduction rules of the @{text trcl} predicate:
+@{ML_response [display]
+"Specification.read_specification
+ [(Name.binding \"trcl\", NONE, NoSyn),
+ (Name.binding \"r\", SOME \"'a \<Rightarrow> 'a \<Rightarrow> bool\", NoSyn)]
+ [[((Name.binding \"base\", []), [\"trcl r x x\"])],
+ [((Name.binding \"step\", []), [\"trcl r x y \<Longrightarrow> r y z \<Longrightarrow> trcl r x z\"])]]
+ @{context}"
+"((\<dots>,
+ [(\<dots>,
+ [Const (\"all\", \<dots>) $ Abs (\"x\", TFree (\"'a\", \<dots>),
+ Const (\"Trueprop\", \<dots>) $
+ (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 0 $ Bound 0))]),
+ (\<dots>,
+ [Const (\"all\", \<dots>) $ Abs (\"x\", TFree (\"'a\", \<dots>),
+ Const (\"all\", \<dots>) $ Abs (\"y\", TFree (\"'a\", \<dots>),
+ Const (\"all\", \<dots>) $ Abs (\"z\", TFree (\"'a\", \<dots>),
+ Const (\"==>\", \<dots>) $
+ (Const (\"Trueprop\", \<dots>) $
+ (Free (\"trcl\", \<dots>) $ Free (\"r\", \<dots>) $ Bound 2 $ Bound 1)) $
+ (Const (\"==>\", \<dots>) $ \<dots> $ \<dots>))))])]),
+ \<dots>)
+: (((Name.binding * typ) * mixfix) list *
+ (Attrib.binding * term list) list) * local_theory"}
+In the list of variables passed to @{ML_open read_specification (Specification)}, we have
+used the mixfix annotation @{ML NoSyn} to indicate that we do not want to associate any
+mixfix syntax with the variable. Moreover, we have only specified the type of \texttt{r},
+whereas the type of \texttt{trcl} is computed using type inference.
+The local variables \texttt{x}, \texttt{y} and \texttt{z} of the introduction rules
+are turned into bound variables with de Bruijn indices,
+whereas \texttt{trcl} and \texttt{r} remain free variables.
+
+\paragraph{Parsers for theory syntax}
+
+Although the function @{ML_open add_inductive (SIP)} parses terms and types, it still
+cannot be used to invoke the package directly from within a theory document.
+In order to do this, we have to write another parser. Before we describe
+the process of writing parsers for theory syntax in more detail, we first
+show some examples of how we would like to use the inductive definition
+package.
+
+\noindent
+The definition of the transitive closure should look as follows:
+*}
+
+simple_inductive
+ trcl for r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl r x x"
+| step: "trcl r x y \<Longrightarrow> r y z \<Longrightarrow> trcl r x z"
+(*<*)
+thm trcl_def
+thm trcl.induct
+thm base
+thm step
+thm trcl.intros
+
+lemma trcl_strong_induct:
+ assumes trcl: "trcl r x y"
+ and I1: "\<And>x. P x x"
+ and I2: "\<And>x y z. P x y \<Longrightarrow> trcl r x y \<Longrightarrow> r y z \<Longrightarrow> P x z"
+ shows "P x y"
+proof -
+ from trcl
+ have "P x y \<and> trcl r x y"
+ proof induct
+ case (base x)
+ from I1 and trcl.base show ?case ..
+ next
+ case (step x y z)
+ then have "P x y" and "trcl r x y" by simp_all
+ from `P x y` `trcl r x y` `r y z` have "P x z"
+ by (rule I2)
+ moreover from `trcl r x y` `r y z` have "trcl r x z"
+ by (rule trcl.step)
+ ultimately show ?case ..
+ qed
+ then show ?thesis ..
+qed
+(*>*)
+
+text {*
+\noindent
+Even and odd numbers can be defined by
+*}
+
+simple_inductive
+ even and odd
+where
+ even0: "even 0"
+| evenS: "odd n \<Longrightarrow> even (Suc n)"
+| oddS: "even n \<Longrightarrow> odd (Suc n)"
+(*<*)
+thm even_def odd_def
+thm even.induct odd.induct
+thm even0
+thm evenS
+thm oddS
+thm even_odd.intros
+(*>*)
+
+text {*
+\noindent
+The accessible part of a relation can be introduced as follows:
+*}
+
+simple_inductive
+ accpart for r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ accpartI: "(\<And>y. r y x \<Longrightarrow> accpart r y) \<Longrightarrow> accpart r x"
+(*<*)
+thm accpart_def
+thm accpart.induct
+thm accpartI
+(*>*)
+
+text {*
+\noindent
+Moreover, it should also be possible to define the accessible part
+inside a locale fixing the relation @{text r}:
+*}
+
+locale rel =
+ fixes r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+
+simple_inductive (in rel) accpart'
+where
+ accpartI': "\<And>x. (\<And>y. r y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
+(*<*)
+context rel
+begin
+
+thm accpartI'
+thm accpart'.induct
+
+end
+
+thm rel.accpartI'
+thm rel.accpart'.induct
+
+ML {*
+val (result, lthy) = SimpleInductivePackage.add_inductive
+ [(Name.binding "trcl'", NONE, NoSyn)] [(Name.binding "r", SOME "'a \<Rightarrow> 'a \<Rightarrow> bool", NoSyn)]
+ [((Name.binding "base", []), "\<And>x. trcl' r x x"), ((Name.binding "step", []), "\<And>x y z. trcl' r x y \<Longrightarrow> r y z \<Longrightarrow> trcl' r x z")]
+ (TheoryTarget.init NONE @{theory})
+*}
+(*>*)
+
+text {*
+\noindent
+In this context, it is important to note that Isabelle distinguishes
+between \emph{outer} and \emph{inner} syntax. Theory commands such as
+\isa{\isacommand{simple{\isacharunderscore}inductive} $\ldots$ \isacommand{for} $\ldots$ \isacommand{where} $\ldots$}
+belong to the outer syntax, whereas items in quotation marks, in particular
+terms such as @{text [source] "trcl r x x"} and types such as
+@{text [source] "'a \<Rightarrow> 'a \<Rightarrow> bool"} belong to the inner syntax.
+Separating the two layers of outer and inner syntax greatly simplifies
+matters, because the parser for terms and types does not have to know
+anything about the possible syntax of theory commands, and the parser
+for theory commands need not be concerned about the syntactic structure
+of terms and types.
+
+\medskip
+\noindent
+The syntax of the \isa{\isacommand{simple{\isacharunderscore}inductive}} command
+can be described by the following railroad diagram:
+\begin{rail}
+ 'simple\_inductive' target? fixes ('for' fixes)? \\
+ ('where' (thmdecl? prop + '|'))?
+ ;
+\end{rail}
+
+\paragraph{Functional parsers}
+
+For parsing terms and types, Isabelle uses a rather general and sophisticated
+algorithm due to Earley, which is driven by \emph{priority grammars}.
+In contrast, parsers for theory syntax are built up using a set of combinators.
+Functional parsing using combinators is a well-established technique, which
+has been described by many authors, including Paulson \cite{paulson-ML-91}
+and Wadler \cite{Wadler-AFP95}.
+The central idea is that a parser is a function of type @{ML_type "'a list -> 'b * 'a list"},
+where @{ML_type "'a"} is a type of \emph{tokens}, and @{ML_type "'b"} is a type for
+encoding items that the parser has recognized. When a parser is applied to a
+list of tokens whose prefix it can recognize, it returns an encoding of the
+prefix as an element of type @{ML_type "'b"}, together with the suffix of the list
+containing the remaining tokens. Otherwise, the parser raises an exception
+indicating a syntax error. The library for writing functional parsers in
+Isabelle can roughly be split up into two parts. The first part consists of a
+collection of generic parser combinators that are contained in the structure
+@{ML_struct Scan} defined in the file @{ML_file "Pure/General/scan.ML"} in the Isabelle
+sources. While these combinators do not make any assumptions about the concrete
+structure of the tokens used, the second part of the library consists of combinators
+for dealing with specific token types.
+The following is an excerpt from the signature of @{ML_struct Scan}:
+\begin{mytable}
+@{ML "|| : ('a -> 'b) * ('a -> 'b) -> 'a -> 'b"} \\
+@{ML "-- : ('a -> 'b * 'c) * ('c -> 'd * 'e) -> 'a -> ('b * 'd) * 'e"} \\
+@{ML "|-- : ('a -> 'b * 'c) * ('c -> 'd * 'e) -> 'a -> 'd * 'e"} \\
+@{ML "--| : ('a -> 'b * 'c) * ('c -> 'd * 'e) -> 'a -> 'b * 'e"} \\
+@{ML_open "optional: ('a -> 'b * 'a) -> 'b -> 'a -> 'b * 'a" (Scan)} \\
+@{ML_open "repeat: ('a -> 'b * 'a) -> 'a -> 'b list * 'a" (Scan)} \\
+@{ML_open "repeat1: ('a -> 'b * 'a) -> 'a -> 'b list * 'a" (Scan)} \\
+@{ML ">> : ('a -> 'b * 'c) * ('b -> 'd) -> 'a -> 'd * 'c"} \\
+@{ML "!! : ('a * string option -> string) -> ('a -> 'b) -> 'a -> 'b"}
+\end{mytable}
+
+Interestingly, the functions shown above are so generic that they do not
+even rely on the input and output of the parser being a list of tokens.
+If \texttt{p} succeeds, i.e.\ does not raise an exception, the parser
+@{ML_open "p || q" for p q} returns the result of \texttt{p}, otherwise it returns
+the result of \texttt{q}. The parser @{ML_open "p -- q" for p q} first parses an
+item of type @{ML_type "'b"} using \texttt{p}, then passes the remaining tokens
+of type @{ML_type "'c"} to \texttt{q}, which parses an item of type @{ML_type "'d"}
+and returns the remaining tokens of type @{ML_type "'e"}, which are finally
+returned together with a pair of type @{ML_type "'b * 'd"} containing the two
+parsed items. The parsers @{ML_open "p |-- q" for p q} and @{ML_open "p --| q" for p q}
+work in a similar way to the previous one, with the difference that they
+discard the item parsed by the first and the second parser, respectively.
+If \texttt{p} succeeds, the parser @{ML_open "optional p x" for p x (Scan)} returns the result
+of \texttt{p}, otherwise it returns the default value \texttt{x}. The parser
+@{ML_open "repeat p" for p (Scan)} applies \texttt{p} as often as it can, returning a possibly
+empty list of parsed items. The parser @{ML_open "repeat1 p" for p (Scan)} is similar,
+but requires \texttt{p} to succeed at least once. The parser
+@{ML_open "p >> f" for p f} uses \texttt{p} to parse an item of type @{ML_type "'b"}, to which
+it applies the function \texttt{f} yielding a value of type @{ML_type "'d"}, which
+is returned together with the remaining tokens of type @{ML_type "'c"}.
+Finally, @{ML "!!"} is used for transforming exceptions produced by parsers.
+If \texttt{p} raises an exception indicating that it cannot parse a given input,
+then an enclosing parser such as
+@{ML_open [display] "q -- p || r" for p q r}
+will try the alternative parser \texttt{r}. By writing
+@{ML_open [display] "q -- !! err p || r" for err p q r}
+instead, one can achieve that a failure of \texttt{p} causes the whole parser to abort.
+The @{ML "!!"} operator is similar to the \emph{cut} operator in Prolog, which prevents
+the interpreter from backtracking. The \texttt{err} function supplied as an argument
+to @{ML "!!"} can be used to produce an error message depending on the current
+state of the parser, as well as the optional error message returned by \texttt{p}.
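+For example, an error function for the parser above could be written as follows
+(a hypothetical sketch, using the library function @{ML the_default}):
+@{ML [display] "fn (_, msg) => \"p failed: \" ^ the_default \"syntax error\" msg"}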
+
+So far, we have only looked at combinators that construct more complex parsers
+from simpler parsers. In order for these combinators to be useful, we also need
+some basic parsers. As an example, we consider the following two parsers
+defined in @{ML_struct Scan}:
+\begin{mytable}
+@{ML_open "one: ('a -> bool) -> 'a list -> 'a * 'a list" (Scan)} \\
+@{ML_open "$$ : string -> string list -> string * string list"}
+\end{mytable}
+The parser @{ML_open "one pred" for pred (Scan)} parses exactly one token that
+satisfies the predicate \texttt{pred}, whereas @{ML_open "$$ s" for s} only
+accepts a token that equals the string \texttt{s}. Note that we can easily
+express @{ML_open "$$ s" for s} using @{ML_open "one" (Scan)}:
+@{ML_open [display] "one (fn s' => s' = s)" for s (Scan)}
+As an example, let us look at how we can use @{ML "$$"} and @{ML "--"} to parse
+the prefix ``\texttt{hello}'' of the character list ``\texttt{hello world}'':
+@{ML_response [display]
+"($$ \"h\" -- $$ \"e\" -- $$ \"l\" -- $$ \"l\" -- $$ \"o\")
+[\"h\", \"e\", \"l\", \"l\", \"o\", \" \", \"w\", \"o\", \"r\", \"l\", \"d\"]"
+"(((((\"h\", \"e\"), \"l\"), \"l\"), \"o\"), [\" \", \"w\", \"o\", \"r\", \"l\", \"d\"])
+: ((((string * string) * string) * string) * string) * string list"}
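+The nested pairs produced by @{ML "--"} can be flattened afterwards using
+@{ML ">>"}. For instance, the following parser (a small sketch along the lines
+of the previous example) returns the two parsed strings as one string, by
+applying the concatenation operator @{ML "op ^"} to the pair:
+@{ML_response [display]
+"(($$ \"h\" -- $$ \"e\") >> op ^) [\"h\", \"e\", \"y\"]"
+"(\"he\", [\"y\"])
+: string * string list"}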
+Most of the time, however, we will have to deal with tokens that are not just strings.
+The parsers for the theory syntax, as well as the parsers for the argument syntax
+of proof methods and attributes, use the token type @{ML_type OuterParse.token},
+which is identical to the type @{ML_type OuterLex.token}.
+The parser functions for the theory syntax are contained in the structure
+@{ML_struct OuterParse} defined in the file @{ML_file "Pure/Isar/outer_parse.ML"}.
+In our parser, we will use the following functions:
+\begin{mytable}
+@{ML_open "$$$ : string -> token list -> string * token list" (OuterParse)} \\
+@{ML_open "enum1: string -> (token list -> 'a * token list) -> token list ->
+ 'a list * token list" (OuterParse)} \\
+@{ML_open "prop: token list -> string * token list" (OuterParse)} \\
+@{ML_open "opt_target: token list -> string option * token list" (OuterParse)} \\
+@{ML_open "fixes: token list ->
+ (Name.binding * string option * mixfix) list * token list" (OuterParse)} \\
+@{ML_open "for_fixes: token list ->
+ (Name.binding * string option * mixfix) list * token list" (OuterParse)} \\
+@{ML_open "!!! : (token list -> 'a) -> token list -> 'a" (OuterParse)}
+\end{mytable}
+The parsers @{ML_open "$$$" (OuterParse)} and @{ML_open "!!!" (OuterParse)} are
+defined using the parsers @{ML_open "one" (Scan)} and @{ML "!!"} from
+@{ML_struct Scan}.
+The parser @{ML_open "enum1 s p" for s p (OuterParse)} parses a non-emtpy list of items
+recognized by the parser \texttt{p}, where the items are separated by \texttt{s}.
+A proposition can be parsed using the function @{ML_open prop (OuterParse)}.
+Essentially, a proposition is just a string or an identifier, but using the
+specific parser function @{ML_open prop (OuterParse)} leads to more instructive
+error messages, since the parser will complain that a proposition was expected
+when something other than a string or identifier is found.
+An optional locale target specification of the form \isa{(\isacommand{in}\ $\ldots$)}
+can be parsed using @{ML_open opt_target (OuterParse)}.
+The lists of names of the predicates and parameters, together with optional
+types and syntax, are parsed using the functions @{ML_open "fixes" (OuterParse)}
+and @{ML_open for_fixes (OuterParse)}, respectively.
+In addition, the following function from @{ML_struct SpecParse} for parsing
+an optional theorem name and attribute, followed by a delimiter, will be useful:
+\begin{mytable}
+@{ML_open "opt_thm_name:
+ string -> token list -> Attrib.binding * token list" (SpecParse)}
+\end{mytable}
+We now have all the necessary tools to write the parser for our
+\isa{\isacommand{simple{\isacharunderscore}inductive}} command:
+@{ML_chunk [display] syntax}
+The definition of the parser \verb!ind_decl! closely follows the railroad
+diagram shown above. In order to make the code more readable, the structures
+@{ML_struct OuterParse} and @{ML_struct OuterKeyword} are abbreviated by
+\texttt{P} and \texttt{K}, respectively. Note how the parser combinator
+@{ML_open "!!!" (OuterParse)} is used: once the keyword \texttt{where}
+has been parsed, a non-empty list of introduction rules must follow.
+Had we not used the combinator @{ML_open "!!!" (OuterParse)}, a
+\texttt{where} not followed by a list of rules would have caused the parser
+to respond with the somewhat misleading error message
+\begin{verbatim}
+ Outer syntax error: end of input expected, but keyword where was found
+\end{verbatim}
+rather than with the more instructive message
+\begin{verbatim}
+ Outer syntax error: proposition expected, but terminator was found
+\end{verbatim}
+
+Once all arguments of the command have been parsed, we apply the function
+@{ML_open add_inductive (SimpleInductivePackage)}, which yields a local theory
+transformer of type @{ML_type "local_theory -> local_theory"}. Commands in
+Isabelle/Isar are realized by transition transformers of type
+@{ML_type [display] "Toplevel.transition -> Toplevel.transition"}
+We can turn a local theory transformer into a transition transformer by using
+the function
+@{ML [display] "Toplevel.local_theory : string option ->
+ (local_theory -> local_theory) ->
+ Toplevel.transition -> Toplevel.transition"}
+which, apart from the local theory transformer, takes an optional name of a locale
+to be used as a basis for the local theory. The whole parser for our command has type
+@{ML_type [display] "OuterLex.token list ->
+ (Toplevel.transition -> Toplevel.transition) * OuterLex.token list"}
+which is abbreviated by @{ML_type OuterSyntax.parser_fn}. The new command can be added
+to the system via the function
+@{ML [display] "OuterSyntax.command :
+ string -> string -> OuterKeyword.T -> OuterSyntax.parser_fn -> unit"}
+which imperatively updates the parser table behind the scenes. In addition to the parser, this
+function takes two strings representing the name of the command and a short description,
+as well as an element of type @{ML_type OuterKeyword.T} describing which \emph{kind} of
+command we intend to add. Since we want to add a command for declaring new concepts,
+we choose the kind @{ML "OuterKeyword.thy_decl"}. Other kinds include
+@{ML "OuterKeyword.thy_goal"}, which is similar to @{ML_open thy_decl (OuterKeyword)},
+but requires the user to prove a goal before making the declaration, or
+@{ML "OuterKeyword.diag"}, which corresponds to a purely diagnostic command that does
+not change the context. For example, the @{ML_open thy_goal (OuterKeyword)} kind is used
+by the \isa{\isacommand{function}} command \cite{Krauss-IJCAR06}, which requires the user
+to prove that a given set of equations is non-overlapping and covers all cases. The kind
+of the command should be chosen with care, since selecting the wrong one can cause strange
+behaviour of the user interface, such as failure of the undo mechanism.
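+
+As an illustration, a minimal diagnostic command that merely prints a message
+could be registered as follows (a sketch, assuming the standard function
+\texttt{Toplevel.imperative} for turning an ML function into a transition
+transformer):
+\begin{verbatim}
+  val _ = OuterSyntax.command "hello" "print a greeting"
+    OuterKeyword.diag (Scan.succeed
+      (Toplevel.imperative (fn () => writeln "Hello world")))
+\end{verbatim}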
+*}
+
+(*<*)
+end
+(*>*)
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CookBook/Package/Ind_Intro.thy Fri Oct 10 17:13:21 2008 +0200
@@ -0,0 +1,104 @@
+theory Ind_Intro
+imports Main
+begin
+
+chapter {* How to write a definitional package *}
+
+section{* Introduction *}
+
+text {*
+\begin{flushright}
+ {\em
+ ``My thesis is that programming is not at the bottom of the intellectual \\
+ pyramid, but at the top. It's creative design of the highest order. It \\
+ isn't monkey or donkey work; rather, as Edsger Dijkstra famously \\
+ claimed, it's amongst the hardest intellectual tasks ever attempted.''} \\[1ex]
+ Richard Bornat, In defence of programming
+\end{flushright}
+
+\medskip
+
+\noindent
+Higher order logic, as implemented in Isabelle/HOL, is based on just a few primitive
+constants, like equality, implication, and the description operator, whose properties are
+described as axioms. All other concepts, such as inductive predicates, datatypes, or
+recursive functions are \emph{defined} using these constants, and the desired
+properties, for example induction theorems, or recursion equations are \emph{derived}
+from the definitions by a \emph{formal proof}. Since it would be very tedious
+for the average user to define complex inductive predicates or datatypes ``by hand''
+just using the primitive operators of higher order logic, Isabelle/HOL already contains
+a number of \emph{packages} automating such tedious work. Thanks to those packages,
+the user can give a high-level specification, like a list of introduction rules or
+constructors, and the package then does all the low-level definitions and proofs
+behind the scenes. The packages are written in Standard ML, the implementation
+language of Isabelle, and can be invoked by the user from within theory documents
+written in the Isabelle/Isar language via specific commands. Most of the time,
+when using Isabelle for applications, users do not have to worry about the inner
+workings of packages, since they can just use the packages that are already part
+of the Isabelle distribution. However, when developing a general theory that is
+intended to be applied by other users, one may need to write a new package from
+scratch. Recent examples of such packages include the verification environment
+for sequential imperative programs by Schirmer \cite{Schirmer-LPAR04}, the
+package for defining general recursive functions by Krauss \cite{Krauss-IJCAR06},
+as well as the nominal datatype package by Berghofer and Urban \cite{Urban-Berghofer06}.
+
+The scientific value of implementing a package should not be underestimated:
+it is often more than just the automation of long-established scientific
+results. Of course, a carefully developed theory on paper is indispensable
+as a basis. However, without an implementation, such a theory will only be of
+very limited practical use, since only an implementation enables other users
+to apply the theory on a larger scale without too much effort. Moreover,
+implementing a package is a bit like formalizing a paper proof in a theorem
+prover. In the literature, there are many examples of paper proofs that
+turned out to be incomplete or even faulty, and doing a formalization is
+a good way of uncovering such errors and ensuring that a proof is really
+correct. The same applies to the theory underlying definitional packages.
+For example, the general form of some complicated induction rules for nominal
+datatypes turned out to be quite hard to get right on the first try, so
+an implementation is an excellent way to find out whether the rules really
+work in practice.
+
+Writing a package is a particularly difficult task for users who are new
+to Isabelle, since its programming interface consists of thousands of functions.
+Rather than just listing all those functions, we give a step-by-step tutorial
+for writing a package, using an example that is still simple enough to be
+easily understandable, but at the same time sufficiently complex to demonstrate
+enough of Isabelle's interesting features. As a running example, we have
+chosen a rather simple package for defining inductive predicates. To keep
+things simple, we will not use the general Knaster-Tarski fixpoint
+theorem on complete lattices, which forms the basis of Isabelle's standard
+inductive definition package originally due to Paulson \cite{Paulson-ind-defs}.
+Instead, we will use a simpler \emph{impredicative} (i.e.\ involving
+quantification on predicate variables) encoding of inductive predicates
+suggested by Melham \cite{Melham:1992:PIR}. Due to its simplicity, this
+package will necessarily have reduced functionality. It neither supports
+introduction rules involving arbitrary monotone operators, nor does
+it prove case analysis (or inversion) rules. Moreover, it only proves a weaker
+form of the rule induction theorem.
+
+Reading this article does not require any prior knowledge of Isabelle's programming
+interface. However, we assume that the reader is already familiar with writing
+proofs in Isabelle/HOL using the Isar language. For further information on
+this topic, consult the book by Nipkow, Paulson, and Wenzel
+\cite{isa-tutorial}. Moreover, in order to understand the pieces of
+code given in this tutorial, some familiarity with the basic concepts of the Standard
+ML programming language, as described for example in the textbook by Paulson
+\cite{paulson-ml2}, is required as well.
+
+The rest of this article is structured as follows. In \S\ref{sec:ind-examples},
+we will illustrate the ``manual'' definition of inductive predicates using
+some examples. Starting from these examples, we will describe in
+\S\ref{sec:ind-general-method} how the construction works in general.
+The following sections are then dedicated to the implementation of a
+package that carries out the construction of such inductive predicates.
+First of all, a parser for a high-level notation for specifying inductive
+predicates via a list of introduction rules is developed in \S\ref{sec:ind-interface}.
+Once the specification has been parsed, a suitable primitive definition must be
+added to the theory, which will be explained in \S\ref{sec:ind-definition}.
+Finally, \S\ref{sec:ind-proofs} will focus on methods for proving introduction
+and induction rules from the definitions introduced in \S\ref{sec:ind-definition}.
+
+\nocite{Bornat-lecture}
+*}
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CookBook/Package/Simple_Inductive_Package.thy Fri Oct 10 17:13:21 2008 +0200
@@ -0,0 +1,8 @@
+theory Simple_Inductive_Package
+imports Main
+uses ("simple_inductive_package.ML")
+begin
+
+use_chunks "simple_inductive_package.ML"
+
+end
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CookBook/Package/simple_inductive_package.ML Fri Oct 10 17:13:21 2008 +0200
@@ -0,0 +1,170 @@
+(* @chunk SIMPLE_INDUCTIVE_PACKAGE *)
+signature SIMPLE_INDUCTIVE_PACKAGE =
+sig
+ val add_inductive_i:
+ ((Name.binding * typ) * mixfix) list -> (*{predicates}*)
+ (Name.binding * typ) list -> (*{parameters}*)
+ (Attrib.binding * term) list -> (*{rules}*)
+ local_theory -> (thm list * thm list) * local_theory
+ val add_inductive:
+ (Name.binding * string option * mixfix) list -> (*{predicates}*)
+ (Name.binding * string option * mixfix) list -> (*{parameters}*)
+ (Attrib.binding * string) list -> (*{rules}*)
+ local_theory -> (thm list * thm list) * local_theory
+end;
+(* @end *)
+
+structure SimpleInductivePackage: SIMPLE_INDUCTIVE_PACKAGE =
+struct
+
+fun add_inductive_i preds_syn params intrs lthy =
+ let
+ val params' = map (fn (p, T) => Free (Name.name_of p, T)) params;
+ val preds = map (fn ((R, T), _) =>
+ list_comb (Free (Name.name_of R, T), params')) preds_syn;
+ val Tss = map (binder_types o fastype_of) preds;
+
+ (* making the definition *)
+
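+    (* convert the meta-level connectives in each introduction rule
+       to object-level connectives *)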
+ val intrs' = map
+ (ObjectLogic.atomize_term (ProofContext.theory_of lthy) o snd) intrs;
+
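+    (* object-level universal quantification: mk_all x P yields ALL x. P *)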
+ fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P;
+
+ val (defs, lthy1) = fold_map (fn ((((R, _), syn), pred), Ts) =>
+ let val zs = map Free (Variable.variant_frees lthy intrs'
+ (map (pair "z") Ts))
+ in
+ LocalTheory.define Thm.internalK
+ ((R, syn), (Attrib.no_binding, fold_rev lambda (params' @ zs)
+ (fold_rev mk_all preds (fold_rev (curry HOLogic.mk_imp)
+ intrs' (list_comb (pred, zs)))))) #>> snd #>> snd
+ end) (preds_syn ~~ preds ~~ Tss) lthy;
+
+ val (_, lthy2) = Variable.add_fixes (map (Name.name_of o fst) params) lthy1;
+
+
+ (* proving the induction rules *)
+
+ val (Pnames, lthy3) =
+ Variable.variant_fixes (replicate (length preds) "P") lthy2;
+ val Ps = map (fn (s, Ts) => Free (s, Ts ---> HOLogic.boolT))
+ (Pnames ~~ Tss);
+ val cPs = map (cterm_of (ProofContext.theory_of lthy3)) Ps;
+ val intrs'' = map (subst_free (preds ~~ Ps) o snd) intrs;
+
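+    (* instantiate the conclusion variable of the theorem "spec" with the
+       certified term ct, for eliminating object-level universal quantifiers *)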
+ fun inst_spec ct = Drule.instantiate'
+ [SOME (ctyp_of_term ct)] [NONE, SOME ct] spec;
+
+ fun prove_indrule ((R, P), Ts) =
+ let
+ val (znames, lthy4) =
+ Variable.variant_fixes (replicate (length Ts) "z") lthy3;
+ val zs = map Free (znames ~~ Ts)
+ in
+ Goal.prove lthy4 []
+ [HOLogic.mk_Trueprop (list_comb (R, zs))]
+ (Logic.list_implies (intrs'',
+ HOLogic.mk_Trueprop (list_comb (P, zs))))
+ (fn {prems, ...} => EVERY
+ ([ObjectLogic.full_atomize_tac 1,
+ cut_facts_tac prems 1,
+ rewrite_goals_tac defs] @
+ map (fn ct => dtac (inst_spec ct) 1) cPs @
+ [assume_tac 1])) |>
+ singleton (ProofContext.export lthy4 lthy1)
+ end;
+
+ val indrules = map prove_indrule (preds ~~ Ps ~~ Tss);
+
+
+ (* proving the introduction rules *)
+
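+    (* all_elims: eliminate the leading universal quantifiers of a theorem
+       by instantiating them with the given certified terms; imp_elims:
+       discharge the premises of a theorem with the given theorems *)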
+ val all_elims = fold (fn ct => fn th => th RS inst_spec ct);
+ val imp_elims = fold (fn th => fn th' => [th', th] MRS mp);
+
+ fun prove_intr (i, (_, r)) =
+ Goal.prove lthy2 [] [] r
+ (fn {prems, context = ctxt} => EVERY
+ [ObjectLogic.rulify_tac 1,
+ rewrite_goals_tac defs,
+ REPEAT (resolve_tac [allI, impI] 1),
+ SUBPROOF (fn {params, prems, context = ctxt', ...} =>
+ let
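+            (* prems2 are the assumptions corresponding to the introduction
+               rules, prems1 the premises of the rule being proved; params2
+               are the predicate variables, params1 the universally
+               quantified variables of the rule *)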
+ val (prems1, prems2) =
+ chop (length prems - length intrs) prems;
+ val (params1, params2) =
+ chop (length params - length preds) params
+ in
+ rtac (ObjectLogic.rulify
+ (all_elims params1 (nth prems2 i))) 1 THEN
+ EVERY (map (fn prem =>
+ SUBPROOF (fn {prems = prems', concl, ...} =>
+ let
+ val prem' = prems' MRS prem;
+ val prem'' = case prop_of prem' of
+ _ $ (Const (@{const_name All}, _) $ _) =>
+ prem' |> all_elims params2 |>
+ imp_elims prems2
+ | _ => prem'
+ in rtac prem'' 1 end) ctxt' 1) prems1)
+ end) ctxt 1]) |>
+ singleton (ProofContext.export lthy2 lthy1);
+
+ val intr_ths = map_index prove_intr intrs;
+
+
+ (* storing the theorems *)
+
+ val mut_name = space_implode "_" (map (Name.name_of o fst o fst) preds_syn);
+ val case_names = map (Name.name_of o fst o fst) intrs
+ in
+ lthy1 |>
+ LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
+ ((Name.qualified mut_name a, atts), [([th], [])]))
+ (intrs ~~ intr_ths)) |->
+ (fn intr_thss => LocalTheory.note Thm.theoremK
+ ((Name.qualified mut_name (Name.binding "intros"), []), maps snd intr_thss)) |>>
+ snd ||>>
+ (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
+ ((Name.qualified (Name.name_of R) (Name.binding "induct"),
+ [Attrib.internal (K (RuleCases.case_names case_names)),
+ Attrib.internal (K (RuleCases.consumes 1)),
+ Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
+ (preds_syn ~~ indrules)) #>> maps snd)
+ end;
+
+(* @chunk add_inductive *)
+fun add_inductive preds_syn params_syn intro_srcs lthy =
+ let
+ val ((vars, specs), _) = Specification.read_specification
+ (preds_syn @ params_syn) (map (fn (a, s) => [(a, [s])]) intro_srcs)
+ lthy;
+ val (preds_syn', params_syn') = chop (length preds_syn) vars;
+ val intrs = map (apsnd the_single) specs
+ in
+ add_inductive_i preds_syn' (map fst params_syn') intrs lthy
+ end;
+(* @end *)
+
+
+(* outer syntax *)
+
+(* @chunk syntax *)
+local structure P = OuterParse and K = OuterKeyword in
+
+val ind_decl =
+ P.opt_target --
+ P.fixes -- P.for_fixes --
+ Scan.optional (P.$$$ "where" |--
+ P.!!! (P.enum1 "|" (SpecParse.opt_thm_name ":" -- P.prop))) [] >>
+ (fn (((loc, preds), params), specs) =>
+ Toplevel.local_theory loc (add_inductive preds params specs #> snd));
+
+val _ = OuterSyntax.command "simple_inductive" "define inductive predicates"
+ K.thy_decl ind_decl;
+
+end;
+(* @end *)
+
+end;