diff -r 8939b8fd8603 -r 069d525f8f1d ProgTutorial/Package/Ind_Interface.thy
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/ProgTutorial/Package/Ind_Interface.thy	Thu Mar 19 13:28:16 2009 +0100
@@ -0,0 +1,454 @@
theory Ind_Interface
imports "../Base" "../Parsing" Ind_Prelims Simple_Inductive_Package
begin

section {* Parsing and Typing the Specification *}

text {*
  To be able to write down the specification in Isabelle, we have to introduce
  a new command (see Section~\ref{sec:newcommand}). As the keyword for the
  new command we chose \simpleinductive{}. We want the package to support
  some ``advanced'' features: First, it should be able to cope with
  specifications inside locales. For example, it should be possible to declare
*}

locale rel =
  fixes R :: "'a \ 'a \ bool"

text {*
  and then define the transitive closure and the accessible part as follows:
*}


simple_inductive (in rel)
  trcl'
where
  base: "trcl' x x"
| step: "trcl' x y \ R y z \ trcl' x z"

simple_inductive (in rel)
  accpart'
where
  accpartI: "(\y. R y x \ accpart' y) \ accpart' x"

text {*
  Second, we want to allow the user to specify fixed parameters.
  Remember that in the previous section we stated that the user can give the
  specification for the transitive closure of a relation @{text R} as
*}

simple_inductive
  trcl\\ :: "('a \ 'a \ bool) \ 'a \ 'a \ bool"
where
  base: "trcl\\ R x x"
| step: "trcl\\ R x y \ R y z \ trcl\\ R x z"

text {*
  Note that there is no locale given in this specification---the parameter
  @{text "R"} therefore needs to be included explicitly in @{term trcl\\}, but
  stays fixed throughout the specification. The problem with this way of
  stating the specification for the transitive closure is that it derives the
  following induction principle.

  \begin{center}\small
  \mprset{flushleft}
  \mbox{\inferrule{
    @{thm_style prem1 trcl\\.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
    @{thm_style prem2 trcl\\.induct[where P=P, where z=R, where za=x, where zb=y]}\\\\
    @{thm_style prem3 trcl\\.induct[where P=P, where z=R, where za=x, where zb=y]}}
    {@{thm_style concl trcl\\.induct[where P=P, where z=R, where za=x, where zb=y]}}}
  \end{center}

  But this does not correspond to the induction principle we derived by hand, which
  was

  \begin{center}\small
  \mprset{flushleft}
  \mbox{\inferrule{
    @{thm_style prem1 trcl_induct[no_vars]}\\\\
    @{thm_style prem2 trcl_induct[no_vars]}\\\\
    @{thm_style prem3 trcl_induct[no_vars]}}
    {@{thm_style concl trcl_induct[no_vars]}}}
  \end{center}

  The difference is that in the one derived by hand the relation @{term R} is not
  a parameter of the proposition @{term P} to be proved and it is also not universally
  quantified in the second and third premise. The point is that the parameter @{term R}
  stays fixed throughout the definition and we do not want to regard it as an ``ordinary''
  argument of the transitive closure, but as one that can be freely instantiated.
  In order to recognise such parameters, we have to extend the specification
  to include a mechanism to state fixed parameters. The user should be able
  to write

*}

simple_inductive
  trcl'' for R :: "'a \ 'a \ bool"
where
  base: "trcl'' R x x"
| step: "trcl'' R x y \ R y z \ trcl'' R x z"

simple_inductive
  accpart'' for R :: "'a \ 'a \ bool"
where
  accpartI: "(\y.
    R y x \ accpart'' R y) \ accpart'' R x"

text {*
  \begin{figure}[t]
  \begin{isabelle}
  \railnontermfont{\rmfamily\itshape}
  \railterm{simpleinductive,where,for}
  \railalias{simpleinductive}{\simpleinductive{}}
  \railalias{where}{\isacommand{where}}
  \railalias{for}{\isacommand{for}}
  \begin{rail}
  simpleinductive target? fixes (for fixes)? \\
  (where (thmdecl? prop + '|'))?
  ;
  \end{rail}
  \end{isabelle}
  \caption{A railroad diagram describing the syntax of \simpleinductive{}.
  The \emph{target} indicates an optional locale; the \emph{fixes} are an
  \isacommand{and}-separated list of names for the inductive predicates (they
  can also contain typing and syntax annotations); similarly, the \emph{fixes}
  after \isacommand{for} indicate the fixed parameters; \emph{prop} stands for an
  introduction rule with an optional theorem declaration (\emph{thmdecl}).
  \label{fig:railroad}}
  \end{figure}
*}

text {*
  This leads directly to the railroad diagram shown in
  Figure~\ref{fig:railroad} for the syntax of \simpleinductive{}. This diagram
  more or less translates directly into the parser:

  @{ML_chunk [display,gray] parser}

  which we described in Section~\ref{sec:parsingspecs}. If we feed into the
  parser the following string (which corresponds to our definition of @{term even} and
  @{term odd}):

  @{ML_response [display,gray]
"let
  val input = filtered_input
     (\"even and odd \" ^
      \"where \" ^
      \" even0[intro]: \\\"even 0\\\" \" ^
      \"| evenS[intro]: \\\"odd n \ even (Suc n)\\\" \" ^
      \"| oddS[intro]: \\\"even n \ odd (Suc n)\\\"\")
in
  parse spec_parser input
end"
"(([(even, NONE, NoSyn), (odd, NONE, NoSyn)],
   [((even0,\), \"\\^E\\^Ftoken\\^Eeven 0\\^E\\^F\\^E\"),
    ((evenS,\), \"\\^E\\^Ftoken\\^Eodd n \ even (Suc n)\\^E\\^F\\^E\"),
    ((oddS,\), \"\\^E\\^Ftoken\\^Eeven n \ odd (Suc n)\\^E\\^F\\^E\")]), [])"}
*}


text {*
  then we get back a locale (in this case @{ML NONE}), the predicates (with type
  and syntax annotations), the parameters (similar to the predicates) and
  the specifications of the introduction rules.



  This is all the information we
  need for calling the package and setting up the keyword. The latter is
  done in Lines 6 and 7 in the code below.

  @{ML_chunk [display,gray,linenos] syntax}

  We call @{ML OuterSyntax.command} with the kind-indicator @{ML
  OuterKeyword.thy_decl}, since the package does not need to open up any goal
  state (see Section~\ref{sec:newcommand}). Note that the predicates and
  parameters are at the moment only some ``naked'' variables: they have no
  type yet (even if we annotate them with types) and they are also not yet defined
  constants (which the predicates will eventually be). In Lines 1 to 4 we
  gather the information from the parser to be processed further. The locale
  is passed as an argument to the function @{ML
  Toplevel.local_theory}.\footnote{FIXME Is this already described?} The other
  arguments, i.e.~the predicates, parameters and intro rule specifications,
  are passed to the function @{ML add_inductive in SimpleInductivePackage}
  (Line 4).

  We now come to the second subtask of the package, namely transforming the
  parser output into some internal data structures that can be processed further.
  Remember that at the moment the introduction rules are just strings, and even
  if the predicates and parameters contain some typing annotations, these
  are not yet in any way reflected in the introduction rules.
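
  We can see this directly by taking the parser output apart. The following is
  only a small sketch (the value names @{text pred_specs} and @{text rule_specs}
  are ours, and the input is cut down to a single rule); note that the rules
  come back as nothing more than strings:

  @{ML [display,gray]
"let
  val input = filtered_input \"even and odd where even0: \\\"even 0\\\"\"
  val ((pred_specs, rule_specs), _) = parse spec_parser input
in
  map snd rule_specs
end"}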
  So the task of
  @{ML add_inductive in SimpleInductivePackage} is to transform the strings
  into properly typed terms. For this it can use the function
  @{ML read_spec in Specification}. This function takes some constants
  with optional typing annotations and some rule specifications, and attempts to
  find types for them according to the given type constraints and the type
  constraints imposed by the surrounding local theory. However, this function is a bit
  too general for our purposes: we want each introduction rule to have only one
  name (for example @{text even0} or @{text evenS}), if a name is given at all.
  The function @{ML read_spec in Specification}, however, allows more
  than one rule per name. Since it is quite convenient to rely on this function (instead of
  building our own), we just quickly write a wrapper function that translates
  between our specific format and the general format expected by
  @{ML read_spec in Specification}. The code of this wrapper is as follows:

  @{ML_chunk [display,gray,linenos] read_specification}

  It takes a list of constants, a list of rule specifications and a local theory
  as input. The transformation of the rule specifications happens in Line 3; the wrapper
  then calls @{ML read_spec in Specification}, transforms the now typed rule specifications
  back into our format and returns the typed parameters together with the typed rule
  specifications.


  @{ML_chunk [display,gray,linenos] add_inductive}


  In order to add a new inductive predicate to a theory with the help of our
  package, the user must \emph{invoke} it. For every package, there are
  essentially two different ways of invoking it, which we will refer to as
  \emph{external} and \emph{internal}. By external invocation we mean that the
  package is called from within a theory document. In this case, the
  specification of the inductive predicate, including type annotations and
  introduction rules, is given as strings by the user. Before the package can
  actually make the definition, the types and introduction rules have to be
  parsed. In contrast, internal invocation means that the package is called by
  some other package. For example, the function definition package
  calls the inductive definition package to define the
  graph of the function. However, it is not a good idea for the function
  definition package to pass the introduction rules for the function graph to
  the inductive definition package as strings. In this case, it is better to
  directly pass the rules to the package as a list of terms, which is more
  robust than handling strings that are lacking the additional structure of
  terms. These two ways of invoking the package are reflected in its ML
  programming interface, which consists of two functions:


  @{ML_chunk [display,gray] SIMPLE_INDUCTIVE_PACKAGE}
*}

text {*
  (FIXME: explain Binding.binding; Attrib.binding somewhere else)


  The function for external invocation of the package is called @{ML
  add_inductive in SimpleInductivePackage}, whereas the one for internal
  invocation is called @{ML add_inductive_i in SimpleInductivePackage}. Both
  of these functions take as arguments the names and types of the inductive
  predicates, the names and types of their parameters, the actual introduction
  rules and a \emph{local theory}. They return a local theory containing the
  definition and the induction principle, as well as the introduction rules.
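
  Concretely, each introduction rule is handed over together with a name and a
  (possibly empty) list of attributes. The following is a small sketch of such
  a rule name without attributes (the name @{text even0} is only illustrative);
  we come back to the types @{ML_type Binding.binding} and @{ML_type Attrib.binding}
  in a moment:

  @{ML [display,gray] "(Binding.name \"even0\", []) : Attrib.binding"}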

  Note that @{ML add_inductive_i in SimpleInductivePackage} expects
  the types of the predicates and parameters to be specified using the
  datatype @{ML_type typ} of Isabelle's logical framework, whereas @{ML
  add_inductive in SimpleInductivePackage} expects them to be given as
  optional strings. If no string is given for a particular predicate or
  parameter, this means that the type should be inferred by the
  package.


  Additional \emph{mixfix syntax} may be associated with the
  predicates and parameters as well. Note that @{ML add_inductive_i in
  SimpleInductivePackage} does not allow mixfix syntax to be associated with
  parameters, since it can only be used for parsing.\footnote{FIXME: why is it there then?}
  The names of the
  predicates, parameters and rules are represented by the type @{ML_type
  Binding.binding}. Strings can be turned into elements of the type @{ML_type
  Binding.binding} using the function @{ML [display] "Binding.name : string ->
  Binding.binding"} Each introduction rule is given as a tuple containing its
  name, a list of \emph{attributes} and a logical formula. Note that the type
  @{ML_type Attrib.binding} used in the list of introduction rules is just a
  shorthand for the type @{ML_type "Binding.binding * Attrib.src list"}. The
  function @{ML add_inductive_i in SimpleInductivePackage} expects the formula
  to be specified using the datatype @{ML_type term}, whereas @{ML
  add_inductive in SimpleInductivePackage} expects it to be given as a string.
  An attribute specifies additional actions and transformations that should be
  applied to a theorem, such as storing it in the rule databases used by
  automatic tactics like the simplifier. The code of the package, which will
  be described in the following section, will mostly treat attributes as a
  black box and just forward them to other functions for storing theorems in
  local theories. The implementation of the function @{ML add_inductive in
  SimpleInductivePackage} for external invocation of the package is quite
  simple. Essentially, it just parses the introduction rules and then passes
  them on to @{ML add_inductive_i in SimpleInductivePackage}:

  @{ML_chunk [display] add_inductive}

  For parsing and type checking the introduction rules, we use the function

  @{ML [display] "Specification.read_specification:
  (Binding.binding * string option * mixfix) list ->  (*{variables}*)
  (Attrib.binding * string list) list ->  (*{rules}*)
  local_theory ->
  (((Binding.binding * typ) * mixfix) list *
   (Attrib.binding * term list) list) *
  local_theory"}
*}

text {*
  During parsing, both predicates and parameters are treated as variables, so
  the lists \verb!preds_syn! and \verb!params_syn! are just appended
  before being passed to @{ML read_spec in Specification}. Note that the format
  for rules supported by @{ML read_spec in Specification} is more general than
  what is required for our package. It allows several rules to be associated
  with one name, and the list of rules can be partitioned into several
  sublists. In order for the list \verb!intro_srcs! of introduction rules
  to be acceptable as an input for @{ML read_spec in Specification}, we first
  have to turn it into a list of singleton lists. This transformation
  has to be reversed later on by applying the function
  @{ML [display] "the_single: 'a list -> 'a"}
  to the list \verb!specs! containing the parsed introduction rules.
  The function @{ML read_spec in Specification} also returns the list \verb!vars!
  of predicates and parameters, which contains the inferred types as well.
  This list has to be chopped into the two lists \verb!preds_syn'! and
  \verb!params_syn'! for predicates and parameters, respectively.
  All variables occurring in a rule but not in the list of variables passed to
  @{ML read_spec in Specification} will be bound by a meta-level universal
  quantifier.
*}

text {*
  Finally, @{ML read_specification in Specification} also returns another local theory,
  but we can safely discard it. As an example, let us look at how we can use this
  function to parse the introduction rules of the @{text trcl} predicate:

  @{ML_response [display]
"Specification.read_specification
  [(Binding.name \"trcl\", NONE, NoSyn),
   (Binding.name \"r\", SOME \"'a \ 'a \ bool\", NoSyn)]
  [((Binding.name \"base\", []), [\"trcl r x x\"]),
   ((Binding.name \"step\", []), [\"trcl r x y \ r y z \ trcl r x z\"])]
  @{context}"
"((\,
  [(\,
    [Const (\"all\", \) $ Abs (\"x\", TFree (\"'a\", \),
       Const (\"Trueprop\", \) $
         (Free (\"trcl\", \) $ Free (\"r\", \) $ Bound 0 $ Bound 0))]),
   (\,
    [Const (\"all\", \) $ Abs (\"x\", TFree (\"'a\", \),
       Const (\"all\", \) $ Abs (\"y\", TFree (\"'a\", \),
         Const (\"all\", \) $ Abs (\"z\", TFree (\"'a\", \),
           Const (\"==>\", \) $
             (Const (\"Trueprop\", \) $
                (Free (\"trcl\", \) $ Free (\"r\", \) $ Bound 2 $ Bound 1)) $
             (Const (\"==>\", \) $ \ $ \))))])]),
 \)
: (((Binding.binding * typ) * mixfix) list *
   (Attrib.binding * term list) list) * local_theory"}

  In the list of variables passed to @{ML read_specification in Specification}, we have
  used the mixfix annotation @{ML NoSyn} to indicate that we do not want to associate any
  mixfix syntax with the variable. Moreover, we have only specified the type of \texttt{r},
  whereas the type of \texttt{trcl} is computed using type inference.
  The local variables \texttt{x}, \texttt{y} and \texttt{z} of the introduction rules
  are turned into bound variables with de Bruijn indices,
  whereas \texttt{trcl} and \texttt{r} remain free variables.

*}

text {*

  \paragraph{Parsers for theory syntax}

  Although the function @{ML add_inductive in SimpleInductivePackage} parses terms and types, it still
  cannot be used to invoke the package directly from within a theory document.
  In order to do this, we have to write another parser. Before we describe
  the process of writing parsers for theory syntax in more detail, we first
  show some examples of how we would like to use the inductive definition
  package.


  The definition of the transitive closure should look as follows:
*}

ML {* SpecParse.opt_thm_name *}

text {*

  A proposition can be parsed using the function @{ML prop in OuterParse}.
  Essentially, a proposition is just a string or an identifier, but using the
  specific parser function @{ML prop in OuterParse} leads to more instructive
  error messages, since the parser will complain that a proposition was expected
  when something other than a string or identifier is found.
  An optional locale target specification of the form \isa{(\isacommand{in}\ $\ldots$)}
  can be parsed using @{ML opt_target in OuterParse}.
  The lists of names of the predicates and parameters, together with optional
  types and syntax, are parsed using the functions @{ML "fixes" in OuterParse}
  and @{ML for_fixes in OuterParse}, respectively.
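
  Chained together, these functions already give us the part of the parser that
  deals with the heading of a specification, namely the optional target, the
  predicates and the fixed parameters. The following is only a sketch (the rules
  part, discussed next, is omitted):

  @{ML [display,gray] "OuterParse.opt_target -- OuterParse.fixes -- OuterParse.for_fixes"}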
  In addition, the following function from @{ML_struct SpecParse} for parsing
  an optional theorem name and attribute, followed by a delimiter, will be useful:

  \begin{table}
  @{ML "opt_thm_name:
  string -> Attrib.binding parser" in SpecParse}
  \end{table}

  We now have all the necessary tools to write the parser for our
  \isa{\isacommand{simple{\isacharunderscore}inductive}} command:


  Once all arguments of the command have been parsed, we apply the function
  @{ML add_inductive in SimpleInductivePackage}, which yields a local theory
  transformer of type @{ML_type "local_theory -> local_theory"}. Commands in
  Isabelle/Isar are realized by transition transformers of type
  @{ML_type [display] "Toplevel.transition -> Toplevel.transition"}
  We can turn a local theory transformer into a transition transformer by using
  the function

  @{ML [display] "Toplevel.local_theory : string option ->
  (local_theory -> local_theory) ->
  Toplevel.transition -> Toplevel.transition"}

  which, apart from the local theory transformer, takes an optional name of a locale
  to be used as a basis for the local theory.

  (FIXME : needs to be adjusted to new parser type)

  {\it
  The whole parser for our command has type
  @{text [display] "OuterLex.token list ->
  (Toplevel.transition -> Toplevel.transition) * OuterLex.token list"}
  which is abbreviated by @{text OuterSyntax.parser_fn}. The new command can be added
  to the system via the function
  @{text [display] "OuterSyntax.command :
  string -> string -> OuterKeyword.T -> OuterSyntax.parser_fn -> unit"}
  which imperatively updates the parser table behind the scenes. }

  In addition to the parser, this
  function takes two strings representing the name of the command and a short description,
  as well as an element of type @{ML_type OuterKeyword.T} describing which \emph{kind} of
  command we intend to add. Since we want to add a command for declaring new concepts,
  we choose the kind @{ML "OuterKeyword.thy_decl"}. Other kinds include
  @{ML "OuterKeyword.thy_goal"}, which is similar to @{ML thy_decl in OuterKeyword},
  but requires the user to prove a goal before making the declaration, and
  @{ML "OuterKeyword.diag"}, which corresponds to a purely diagnostic command that does
  not change the context. For example, the @{ML thy_goal in OuterKeyword} kind is used
  by the \isa{\isacommand{function}} command \cite{Krauss-IJCAR06}, which requires the user
  to prove that a given set of equations is non-overlapping and covers all cases. The kind
  of the command should be chosen with care, since selecting the wrong one can cause strange
  behaviour of the user interface, such as failure of the undo mechanism.
*}

text {*
  Note that the @{text trcl} predicate has two different kinds of parameters: the
  first parameter @{text R} stays \emph{fixed} throughout the definition, whereas
  the second and third parameters change in the ``recursive call''. This will
  become important later on when we deal with fixed parameters and locales.



  The purpose of the package we show next is that the user just specifies the
  inductive predicate by stating some introduction rules; the package then
  makes the equivalent definition and derives the needed properties from it.
*}

text {*
  From a high-level perspective the package consists of 6 subtasks:



*}


(*<*)
end
(*>*)