# HG changeset patch
# User Christian Urban
# Date 1237806813 -3600
# Node ID ffd93dcc269d4997d8395a535c447ebde2ff0b92
# Parent  2fff636e1fa0adaa5f86823273d5651b000d918a
polishing to the theorem attributes section

diff -r 2fff636e1fa0 -r ffd93dcc269d ProgTutorial/FirstSteps.thy
--- a/ProgTutorial/FirstSteps.thy	Sat Mar 21 12:35:03 2009 +0100
+++ b/ProgTutorial/FirstSteps.thy	Mon Mar 23 12:13:33 2009 +0100
@@ -20,7 +20,7 @@
   \end{center}

   We also generally assume you are working with HOL. The given examples might
-  need to be adapted slightly if you work in a different logic.
+  need to be adapted if you work in a different logic.
 *}

 section {* Including ML-Code *}
@@ -41,8 +41,17 @@
   Like normal Isabelle proof scripts, \isacommand{ML}-commands can be
   evaluated by using the advance and undo buttons of your Isabelle
   environment. The code inside the \isacommand{ML}-command can also contain
-  value and function bindings, and even those can be undone when the proof
-  script is retracted. As mentioned in the Introduction, we will drop the
+  value and function bindings, for example
+*}
+
+ML %gray {*
+  val r = ref 0
+  fun f n = n + 1
+*}
+
+text {*
+  and even those can be undone when the proof
+  script is retracted. As mentioned in the Introduction, we will drop the
   \isacommand{ML}~@{text "\ \ \"} scaffolding whenever we show code. The
   lines prefixed with @{text [quotes] ">"} are not part of the code, rather
   they indicate what the response is when the code is evaluated.
@@ -123,12 +132,12 @@
 *}

 (*
+ML {* set Toplevel.debug *}
+
 ML {* fun dodgy_fun () = (raise (ERROR "bar"); 1) *}

-ML {* set Toplevel.debug *}
-
 ML {* fun f1 () = dodgy_fun () *}

 ML {* f1 () *}
@@ -287,18 +296,18 @@
   the waterfall fashion might be the following code:
 *}

-ML %linenosgray{*fun apply_fresh_args pred ctxt =
-  pred |> fastype_of
-       |> binder_types
-       |> map (pair "z")
-       |> Variable.variant_frees ctxt [pred]
-       |> map Free
-       |> (curry list_comb) pred *}
+ML %linenosgray{*fun apply_fresh_args f ctxt =
+    f |> fastype_of
+      |> binder_types
+      |> map (pair "z")
+      |> Variable.variant_frees ctxt [f]
+      |> map Free
+      |> (curry list_comb) f *}

 text {*
-  This code extracts the argument types of a given
-  predicate and then generates for each argument type a distinct variable; finally it
-  applies the generated variables to the predicate. For example
+  This code extracts the argument types of a given function and then generates
+  for each argument type a distinct variable; finally it applies the generated
+  variables to the function. For example

   @{ML_response_fake [display,gray]
   "apply_fresh_args @{term \"P::nat \ int \ unit \ bool\"} @{context}
@@ -308,14 +317,14 @@
   You can read off this behaviour from how @{ML apply_fresh_args} is coded:
   in Line 2, the function @{ML fastype_of} calculates the type of the
-  predicate; @{ML binder_types} in the next line produces the list of argument
+  function; @{ML binder_types} in the next line produces the list of argument
   types (in the case above the list @{text "[nat, int, unit]"}); Line 4
   pairs up each type with the string @{text "z"}; the function
   @{ML variant_frees in Variable} generates for each @{text "z"} a
-  unique name avoiding the given @{text pred}; the list of name-type pairs is turned
+  unique name avoiding the given @{text f}; the list of name-type pairs is turned
   into a list of variable terms in Line 6, which in the last line is applied
-  by the function @{ML list_comb} to the predicate.
We have to use the
-  function @{ML curry}, because @{ML list_comb} expects the predicate and the
+  by the function @{ML list_comb} to the function. We have to use the
+  function @{ML curry}, because @{ML list_comb} expects the function and the
   variables list as a pair.

   The combinator @{ML "#>"} is the reverse function composition. It can be
@@ -478,9 +487,9 @@

 text {*
   The function @{ML dest_ss in MetaSimplifier} returns a record containing all
-  information stored in the simpset. We are only interested in the names of the
+  information stored in the simpset, but we are only interested in the names of the
   simp-rules. So now you can feed in the current simpset into this function.
-  The simpset can be referred to using the antiquotation @{text "@{simpset}"}.
+  The current simpset can be referred to using the antiquotation @{text "@{simpset}"}.

   @{ML_response_fake [display,gray]
   "get_thm_names_from_ss @{simpset}"
@@ -492,7 +501,7 @@
   On the ML-level of Isabelle, you often have to work with qualified names;
   these are strings with some additional information, such positional information
   and qualifiers. Such bindings can be generated with the antiquotation
-  @{text "@{bindin \}"}.
+  @{text "@{binding \}"}.

   @{ML_response [display,gray]
   "@{binding \"name\"}"
@@ -508,7 +517,16 @@
   ((@{binding "TrueConj"}, NoSyn),
    (Attrib.empty_binding, @{term "True \ True"})) *}

-text {*
+text {*
+  Now querying the definition you obtain:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "TrueConj_def"}\\
+  @{text "> "}@{thm TrueConj_def}
+  \end{isabelle}
+
+  (FIXME give a better example why bindings are important)
+
   While antiquotations have many applications, they were originally introduced
   in order to avoid explicit bindings for theorems such as:
 *}
@@ -552,7 +570,19 @@
   Terms are described in detail in \isccite{sec:terms}. Their definition and
   many useful operations are implemented in @{ML_file "Pure/term.ML"}.
   \end{readmore}
+
+  Constructing terms via antiquotations has the advantage that only typable
+  terms can be constructed. For example
+  @{ML_response_fake_both [display,gray]
+  "@{term \"(x::nat) x\"}"
+  "Type unification failed \"}
+
+  raises a typing error, while it is perfectly ok to construct
+
+  @{ML [display,gray] "Free (\"x\", @{typ nat}) $ Free (\"x\", @{typ nat})"}
+
+  with the raw ML-constructors.

   Sometimes the internal representation of terms can be surprisingly different
   from what you see at the user-level, because the layers of
   parsing/type-checking/pretty printing can be quite elaborate.
@@ -590,7 +620,8 @@
   where it is not (since it is already constructed by a meta-implication).

-  Types can be constructed using the antiquotation @{text "@{typ \}"}. For example:
+  As already seen above, types can be constructed using the antiquotation
+  @{text "@{typ \}"}. For example:

   @{ML_response_fake [display,gray] "@{typ \"bool \ nat\"}" "bool \ nat"}
@@ -629,7 +660,7 @@
 text {*
   To see this apply @{text "@{term S}"} and @{text "@{term T}"}
-  to both functions. With @{ML make_imp} we obtain the intended term involving
+  to both functions. With @{ML make_imp} you obtain the intended term involving
   the given arguments

   @{ML_response [display,gray] "make_imp @{term S} @{term T}"
@@ -637,7 +668,7 @@
   Abs (\"x\", Type (\"nat\",[]),
     Const \ $ (Free (\"S\",\) $ \) $ (Free (\"T\",\) $ \))"}

-  whereas with @{ML make_wrong_imp} we obtain a term involving the @{term "P"}
+  whereas with @{ML make_wrong_imp} you obtain a term involving the @{term "P"}
   and @{text "Q"} from the antiquotation.
@{ML_response [display,gray] "make_wrong_imp @{term S} @{term T}"
@@ -647,23 +678,54 @@
       (Const \ $ (Free (\"Q\",\) $ \)))"}

   There are a number of handy functions that are frequently used for
-  constructing terms. One is the function @{ML list_comb} which a term
-  and a list of terms as argument, and produces as output the term
+  constructing terms. One is the function @{ML list_comb}, which takes
+  a term and a list of terms as argument, and produces as output the term
   list applied to the term. For example

   @{ML_response_fake [display,gray]
   "list_comb (@{term \"P::nat\"}, [@{term \"True\"}, @{term \"False\"}])"
   "Free (\"P\", \"nat\") $ Const (\"True\", \"bool\") $ Const (\"False\", \"bool\")"}

-  (FIXME: handy functions for constructing terms: @{ML lambda},
-  @{ML subst_free})
-*}
+  Another handy function is @{ML lambda}, which abstracts a variable
+  in a term. For example
+
+  @{ML_response_fake [display,gray]
+  "lambda @{term \"x::nat\"} @{term \"(P::nat\bool) x\"}"
+  "Abs (\"x\", \"nat\", Free (\"P\", \"bool \ bool\") $ Bound 0)"}
+
+  In the example @{ML lambda} produces a de Bruijn index (i.e.~@{ML "Bound 0"}),
+  and an abstraction. It also records the type of the abstracted
+  variable and for printing purposes also its name. Note that because of the
+  typing annotation on @{text "P"}, the variable @{text "x"} in @{text "P x"}
+  is of the same type as the abstracted variable. If it is of different type,
+  as in
+
+  @{ML_response_fake [display,gray]
+  "lambda @{term \"x::nat\"} @{term \"(P::bool\bool) x\"}"
+  "Abs (\"x\", \"nat\", Free (\"P\", \"bool \ bool\") $ Free (\"x\", \"bool\"))"}

-ML {* lambda @{term "x::nat"} @{term "P x"}*}
+  then the variable @{text "Free (\"x\", \"bool\")"} is \emph{not} abstracted.
+  This is a fundamental principle
+  of Church-style typing, where variables with the same name still differ, if they
+  have different types.
+
+  There is also the function @{ML subst_free} with which terms can
+  be replaced by other terms. For example below we replace in @{term "f 0 x"}
+  the subterm @{term "f 0"} by @{term y} and @{term x} by @{term True}.

-text {*
-  As can be seen this function does not take the typing annotation into account.
+  @{ML_response_fake [display,gray]
+"subst_free [(@{term \"(f::nat\nat\nat) 0\"}, @{term \"y::nat\nat\"}),
+  (@{term \"x::nat\"}, @{term \"True\"})]
+  @{term \"((f::nat\nat\nat) 0) x\"}"
+  "Free (\"y\", \"nat \ nat\") $ Const (\"True\", \"bool\")"}
+
+  As can be seen, @{ML subst_free} does not take typability into account.
+  However it takes alpha-equivalence into account:
+
+  @{ML_response_fake [display, gray]
+  "subst_free [(@{term \"(\y::nat. y)\"}, @{term \"x::nat\"})]
+  @{term \"(\x::nat. x)\"}"
+  "Free (\"x\", \"nat\")"}

   \begin{readmore}
   There are many functions in @{ML_file "Pure/term.ML"}, @{ML_file "Pure/logic.ML"} and
@@ -671,9 +733,6 @@
   and types easier.\end{readmore}

   Have a look at these files and try to solve the following two exercises:
-*}
-
-text {*

   \begin{exercise}\label{fun:revsum}
   Write a function @{text "rev_sum : term -> term"} that takes a
@@ -865,7 +924,7 @@
 end"
   "0 + 0"}

   In Isabelle not just terms need to be certified, but also types. For example,
-  you obtain the certified type for the Isablle type @{typ "nat \ bool"} on
+  you obtain the certified type for the Isabelle type @{typ "nat \ bool"} on
   the ML-level as follows:

   @{ML_response_fake [display,gray]
@@ -882,7 +941,7 @@
   result that type-checks.
\end{exercise}

-  Remember Isabelle foolows the Church-style typing for terms, i.e.~a term contains
+  Remember Isabelle follows the Church-style typing for terms, i.e.~a term contains
   enough typing information (constants, free variables and abstractions all have
   typing information) so that it is always clear what the type of a term is.
   Given a well-typed term, the function @{ML type_of} returns the

   @{ML_response [display,gray]
   "type_of (@{term \"f::nat \ bool\"} $ @{term \"x::nat\"})" "bool"}

   To calculate the type, this function traverses the whole term and will
-  detect any typing inconsistency. For examle changing the type of the variable
+  detect any typing inconsistency. For example changing the type of the variable
   @{term "x"} from @{typ "nat"} to @{typ "int"} will result in the error message:

   @{ML_response_fake [display,gray]
@@ -932,11 +991,11 @@
   Const (\"HOL.one_class.one\", \"nat\") $ Free (\"x\", \"nat\")"}

   Instead of giving explicitly the type for the constant @{text "plus"} and the free
-  variable @{text "x"}, the type-inference filles in the missing information.
+  variable @{text "x"}, the type-inference fills in the missing information.

   \begin{readmore}
   See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
-  checking and pretty-printing of terms are defined. Fuctions related to the
+  checking and pretty-printing of terms are defined. Functions related to the
   type inference are implemented in @{ML_file "Pure/type.ML"} and
   @{ML_file "Pure/type_infer.ML"}.
   \end{readmore}
@@ -1013,22 +1072,29 @@
 section {* Theorem Attributes *}

 text {*
-  Theorem attributes are @{text "[simp]"}, @{text "[OF \]"}, @{text
-  "[symmetric]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
+  Theorem attributes are @{text "[symmetric]"}, @{text "[THEN \]"}, @{text
+  "[simp]"} and so on. Such attributes are \emph{neither} tags \emph{nor} flags
   annotated to theorems, but functions that do further processing once a
-  theorem is proven. In particular, it is not possible to find out
+  theorem is proved. In particular, it is not possible to find out
   what are all theorems that have a given attribute in common, unless of course
-  the function behind the attribute stores the theorems in a retrivable
+  the function behind the attribute stores the theorems in a retrievable
   datastructure.

-  If you want to print out all currently known attributes a theorem
-  can have, you can use the function:
+  If you want to print out all currently known attributes a theorem can have,
+  you can use the Isabelle command

-  @{ML_response_fake [display,gray] "Attrib.print_attributes @{theory}"
-"COMP: direct composition with rules (no lifting)
-HOL.dest: declaration of Classical destruction rule
-HOL.elim: declaration of Classical elimination rule
-\"}
+  \begin{isabelle}
+  \isacommand{print\_attributes}\\
+  @{text "> COMP: direct composition with rules (no lifting)"}\\
+  @{text "> HOL.dest: declaration of Classical destruction rule"}\\
+  @{text "> HOL.elim: declaration of Classical elimination rule"}\\
+  @{text "> \"}
+  \end{isabelle}
+
+  The theorem attributes fall roughly into two categories: the first category manipulates
+  the proved theorem (like @{text "[symmetric]"} and @{text "[THEN \]"}), and the second
+  stores the proved theorem somewhere as data (like @{text "[simp]"}, which adds
+  the theorem to the current simpset).

   To explain how to write your own attribute, let us start with an extremely simple
   version of the attribute @{text "[symmetric]"}.
The purpose of this attribute is
@@ -1046,15 +1112,15 @@
   an attribute.

   Before we can use the attribute, we need to set it up. This can be done
-  using the function @{ML Attrib.setup} as follows.
+  using the Isabelle command \isacommand{attribute\_setup} as follows:
 *}

-setup %gray {* Attrib.setup @{binding "my_sym"}
-  (Scan.succeed my_symmetric) "applying the sym rule"*}
+attribute_setup %gray my_sym = {* Scan.succeed my_symmetric *}
+  "applying the sym rule"

 text {*
-  The attribute does not expect any further arguments (unlike @{text "[OF
-  \]"}, for example, which can take a list of theorems as argument). Therefore
+  The attribute does not expect any further arguments (unlike @{text "[THEN
+  \]"}, for example). Therefore
   we use the parser @{ML Scan.succeed}. Later on we will also consider
   attributes taking further arguments. An example for the attribute @{text
   "[my_sym]"} is the proof
@@ -1063,14 +1129,74 @@
 lemma test[my_sym]: "2 = Suc (Suc 0)" by simp

 text {*
-  which stores the theorem @{thm test} under the name @{thm [source] test}. We
-  can also use the attribute when referring to this theorem.
+  which stores the theorem @{thm test} under the name @{thm [source] test}. You
+  can see this if you query the lemma:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test"}\\
+  @{text "> "}~@{thm test}
+  \end{isabelle}
+
+  We can also use the attribute when referring to this theorem:

   \begin{isabelle}
   \isacommand{thm}~@{text "test[my_sym]"}\\
   @{text "> "}~@{thm test[my_sym]}
   \end{isabelle}

+  As an example of a slightly more complicated theorem attribute, we implement
+  our version of @{text "[THEN \]"}. This attribute takes a list of theorems
+  as argument and resolves the proved theorem with this list (one theorem
+  after another). The code for this attribute is:
+*}
+
+ML{*fun MY_THEN thms =
+  Thm.rule_attribute (fn _ => fn thm => foldl ((op RS) o swap) thm thms)*}
+
+text {*
+  where @{ML swap} swaps the components of a pair. The setup of this theorem
+  attribute uses the parser @{ML Attrib.thms}, which parses a list of
+  theorems.
+*}
+
+attribute_setup %gray MY_THEN = {* Attrib.thms >> MY_THEN *}
+  "resolving the list of theorems with the proved theorem"
+
+text {*
+  You can, for example, use this theorem attribute to turn an equation into a
+  meta-equation:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN eq_reflection]"}\\
+  @{text "> "}~@{thm test[MY_THEN eq_reflection]}
+  \end{isabelle}
+
+  If you need the symmetric version as a meta-equation, you can write
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN sym eq_reflection]"}\\
+  @{text "> "}~@{thm test[MY_THEN sym eq_reflection]}
+  \end{isabelle}
+
+  Of course, it is also possible to combine different theorem attributes, as in:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[my_sym, MY_THEN eq_reflection]"}\\
+  @{text "> "}~@{thm test[my_sym, MY_THEN eq_reflection]}
+  \end{isabelle}
+
+  However, here also a weakness of the concept
+  of theorem attributes shows through: since theorem attributes can be
+  arbitrary functions, they do not in general commute. If you try
+
+  \begin{isabelle}
+  \isacommand{thm}~@{text "test[MY_THEN eq_reflection, my_sym]"}\\
+  @{text "> "}~@{text "exception THM 1 raised: RSN: no unifiers"}
+  \end{isabelle}
+
+  you get an exception indicating that the theorem @{thm [source] sym}
+  does not resolve with meta-equations.
+
   The purpose of @{ML Thm.rule_attribute} is to directly manipulate theorems.
   Another usage of attributes is to add and delete theorems from stored data.
For example the attribute @{text "[simp]"} adds or deletes a theorem from the @@ -1079,13 +1205,15 @@ of theorems. *} -ML{*val my_thms = ref ([]:thm list)*} +ML{*val my_thms = ref ([] : thm list)*} text {* A word of warning: such references must not be used in any code that is meant to be more than just for testing purposes! Here it is only used to illustrate matters. We will show later how to store data properly without - using references. The function @{ML Thm.declaration_attribute} expects us to + using references. + + The function @{ML Thm.declaration_attribute} expects us to provide two functions that add and delete theorems from this list. For this we use the two functions: *} @@ -1116,8 +1244,8 @@ and set up the attributes as follows *} -setup %gray {* Attrib.setup @{binding "my_thms"} - (Attrib.add_del my_add my_del) "maintaining a list of my_thms" *} +attribute_setup %gray my_thms = {* Attrib.add_del my_add my_del *} + "maintaining a list of my_thms" text {* Now if you prove the lemma attaching the attribute @{text "[my_thms]"} @@ -1167,7 +1295,7 @@ start a ``data slot'' in a context by using the functor @{text GenericDataFun}: *} -ML {*structure Data = GenericDataFun +ML {*structure MyThmsData = GenericDataFun (type T = thm list val empty = [] val extend = I @@ -1178,11 +1306,36 @@ @{ML my_thms_del} to: *} -ML{*val thm_add = Data.map o Thm.add_thm -val thm_del = Data.map o Thm.del_thm*} +ML{*val thm_add = MyThmsData.map o Thm.add_thm +val thm_del = MyThmsData.map o Thm.del_thm*} + +text {* + where @{ML MyThmsData.map} updates the data appropriately in the context. The + theorem addtributes are +*} + +ML{*val add = Thm.declaration_attribute thm_add +val del = Thm.declaration_attribute thm_del *} text {* - where @{ML Data.map} updates the data appropriately in the context. Since storing + ad the setup is as follows +*} + +attribute_setup %gray my_thms2 = {* Attrib.add_del add del *} + "properly maintaining a list of my_thms" + +ML {* MyThmsData.get (Context.Proof @{context}) *} + +lemma [my_thms2]: "2 = Suc (Suc 0)" by simp + +ML {* MyThmsData.get (Context.Proof @{context}) *} + +ML {* !my_thms *} + +text {* + (FIXME: explain problem with backtracking) + + Since storing theorems in a special list is such a common task, there is the functor @{text NamedThmsFun}, which does most of the work for you. To obtain such a named theorem lists, you just declare diff -r 2fff636e1fa0 -r ffd93dcc269d ProgTutorial/Parsing.thy --- a/ProgTutorial/Parsing.thy Sat Mar 21 12:35:03 2009 +0100 +++ b/ProgTutorial/Parsing.thy Mon Mar 23 12:13:33 2009 +0100 @@ -16,7 +16,7 @@ parsing combinators. These combinators are a well-established technique for parsing, which has, for example, been described in Paulson's classic ML-book \cite{paulson-ml2}. Isabelle developers are usually concerned with writing these outer syntax parsers, - either for new definitional packages or for calling tactics with specific arguments. + either for new definitional packages or for calling methods with specific arguments. \begin{readmore} The library @@ -298,7 +298,7 @@ @{ML_response [display,gray] "let - fun double (x,y) = (x ^ x, y ^ y) + fun double (x, y) = (x ^ x, y ^ y) in (($$ \"h\") -- ($$ \"e\") >> double) (explode \"hello\") end" @@ -318,6 +318,8 @@ where the single-character strings in the parsed output are transformed back into one string. + (FIXME: move to an earlier place) + The function @{ML Scan.ahead} parses some input, but leaves the original input unchanged. 
For example:
@@ -492,8 +494,9 @@

 text {*
   The function @{ML "OuterParse.!!!"} can be used to force termination of the
-  parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section),
-  except that the error message is fixed to be @{text [quotes] "Outer syntax error"}
+  parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section).
+  The difference is that the error message of @{ML "OuterParse.!!!"} is fixed to be
+  @{text [quotes] "Outer syntax error"}
   with a relatively precise description of the failure. For example:

   @{ML_response_fake [display,gray]
@@ -554,6 +557,12 @@
 *}

+section {* Context Parser (TBD) *}
+
+text {*
+  Used for example in \isacommand{attribute\_setup} and \isacommand{method\_setup}.
+*}
+
 section {* Parsing Inner Syntax *}

 text {*
@@ -711,6 +720,29 @@
   \end{readmore}
 *}

+text_raw {*
+  \begin{exercise}
+  If you look closely at how @{ML SpecParse.where_alt_specs} is implemented
+  in file @{ML_file "Pure/Isar/spec_parse.ML"} and compare it to our @{ML "spec_parser"},
+  you can notice an additional ``tail'' (Lines 8 to 10):
+  \begin{isabelle}
+*}
+ML %linenosgray{*val spec_parser' =
+     OuterParse.fixes --
+     Scan.optional
+       (OuterParse.$$$ "where" |--
+          OuterParse.!!!
+            (OuterParse.enum1 "|"
+               ((SpecParse.opt_thm_name ":" -- OuterParse.prop) --|
+                  Scan.option (Scan.ahead (OuterParse.name ||
+                  OuterParse.$$$ "[") --
+                  OuterParse.!!! (OuterParse.$$$ "|"))))) [] *}
+text_raw {*
+  \end{isabelle}
+  What is the purpose of this additional ``tail''?
+  \end{exercise}
+*}
+
 section {* New Commands and Keyword Files\label{sec:newcommand} *}

 text {*
@@ -968,7 +1000,6 @@
   @{ML Toplevel.local_theory} do?)

   (FIXME read a name and show how to store theorems)
-
 *}

 section {* Methods *}
@@ -982,7 +1013,7 @@

 print_methods

 text {*
-  An example of a very simple method is the following code.
+  An example of a very simple method is:
 *}

 method_setup %gray foobar_meth =
@@ -1007,6 +1038,14 @@
   \end{minipage} *}
 (*<*)oops(*>*)
+
+(*
+ML {* SIMPLE_METHOD *}
+ML {* METHOD *}
+ML {* K (SIMPLE_METHOD ((etac @{thm conjE} THEN' rtac @{thm conjI}) 1)) *}
+ML {* Scan.succeed *}
+*)
+
 text {*
   (FIXME: explain a version of rule-tac)
 *}
diff -r 2fff636e1fa0 -r ffd93dcc269d progtutorial.pdf
Binary file progtutorial.pdf has changed