# HG changeset patch
# User Christian Urban
# Date 1237158752 -3600
# Node ID fb8f22dd8ad00409fecc2a14eee639185f7936bf
# Parent 4e2341f6599dce628342a8b857bc25b73c0c56f6
adapted to latest Attrib.setup changes and more work on the simple induct chapter

diff -r 4e2341f6599d -r fb8f22dd8ad0 CookBook/FirstSteps.thy
--- a/CookBook/FirstSteps.thy Sat Mar 14 00:48:22 2009 +0100
+++ b/CookBook/FirstSteps.thy Mon Mar 16 00:12:32 2009 +0100
@@ -248,8 +248,8 @@
   avoided: it is more than easy to get the intermediate values wrong, not to
   mention the nightmares the maintenance of this code causes!
 
-  A ``real world'' example for a function written in the waterfall fashion might
-  be the following:
+  In the context of Isabelle, a ``real world'' example for a function written in
+  the waterfall fashion might be the following code:
 *}
 
 ML %linenosgray{*fun apply_fresh_args pred ctxt =
@@ -261,12 +261,12 @@
   |> (curry list_comb) pred *}
 
 text {*
-  The point of this function is to extract the argument types of the given
+  The point of this code is to extract the argument types of the given
   predicate and then generate for each type a distinct variable; finally it
   applies the generated variables to the predicate. You can read this off from
   how the function is coded: in Line 2, the function @{ML fastype_of} calculates
   the type of the predicate; in Line 3, @{ML binder_types} produces
-  the list of argument types; Line 4 pairs up each type with the string/name
+  the list of argument types; Line 4 pairs up each type with the string (or name)
   @{text "z"}; the function @{ML variant_frees in Variable} generates for each
   @{text "z"} a unique name avoiding @{text pred}; the list of name-type pairs is
   turned into a list of variable terms in Line 6, which in the last line
@@ -782,9 +782,9 @@
      (Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)
   end" "0 + 0"}
 
-  In Isabelle also types need can be certified. For example, you obtain
-  the certified type for the Isablle type @{typ "nat \ bool"} on the ML-level
-  as follows:
+  In Isabelle not just terms need to be certified, but also types. For example,
+  you obtain the certified type for the Isabelle type @{typ "nat \ bool"} on
+  the ML-level as follows:
 
   @{ML_response_fake [display,gray]
   "ctyp_of @{theory} (@{typ nat} --> @{typ bool})"
@@ -800,8 +800,8 @@
   result that type-checks.
   \end{exercise}
 
-  Remember that in Isabelle a term contains enough typing information
-  (constants, free variables and abstractions all have typing
+  Remember that Isabelle follows Church-style typing for terms, i.e.~a term contains
+  enough typing information (constants, free variables and abstractions all have typing
   information) so that it is always clear what the type of a term is.
   Given a well-typed term, the function @{ML type_of} returns the
   type of a term. Consider for example:
@@ -965,16 +965,16 @@
   an attribute.
 
   Before we can use the attribute, we need to set it up. This can be done
-  using the function @{ML Attrib.add_attributes} as follows.
+  using the function @{ML Attrib.setup} as follows.
 *}
 
-setup %gray {* Attrib.add_attributes
-  [("my_sym", Attrib.no_args my_symmetric, "applying the sym rule")] *}
+setup %gray {* Attrib.setup @{binding "my_sym"}
+  (Scan.succeed my_symmetric) "applying the sym rule"*}
 
 text {*
   The attribute does not expect any further arguments (unlike @{text "[OF
   \]"}, for example, which can take a list of theorems as argument). Therefore
-  we use the function @{ML Attrib.no_args}. Later on we will also consider
+  we use the parser @{ML Scan.succeed}. Later on we will also consider
   attributes taking further arguments. An example for the attribute @{text
   "[my_sym]"} is the proof
 *}
@@ -1035,9 +1035,8 @@
   and set up the attributes as follows
 *}
 
-setup %gray {* Attrib.add_attributes
-  [("my_thms", Attrib.add_del_args my_add my_del,
-    "maintaining a list of my_thms")] *}
+setup %gray {* Attrib.setup @{binding "my_thms"}
+  (Attrib.add_del my_add my_del) "maintaining a list of my_thms" *}
 
 text {*
   Now if you prove the lemma attaching the attribute @{text "[my_thms]"}
@@ -1165,13 +1164,62 @@
 *}
 
+section {* Setups (TBD) *}
+
+text {*
+  In the previous section we used \isacommand{setup} in order to make
+  a theorem attribute known to Isabelle. What happens behind the scenes
+  is that \isacommand{setup} expects a function from @{ML_type theory}
+  to @{ML_type theory}: the input theory is the current theory and the
+  output is the theory where the theorem attribute has been stored.
+
+  This is a fundamental principle in Isabelle. A similar situation occurs
+  for example with declaring constants. The function that declares a
+  constant on the ML-level is @{ML Sign.add_consts_i}. Recall that ML-code
+  needs to be enclosed in \isacommand{ML}~@{text "\ \ \"}.
+  If you write
+*}
+
+ML{*Sign.add_consts_i [(Binding.name "BAR", @{typ "nat"}, NoSyn)] @{theory} *}
+
+text {*
+  for declaring the constant @{text "BAR"} with type @{typ nat} and
+  run the code, then you indeed obtain a theory as a result. But if you
+  query the constant with \isacommand{term}
+
+  \begin{isabelle}
+  \isacommand{term}~@{text [quotes] "BAR"}\\
+  @{text "> \"BAR\" :: \"'a\""}
+  \end{isabelle}
+
+  you do not obtain a constant of type @{typ nat}, but a free variable (printed in
+  blue) of polymorphic type. The problem is that the ML-expression above did
+  not register the declaration with the current theory. This is what the command
+  \isacommand{setup} is for. The constant is properly declared with
+*}
+
+setup %gray {* Sign.add_consts_i [(Binding.name "BAR", @{typ "nat"}, NoSyn)] *}
+
+text {*
+  Now
+
+  \begin{isabelle}
+  \isacommand{term}~@{text [quotes] "BAR"}\\
+  @{text "> \"BAR\" :: \"nat\""}
+  \end{isabelle}
+
+  returns a (black) constant with the type @{typ nat}.
+
+  A similar command is \isacommand{local\_setup}, which expects a function
+  of type @{ML_type "local_theory -> local_theory"}. Later on we will also
+  use the commands \isacommand{method\_setup} for installing methods in the
+  current theory and \isacommand{simproc\_setup} for adding new simprocs to
+  the current simpset.
+*}
+
 section {* Theories, Contexts and Local Theories (TBD) *}
 
 text {*
-  (FIXME: expand)
-
-  (FIXME: explain \isacommand{setup})
-
   There are theories, proof contexts and local theories (in this order, if you
   want to order them).
 
@@ -1182,10 +1230,7 @@
   @{ML_type local_theory} is identical to the type of \emph{proof contexts}
   @{ML_type "Proof.context"}, although not every proof context constitutes a
   valid local theory.
-
- *}
-
-
+*}
 
 section {* Storing Theorems\label{sec:storing} (TBD) *}
 
diff -r 4e2341f6599d -r fb8f22dd8ad0 CookBook/Package/Ind_Code.thy
--- a/CookBook/Package/Ind_Code.thy Sat Mar 14 00:48:22 2009 +0100
+++ b/CookBook/Package/Ind_Code.thy Mon Mar 16 00:12:32 2009 +0100
@@ -7,35 +7,72 @@
 subsection {* Definitions *}
 
 text {*
-  If we give it a term and a constant name, it will define the
-  constant as the term.
+  @{text [display] "rule ::= \xs. As \ (\ys. Bs \ pred ss)\<^isup>* \ pred ts"}
+
+  @{text [display] "orule ::= \xs. As \ (\ys. Bs \ pred ss)\<^isup>* \ pred ts"}
+
+  @{text [display] "def ::= pred \ \zs. \preds. orules \ pred zs"}
+
+  @{text [display] "ind ::= \zs. pred zs \ rules[preds::=Ps] \ P zs"}
+
+  @{text [display] "oind ::= \zs. pred zs \ orules[preds::=Ps] \ P zs"}
+
+  So we have @{text "pred zs"} and @{text "orules[preds::=Ps]"}; we have to show
+  @{text "P zs"}. Expanding @{text "pred zs"} gives @{text "\preds. orules \ pred zs"}.
+  Instantiating the @{text "preds"} with @{text "Ps"} gives
+  @{text "orules[preds::=Ps] \ P zs"}. So we can conclude with @{text "P zs"}.
+
+  We have to show @{text "\xs. As \ (\ys. Bs \ pred ss)\<^isup>* \ pred ts"};
+  expanding the defs gives
+
+  @{text [display]
+  "\xs. As \ (\ys. Bs \ (\preds. orules \ pred ss))\<^isup>* \ (\preds. orules \ pred ts)"}
+
+  so we have @{text "As"}, @{text "(\ys. Bs \ (\preds. orules \ pred ss))\<^isup>*"},
+  @{text "orules"}; and we have to show @{text "pred ts"}.
+
+  the @{text "orules"} are of the form @{text "\xs. As \ (\ys. Bs \ pred ss)\<^isup>* \ pred ts"}.
+
+  using the @{text "As"} we ????
 *}
-
-ML{*fun make_defs ((binding, syn), trm) lthy =
+
+text {*
+  First we have to produce for each predicate its definition of the form
+
+  @{text [display] "pred \ \zs. \preds. orules \ pred zs"}
+
+  We use the following wrapper function to actually make the definition via
+  @{ML LocalTheory.define}. The function takes a predicate name, a syntax
+  annotation and a term representing the right-hand side of the definition.
+*}
+
+ML %linenosgray{*fun make_defs ((predname, syn), trm) lthy =
 let
-  val arg = ((binding, syn), (Attrib.empty_binding, trm))
+  val arg = ((predname, syn), (Attrib.empty_binding, trm))
   val ((_, (_ , thm)), lthy') = LocalTheory.define Thm.internalK arg lthy
 in
   (thm, lthy')
 end*}
 
 text {*
-  Returns the definition and the local theory in which this definition has
-  been made. The @{ML internalK in Thm} is just a flag attached to the
+  Returns the definition (as a theorem) and the local theory in which this definition has
+  been made. The @{ML internalK in Thm} in Line 4 is just a flag attached to the
   theorem (others flags are @{ML definitionK in Thm} or @{ML axiomK in Thm}).
   These flags just classify theorems and have no significant meaning, except
-  for tools such as finding theorems in the theorem database.
+  for tools such as finding theorems in the theorem database. We also
+  use @{ML empty_binding in Attrib} in Line 3, since the definition does not need any
+  theorem attributes.
 *}
 
-local_setup {*
-  fn lthy =>
-  let
-    val (def, lthy') = make_defs ((Binding.name "MyTrue", NoSyn), @{term True}) lthy
-    val _ = warning (str_of_thm lthy' def)
-  in
-    lthy'
-  end
-*}
+local_setup %gray {* fn lthy =>
+let
+  val arg = ((Binding.name "MyTrue", NoSyn), @{term True})
+  val (def, lthy') = make_defs arg lthy
+  val _ = warning (str_of_thm lthy' def)
+in
+  lthy'
+end *}
 
 text {*
   Prints out the theorem @{prop "MyTrue \ True"}.
@@ -89,8 +126,7 @@
   val arg_tys = [@{typ "nat"}]
   val def = defs_aux lthy orules preds (pred, arg_tys)
 in
-  warning (Syntax.string_of_term lthy def);
-  lthy
+  warning (Syntax.string_of_term lthy def); lthy
 end*}
 
 text {*
@@ -117,6 +153,22 @@
   The actual definitions are made in Line 7.
 *}
 
+local_setup {*
+fn lthy =>
+let
+  val rules = [@{prop "even 0"},
+               @{prop "\n::nat. odd n \ even (Suc n)"},
+               @{prop "\n::nat. even n \ odd (Suc n)"}]
+  val preds = [@{term "even::nat\bool"}, @{term "odd::nat\bool"}]
+  val prednames = [Binding.name "even", Binding.name "odd"]
+  val syns = [NoSyn, NoSyn]
+  val arg_tyss = [[@{typ "nat"}], [@{typ "nat"}]]
+  val (defs, lthy') = definitions rules preds prednames syns arg_tyss lthy
+in
+  warning (str_of_thms lthy' defs); lthy
+end*}
+
+
 subsection {* Induction Principles *}
 
 text {* Recall the proof for the induction principle for @{term "even"}: *}
 
@@ -189,8 +241,22 @@
   set up the goals for the induction principles.
 *}
 
-ML {* singleton *}
-ML {* ProofContext.export *}
+ML {*
+fun prove_induction lthy2 defs rules cnewpreds ((pred, newpred), tys) =
+let
+  val zs = replicate (length tys) "z"
+  val (newargnames, lthy3) = Variable.variant_fixes zs lthy2;
+  val newargs = map Free (newargnames ~~ tys)
+
+  val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
+  val goal = Logic.list_implies
+               (rules, HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
+in
+  Goal.prove lthy3 [] [prem] goal
+    (fn {prems, ...} => induction_tac defs prems cnewpreds)
+  |> singleton (ProofContext.export lthy3 lthy2)
+end
+*}
 
 ML %linenosgray{*fun inductions rules defs preds tyss lthy1 =
 let
@@ -204,23 +270,10 @@
   val cnewpreds = map (cterm_of thy) newpreds
   val rules' = map (subst_free (preds ~~ newpreds)) rules
-
-  fun prove_induction ((pred, newpred), tys) =
-  let
-    val zs = replicate (length tys) "z"
-    val (newargnames, lthy3) = Variable.variant_fixes zs lthy2;
-    val newargs = map Free (newargnames ~~ tys)
-
-    val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
-    val goal = Logic.list_implies
-                 (rules', HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
-  in
-    Goal.prove lthy3 [] [prem] goal
-      (fn {prems, ...} => induction_tac defs prems cnewpreds)
-    |> singleton (ProofContext.export lthy3 lthy1)
-  end
 in
-  map prove_induction (preds ~~ newpreds ~~ tyss)
+  map (prove_induction lthy2 defs rules' cnewpreds)
+      (preds ~~ newpreds ~~ tyss)
+  |> ProofContext.export lthy2 lthy1
 end*}
 
 ML {*
diff -r 4e2341f6599d -r fb8f22dd8ad0 CookBook/Package/Ind_Interface.thy
--- a/CookBook/Package/Ind_Interface.thy Sat Mar 14 00:48:22 2009 +0100
+++ b/CookBook/Package/Ind_Interface.thy Mon Mar 16 00:12:32 2009 +0100
@@ -360,6 +360,8 @@
   The definition of the transitive closure should look as follows:
 *}
 
+ML {* SpecParse.opt_thm_name *}
+
 text {*
   A proposition can be parsed using the function @{ML prop in OuterParse}.
 
@@ -377,7 +379,7 @@
   \begin{table}
   @{ML "opt_thm_name:
-  string -> token list -> Attrib.binding * token list" in SpecParse}
+  string -> Attrib.binding parser" in SpecParse}
   \end{table}
 
   We now have all the necessary tools to write the parser for our
 
diff -r 4e2341f6599d -r fb8f22dd8ad0 CookBook/Parsing.thy
--- a/CookBook/Parsing.thy Sat Mar 14 00:48:22 2009 +0100
+++ b/CookBook/Parsing.thy Mon Mar 16 00:12:32 2009 +0100
@@ -978,6 +978,42 @@
 *}
 
+section {* Methods *}
+
+text {*
+  Methods are a central concept in Isabelle. Methods can be applied with the command
+  \isacommand{apply}. To print out all currently known methods you can use the Isabelle
+  command \isacommand{print\_methods}:
+*}
+
+print_methods
+
+text {*
+  An example of a very simple method is the following code.
+*}
+
+method_setup %gray foobar_meth =
+ {* Method.no_args
+      (SIMPLE_METHOD ((etac @{thm conjE} THEN' rtac @{thm conjI}) 1)) *}
+        "foobar method"
+
+text {*
+  It defines the method @{text foobar_meth}, which takes no arguments and
+  only applies the rules @{thm [source] conjE} and then @{thm [source] conjI}.
+  It can be applied in the following proof
+*}
+
+lemma shows "A \ B \ C \ D"
+apply(foobar_meth)
+txt {*
+  where it results in the goal state
+
+  \begin{minipage}{\textwidth}
+  @{subgoals}
+  \end{minipage} *}
+(*<*)oops(*>*)
+
+
 (*<*)
 chapter {* Parsing *}
 
diff -r 4e2341f6599d -r fb8f22dd8ad0 CookBook/Solutions.thy
--- a/CookBook/Solutions.thy Sat Mar 14 00:48:22 2009 +0100
+++ b/CookBook/Solutions.thy Mon Mar 16 00:12:32 2009 +0100
@@ -148,7 +148,7 @@
   To measure any difference between the simproc and conversion, we will create
   mechanically terms involving additions and then set up a goal to be
   simplified. We have to be careful to set up the goal so that
-  other parts of the simplifier do not interfere. For this we set up an
+  other parts of the simplifier do not interfere. For this we construct an
   unprovable goal which, after simplification, we are going to ``prove'' with
   the help of the lemma:
 *}
@@ -175,7 +175,7 @@
 end*}
 
 text {*
-  This function generates for example
+  This function generates for example:
 
   @{ML_response_fake [display,gray]
   "warning (Syntax.string_of_term @{context} (term_tree 2))"
@@ -191,7 +191,7 @@
   Note that the goal needs to be wrapped in a @{term "Trueprop"}. Next we define
   two tactics, @{text "c_tac"} and @{text "s_tac"}, for the conversion and simproc,
   respectively. The idea is to first apply the conversion (respectively simproc) and
-  then prove the remaining goal using the lemma @{thm [source] cheat}.
+  then prove the remaining goal using the @{thm [source] cheat}-lemma.
 *}
 
 ML{*local
@@ -210,7 +210,7 @@
 text {*
   If you do the exercise, you can see that both ways of simplifying additions
-  perform relatively the same with perhaps some advantages for the
+  perform similarly, with perhaps some advantages for the
   simproc. That means the simplifier, even if much more complicated than
   conversions, is quite efficient for tasks it is designed for. It usually does
   not make sense to implement general-purpose rewriting using
 
diff -r 4e2341f6599d -r fb8f22dd8ad0 CookBook/Tactical.thy
--- a/CookBook/Tactical.thy Sat Mar 14 00:48:22 2009 +0100
+++ b/CookBook/Tactical.thy Mon Mar 16 00:12:32 2009 +0100
@@ -1470,7 +1470,7 @@
 *}
 
 lemma shows "Suc 0 = 1"
-  apply(simp)
+apply(simp)
 (*<*)oops(*>*)
 
 text {*
@@ -1493,14 +1493,14 @@
 *}
 
 lemma shows "Suc 0 = 1"
-  apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail_sp}]) 1*})
+apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail_sp}]) 1*})
 (*<*)oops(*>*)
 
 text {*
   Now the message shows up only once since the term @{term "1::nat"}
   is left unchanged.
 
-  Setting up a simproc using the command \isacommand{setup\_simproc} will
+  Setting up a simproc using the command \isacommand{simproc\_setup} will
   always add automatically the simproc to the current simpset. If you do not
   want this, then you have to use a slightly different method for setting
   up the simproc. First the function @{ML fail_sp_aux} needs to be modified
 
diff -r 4e2341f6599d -r fb8f22dd8ad0 cookbook.pdf
Binary file cookbook.pdf has changed