# HG changeset patch # User Christian Urban # Date 1237481428 -3600 # Node ID ca0ac2e75f6dbbbbd54029604c250056260fd726 # Parent 069d525f8f1dcc65d03f4307f2807fa2e2351726 more one the simple-inductive chapter diff -r 069d525f8f1d -r ca0ac2e75f6d ProgTutorial/FirstSteps.thy --- a/ProgTutorial/FirstSteps.thy Thu Mar 19 13:28:16 2009 +0100 +++ b/ProgTutorial/FirstSteps.thy Thu Mar 19 17:50:28 2009 +0100 @@ -25,8 +25,6 @@ section {* Including ML-Code *} - - text {* The easiest and quickest way to include code in a theory is by using the \isacommand{ML}-command. For example: @@ -154,9 +152,21 @@ text {* The easiest way to get the string of a theorem is to transform it - into a @{ML_type cterm} using the function @{ML crep_thm}. Theorems - also include schematic variables, such as @{text "?P"}, @{text "?Q"} - and so on. In order to improve the readability of theorems we convert + into a @{ML_type cterm} using the function @{ML crep_thm}. +*} + +ML{*fun str_of_thm_raw ctxt thm = + str_of_cterm ctxt (#prop (crep_thm thm))*} + +text {* + Theorems also include schematic variables, such as @{text "?P"}, + @{text "?Q"} and so on. + + @{ML_response_fake [display, gray] + "warning (str_of_thm_raw @{context} @{thm conjI})" + "\?P; ?Q\ \ ?P \ ?Q"} + + In order to improve the readability of theorems we convert these schematic variables into free variables using the function @{ML Variable.import_thms}. *} @@ -172,11 +182,20 @@ str_of_cterm ctxt (#prop (crep_thm (no_vars ctxt thm)))*} text {* + Theorem @{thm [source] conjI} looks now as follows + + @{ML_response_fake [display, gray] + "warning (str_of_thm_raw @{context} @{thm conjI})" + "\?P; ?Q\ \ ?P \ ?Q"} + Again the function @{ML commas} helps with printing more than one theorem. *} ML{*fun str_of_thms ctxt thms = - commas (map (str_of_thm ctxt) thms)*} + commas (map (str_of_thm ctxt) thms) + +fun str_of_thms_raw ctxt thms = + commas (map (str_of_thm_raw ctxt) thms)*} text {* (FIXME @{ML Toplevel.debug} @{ML Toplevel.profiling} @{ML Toplevel.debug}) @@ -599,7 +618,8 @@ Const \ $ (Const \ $ (Free (\"P\",\) $ \)) $ (Const \ $ (Free (\"Q\",\) $ \)))"} - (FIXME: handy functions for constructing terms: @{ML list_comb}, @{ML lambda}) + (FIXME: handy functions for constructing terms: @{ML list_comb}, @{ML lambda}, + @{ML subst_free}) *} diff -r 069d525f8f1d -r ca0ac2e75f6d ProgTutorial/Package/Ind_Code.thy --- a/ProgTutorial/Package/Ind_Code.thy Thu Mar 19 13:28:16 2009 +0100 +++ b/ProgTutorial/Package/Ind_Code.thy Thu Mar 19 17:50:28 2009 +0100 @@ -33,24 +33,28 @@ @{text [display] "\xs. As \ (\ys. Bs \ (\preds. orules \ pred ss))\<^isup>* \ (\preds. orules \ pred ts"} - applying as many allI and impI as possible + By applying as many allI and impI as possible, we have - so we have @{text "As"}, @{text "(\ys. Bs \ (\preds. orules \ pred ss))\<^isup>*"}, + @{text "As"}, @{text "(\ys. Bs \ (\preds. orules \ pred ss))\<^isup>*"}, @{text "orules"}; and have to show @{text "pred ts"} the $i$th @{text "orule"} is of the form @{text "\xs. As \ (\ys. Bs \ pred ss)\<^isup>* \ pred ts"}. - using the @{text "As"} we ???? + So we apply the $i$th @{text "orule"}, but we have to show the @{text "As"} (by assumption) + and all @{text "(\ys. Bs \ pred ss)\<^isup>*"}. For the latter we use the assumptions + @{text "(\ys. Bs \ (\preds. orules \ pred ss))\<^isup>*"} and @{text "orules"}. + *} text {* - First we have to produce for each predicate its definitions of the form + First we have to produce for each predicate the definition of the form @{text [display] "pred \ \zs. \preds. 
orules \ pred zs"} - In order to make definitions, we use the following wrapper for + and then ``register'' the definitions with Isabelle. + To do the latter, we use the following wrapper for @{ML LocalTheory.define}. The wrapper takes a predicate name, a syntax annotation and a term representing the right-hand side of the definition. *} @@ -92,20 +96,18 @@ @{text "> MyTrue \ True"} \end{isabelle} - The next two functions construct the terms we need for the definitions for - our \isacommand{simple\_inductive} command. These - terms are of the form + The next two functions construct the right-hand sides of the definitions, which + are of the form - @{text [display] "\\<^raw:$zs$>. \preds. orules \ pred \<^raw:$zs$>"} + @{text [display] "\zs. \preds. orules \ pred zs"} - The variables @{text "\<^raw:$zs$>"} need to be chosen so that they do not occur + The variables @{text "zs"} need to be chosen so that they do not occur in the @{text orules} and also be distinct from the @{text "preds"}. The first function constructs the term for one particular predicate, say @{text "pred"}; the number of arguments of this predicate is - determined by the number of argument types of @{text "arg_tys"}. - So it takes these two parameters as arguments. The other arguments are - all the @{text "preds"} and the @{text "orules"}. + determined by the number of argument types given in @{text "arg_tys"}. + The other arguments are all @{text "preds"} and the @{text "orules"}. *} ML %linenosgray{*fun defs_aux lthy orules preds (pred, arg_tys) = @@ -127,19 +129,17 @@ text {* The function in Line 3 is just a helper function for constructing universal quantifications. The code in Lines 5 to 9 produces the fresh @{text - "\<^raw:$zs$>"}. For this it pairs every argument type with the string + "zs"}. For this it pairs every argument type with the string @{text [quotes] "z"} (Line 7); then generates variants for all these strings - so that they are unique w.r.t.~to the @{text "orules"} and the predicates; + so that they are unique w.r.t.~to the predicates and @{text "orules"} (Line 8); in Line 9 it generates the corresponding variable terms for the unique strings. The unique free variables are applied to the predicate (Line 11) using the function @{ML list_comb}; then the @{text orules} are prefixed (Line 12); in Line 13 we quantify over all predicates; and in line 14 we just abstract - over all the @{text "\<^raw:$zs$>"}, i.e.~the fresh arguments of the - predicate. - - A testcase for this function is + over all the @{text "zs"}, i.e.~the fresh arguments of the + predicate. A testcase for this function is *} local_setup %gray{* fn lthy => @@ -147,7 +147,7 @@ val orules = [@{prop "even 0"}, @{prop "\n::nat. odd n \ even (Suc n)"}, @{prop "\n::nat. even n \ odd (Suc n)"}] - val preds = [@{term "even::nat\bool"}, @{term "odd::nat\bool"}, @{term "z::nat"}] + val preds = [@{term "even::nat\bool"}, @{term "odd::nat\bool"}] val pred = @{term "even::nat\bool"} val arg_tys = [@{typ "nat"}] val def = defs_aux lthy orules preds (pred, arg_tys) @@ -156,14 +156,14 @@ end *} text {* - It constructs the left-hand side for the definition of @{text "even"}. So we obtain - as printout the term + It calls @{ML defs_aux} for the definition of @{text "even"} and prints + out the definition. So we obtain as printout @{text [display] "\z. \even odd. (even 0) \ (\n. odd n \ even (Suc n)) \ (\n. 
even n \ odd (Suc n)) \ even z"}

-  The main function for the definitions now has to just iterate the function
+  The second function for the definitions has to just iterate the function
   @{ML defs_aux} over all predicates. The argument @{text "preds"} is again
   the list of predicates as @{ML_type term}s; the argument @{text
   "prednames"} is the list of names of the predicates; @{text "arg_tyss"} is
@@ -188,10 +188,8 @@
   @{ML ObjectLogic.atomize_term} to make the transformation (Line 4). The call
   to @{ML defs_aux} in Line 5 produces all left-hand sides of the definitions.
   The actual definitions are then made in Line 7. The result
-  of the function is a list of theorems and a local theory.
-
-
-  A testcase for this function is
+  of the function is a list of theorems and a local theory. A testcase for
+  this function is
 *}

 local_setup %gray {* fn lthy =>
@@ -240,9 +238,9 @@
 done

 text {*
-  The code for such induction principles has to accomplish two tasks:
+  The code for automating such induction principles has to accomplish two tasks:
   constructing the induction principles from the given introduction
-  rules and then automatically generating a proof of them using a tactic.
+  rules and then automatically generating proofs for them using a tactic.

   The tactic will use the following helper function for instantiating universal
   quantifiers.
@@ -260,7 +258,7 @@
   EVERY' (map (dtac o inst_spec) ctrms)*}

 text {*
-  we can use @{ML inst_spec} in the following proof to instantiate the
+  We can use @{ML inst_spec_tac} in the following proof to instantiate the
   three quantifiers in the assumption.
 *}

@@ -290,11 +288,11 @@
   assume_tac]*}

 text {*
-  We only have to give it as arguments the definitions, the premise
-  (like @{text "even n"})
-  and the instantiations. Compare this with the manual proof given for the
-  lemma @{thm [source] man_ind_principle}.
-  A testcase for this tactic is the function
+  We only have to give it the definitions, the premise (like @{text "even n"})
+  and the instantiations as arguments. Compare this with the manual proof
+  given for the lemma @{thm [source] man_ind_principle}: there is almost a
+  one-to-one correspondence between the \isacommand{apply}-script and the
+  @{ML induction_tac}. A testcase for this tactic is the function
 *}

 ML{*fun test_tac prems =
@@ -309,7 +307,7 @@
   which indeed proves the induction principle:
 *}

-lemma
+lemma auto_ind_principle:
   assumes prems: "even n"
   shows "P 0 \ (\m. Q m \ P (Suc m)) \ (\m. P m \ Q (Suc m)) \ P n"
 apply(tactic {* test_tac @{thms prems} *})
@@ -322,27 +320,37 @@
   @{text "pred"} a goal of the form

   @{text [display]
-  "\\<^raw:$zs$>. pred \<^raw:$zs$> \ rules[preds := \<^raw:$Ps$>] \ \<^raw:$P$> \<^raw:$zs$>"}
+  "pred ?zs \ rules[preds := ?Ps] \ ?P ?zs"}

-  where the given predicates @{text preds} are replaced in the introduction
-  rules by new distinct variables written @{text "\<^raw:$Ps$>"}.
+  where the predicates @{text preds} are replaced in the introduction
+  rules by new distinct variables written @{text "Ps"}.
   We also need to generate fresh arguments for the predicate @{text "pred"} in
-  the premise and the @{text "\<^raw:$P$>"} in the conclusion. We achieve
+  the premise and the @{text "?P"} in the conclusion. Note
+  that the @{text "?Ps"} and @{text "?zs"} need to be
+  schematic variables that can be instantiated. This corresponds to what
+  @{thm [source] auto_ind_principle} looks like:
+
+  \begin{isabelle}
+  \isacommand{thm}~@{thm [source] auto_ind_principle}\\
+  @{text "> \even ?n; ?P 0; \m. ?Q m \ ?P (Suc m); \m. 
?P m \ ?Q (Suc m)\ \ ?P ?n"}
+  \end{isabelle}
+
+  We achieve that in two steps.

   The function below expects that the introduction rules are already appropriately
   substituted. The argument @{text "srules"} stands for these substituted rules;
   @{text cnewpreds} are the certified terms corresponding
-  to the variables @{text "\<^raw:$Ps$>"}; @{text "pred"} is the predicate for
+  to the variables @{text "Ps"}; @{text "pred"} is the predicate for
   which we prove the introduction principle; @{text "newpred"} is its
   replacement and @{text "arg_tys"} are the argument types of this predicate.
 *}

-ML %linenosgray{*fun prove_induction lthy defs srules cnewpreds ((pred, newpred), tys) =
+ML %linenosgray{*fun prove_induction lthy defs srules cnewpreds ((pred, newpred), arg_tys) =
 let
-  val zs = replicate (length tys) "z"
+  val zs = replicate (length arg_tys) "z"
   val (newargnames, lthy') = Variable.variant_fixes zs lthy;
-  val newargs = map Free (newargnames ~~ tys)
+  val newargs = map Free (newargnames ~~ arg_tys)

   val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
   val goal = Logic.list_implies
@@ -354,33 +362,58 @@
 end *}

 text {*
-  In Line 3 we produce names @{text "\<^raw:$zs$>"} for each type in the
+  In Line 3 we produce names @{text "zs"} for each type in the
   argument type list. Line 4 makes these names unique and declares them as
   \emph{free} (but fixed) variables in the local theory @{text "lthy'"}. In
   Line 5 we just construct the terms corresponding to these variables.
   The term variables are applied to the predicate in Line 7 (this corresponds
-  to the first premise @{text "pred \<^raw:$zs$>"} of the induction principle).
-  In Line 8 and 9, we first construct the term @{text "\<^raw:$P$> \<^raw:$zs$>"}
+  to the first premise @{text "pred zs"} of the induction principle).
+  In Lines 8 and 9, we first construct the term @{text "P zs"}
   and then add the (substituted) introduction rules as premises. In case
   no introduction rules are given, the conclusion of this implication needs
   to be wrapped inside a @{term Trueprop}, otherwise Isabelle's goal
   mechanism will fail.

-  In Line 11 we set up the goal to be proved; in the next line call the tactic
-  for proving the induction principle. This tactic expects definitions, the
+  In Line 11 we set up the goal to be proved; in the next line we call the tactic
+  for proving the induction principle. This tactic expects the definitions, the
   premise and the (certified) predicates with which the introduction rules
-  have been substituted. This will return a theorem. However, it is a theorem
+  have been substituted. The code in these two lines will return a theorem.
+  However, it is a theorem
   proved inside the local theory @{text "lthy'"}, where the variables @{text
-  "\<^raw:$zs$>"} are fixed, but free. By exporting this theorem from @{text
-  "lthy'"} (which contains the @{text "\<^raw:$zs$>"} as free) to @{text
-  "lthy"} (which does not), we obtain the desired quantifications @{text
-  "\\<^raw:$zs$>"}.
+  "zs"} are fixed, but free. By exporting this theorem from @{text
+  "lthy'"} (which contains the @{text "zs"} as free) to @{text
+  "lthy"} (which does not), we obtain the desired schematic variables.
+*}

-  (FIXME testcase)
-
+local_setup %gray{* fn lthy =>
+let
+  val defs = [@{thm even_def}, @{thm odd_def}]
+  val srules = [@{prop "P (0::nat)"},
+                @{prop "\n::nat. Q n \ P (Suc n)"},
+                @{prop "\n::nat. 
P n \ Q (Suc n)"}]
+  val cnewpreds = [@{cterm "P::nat\bool"}, @{cterm "Q::nat\bool"}]
+  val pred = @{term "even::nat\bool"}
+  val newpred = @{term "P::nat\bool"}
+  val arg_tys = [@{typ "nat"}]
+  val intro =
+      prove_induction lthy defs srules cnewpreds ((pred, newpred), arg_tys)
+in
+  warning (str_of_thm_raw lthy intro); lthy
+end *}

-  Now it is left to produce the new predicates with which the introduction
-  rules are substituted.
+text {*
+  This prints out:
+
+  @{text [display]
+  " \even ?z; P 0; \n. Q n \ P (Suc n); \n. P n \ Q (Suc n)\ \ P ?z"}
+
+  Note that the export from @{text lthy'} to @{text lthy} in Line 13 above
+  has turned the free, but fixed, @{text "z"} into a schematic
+  variable @{text "?z"}.
+
+  We still have to produce the new predicates with which the introduction
+  rules are substituted and iterate @{ML prove_induction} over all
+  predicates. This is what the next function does.
 *}

 ML %linenosgray{*fun inductions rules defs preds arg_tyss lthy =
@@ -404,15 +437,16 @@
 text {*
   In Line 3 we generate a string @{text [quotes] "P"} for each predicate.
   In Line 4, we use the same trick as in the previous function, that is making the
-  @{text "\<^raw:$Ps$>"} fresh and declaring them as fixed but free in
+  @{text "Ps"} fresh and declaring them as fixed, but free, in
   the new local theory @{text "lthy'"}. From the local theory we extract
   the ambient theory in Line 6. We need this theory in order to certify
   the new predicates. In Line 8 we calculate the types of these new predicates
-  using the argument types. Next we turn them into terms and subsequently
-  certify them. We can now produce the substituted introduction rules
-  (Line 11). Line 14 and 15 just iterate the proofs for all predicates.
+  using the given argument types. Next we turn them into terms and subsequently
+  certify them (Lines 9 and 10). We can now produce the substituted introduction rules
+  (Line 11) using the function @{ML subst_free}. Lines 14 and 15 just iterate
+  the proofs for all predicates.
   From this we obtain a list of theorems. Finally we need to export the
-  fixed variables @{text "\<^raw:$Ps$>"} to obtain the correct quantification
+  fixed variables @{text "Ps"} to obtain the schematic variables
   (Line 16).

   A testcase for this function is
 *}
@@ -428,32 +462,104 @@
   val tyss = [[@{typ "nat"}], [@{typ "nat"}]]
   val ind_thms = inductions rules defs preds tyss lthy
 in
-  warning (str_of_thms lthy ind_thms); lthy
-end
-*}
+  warning (str_of_thms_raw lthy ind_thms); lthy
+end *}

 text {*
   which prints out

@{text [display]
-"> even z \
-> P 0 \ (\m. Pa m \ P (Suc m)) \ (\m. P m \ Pa (Suc m)) \ P z,
-> odd z \
-> P 0 \ (\m. Pa m \ P (Suc m)) \ (\m. P m \ Pa (Suc m)) \ Pa z"}
+"> even ?z \ ?P1 0 \
+> (\m. ?Pa1 m \ ?P1 (Suc m)) \ (\m. ?P1 m \ ?Pa1 (Suc m)) \ ?P1 ?z,
+> odd ?z \ ?P1 0
+> \ (\m. ?Pa1 m \ ?P1 (Suc m)) \ (\m. ?P1 m \ ?Pa1 (Suc m)) \ ?Pa1 ?z"}

-  This completes the code for the induction principles. Finally we can
-  prove the introduction rules.
+  Note that now both the @{text "Ps"} and the @{text "zs"} are schematic
+  variables. The numbers have been introduced by the pretty-printer and are
+  not significant.

+  This completes the code for the induction principles. Finally we can prove the
+  introduction rules. Their proofs are quite a bit more involved. To ease them
+  somewhat we use the following two helper functions. 
*} -ML {* ObjectLogic.rulify *} - - ML{*val all_elims = fold (fn ct => fn th => th RS inst_spec ct) val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})*} +text {* + To see what they do, let us suppose whe have the follwoing three + theorems. +*} + +lemma all_elims_test: + fixes P::"nat \ nat \ nat \ bool" + shows "\x y z. P x y z" sorry + +lemma imp_elims_test: + fixes A B C::"bool" + shows "A \ B \ C" sorry + +lemma imp_elims_test': + fixes A::"bool" + shows "A" "B" sorry + +text {* + The function @{ML all_elims} takes a list of (certified) terms and instantiates + theorems of the form @{thm [source] all_elims_test}. For example we can instantiate + the quantifiers in this theorem with @{term a}, @{term b} and @{term c} as follows + + @{ML_response_fake [display, gray] +"let + val ctrms = [@{cterm \"a::nat\"}, @{cterm \"b::nat\"}, @{cterm \"c::nat\"}] + val new_thm = all_elims ctrms @{thm all_elims_test} +in + warning (str_of_thm @{context} new_thm) +end" + "P a b c"} + + Similarly, the function @{ML imp_elims} eliminates preconditions from implications. + For example + + @{ML_response_fake [display, gray] +"warning (str_of_thm @{context} + (imp_elims @{thms imp_elims_test'} @{thm imp_elims_test}))" + "C"} +*} + +ML {* prems_of *} +ML {* Logic.strip_params *} +ML {* Logic.strip_assums_hyp *} + +ML {* +fun chop_print_tac ctxt thm = +let + val [trm] = prems_of thm + val params = map fst (Logic.strip_params trm) + val prems = Logic.strip_assums_hyp trm + val (prems1, prems2) = chop (length prems - 3) prems; + val (params1, params2) = chop (length params - 2) params; + val _ = warning (Syntax.string_of_term ctxt trm) + val _ = warning (commas params) + val _ = warning (commas (map (Syntax.string_of_term ctxt) prems)) + val _ = warning ((commas params1) ^ " | " ^ (commas params2)) + val _ = warning ((commas (map (Syntax.string_of_term ctxt) prems1)) ^ " | " ^ + (commas (map (Syntax.string_of_term ctxt) prems2))) +in + Seq.single thm +end +*} + + +lemma intro1: + shows "\m. odd m \ even (Suc m)" +apply(tactic {* ObjectLogic.rulify_tac 1 *}) +apply(tactic {* rewrite_goals_tac [@{thm even_def}, @{thm odd_def}] *}) +apply(tactic {* REPEAT (resolve_tac [@{thm allI}, @{thm impI}] 1) *}) +apply(tactic {* chop_print_tac @{context} *}) +oops + ML{*fun subproof2 prem params2 prems2 = SUBPROOF (fn {prems, ...} => let @@ -468,43 +574,79 @@ rtac prem'' 1 end)*} -ML{*fun subproof1 rules preds i = +text {* + +*} + + +ML %linenosgray{*fun subproof1 rules preds i = SUBPROOF (fn {params, prems, context = ctxt', ...} => let val (prems1, prems2) = chop (length prems - length rules) prems; val (params1, params2) = chop (length params - length preds) params; in rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1 + (* applicateion of the i-ith intro rule *) THEN EVERY1 (map (fn prem => subproof2 prem params2 prems2 ctxt') prems1) end)*} +text {* + @{text "params1"} are the variables of the rules; @{text "params2"} is + the variables corresponding to the @{text "preds"}. 

+  @{text "prems1"} are the assumptions corresponding to the premises of the
+  introduction rule we are proving; @{text "prems2"} are the assumptions
+  corresponding to the @{text "rules"} (they are introduced by the
+  @{text allI}s and @{text impI}s applied before).
+
+  We pick from @{text "prems2"} the assumption belonging to the $i$th
+  introduction rule, instantiate it with the parameters @{text "params1"}
+  coming from the rule, and apply it to the goal.
+
+  This generates subgoals corresponding to the premises of this introduction
+  rule, which are then dealt with by @{ML subproof2}.
+*}

 ML{*
-fun introductions_tac defs rules preds i ctxt =
+fun intros_tac defs rules preds i ctxt =
   EVERY1 [ObjectLogic.rulify_tac,
           K (rewrite_goals_tac defs),
           REPEAT o (resolve_tac [@{thm allI}, @{thm impI}]),
           subproof1 rules preds i ctxt]*}

-lemma evenS:
-  shows "odd m \ even (Suc m)"
-apply(tactic {*
+text {*
+  A test case for this tactic is the function
+*}
+
+ML{*fun intros_tac_test ctxt i =
 let
   val rules = [@{prop "even (0::nat)"},
-               @{prop "\n::nat. odd n \ even (Suc n)"},
-               @{prop "\n::nat. even n \ odd (Suc n)"}]
+               @{prop "\n::nat. odd n \ even (Suc n)"},
+               @{prop "\n::nat. even n \ odd (Suc n)"}]
   val defs = [@{thm even_def}, @{thm odd_def}]
   val preds = [@{term "even::nat\bool"}, @{term "odd::nat\bool"}]
 in
-  introductions_tac defs rules preds 1 @{context}
-end *})
+  intros_tac defs rules preds i ctxt
+end*}
+
+lemma intro0:
+  shows "even 0"
+apply(tactic {* intros_tac_test @{context} 0 *})
+done
+
+lemma intro1:
+  shows "\m. odd m \ even (Suc m)"
+apply(tactic {* intros_tac_test @{context} 1 *})
+done
+
+lemma intro2:
+  shows "\m. even m \ odd (Suc m)"
+apply(tactic {* intros_tac_test @{context} 2 *})
 done

 ML{*fun introductions rules preds defs lthy =
 let
   fun prove_intro (i, goal) =
     Goal.prove lthy [] [] goal
-      (fn {context, ...} => introductions_tac defs rules preds i context)
+      (fn {context, ...} => intros_tac defs rules preds i context)
 in
   map_index prove_intro rules
 end*}
diff -r 069d525f8f1d -r ca0ac2e75f6d ProgTutorial/Tactical.thy
--- a/ProgTutorial/Tactical.thy Thu Mar 19 13:28:16 2009 +0100
+++ b/ProgTutorial/Tactical.thy Thu Mar 19 17:50:28 2009 +0100
@@ -1776,7 +1776,7 @@
   "((\x y. x + y) 2) 10 \ 2 + 10"}

   Note that the actual response in this example is @{term "2 + 10 \ 2 + (10::nat)"},
-  since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
+  since the pretty-printer for @{ML_type cterm}s eta-normalises terms.
   But how we constructed the term (using the function
   @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s)
   ensures that the left-hand side must contain beta-redexes. Indeed
@@ -1810,8 +1810,7 @@
 end"
   "((\x y. x + y) 2) \ \y. 2 + y"}

-  Again, we actually see as output only the fully normalised term
-  @{text "\y. 2 + y"}.
+  Again, we actually see as output only the fully eta-normalised term.

   The main point of conversions is that they can be used for rewriting
   @{ML_type cterm}s. To do this you can use the function @{ML
diff -r 069d525f8f1d -r ca0ac2e75f6d progtutorial.pdf
Binary file progtutorial.pdf has changed
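
text {*
  The function @{ML introductions} above does not yet have a testcase of its
  own. A possible one, following the pattern of @{ML intros_tac_test} and
  reusing the @{text even}/@{text odd} rules, definitions and predicates from
  this section, might look as follows (this is only a sketch; the expected
  printout is the list of the three introduction rules).
*}

local_setup %gray{* fn lthy =>
let
  (* the introduction rules, definitions and predicates for even/odd,
     exactly as in intros_tac_test above *)
  val rules = [@{prop "even (0::nat)"},
               @{prop "\n::nat. odd n \ even (Suc n)"},
               @{prop "\n::nat. even n \ odd (Suc n)"}]
  val defs = [@{thm even_def}, @{thm odd_def}]
  val preds = [@{term "even::nat\bool"}, @{term "odd::nat\bool"}]

  (* prove all introduction rules at once ... *)
  val intro_thms = introductions rules preds defs lthy
in
  (* ... and print them out *)
  warning (str_of_thms lthy intro_thms); lthy
end *}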