changeset 170 | 90bee31628dc |
parent 169 | d3fcc1a0272c |
parent 166 | 00d153e32a53 |
child 172 | ec47352e99c2 |
169:d3fcc1a0272c | 170:90bee31628dc |
---|---|
1 theory Tactical |
1 theory Tactical |
2 imports Base FirstSteps |
2 imports Base FirstSteps |
3 uses "infix_conv.ML" |
|
4 begin |
3 begin |
5 |
4 |
6 chapter {* Tactical Reasoning\label{chp:tactical} *} |
5 chapter {* Tactical Reasoning\label{chp:tactical} *} |
7 |
6 |
8 text {* |
7 text {* |
9 The main reason for descending to the ML-level of Isabelle is to be able to |
8 The main reason for descending to the ML-level of Isabelle is to be able to |
10 implement automatic proof procedures. Such proof procedures usually lessen |
9 implement automatic proof procedures. Such proof procedures usually lessen |
11 considerably the burden of manual reasoning, for example, when introducing |
10 considerably the burden of manual reasoning, for example, when introducing |
12 new definitions. These proof procedures are centred around refining a goal |
11 new definitions. These proof procedures are centred around refining a goal |
13 state using tactics. This is similar to the \isacommand{apply}-style |
12 state using tactics. This is similar to the \isacommand{apply}-style |
14 reasoning at the user level, where goals are modified in a sequence of proof |
13 reasoning at the user-level, where goals are modified in a sequence of proof |
15 steps until all of them are solved. However, there are also more structured |
14 steps until all of them are solved. However, there are also more structured |
16 operations available on the ML-level that help with the handling of |
15 operations available on the ML-level that help with the handling of |
17 variables and assumptions. |
16 variables and assumptions. |
18 |
17 |
19 *} |
18 *} |
64 \begin{readmore} |
63 \begin{readmore} |
65 To learn more about the function @{ML Goal.prove} see \isccite{sec:results} |
64 To learn more about the function @{ML Goal.prove} see \isccite{sec:results} |
66 and the file @{ML_file "Pure/goal.ML"}. See @{ML_file |
65 and the file @{ML_file "Pure/goal.ML"}. See @{ML_file |
67 "Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic |
66 "Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic |
68 tactics and tactic combinators; see also Chapters 3 and 4 in the old |
67 tactics and tactic combinators; see also Chapters 3 and 4 in the old |
69 Isabelle Reference Manual. |
68 Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation Manual. |
70 \end{readmore} |
69 \end{readmore} |
71 |
70 |
72 Note that in the code above we use antiquotations for referencing the theorems. Many theorems |
71 Note that in the code above we use antiquotations for referencing the theorems. Many theorems |
73 also have ML-bindings with the same name. Therefore, we could also just have |
72 also have ML-bindings with the same name. Therefore, we could also just have |
74 written @{ML "etac disjE 1"}, or in cases where there is no ML-binding, obtain |
73 written @{ML "etac disjE 1"}, or in cases where there is no ML-binding, obtain |
99 apply(tactic {* foo_tac *}) |
98 apply(tactic {* foo_tac *}) |
100 done |
99 done |
101 |
100 |
102 text {* |
101 text {* |
103 By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the |
102 By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the |
104 user level of Isabelle the tactic @{ML foo_tac} or |
103 user-level of Isabelle the tactic @{ML foo_tac} or |
105 any other function that returns a tactic. |
104 any other function that returns a tactic. |
106 |
105 |
107 The tactic @{ML foo_tac} is just a sequence of simple tactics stringed |
106 The tactic @{ML foo_tac} is just a sequence of simple tactics stringed |
108 together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac} |
107 together by @{ML THEN}. As can be seen, each simple tactic in @{ML foo_tac} |
109 has a hard-coded number that stands for the subgoal analysed by the |
108 has a hard-coded number that stands for the subgoal analysed by the |
287 \end{isabelle} |
286 \end{isabelle} |
288 \caption{The figure shows a proof where each intermediate goal state is |
287 \caption{The figure shows a proof where each intermediate goal state is |
289 printed by the Isabelle system and by @{ML my_print_tac}. The latter shows |
288 printed by the Isabelle system and by @{ML my_print_tac}. The latter shows |
290 the goal state as represented internally (highlighted boxes). This |
289 the goal state as represented internally (highlighted boxes). This |
291 illustrates that every goal state in Isabelle is represented by a theorem: |
290 illustrates that every goal state in Isabelle is represented by a theorem: |
292 when we start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is |
291 when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is |
293 @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when we finish the proof the |
292 @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof the |
294 theorem is @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}.\label{fig:goalstates}} |
293 theorem is @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}.\label{fig:goalstates}} |
295 \end{figure} |
294 \end{figure} |
296 *} |
295 *} |
297 |
296 |
298 |
297 |
326 |
325 |
327 section {* Simple Tactics *} |
326 section {* Simple Tactics *} |
328 |
327 |
329 text {* |
328 text {* |
330 Let us start with the tactic @{ML print_tac}, which is quite useful for low-level |
329 Let us start with the tactic @{ML print_tac}, which is quite useful for low-level |
331 debugging of tactics. It just prints out a message and the current goal state. |
330 debugging of tactics. It just prints out a message and the current goal state |
332 Processing the proof |
331 (unlike @{ML my_print_tac}, it prints the goal state as the user would see it). |
332 For example, processing the proof |
|
333 *} |
333 *} |
334 |
334 |
335 lemma shows "False \<Longrightarrow> True" |
335 lemma shows "False \<Longrightarrow> True" |
336 apply(tactic {* print_tac "foo message" *}) |
336 apply(tactic {* print_tac "foo message" *}) |
337 txt{*gives:\medskip |
337 txt{*gives:\medskip |
384 \end{minipage}*} |
384 \end{minipage}*} |
385 (*<*)oops(*>*) |
385 (*<*)oops(*>*) |
386 |
386 |
387 text {* |
387 text {* |
388 Note the number in each tactic call. Also as mentioned in the previous section, most |
388 Note the number in each tactic call. Also as mentioned in the previous section, most |
389 basic tactics take such an argument; it addresses the subgoal they are analysing. |
389 basic tactics take such a number as argument; it addresses the subgoal they are analysing. |
390 In the proof below, we first split up the conjunction in the second subgoal by |
390 In the proof below, we first split up the conjunction in the second subgoal by |
391 focusing on this subgoal first. |
391 focusing on this subgoal first. |
392 *} |
392 *} |
393 |
393 |
394 lemma shows "Foo" and "P \<and> Q" |
394 lemma shows "Foo" and "P \<and> Q" |
476 |
476 |
477 then the function raises an exception. The function @{ML RSN} is similar to @{ML RS}, but |
477 then the function raises an exception. The function @{ML RSN} is similar to @{ML RS}, but |
478 takes an additional number as argument that makes explicit which premise |
478 takes an additional number as argument that makes explicit which premise |
479 should be instantiated. |
479 should be instantiated. |
480 |
480 |
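  For example, since @{ML RS} instantiates the first premise, the first
  @{ML RS}-expression above could equally have been written as
  @{ML "@{thm disjI1} RSN (1, @{thm conjI})"}.
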
481 To improve readability of the theorems we produce below, we shall use |
481 To improve readability of the theorems we produce below, we shall use the |
482 the following function |
482 function @{ML no_vars} from Section~\ref{sec:printing}, which transforms |
483 *} |
483 schematic variables into free ones. Using this function for the first @{ML |
484 |
484 RS}-expression above produces the more readable result: |
485 ML{*fun no_vars ctxt thm = |
|
486 let |
|
487 val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt |
|
488 in |
|
489 thm' |
|
490 end*} |
|
491 |
|
492 text {* |
|
493 that transform the schematic variables of a theorem into free variables. |
|
494 Using this function for the first @{ML RS}-expression above would produce |
|
495 the more readable result: |
|
496 |
485 |
497 @{ML_response_fake [display,gray] |
486 @{ML_response_fake [display,gray] |
498 "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"} |
487 "no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"} |
499 |
488 |
500 If you want to instantiate more than one premise of a theorem, you can use |
489 If you want to instantiate more than one premise of a theorem, you can use |
660 \begin{readmore} |
649 \begin{readmore} |
661 The function @{ML SUBPROOF} is defined in @{ML_file "Pure/subgoal.ML"} and |
650 The function @{ML SUBPROOF} is defined in @{ML_file "Pure/subgoal.ML"} and |
662 also described in \isccite{sec:results}. |
651 also described in \isccite{sec:results}. |
663 \end{readmore} |
652 \end{readmore} |
664 |
653 |
665 A similar but less powerful function than @{ML SUBPROOF} is @{ML SUBGOAL}. |
654 Similar but less powerful functions than @{ML SUBPROOF} are @{ML SUBGOAL} |
666 It allows you to inspect a given subgoal. With this you can implement |
655 and @{ML CSUBGOAL}. They allow you to inspect a given subgoal (the former |
667 a tactic that applies a rule according to the topmost logic connective in the |
656 presents the subgoal as a @{ML_type term}, while the latter as a @{ML_type |
668 subgoal (to illustrate this we only analyse a few connectives). The code |
657 cterm}). With this you can implement a tactic that applies a rule according |
669 of the tactic is as follows.\label{tac:selecttac} |
658 to the topmost logic connective in the subgoal (to illustrate this we only |
670 *} |
659 analyse a few connectives). The code of the tactic is as |
671 |
660 follows.\label{tac:selecttac} |
672 ML %linenosgray{*fun select_tac (t,i) = |
661 |
662 *} |
|
663 |
|
664 ML %linenosgray{*fun select_tac (t, i) = |
|
673 case t of |
665 case t of |
674 @{term "Trueprop"} $ t' => select_tac (t',i) |
666 @{term "Trueprop"} $ t' => select_tac (t', i) |
675 | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t',i) |
667 | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i) |
676 | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i |
668 | @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i |
677 | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i |
669 | @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i |
678 | @{term "Not"} $ _ => rtac @{thm notI} i |
670 | @{term "Not"} $ _ => rtac @{thm notI} i |
679 | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i |
671 | Const (@{const_name "All"}, _) $ _ => rtac @{thm allI} i |
680 | _ => all_tac*} |
672 | _ => all_tac*} |
687 analysed. Similarly with meta-implications in the next line. While for the |
679 analysed. Similarly with meta-implications in the next line. While for the |
688 first five patterns we can use the @{text "@term"}-antiquotation to |
680 first five patterns we can use the @{text "@term"}-antiquotation to |
689 construct the patterns, the pattern in Line 8 cannot be constructed in this |
681 construct the patterns, the pattern in Line 8 cannot be constructed in this |
690 way. The reason is that an antiquotation would fix the type of the |
682 way. The reason is that an antiquotation would fix the type of the |
691 quantified variable. So you really have to construct the pattern using the |
683 quantified variable. So you really have to construct the pattern using the |
692 basic term-constructors. This is not necessary in other cases, because their type |
684 basic term-constructors. This is not necessary in other cases, because their |
693 is always fixed to function types involving only the type @{typ bool}. For the |
685 type is always fixed to function types involving only the type @{typ |
694 final pattern, we chose to just return @{ML all_tac}. Consequently, |
686 bool}. (See Section \ref{sec:terms_types_manually} about constructing terms |
695 @{ML select_tac} never fails. |
687 manually.) For the catch-all pattern, we chose to just return @{ML all_tac}. |
688 Consequently, @{ML select_tac} never fails. |
|
689 |
|
696 |
690 |
697 Let us now see how to apply this tactic. Consider the four goals: |
691 Let us now see how to apply this tactic. Consider the four goals: |
698 *} |
692 *} |
699 |
693 |
700 |
694 |
725 text {* |
719 text {* |
726 then we have to be careful to not apply the tactic to the two subgoals produced by |
720 then we have to be careful to not apply the tactic to the two subgoals produced by |
727 the first goal. Doing so can result in quite messy code. In contrast, |
721 the first goal. Doing so can result in quite messy code. In contrast, |
728 the ``reverse application'' is easy to implement. |
722 the ``reverse application'' is easy to implement. |
729 |
723 |
730 The tactic @{ML "CSUBGOAL"} is similar to @{ML SUBGOAL} except that it takes |
724 Of course, this example is |
731 a @{ML_type cterm} instead of a @{ML_type term}. Of course, this example is |
|
732 contrived: there are much simpler methods available in Isabelle for |
725 contrived: there are much simpler methods available in Isabelle for |
733 implementing a proof procedure analysing a goal according to its topmost |
726 implementing a proof procedure analysing a goal according to its topmost |
734 connective. These simpler methods use tactic combinators, which we will |
727 connective. These simpler methods use tactic combinators, which we will |
735 explain in the next section. |
728 explain in the next section. |
736 |
729 |
920 because otherwise @{ML REPEAT} runs into an infinite loop (it applies the |
913 because otherwise @{ML REPEAT} runs into an infinite loop (it applies the |
921 tactic as long as it succeeds). The function |
914 tactic as long as it succeeds). The function |
922 @{ML REPEAT1} is similar, but runs the tactic at least once (failing if |
915 @{ML REPEAT1} is similar, but runs the tactic at least once (failing if |
923 this is not possible). |
916 this is not possible). |
924 |
917 |
925 If you are after the ``primed'' version of @{ML repeat_xmp} then you |
918 If you are after the ``primed'' version of @{ML repeat_xmp}, then you |
926 need to implement it as |
919 need to implement it as |
927 *} |
920 *} |
928 |
921 |
929 ML{*val repeat_xmp' = REPEAT o CHANGED o select_tac'*} |
922 ML{*val repeat_xmp' = REPEAT o CHANGED o select_tac'*} |
930 |
923 |
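text {*
  A small sketch of how the primed version is applied to the first subgoal;
  since @{ML REPEAT} never fails, this apply-step always succeeds.
*}

lemma shows "A \<longrightarrow> \<not> B \<and> C"
apply(tactic {* repeat_xmp' 1 *})
(*<*)oops(*>*)
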
1011 Most tactic combinators described in this section are defined in @{ML_file "Pure/tctical.ML"}. |
1004 Most tactic combinators described in this section are defined in @{ML_file "Pure/tctical.ML"}. |
1012 \end{readmore} |
1005 \end{readmore} |
1013 |
1006 |
1014 *} |
1007 *} |
1015 |
1008 |
1016 section {* Rewriting and Simplifier Tactics *} |
1009 section {* Simplifier Tactics *} |
1017 |
1010 |
1018 text {* |
1011 text {* |
1019 @{ML rewrite_goals_tac} |
1012 A lot of convenience in the reasoning with Isabelle derives from its |
1013 powerful simplifier. The power of the simplifier is a strength and a weakness at |
|
1014 the same time, because you can easily make the simplifier run into a loop and its |
|
1015 behaviour can be difficult to predict. There is also a multitude |
|
1016 of options that you can configure to control the behaviour of the simplifier. |
|
1017 We describe some of them in this and the next section. |
|
1018 |
|
1019 There are the following five main tactics behind |
|
1020 the simplifier (their user-level counterparts are given in parentheses): |
|
1021 |
|
1022 \begin{isabelle} |
|
1023 \begin{center} |
|
1024 \begin{tabular}{l@ {\hspace{2cm}}l} |
|
1025 @{ML simp_tac} & @{text "(simp (no_asm))"} \\ |
|
1026 @{ML asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\ |
|
1027 @{ML full_simp_tac} & @{text "(simp (no_asm_use))"} \\ |
|
1028 @{ML asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\ |
|
1029 @{ML asm_full_simp_tac} & @{text "(simp)"} |
|
1030 \end{tabular} |
|
1031 \end{center} |
|
1032 \end{isabelle} |
|
1033 |
|
1034 All of the tactics take a simpset and an integer as argument (the latter as usual |
|
1035 to specify the goal to be analysed). So the proof |
|
1036 *} |
|
1037 |
|
1038 lemma "Suc (1 + 2) < 3 + 2" |
|
1039 apply(simp) |
|
1040 done |
|
1041 |
|
1042 text {* |
|
1043 corresponds on the ML-level to the tactic |
|
1044 *} |
|
1045 |
|
1046 lemma "Suc (1 + 2) < 3 + 2" |
|
1047 apply(tactic {* asm_full_simp_tac @{simpset} 1 *}) |
|
1048 done |
|
1049 |
|
1050 text {* |
|
1051 If the simplifier cannot make any progress, then it leaves the goal unchanged, |
|
1052 i.e.~does not raise any error message. That means if you use it to unfold a |
|
1053 definition for a constant and this constant is not present in the goal state, |
|
1054 you can still safely apply the simplifier. |
|
1055 |
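  For example, the following apply-step succeeds even though no rule in the
  given simpset applies to the goal (a small sketch; @{thm [source] if_True}
  serves only as an arbitrary simplification rule here):
*}

lemma "P \<or> Q \<longrightarrow> Q \<or> P"
apply(tactic {* simp_tac (HOL_basic_ss addsimps [@{thm if_True}]) 1 *})
(*<*)oops(*>*)

text {*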
|
1056 When using the simplifier, the crucial information you have to provide is |
|
1057 the simpset. If this information is not handled with care, then the |
|
1058 simplifier can easily run into a loop. Therefore a good rule of thumb is to |
|
1059 use simpsets that are as minimal as possible. It might be surprising that a |
|
1060 simpset is more complex than just a simple collection of theorems used as |
|
1061 simplification rules. One reason for the complexity is that the simplifier |
|
1062 must be able to rewrite inside terms and should also be able to rewrite |
|
1063 according to rules that have preconditions. |
|
1064 |
|
1065 |
|
1066 The rewriting inside terms requires congruence rules, which |
|
1067 are meta-equalities, typically of the form |
|
1068 |
|
1069 \begin{isabelle} |
|
1070 \begin{center} |
|
1071 \mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}} |
|
1072 {@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}} |
|
1073 \end{center} |
|
1074 \end{isabelle} |
|
1075 |
|
1076 with @{text "constr"} being a term-constructor, like @{const "If"} or @{const "Let"}. |
|
1077 Every simpset contains only |
|
1078 one congruence rule for each term-constructor, which however can be |
|
1079 overwritten. The user can declare lemmas to be congruence rules using the |
|
1080 attribute @{text "[cong]"}. In HOL, the user usually states these lemmas as |
|
1081 equations, which are then internally transformed into meta-equations. |
|
1082 |
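  For example, a lemma that can be declared as the congruence rule for
  @{const "If"} is @{thm [source] if_cong}, which, as usual in HOL, is stated
  as an equation:

  @{thm [display] if_cong}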
|
1083 |
|
1084 The rewriting with rules involving preconditions requires what is in |
|
1085 Isabelle called a subgoaler, a solver and a looper. These can be arbitrary |
|
1086 tactics that can be installed in a simpset and which are called at |
|
1087 various stages during simplification. However, simpsets also include |
|
1088 simprocs, which can produce rewrite rules on demand (see next |
|
1089 section). Further components are split-rules, which can simplify, for example, |
|
1090 the ``then'' and ``else'' branches of if-statements under the corresponding |
|
1091 preconditions. |
|
1092 |
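  For example, the split-rule @{thm [source] split_if} for if-statements can
  be added to a simpset with @{ML "HOL_basic_ss addsplits [@{thm split_if}]"}
  (a small sketch; @{ML HOL_basic_ss} stands here for any simpset).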
|
1093 |
|
1094 \begin{readmore} |
|
1095 For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"} |
|
1096 and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in |
|
1097 @{ML_file "HOL/Tools/simpdata.ML"}. Generic splitters are implemented in |
|
1098 @{ML_file "Provers/splitter.ML"}. |
|
1099 \end{readmore} |
|
1100 |
|
1101 \begin{readmore} |
|
1102 FIXME: Find the right place. Discrimination nets are implemented |
|
1103 in @{ML_file "Pure/net.ML"}. |
|
1104 \end{readmore} |
|
1105 |
|
1106 The most common combinators to modify simpsets are |
|
1107 |
|
1108 \begin{isabelle} |
|
1109 \begin{tabular}{ll} |
|
1110 @{ML addsimps} & @{ML delsimps}\\ |
|
1111 @{ML addcongs} & @{ML delcongs}\\ |
|
1112 @{ML addsimprocs} & @{ML delsimprocs}\\ |
|
1113 \end{tabular} |
|
1114 \end{isabelle} |
|
1115 |
|
1116 (FIXME: What about splitters? @{ML addsplits}, @{ML delsplits}) |
|
1117 *} |
|
1118 |
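text {*
  For example, a simpset with a rule first added and then deleted again can be
  built as follows (a small sketch; @{thm [source] if_True} and the value
  names are chosen arbitrarily):
*}

ML{*val ss_plus  = HOL_basic_ss addsimps [@{thm if_True}]
val ss_minus = ss_plus delsimps [@{thm if_True}]*}
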
|
1119 text_raw {* |
|
1120 \begin{figure}[tp] |
|
1121 \begin{isabelle}*} |
|
1122 ML{*fun print_ss ctxt ss = |
|
1123 let |
|
1124 val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss |
|
1125 |
|
1126 fun name_thm (nm, thm) = |
|
1127 " " ^ nm ^ ": " ^ (str_of_thm ctxt thm) |
|
1128 fun name_ctrm (nm, ctrm) = |
|
1129 " " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm) |
|
1130 |
|
1131 val s1 = ["Simplification rules:"] |
|
1132 val s2 = map name_thm simps |
|
1133 val s3 = ["Congruence rules:"] |
|
1134 val s4 = map name_thm congs |
|
1135 val s5 = ["Simproc patterns:"] |
|
1136 val s6 = map name_ctrm procs |
|
1137 in |
|
1138 (s1 @ s2 @ s3 @ s4 @ s5 @ s6) |
|
1139 |> separate "\n" |
|
1140 |> implode |
|
1141 |> warning |
|
1142 end*} |
|
1143 text_raw {* |
|
1144 \end{isabelle} |
|
1145 \caption{The function @{ML MetaSimplifier.dest_ss} returns a record containing |
|
1146 all printable information stored in a simpset. We are here only interested in the |
|
1147 simplification rules, congruence rules and simprocs.\label{fig:printss}} |
|
1148 \end{figure} *} |
|
1149 |
|
1150 text {* |
|
1151 To see how they work, consider the two functions in Figure~\ref{fig:printss}, which |
|
1152 print out some parts of a simpset. If you use them to print out the components of the |
|
1153 empty simpset, in ML @{ML empty_ss}, which is the most primitive simpset |
|
1020 |
1154 |
1021 @{ML ObjectLogic.full_atomize_tac} |
1155 @{ML_response_fake [display,gray] |
1156 "print_ss @{context} empty_ss" |
|
1157 "Simplification rules: |
|
1158 Congruence rules: |
|
1159 Simproc patterns:"} |
|
1160 |
|
1161 you can see it contains nothing. This simpset is usually not useful, except as a |
|
1162 building block to build bigger simpsets. For example you can add to @{ML empty_ss} |
|
1163 the simplification rule @{thm [source] Diff_Int} as follows: |
|
1164 *} |
|
1165 |
|
1166 ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *} |
|
1167 |
|
1168 text {* |
|
1169 Printing out the components of this simpset then gives: |
|
1170 |
|
1171 @{ML_response_fake [display,gray] |
|
1172 "print_ss @{context} ss1" |
|
1173 "Simplification rules: |
|
1174 ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C) |
|
1175 Congruence rules: |
|
1176 Simproc patterns:"} |
|
1177 |
|
1178 (FIXME: Why does it print out ??.unknown) |
|
1179 |
|
1180 Adding also the congruence rule @{thm [source] UN_cong} |
|
1181 *} |
|
1182 |
|
1183 ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}] *} |
|
1184 |
|
1185 text {* |
|
1186 gives |
|
1187 |
|
1188 @{ML_response_fake [display,gray] |
|
1189 "print_ss @{context} ss2" |
|
1190 "Simplification rules: |
|
1191 ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C) |
|
1192 Congruence rules: |
|
1193 UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x |
|
1194 Simproc patterns:"} |
|
1195 |
|
1196 Notice that we had to add these lemmas as meta-equations, since @{ML empty_ss} |
|
1197 expects simplification and congruence rules in this form. However, even |
|
1198 when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet. |
|
1199 |
|
1200 In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While |
|
1201 printing out the components of this simpset |
|
1202 |
|
1203 @{ML_response_fake [display,gray] |
|
1204 "print_ss @{context} HOL_basic_ss" |
|
1205 "Simplification rules: |
|
1206 Congruence rules: |
|
1207 Simproc patterns:"} |
|
1208 |
|
1209 also produces ``nothing'', the printout is misleading. In fact |
|
1210 the @{ML HOL_basic_ss} is set up so that it can solve goals of the |
|
1211 form @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]}; |
|
1212 and also resolve with assumptions. For example: |
|
1213 *} |
|
1214 |
|
1215 lemma |
|
1216 "True" and "t = t" and "t \<equiv> t" and "False \<Longrightarrow> Foo" and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A" |
|
1217 apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *}) |
|
1218 done |
|
1219 |
|
1220 text {* |
|
1221 This behaviour is not because of simplification rules, but because of how the subgoaler, solver |
|
1222 and looper are set up. @{ML HOL_basic_ss} is usually a good start to build your |
|
1223 own simpsets, because of the low danger of causing loops via interacting simplification |
|
1224 rules. |
|
1225 |
|
1226 The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing |
|
1227 already many useful simplification and congruence rules for the logical |
|
1228 connectives in HOL. |
|
1229 |
|
1230 @{ML_response_fake [display,gray] |
|
1231 "print_ss @{context} HOL_ss" |
|
1232 "Simplification rules: |
|
1233 Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V |
|
1234 HOL.the_eq_trivial: THE x. x = y \<equiv> y |
|
1235 HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y |
|
1236 \<dots> |
|
1237 Congruence rules: |
|
1238 HOL.simp_implies: \<dots> |
|
1239 \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q') |
|
1240 op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q' |
|
1241 Simproc patterns: |
|
1242 \<dots>"} |
|
1243 |
|
1022 |
1244 |
1023 @{ML ObjectLogic.rulify_tac} |
1245 The simplifier is often used to unfold definitions in a proof. For this the |
1246 simplifier provides the tactic @{ML rewrite_goals_tac}. Suppose for example the |
|
1247 definition |
|
1248 *} |
|
1249 |
|
1250 definition "MyTrue \<equiv> True" |
|
1251 |
|
1252 lemma shows "MyTrue \<Longrightarrow> True \<and> True" |
|
1253 apply(rule conjI) |
|
1254 apply(tactic {* rewrite_goals_tac [@{thm MyTrue_def}] *}) |
|
1255 txt{* then the tactic produces the goal state |
|
1256 |
|
1257 \begin{minipage}{\textwidth} |
|
1258 @{subgoals [display]} |
|
1259 \end{minipage} *} |
|
1260 (*<*)oops(*>*) |
|
1261 |
|
1262 text {* |
|
1263 As you can see, the tactic unfolds the definitions in all subgoals. |
|
1264 *} |
|
1265 |
|
1266 |
|
1267 text_raw {* |
|
1268 \begin{figure}[tp] |
|
1269 \begin{isabelle} *} |
|
1270 types prm = "(nat \<times> nat) list" |
|
1271 consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet> _" [80,80] 80) |
|
1272 |
|
1273 primrec (perm_nat) |
|
1274 "[]\<bullet>c = c" |
|
1275 "(ab#pi)\<bullet>c = (if (pi\<bullet>c)=fst ab then snd ab |
|
1276 else (if (pi\<bullet>c)=snd ab then fst ab else (pi\<bullet>c)))" |
|
1277 |
|
1278 primrec (perm_prod) |
|
1279 "pi\<bullet>(x, y) = (pi\<bullet>x, pi\<bullet>y)" |
|
1280 |
|
1281 primrec (perm_list) |
|
1282 "pi\<bullet>[] = []" |
|
1283 "pi\<bullet>(x#xs) = (pi\<bullet>x)#(pi\<bullet>xs)" |
|
1284 |
|
1285 lemma perm_append[simp]: |
|
1286 fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm" |
|
1287 shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))" |
|
1288 by (induct pi\<^isub>1) (auto) |
|
1289 |
|
1290 lemma perm_eq[simp]: |
|
1291 fixes c::"nat" and pi::"prm" |
|
1292 shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)" |
|
1293 by (induct pi) (auto) |
|
1294 |
|
1295 lemma perm_rev[simp]: |
|
1296 fixes c::"nat" and pi::"prm" |
|
1297 shows "pi\<bullet>((rev pi)\<bullet>c) = c" |
|
1298 by (induct pi arbitrary: c) (auto) |
|
1299 |
|
1300 lemma perm_compose: |
|
1301 fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm" |
|
1302 shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)" |
|
1303 by (induct pi\<^isub>2) (auto) |
|
1304 text_raw {* |
|
1305 \end{isabelle}\medskip |
|
1306 \caption{A simple theory about permutations over @{typ nat}. The point is that the |
|
1307 lemma @{thm [source] perm_compose} cannot be directly added to the simplifier, as |
|
1308 it would cause the simplifier to loop. It can still be used as a simplification |
|
1309 rule if the permutation is sufficiently protected.\label{fig:perms} |
|
1310 (FIXME: Uses old primrec.)} |
|
1311 \end{figure} *} |
|
1312 |
|
1313 |
|
1314 text {* |
|
1315 The simplifier is often used in order to bring terms into a normal form. |
|
1316 Unfortunately, often the situation arises that the corresponding |
|
1317 simplification rules will cause the simplifier to run into an infinite |
|
1318 loop. Consider for example the simple theory about permutations over natural |
|
1319 numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to |
|
1320 push permutations as far inside as possible, where they might disappear by |
|
1321 Lemma @{thm [source] perm_rev}. However, to fully normalise all instances, |
|
1322 it would be desirable to add also the lemma @{thm [source] perm_compose} to |
|
1323 the simplifier for pushing permutations over other permutations. Unfortunately, |
|
1324 the right-hand side of this lemma is again an instance of the left-hand side |
|
1325 and so causes an infinite loop. There seems to be no easy way to reformulate |
|
1326 this rule and so one ends up with clunky proofs like: |
|
1327 *} |
|
1328 |
|
1329 lemma |
|
1330 fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm" |
|
1331 shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)" |
|
1332 apply(simp) |
|
1333 apply(rule trans) |
|
1334 apply(rule perm_compose) |
|
1335 apply(simp) |
|
1336 done |
|
1337 |
|
1338 text {* |
|
1339 It is however possible to create a single simplifier tactic that solves |
|
1340 such proofs. The trick is to introduce an auxiliary constant for permutations |
|
1341 and split the simplification into two phases (below actually three). Let us |
|
1342 assume the auxiliary constant is |
|
1343 *} |
|
1344 |
|
1345 definition |
|
1346 perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80) |
|
1347 where |
|
1348 "pi \<bullet>aux c \<equiv> pi \<bullet> c" |
|
1349 |
|
1350 text {* Now the two lemmas *} |
|
1351 |
|
1352 lemma perm_aux_expand: |
|
1353 fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm" |
|
1354 shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)" |
|
1355 unfolding perm_aux_def by (rule refl) |
|
1356 |
|
1357 lemma perm_compose_aux: |
|
1358 fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm" |
|
1359 shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)" |
|
1360 unfolding perm_aux_def by (rule perm_compose) |
|
1361 |
|
1362 text {* |
|
1363 are simple consequences of the definition and @{thm [source] perm_compose}. |
|
1364 More importantly, the lemma @{thm [source] perm_compose_aux} can be safely |
|
1365 added to the simplifier, because now the right-hand side is no longer an instance |
|
1366 of the left-hand side. In a sense it freezes all redexes of permutation compositions |
|
1367 after one step. In this way, we can split simplification of permutations |
|
1368 into three phases without the user noticing anything about the auxiliary |
|
1369 constant. We first freeze any instance of permutation compositions in the term using |
|
1370 lemma @{thm [source] "perm_aux_expand"} (Line 9); |
|
1371 then simplify all other permutations, including pushing permutations over |
|
1372 other permutations by rule @{thm [source] perm_compose_aux} (Line 10); and |
|
1373 finally ``unfreeze'' all instances of permutation compositions by unfolding |
|
1374 the definition of the auxiliary constant. |
|
1375 *} |
|
1376 |
|
1377 ML %linenosgray{*val perm_simp_tac = |
|
1378 let |
|
1379 val thms1 = [@{thm perm_aux_expand}] |
|
1380 val thms2 = [@{thm perm_append}, @{thm perm_eq}, @{thm perm_rev}, |
|
1381 @{thm perm_compose_aux}] @ @{thms perm_prod.simps} @ |
|
1382 @{thms perm_list.simps} @ @{thms perm_nat.simps} |
|
1383 val thms3 = [@{thm perm_aux_def}] |
|
1384 in |
|
1385 simp_tac (HOL_basic_ss addsimps thms1) |
|
1386 THEN' simp_tac (HOL_basic_ss addsimps thms2) |
|
1387 THEN' simp_tac (HOL_basic_ss addsimps thms3) |
|
1388 end*} |
|
1389 |
|
1390 text {* |
|
1391 For all three phases we have to build simpsets adding specific lemmas. Since it is sufficient |
|
1392 for our purposes here, we can add these lemmas to @{ML HOL_basic_ss} in order to obtain |
|
1393 the desired results. Now we can solve the following lemma |
|
1394 *} |
|
1395 |
|
1396 lemma |
|
1397 fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm" |
|
1398 shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)" |
|
1399 apply(tactic {* perm_simp_tac 1 *}) |
|
1400 done |
|
1401 |
|
1402 |
|
1403 text {* |
|
1404 in one step. This tactic can deal with most instances of normalising permutations; |
|
1405 in order to solve all cases we have to deal with corner-cases such as the |
|
1406 lemma being an exact instance of the permutation composition lemma. This can |
|
1407 often be done more easily by implementing a simproc or a conversion. Both will be |
|
1408 explained in the next two sections. |
|
1409 |
|
1410 (FIXME: Is it interesting to say something about @{term "op =simp=>"}?) |
|
1411 |
|
1412 (FIXME: What are the second components of the congruence rules---something to |
|
1413 do with weak congruence constants?) |
|
1414 |
|
1415 (FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?) |
|
1416 |
|
1417 (FIXME: @{ML ObjectLogic.full_atomize_tac}, |
|
1418 @{ML ObjectLogic.rulify_tac}) |
|
1024 |
1419 |
1025 *} |
1420 *} |
1026 |
1421 |
1027 section {* Simprocs *} |
1422 section {* Simprocs *} |
1028 |
1423 |
1347 @{ML_response_fake [display,gray] |
1742 @{ML_response_fake [display,gray] |
1348 "Conv.no_conv @{cterm True}" |
1743 "Conv.no_conv @{cterm True}" |
1349 "*** Exception- CTERM (\"no conversion\", []) raised"} |
1744 "*** Exception- CTERM (\"no conversion\", []) raised"} |
1350 |
1745 |
1351 A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it |
1746 A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it |
1352 produces an equation between a term and its beta-normal form. For example |
1747 produces a meta-equation between a term and its beta-normal form. For example |
1353 |
1748 |
1354 @{ML_response_fake [display,gray] |
1749 @{ML_response_fake [display,gray] |
1355 "let |
1750 "let |
1356 val add = @{cterm \"\<lambda>x y. x + (y::nat)\"} |
1751 val add = @{cterm \"\<lambda>x y. x + (y::nat)\"} |
1357 val two = @{cterm \"2::nat\"} |
1752 val two = @{cterm \"2::nat\"} |
1360 Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten) |
1755 Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten) |
1361 end" |
1756 end" |
1362 "((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"} |
1757 "((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"} |
1363 |
1758 |
1364 Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"}, |
1759 Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"}, |
1365 since the pretty printer for @{ML_type cterm}s already beta-normalises terms. |
1760 since the pretty-printer for @{ML_type cterm}s already beta-normalises terms. |
1366 But because of the way we constructed the term (using the function |
1761 But because of the way we constructed the term (using the function |
1367 @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s), |
1762 @{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s), |
1368 we can be sure the left-hand side must contain beta-redexes. Indeed |
1763 we can be sure the left-hand side must contain beta-redexes. Indeed |
1369 if we obtain the ``raw'' representation of the produced theorem, we |
1764 if we obtain the ``raw'' representation of the produced theorem, we |
1370 can see the difference: |
1765 can see the difference: |
1372 @{ML_response [display,gray] |
1767 @{ML_response [display,gray] |
1373 "let |
1768 "let |
1374 val add = @{cterm \"\<lambda>x y. x + (y::nat)\"} |
1769 val add = @{cterm \"\<lambda>x y. x + (y::nat)\"} |
1375 val two = @{cterm \"2::nat\"} |
1770 val two = @{cterm \"2::nat\"} |
1376 val ten = @{cterm \"10::nat\"} |
1771 val ten = @{cterm \"10::nat\"} |
1772 val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten) |
|
1377 in |
1773 in |
1378 #prop (rep_thm |
1774 #prop (rep_thm thm) |
1379 (Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten))) |
|
1380 end" |
1775 end" |
1381 "Const (\"==\",\<dots>) $ |
1776 "Const (\"==\",\<dots>) $ |
1382 (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $ |
1777 (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $ |
1383 (Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"} |
1778 (Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"} |
1384 |
1779 |
1399 Again, we actually see as output only the fully normalised term |
1794 Again, we actually see as output only the fully normalised term |
1400 @{text "\<lambda>y. 2 + y"}. |
1795 @{text "\<lambda>y. 2 + y"}. |
1401 |
1796 |
1402 The main point of conversions is that they can be used for rewriting |
1797 The main point of conversions is that they can be used for rewriting |
1403 @{ML_type cterm}s. To do this you can use the function @{ML |
1798 @{ML_type cterm}s. To do this you can use the function @{ML |
1404 "Conv.rewr_conv"} which expects a meta-equation as an argument. Suppose we |
1799 "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we |
1405 want to rewrite a @{ML_type cterm} according to the (meta)equation: |
1800 want to rewrite a @{ML_type cterm} according to the meta-equation: |
1406 *} |
1801 *} |
1407 |
1802 |
1408 lemma true_conj1: "True \<and> P \<equiv> P" by simp |
1803 lemma true_conj1: "True \<and> P \<equiv> P" by simp |
1409 |
1804 |
1410 text {* |
1805 text {* |
1411 You can see how this function works with the example |
1806 You can see how this function works in the example rewriting |
1412 @{term "True \<and> (Foo \<longrightarrow> Bar)"}: |
1807 the @{ML_type cterm} @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}. |
1413 |
1808 |
1414 @{ML_response_fake [display,gray] |
1809 @{ML_response_fake [display,gray] |
1415 "let |
1810 "let |
1416 val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"} |
1811 val ctrm = @{cterm \"True \<and> (Foo \<longrightarrow> Bar)\"} |
1417 in |
1812 in |
1418 Conv.rewr_conv @{thm true_conj1} ctrm |
1813 Conv.rewr_conv @{thm true_conj1} ctrm |
1419 end" |
1814 end" |
1420 "True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"} |
1815 "True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"} |
1421 |
1816 |
1422 Note, however, that the function @{ML Conv.rewr_conv} only rewrites the |
1817 Note, however, that the function @{ML Conv.rewr_conv} only rewrites the |
1423 outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match the |
1818 outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match |
1819 exactly the |
|
1424 left-hand side of the theorem, then @{ML Conv.rewr_conv} raises |
1820 left-hand side of the theorem, then @{ML Conv.rewr_conv} raises |
1425 the exception @{ML CTERM}. |
1821 the exception @{ML CTERM}. |
1426 |
1822 |
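  For example, the following application raises this exception, because
  @{term "P \<and> Q"} is not an instance of the left-hand side of
  @{thm [source] true_conj1}:

  @{ML_response_fake [display,gray]
  "Conv.rewr_conv @{thm true_conj1} @{cterm \"P \<and> Q\"}"
  "*** Exception- CTERM (\"rewr_conv\", \<dots>) raised"}
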
1427 This very primitive way of rewriting can be made more powerful by |
1823 This very primitive way of rewriting can be made more powerful by |
1428 combining several conversions into one. For this you can use conversion |
1824 combining several conversions into one. For this you can use conversion |
1456 end" |
1852 end" |
1457 "(True \<and> Q \<equiv> Q, P \<or> True \<and> Q \<equiv> P \<or> True \<and> Q)"} |
1853 "(True \<and> Q \<equiv> Q, P \<or> True \<and> Q \<equiv> P \<or> True \<and> Q)"} |
1458 |
1854 |
1459 Here the conversion of @{thm [source] true_conj1} only applies |
1855 Here the conversion of @{thm [source] true_conj1} only applies |
1460 in the first case, but fails in the second. The whole conversion |
1856 in the first case, but fails in the second. The whole conversion |
1461 does not fail, however, because the combinator @{ML else_conv} will then |
1857 does not fail, however, because the combinator @{ML Conv.else_conv} will then |
1462 try out @{ML Conv.all_conv}, which always succeeds. |
1858 try out @{ML Conv.all_conv}, which always succeeds. |
1463 |
1859 |
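  Since this combination is so common, there is the abbreviation
  @{ML Conv.try_conv}, which composes a conversion with @{ML Conv.all_conv}
  in exactly this way. For example (a small sketch):

  @{ML_response_fake [display,gray]
  "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
  "True \<or> P \<equiv> True \<or> P"}
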
1464 Apart from the function @{ML beta_conversion in Thm}, which is able to fully |
1860 Apart from the function @{ML beta_conversion in Thm}, which is able to fully |
1465 beta-normalise a term, the conversions so far are restricted in that they |
1861 beta-normalise a term, the conversions so far are restricted in that they |
1466 only apply to the outer-most level of a @{ML_type cterm}. In what follows we |
1862 only apply to the outer-most level of a @{ML_type cterm}. In what follows we |
1477 Conv.arg_conv conv ctrm |
1873 Conv.arg_conv conv ctrm |
1478 end" |
1874 end" |
1479 "P \<or> (True \<and> Q) \<equiv> P \<or> Q"} |
1875 "P \<or> (True \<and> Q) \<equiv> P \<or> Q"} |
1480 |
1876 |
1481 The reason for this behaviour is that @{text "(op \<or>)"} expects two |
1877 The reason for this behaviour is that @{text "(op \<or>)"} expects two |
1482 arguments. So the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The |
1878 arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The |
1483 conversion is then applied to @{text "t2"} which in the example above |
1879 conversion is then applied to @{text "t2"} which in the example above |
1484 stands for @{term "True \<and> Q"}. |
1880 stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies |
1881 the conversion to the function part of an application. |
|
1485 |
1882 |
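  For example, to apply the conversion to the first argument @{text "t1"}
  instead, you can compose @{ML Conv.fun_conv} and @{ML Conv.arg_conv}
  (a small sketch):

  @{ML_response_fake [display,gray]
  "let
  val conv = Conv.rewr_conv @{thm true_conj1}
  val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
in
  Conv.fun_conv (Conv.arg_conv conv) ctrm
end"
  "(True \<and> P) \<or> Q \<equiv> P \<or> Q"}
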
1486 The function @{ML Conv.abs_conv} applies a conversion under an abstraction. |
1883 The function @{ML Conv.abs_conv} applies a conversion under an abstraction. |
1487 For example: |
1884 For example: |
1488 |
1885 |
1489 @{ML_response_fake [display,gray] |
1886 @{ML_response_fake [display,gray] |
1493 in |
1890 in |
1494 Conv.abs_conv conv @{context} ctrm |
1891 Conv.abs_conv conv @{context} ctrm |
1495 end" |
1892 end" |
1496 "\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"} |
1893 "\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"} |
1497 |
1894 |
1498 The conversion that goes under an application is |
1895 Note that this conversion needs a context as an argument. The conversion that |
1499 @{ML Conv.combination_conv}. It expects two conversions as arguments, |
1896 goes under an application is @{ML Conv.combination_conv}. It expects two conversions |
1500 each of which is applied to the corresponding ``branch'' of the application. |
1897 as arguments, each of which is applied to the corresponding ``branch'' of the |
1501 |
1898 application. |
1502 We can now apply all these functions in a |
1899 |
1503 conversion that recursively descends a term and applies a conversion in all |
1900 We can now apply all these functions in a conversion that recursively |
1504 possible positions. |
1901 descends a term and applies a ``@{thm [source] true_conj1}''-conversion |
1902 in all possible positions. |
|
1505 *} |
1903 *} |
1506 |
1904 |
1507 ML %linenosgray{*fun all_true1_conv ctxt ctrm = |
1905 ML %linenosgray{*fun all_true1_conv ctxt ctrm = |
1508 case (Thm.term_of ctrm) of |
1906 case (Thm.term_of ctrm) of |
1509 @{term "op \<and>"} $ @{term True} $ _ => |
1907 @{term "op \<and>"} $ @{term True} $ _ => |
1510 (Conv.arg_conv (all_true1_conv ctxt) then_conv |
1908 (Conv.arg_conv (all_true1_conv ctxt) then_conv |
1511 Conv.rewr_conv @{thm true_conj1}) ctrm |
1909 Conv.rewr_conv @{thm true_conj1}) ctrm |
1512 | _ $ _ => Conv.combination_conv |
1910 | _ $ _ => Conv.combination_conv |
1513 (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm |
1911 (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm |
1514 | Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm |
1912 | Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm |
1515 | _ => Conv.all_conv ctrm*} |
1913 | _ => Conv.all_conv ctrm*} |
1516 |
1914 |
1517 text {* |
1915 text {* |
1518 This functions descends under applications (Line 6 and 7) and abstractions |
1916 This function ``fires'' if the term is of the form @{text "True \<and> \<dots>"}; |
1519 (Line 8); and ``fires'' if the outer-most constant is an @{text "True \<and> \<dots>"} |
1917 it descends under applications (Line 6 and 7) and abstractions |
1520 (Lines 3-5); otherwise it leaves the term unchanged (Line 9). In Line 2 |
1918 (Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2 |
1521 we need to transform the @{ML_type cterm} into a @{ML_type term} in order |
1919 we need to transform the @{ML_type cterm} into a @{ML_type term} in order |
1522 to be able to pattern-match the term. To see this |
1920 to be able to pattern-match the term. To see this |
1523 conversion in action, consider the following example |
1921 conversion in action, consider the following example: |
1524 |
1922 |
1525 @{ML_response_fake [display,gray] |
1923 @{ML_response_fake [display,gray] |
1526 "let |
1924 "let |
1527 val ctxt = @{context} |
1925 val ctxt = @{context} |
1528 val ctrm = @{cterm \"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x\"} |
1926 val ctrm = @{cterm \"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x\"} |
1529 in |
1927 in |
1530 all_true1_conv ctxt ctrm |
1928 all_true1_conv ctxt ctrm |
1531 end" |
1929 end" |
1532 "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"} |
1930 "distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"} |
1533 |
1931 |
1534 where the conversion is applied ``deep'' inside the term. |
|
1535 |
|
1536 To see how much control you have about rewriting by using conversions, let us |
1932 To see how much control you have about rewriting by using conversions, let us |
1537 make the task a bit more complicated by rewriting according to the rule |
1933 make the task a bit more complicated by rewriting according to the rule |
1538 @{text true_conj1}, but only in the first arguments of @{term If}s. Then |
1934 @{text true_conj1}, but only in the first arguments of @{term If}s. Then |
1539 the conversion should be as follows. |
1935 the conversion should be as follows. |
1540 *} |
1936 *} |
1542 ML{*fun if_true1_conv ctxt ctrm = |
1938 ML{*fun if_true1_conv ctxt ctrm = |
1543 case Thm.term_of ctrm of |
1939 case Thm.term_of ctrm of |
1544 Const (@{const_name If}, _) $ _ => |
1940 Const (@{const_name If}, _) $ _ => |
1545 Conv.arg_conv (all_true1_conv ctxt) ctrm |
1941 Conv.arg_conv (all_true1_conv ctxt) ctrm |
1546 | _ $ _ => Conv.combination_conv |
1942 | _ $ _ => Conv.combination_conv |
1547 (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm |
1943 (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm |
1548 | Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm |
1944 | Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm |
1549 | _ => Conv.all_conv ctrm *} |
1945 | _ => Conv.all_conv ctrm *} |
1550 |
1946 |
1551 text {* |
1947 text {* |
1552 Here is an example for this conversion: |
1948 Here is an example for this conversion: |
1553 |
1949 |
1554 @{ML_response_fake [display,gray] |
1950 @{ML_response_fake [display,gray] |
1555 "let |
1951 "let |
1556 val ctxt = @{context} |
1952 val ctxt = @{context} |
1557 val ctrm = |
1953 val ctrm = |
1558 @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"} |
1954 @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"} |
1559 in |
1955 in |
1560 if_true1_conv ctxt ctrm |
1956 if_true1_conv ctxt ctrm |
1561 end" |
1957 end" |
1562 "if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False |
1958 "if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False |
1563 \<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"} |
1959 \<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"} |
1579 "Conv.fconv_rule (all_true1_conv @{context}) @{thm foo_test}" |
1975 "Conv.fconv_rule (all_true1_conv @{context}) @{thm foo_test}" |
1580 "?P \<or> \<not> ?P"} |
1976 "?P \<or> \<not> ?P"} |
1581 |
1977 |
1582 Finally, conversions can also be turned into tactics and then applied to |
1978 Finally, conversions can also be turned into tactics and then applied to |
1583 goal states. This can be done with the help of the function @{ML CONVERSION}, |
1979 goal states. This can be done with the help of the function @{ML CONVERSION}, |
1584 and also some predefined conversion combinator which traverse a goal |
1980 and also some predefined conversion combinators that traverse a goal |
1585 state. The combinators for the goal state are: @{ML Conv.params_conv} for |
1981 state. The combinators for the goal state are: @{ML Conv.params_conv} for |
1586 going under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow> |
1982 converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow> |
1587 Q"}); the function @{ML Conv.prems_conv} for applying the conversion to all |
1983 Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all |
1588 premises of a goal, and @{ML Conv.concl_conv} for applying the conversion to |
1984 premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to |
1589 the conclusion of a goal. |
1985 the conclusion of a goal. |
1590 |
1986 |
1591 Assume we want to apply @{ML all_true1_conv} only in the conclusion |
1987 Assume we want to apply @{ML all_true1_conv} only in the conclusion |
1592 of the goal, and @{ML if_true1_conv} should only be applied to the premises. |
1988 of the goal, and @{ML if_true1_conv} should only apply to the premises. |
1593 Here is a tactic doing exactly that: |
1989 Here is a tactic doing exactly that: |
1594 *} |
1990 *} |
1595 |
1991 |
1596 ML{*val true1_tac = CSUBGOAL (fn (goal, i) => |
1992 ML{*val true1_tac = CSUBGOAL (fn (goal, i) => |
1597 let |
1993 let |
1598 val ctxt = ProofContext.init (Thm.theory_of_cterm goal) |
1994 val ctxt = ProofContext.init (Thm.theory_of_cterm goal) |
1599 in |
1995 in |
1600 CONVERSION |
1996 CONVERSION |
1601 (Conv.params_conv ~1 (fn ctxt => |
1997 (Conv.params_conv ~1 (fn ctxt => |
1602 (Conv.prems_conv ~1 (if_true1_conv ctxt)) then_conv |
1998 (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv |
1603 Conv.concl_conv ~1 (all_true1_conv ctxt)) ctxt) i |
1999 Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt) i |
1604 end)*} |
2000 end)*} |
1605 |
2001 |
1606 text {* |
2002 text {* |
1607 We call the conversions with the argument @{ML "~1"}. This is to |
2003 We call the conversions with the argument @{ML "~1"}. This is to |
1608 analyse all parameters, premises and conclusions. If we call them with |
2004 analyse all parameters, premises and conclusions. If we call them with |
1626 |
2022 |
1627 To sum up this section, conversions are not as powerful as the simplifier |
2023 To sum up this section, conversions are not as powerful as the simplifier |
1628 and simprocs; the advantage of conversions, however, is that you never have |
2024 and simprocs; the advantage of conversions, however, is that you never have |
1629 to worry about non-termination. |
2025 to worry about non-termination. |
1630 |
2026 |
2027 (FIXME: explain @{ML Conv.try_conv}) |
|
2028 |
|
2029 \begin{exercise}\label{ex:addconversion} |
|
2030 Write a tactic that does the same as the simproc in exercise |
|
2031 \ref{ex:addsimproc}, but is based in conversions. That means replace terms |
|
2032 of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make |
|
2033 the same assumptions as in \ref{ex:addsimproc}. |
|
2034 \end{exercise} |
|
2035 |
|
2036 \begin{exercise} |
|
2037 Compare which way (either Exercise~\ref{ex:addsimproc} or \ref{ex:addconversion}) of |
|
2038 rewriting such terms is faster. For this you might have to construct quite |
|
2039 large terms. Also see Recipe \ref{rec:timing} for information about timing. |
|
2040 \end{exercise} |
|
2041 |
|
1631 \begin{readmore} |
2042 \begin{readmore} |
1632 See @{ML_file "Pure/conv.ML"} for more information about conversion combinators. |
2043 See @{ML_file "Pure/conv.ML"} for more information about conversion combinators. |
1633 Further conversions are defined in @{ML_file "Pure/thm.ML"}, |
2044 Further conversions are defined in @{ML_file "Pure/thm.ML"}, |
1634 @{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}. |
2045 @{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}. |
1635 \end{readmore} |
2046 \end{readmore} |
1636 *} |
2047 |
1637 |
2048 *} |
1638 |
2049 |
1639 |
2050 |
1640 section {* Structured Proofs *} |
2051 |
2052 |
|
2053 section {* Structured Proofs (TBD) *} |
|
1641 |
2054 |
1642 text {* TBD *} |
2055 text {* TBD *} |
1643 |
2056 |
1644 lemma True |
2057 lemma True |
1645 proof |
2058 proof |