diff -r 643b1e1d7d29 -r b1a458a03a8e ProgTutorial/Tactical.thy --- a/ProgTutorial/Tactical.thy Fri Oct 30 09:42:17 2009 +0100 +++ b/ProgTutorial/Tactical.thy Sat Oct 31 11:37:41 2009 +0100 @@ -23,7 +23,6 @@ tactic combinators. We also describe the simplifier, simprocs and conversions. *} - section {* Basics of Reasoning with Tactics*} text {* @@ -99,9 +98,13 @@ lookup of @{text [quotes] "disjE"}. Although theorems must have a unique name in the theorem database, the string can stand for a dynamically updatable theorem list. Also in this case we cannot be sure which theorem is applied. - These problems can be nicely prevented by using antiquotations, because then - the theorems are fixed statically at compile-time. - + These problems can be nicely prevented by using antiquotations + + @{ML_response_fake [gray,display] + "@{thm \"disjE\"}" + "\?P \ ?Q; ?P \ ?R; ?Q \ ?R\ \ ?R"} + + because then the theorems are fixed statically at compile-time. During the development of automatic proof procedures, you will often find it necessary to test a tactic on examples. This can be conveniently done with @@ -391,7 +394,7 @@ section {* Simple Tactics\label{sec:simpletacs} *} text {* - In this section we will introduce more simple tactics. The + In this section we will introduce more of the simple tactics in Isabelle. The first one is @{ML_ind print_tac in Tactical}, which is quite useful for low-level debugging of tactics. It just prints out a message and the current goal state. Unlike @{ML my_print_tac} shown earlier, it prints the goal state @@ -401,13 +404,12 @@ lemma shows "False \ True" apply(tactic {* print_tac "foo message" *}) -txt{*gives:\medskip - - \begin{minipage}{\textwidth}\small +txt{*gives: + \begin{isabelle} @{text "foo message"}\\[3mm] @{prop "False \ True"}\\ @{text " 1. False \ True"}\\ - \end{minipage} + \end{isabelle} *} (*<*)oops(*>*) @@ -430,7 +432,7 @@ This tactic works however only if the quick-and-dirty mode of Isabelle is switched on. This is done automatically in the ``interactive mode'' of Isabelle, but must be done manually in the ``batch mode'' - with for example the assignment + with the assignment *} ML{*quick_and_dirty := true*} @@ -518,11 +520,10 @@ lemma shows "True \ False" apply(tactic {* cut_facts_tac [@{thm True_def}, @{thm False_def}] 1 *}) -txt{*produces the goal state\medskip - - \begin{minipage}{\textwidth} +txt{*produces the goal state + \begin{isabelle} @{subgoals [display]} - \end{minipage}*} + \end{isabelle}*} (*<*)oops(*>*) text {* @@ -593,7 +594,7 @@ \end{tabular} \end{quote} - The @{text params} and @{text schematics} stand or list of pairs where the + The @{text params} and @{text schematics} stand for list of pairs where the left-hand side of @{text ":="} is replaced by the right-hand side inside the tactic. Notice that in the actual output the variables @{term x} and @{term y} have a brown colour. Although they are parameters in the original goal, @@ -756,10 +757,16 @@ explain in the next section. But before that we will show how tactic application can be constrained. - Since rules are applied using higher-order unification, an automatic proof - procedure might become too fragile, if it just applies inference rules as - shown above. The reason is that a number of rules introduce schematic variables - into the goal state. Consider for example the proof + \begin{readmore} + The functions @{ML SUBGOAL} and @{ML CSUBGOAL} are defined in @{ML_file "Pure/tactical.ML"}. 
+ \end{readmore} + + + Since @{ML_ind rtac in Tactic} and the like use higher-order unification, an + automatic proof procedure based on them might become too fragile, if it just + applies theorems as shown above. The reason is that a number of theorems + introduce schematic variables into the goal state. Consider for example the + proof *} lemma @@ -771,20 +778,20 @@ (*<*)oops(*>*) text {* - where the application of rule @{text bspec} generates two subgoals involving the - schematic variable @{text "?x"}. Now, if you are not careful, tactics + where the application of theorem @{text bspec} generates two subgoals involving the + new schematic variable @{text "?x"}. Now, if you are not careful, tactics applied to the first subgoal might instantiate this schematic variable in such a way that the second subgoal becomes unprovable. If it is clear what the @{text "?x"} should be, then this situation can be avoided by introducing a more - constrained version of the @{text bspec}-rule. One way to give such + constrained version of the @{text bspec}-theorem. One way to give such constraints is by pre-instantiating theorems with other theorems. The function @{ML_ind RS in Drule}, for example, does this. @{ML_response_fake [display,gray] "@{thm disjI1} RS @{thm conjI}" "\?P1; ?Q\ \ (?P1 \ ?Q1) \ ?Q"} - In this example it instantiates the first premise of the @{text conjI}-rule - with the rule @{text disjI1}. If the instantiation is impossible, as in the + In this example it instantiates the first premise of the @{text conjI}-theorem + with the theorem @{text disjI1}. If the instantiation is impossible, as in the case of @{ML_response_fake_both [display,gray] @@ -829,7 +836,8 @@ Q \ (P \ Q) \ Qa]"} \begin{readmore} - The combinators for instantiating theorems are defined in @{ML_file "Pure/drule.ML"}. + The combinators for instantiating theorems with other theorems are + defined in @{ML_file "Pure/drule.ML"}. \end{readmore} Higher-order unification is also often in the way when applying certain @@ -861,7 +869,7 @@ intended one. While we can use \isacommand{back} to interactively find the intended instantiation, this is not an option for an automatic proof procedure. Fortunately, the tactic @{ML_ind cong_tac in Cong_Tac} helps - with applying congruence rules and finding the intended instantiation. + with applying congruence theorems and finding the intended instantiation. For example *} @@ -875,7 +883,7 @@ (*<*)oops(*>*) text {* - However, frequently it is necessary to explicitly match a theorem against a proof + However, frequently it is necessary to explicitly match a theorem against a goal state and in doing so construct manually an appropriate instantiation. Imagine you have the theorem *} @@ -928,9 +936,15 @@ text {* We obtain the intended subgoals and also the parameters are correctly introduced in both of them. Such manual instantiations are quite frequently - necessary in order to appropriately constrain the application of inference - rules. Otherwise one would end up with ``wild'' higher-order unification - problems that make automatic proofs fail. + necessary in order to appropriately constrain the application of theorems. + Otherwise one can end up with ``wild'' higher-order unification problems + that make automatic proofs fail. + + \begin{readmore} + Functions for matching @{ML_type cterm}s are defined in @{ML_file "Pure/thm.ML"}. + Functions for instantiating schematic variables in theorems are defined + in @{ML_file "Pure/drule.ML"}. 
+ \end{readmore} *} section {* Tactic Combinators *} @@ -950,9 +964,9 @@ (*<*)oops(*>*) text {* - If you want to avoid the hard-coded subgoal addressing, then, as - seen earlier, you can use - the ``primed'' version of @{ML THEN}. For example: + If you want to avoid the hard-coded subgoal addressing in each component, + then, as seen earlier, you can use the ``primed'' version of @{ML THEN}. + For example: *} lemma @@ -965,7 +979,7 @@ text {* Here you have to specify the subgoal of interest only once and - it is consistently applied to the component tactics. + it is consistently applied to the component. For most tactic combinators such a ``primed'' version exists and in what follows we will usually prefer it over the ``unprimed'' one. @@ -1003,7 +1017,7 @@ ML{*val orelse_xmp_tac = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*} text {* - will first try out whether rule @{text disjI} applies and in case of failure + will first try out whether theorem @{text disjI} applies and in case of failure will try @{text conjI}. To see this consider the proof *} @@ -1012,10 +1026,9 @@ apply(tactic {* orelse_xmp_tac 2 *}) apply(tactic {* orelse_xmp_tac 1 *}) txt {* which results in the goal state - - \begin{minipage}{\textwidth} + \begin{isabelle} @{subgoals [display]} - \end{minipage} + \end{isabelle} *} (*<*)oops(*>*) @@ -1031,7 +1044,7 @@ text {* Since we like to mimic the behaviour of @{ML select_tac} as closely as possible, we must include @{ML all_tac} at the end of the list, otherwise the tactic will - fail if no rule applies (we also have to wrap @{ML all_tac} using the + fail if no theorem applies (we also have to wrap @{ML all_tac} using the @{ML K}-combinator, because it does not take a subgoal number as argument). You can test the tactic on the same goals: *} @@ -1063,7 +1076,7 @@ text {* Remember that we chose to implement @{ML select_tac'} so that it - always succeeds by adding @{ML all_tac} at the end of the tactic + always succeeds by fact of having @{ML all_tac} at the end of the tactic list. The same can be achieved with the tactic combinator @{ML_ind TRY in Tactical}. For example: *} @@ -1089,18 +1102,18 @@ (*<*)oops(*>*) text {* - In this case no rule applies, but because of @{ML TRY} or the inclusion of @{ML all_tac} - the tactics do not fail. The problem with this is that for the user there is little - chance to see whether or not progress in the proof has been made. By convention - therefore, tactics visible to the user should either change something or fail. - + In this case no theorem applies. But because we wrapped the tactic in a @{ML + TRY}, it does not fail anymore. The problem is that for the user there is + little chance to see whether progress in the proof has been made, or not. By + convention therefore, tactics visible to the user should either change + something or fail. + To comply with this convention, we could simply delete the @{ML "K all_tac"} - from the end of the theorem list. As a result @{ML select_tac'} would only - succeed on goals where it can make progress. But for the sake of argument, - let us suppose that this deletion is \emph{not} an option. In such cases, you can - use the combinator @{ML_ind CHANGED in Tactical} to make sure the subgoal has been changed - by the tactic. Because now - + in @{ML select_tac'} or delete @{ML TRY} from @{ML select_tac''}. But for + the sake of argument, let us suppose that this deletion is \emph{not} an + option. 
In such cases, you can use the combinator @{ML_ind CHANGED in + Tactical} to make sure the subgoal has been changed by the tactic. Because + now *} lemma @@ -1115,9 +1128,9 @@ text {* - We can further extend @{ML select_tac'} so that it not just applies to the topmost + We can further extend the @{ML select_tac}s so that they not just apply to the topmost connective, but also to the ones immediately ``underneath'', i.e.~analyse the goal - completely. For this you can use the tactic combinator @{ML_ind REPEAT in Tactical}. As an example + completely. For this you can use the tactic combinator @{ML_ind REPEAT in Tactical}. As an example suppose the following tactic *} @@ -1129,18 +1142,17 @@ shows "((\A) \ (\x. B x)) \ (C \ D)" apply(tactic {* repeat_xmp_tac *}) txt{* produces - - \begin{minipage}{\textwidth} + \begin{isabelle} @{subgoals [display]} - \end{minipage} *} + \end{isabelle} *} (*<*)oops(*>*) text {* - Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED}, + Here it is crucial that @{ML select_tac'} is prefixed with @{ML CHANGED}, because otherwise @{ML REPEAT} runs into an infinite loop (it applies the - tactic as long as it succeeds). The function - @{ML_ind REPEAT1 in Tactical} is similar, but runs the tactic at least once (failing if - this is not possible). + tactic as long as it succeeds). The function @{ML_ind REPEAT1 in Tactical} + is similar, but runs the tactic at least once (failing if this is not + possible). If you are after the ``primed'' version of @{ML repeat_xmp_tac}, then you can implement it as @@ -1156,13 +1168,13 @@ that goals 2 and 3 are not analysed. This is because the tactic is applied repeatedly only to the first subgoal. To analyse also all resulting subgoals, you can use the tactic combinator @{ML_ind REPEAT_ALL_NEW in Tactical}. - Suppose the tactic + Supposing the tactic *} ML{*val repeat_all_new_xmp_tac = REPEAT_ALL_NEW (CHANGED o select_tac')*} text {* - you see that the following goal + you can see that the following goal *} lemma @@ -1185,11 +1197,10 @@ lemma shows "\P1 \ Q1; P2 \ Q2\ \ R" apply(tactic {* etac @{thm disjE} 1 *}) -txt{* applies the rule to the first assumption yielding the goal state:\smallskip - - \begin{minipage}{\textwidth} +txt{* applies the rule to the first assumption yielding the goal state: + \begin{isabelle} @{subgoals [display]} - \end{minipage}\smallskip + \end{isabelle} After typing *} @@ -1200,11 +1211,10 @@ apply(tactic {* etac @{thm disjE} 1 *}) (*>*) back -txt{* the rule now applies to the second assumption.\smallskip - - \begin{minipage}{\textwidth} +txt{* the rule now applies to the second assumption. + \begin{isabelle} @{subgoals [display]} - \end{minipage} *} + \end{isabelle} *} (*<*)oops(*>*) text {* @@ -1231,36 +1241,9 @@ @{text "*** At command \"back\"."} \end{isabelle} - Recall that we implemented @{ML select_tac'} on Page~\pageref{tac:selectprime} specifically - so that it always succeeds. We achieved this by adding at the end the tactic @{ML all_tac}. - We can achieve this also by using the combinator @{ML TRY}. Suppose, for example the - tactic -*} - -ML{*val select_tac'' = FIRST' [rtac @{thm conjI}, rtac @{thm impI}, - rtac @{thm notI}, rtac @{thm allI}]*} - -text {* - which will fail if none of the rules applies. However, if you prefix it as follows -*} - -ML{*val select_tac''' = TRY o select_tac''*} - -text {* - then the tactic @{ML select_tac''} will be tried out and any failure is harnessed. 
- We again have to use the construction with \mbox{@{text "TRY o ..."}} since there is - no primed version of @{ML_ind TRY}. The tactic combinator @{ML_ind TRYALL} will try out - a tactic on all subgoals. For example the tactic -*} - -ML{*val triv_tac = TRYALL (rtac @{thm TrueI} ORELSE' etac @{thm FalseE})*} - -text {* - will solve all trivial subgoals involving @{term True} or @{term "False"}. - - (FIXME: say something about @{ML_ind COND} and COND') - - (FIXME: PARALLEL-CHOICE PARALLEL-GOALS) + + \footnote{\bf FIXME: say something about @{ML_ind COND} and COND'} + \footnote{\bf FIXME: PARALLEL-CHOICE PARALLEL-GOALS} \begin{readmore} Most tactic combinators described in this section are defined in @{ML_file "Pure/tactical.ML"}. @@ -1271,9 +1254,11 @@ text {* \begin{exercise}\label{ex:dyckhoff} Dyckhoff presents in \cite{Dyckhoff92} inference rules for a sequent - calculus, called G4ip, in which no contraction rule is needed in order to be - complete. As a result the rules applied in any order give a simple decision - procedure for propositional intuitionistic logic. His rules are + calculus, called G4ip, for intuitionistic propositional logic. The + interesting feature of this calculus is that no contraction rule is needed + in order to be complete. As a result applying the rules exhaustively leads + to simple decision procedure for propositional intuitionistic logic. The task + is to implement this decision procedure as a tactic. His rules are \begin{center} \def\arraystretch{2.3} @@ -1313,39 +1298,42 @@ \end{tabular} \end{center} - Implement a tactic that explores all possibilites of applying these rules to - a propositional formula until a goal state is reached in which all subgoals - are discharged. Note that in Isabelle the left-rules need to be implemented - as elimination rules. You need to prove separate lemmas corresponding to - $\longrightarrow_{L_{1..4}}$. In order to explore all possibilities of applying - the rules, use the tactic combinator @{ML_ind DEPTH_SOLVE in Search}, which searches - for a state in which all subgoals are solved. Add also rules for equality and - run your tactic on the de Bruijn formulae discussed in Exercise~\ref{ex:debruijn}. + Note that in Isabelle the left-rules need to be implemented as elimination + rules. You need to prove separate lemmas corresponding to + $\longrightarrow_{L_{1..4}}$. The tactic should explore all possibilites of + applying these rules to a propositional formula until a goal state is + reached in which all subgoals are discharged. For this you can use the tactic + combinator @{ML_ind DEPTH_SOLVE in Search}. + \end{exercise} + + \begin{exercise} + Add to the previous exercise also rules for equality and run your tactic on + the de Bruijn formulae discussed in Exercise~\ref{ex:debruijn}. \end{exercise} *} section {* Simplifier Tactics *} text {* - A lot of convenience in the reasoning with Isabelle derives from its - powerful simplifier. The power of the simplifier is a strength and a weakness at - the same time, because you can easily make the simplifier run into a loop and - in general its - behaviour can be difficult to predict. There is also a multitude - of options that you can configure to control the behaviour of the simplifier. - We describe some of them in this and the next section. - - There are the following five main tactics behind - the simplifier (in parentheses is their user-level counterpart): + A lot of convenience in reasoning with Isabelle derives from its + powerful simplifier. 
We will describe it in this section; however, due to its complexity
+ we can only scratch the surface.
+
+ The power of the simplifier is a strength and a weakness at the same time,
+ because you can easily make the simplifier run into a loop and in general
+ its behaviour can be difficult to predict. There is also a multitude of
+ options that you can configure to change the behaviour of the
+ simplifier. There are the following five main tactics behind the simplifier
+ (in parentheses are their user-level counterparts):
\begin{isabelle}
\begin{center}
\begin{tabular}{l@ {\hspace{2cm}}l}
- @{ML_ind simp_tac} & @{text "(simp (no_asm))"} \\
- @{ML_ind asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
- @{ML_ind full_simp_tac} & @{text "(simp (no_asm_use))"} \\
- @{ML_ind asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
- @{ML_ind asm_full_simp_tac} & @{text "(simp)"}
+ @{ML_ind simp_tac in Simplifier} & @{text "(simp (no_asm))"} \\
+ @{ML_ind asm_simp_tac in Simplifier} & @{text "(simp (no_asm_simp))"} \\
+ @{ML_ind full_simp_tac in Simplifier} & @{text "(simp (no_asm_use))"} \\
+ @{ML_ind asm_lr_simp_tac in Simplifier} & @{text "(simp (asm_lr))"} \\
+ @{ML_ind asm_full_simp_tac in Simplifier} & @{text "(simp)"}
\end{tabular}
\end{center}
\end{isabelle}
@@ -1374,21 +1362,19 @@
definition for a constant and this constant is not present in the goal state, you
can still safely apply the simplifier.
- (FIXME: show rewriting of cterms)
-
+ \footnote{\bf FIXME: show rewriting of cterms}
When using the simplifier, the crucial information you have to provide is
- the simpset. If this information is not handled with care, then the
- simplifier can easily run into a loop. Therefore a good rule of thumb is to
- use simpsets that are as minimal as possible. It might be surprising that a
- simpset is more complex than just a simple collection of theorems used as
- simplification rules. One reason for the complexity is that the simplifier
- must be able to rewrite inside terms and should also be able to rewrite
- according to rules that have preconditions.
-
-
- The rewriting inside terms requires congruence rules, which
- are meta-equalities typical of the form
+ the simpset. If this information is not handled with care, then, as
+ mentioned above, the simplifier can easily run into a loop. Therefore a good
+ rule of thumb is to use simpsets that are as minimal as possible. It might
+ be surprising that a simpset is more complex than just a simple collection
+ of theorems. One reason for the complexity is that the simplifier must be
+ able to rewrite inside terms and should also be able to rewrite according to
+ theorems that have premises.
+
+ The rewriting inside terms requires congruence theorems, which
+ are typically meta-equalities of the form
\begin{isabelle}
\begin{center}
@@ -1397,47 +1383,42 @@
\end{center}
\end{isabelle}
- with @{text "constr"} being a constant, like @{const "If"} or @{const "Let"}.
- Every simpset contains only
- one congruence rule for each term-constructor, which however can be
- overwritten. The user can declare lemmas to be congruence rules using the
- attribute @{text "[cong]"}. In HOL, the user usually states these lemmas as
- equations, which are then internally transformed into meta-equations.
-
-
- The rewriting with rules involving preconditions requires what is in
- Isabelle called a subgoaler, a solver and a looper. These can be arbitrary
- tactics that can be installed in a simpset and which are called at
- various stages during simplification.
However, simpsets also include
- simprocs, which can produce rewrite rules on demand (see next
- section). Another component are split-rules, which can simplify for example
- the ``then'' and ``else'' branches of if-statements under the corresponding
- preconditions.
+ with @{text "constr"} being a constant, like @{const "If"}, @{const "Let"}
+ and so on. Every simpset contains only one congruence rule for each
+ term-constructor, which however can be overwritten. The user can declare
+ lemmas to be congruence rules using the attribute @{text "[cong]"}. In HOL,
+ the user usually states these lemmas as equations, which are then internally
+ transformed into meta-equations.
+
+ The rewriting with theorems involving premises requires what is in Isabelle
+ called a subgoaler, a solver and a looper. These can be arbitrary tactics
+ that can be installed in a simpset and which are executed at various stages
+ during simplification. However, simpsets also include simprocs, which can
+ produce rewrite rules on demand (see next section). Another component is
+ split-rules, which can simplify for example the ``then'' and ``else''
+ branches of if-statements under the corresponding preconditions.
\begin{readmore}
For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
- and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in
- @{ML_file "HOL/Tools/simpdata.ML"}. The generic splitter is implemented in
- @{ML_file "Provers/splitter.ML"}.
+ and @{ML_file "Pure/simplifier.ML"}. The generic splitter is implemented in
+ @{ML_file "Provers/splitter.ML"}.
\end{readmore}
- \begin{readmore}
- FIXME: Find the right place: Discrimination nets are implemented
- in @{ML_file "Pure/net.ML"}.
- \end{readmore}
+
+ \footnote{\bf FIXME: Find the right place to mention this: Discrimination
+ nets are implemented in @{ML_file "Pure/net.ML"}.}
The most common combinators to modify simpsets are:
\begin{isabelle}
\begin{tabular}{ll}
- @{ML_ind addsimps} & @{ML_ind delsimps}\\
- @{ML_ind addcongs} & @{ML_ind delcongs}\\
- @{ML_ind addsimprocs} & @{ML_ind delsimprocs}\\
+ @{ML_ind addsimps in MetaSimplifier} & @{ML_ind delsimps in MetaSimplifier}\\
+ @{ML_ind addcongs in MetaSimplifier} & @{ML_ind delcongs in MetaSimplifier}\\
+ @{ML_ind addsimprocs in MetaSimplifier} & @{ML_ind delsimprocs in MetaSimplifier}\\
\end{tabular}
\end{isabelle}
- (FIXME: What about splitters? @{ML addsplits}, @{ML delsplits})
+ \footnote{\bf FIXME: What about splitters? @{ML addsplits}, @{ML delsplits}}
*}
text_raw {*
@@ -1472,7 +1453,7 @@
text {*
To see how they work, consider the function in Figure~\ref{fig:printss}, which
prints out some parts of a simpset. If you use it to print out the components of the
- empty simpset, i.e., @{ML_ind empty_ss}
+ empty simpset, i.e., @{ML_ind empty_ss in MetaSimplifier}
@{ML_response_fake [display,gray]
"print_ss @{context} empty_ss"
"Simplification rules:
Congruences rules:
Simproc patterns:"}
@@ -1497,7 +1478,7 @@
Congruences rules:
Simproc patterns:"}
- (FIXME: Why does it print out ??.unknown)
+ \footnote{\bf FIXME: Why does it print out ??.unknown}
Adding also the congruence rule @{thm [source] UN_cong}
*}
@@ -1519,8 +1500,8 @@
expects this form of the simplification and congruence rules. However, even
when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
- In the context of HOL, the first really useful simpset is @{ML_ind HOL_basic_ss}. While
- printing out the components of this simpset
+ In the context of HOL, the first really useful simpset is @{ML_ind
+ HOL_basic_ss in Simpdata}.
While printing out the components of this simpset @{ML_response_fake [display,gray] "print_ss @{context} HOL_basic_ss" @@ -1540,15 +1521,19 @@ *} lemma - "True" and "t = t" and "t \ t" and "False \ Foo" and "\A; B; C\ \ A" + shows "True" + and "t = t" + and "t \ t" + and "False \ Foo" + and "\A; B; C\ \ A" apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *}) done text {* This behaviour is not because of simplification rules, but how the subgoaler, solver - and looper are set up in @{ML_ind HOL_basic_ss}. - - The simpset @{ML_ind HOL_ss} is an extension of @{ML HOL_basic_ss} containing + and looper are set up in @{ML_ind HOL_basic_ss}. + + The simpset @{ML_ind HOL_ss} is an extension of @{ML HOL_basic_ss} containing already many useful simplification and congruence rules for the logical connectives in HOL. @@ -1566,9 +1551,13 @@ Simproc patterns: \"} - + \begin{readmore} + The simplifier for HOL is set up in @{ML_file "HOL/Tools/simpdata.ML"}. + The simpset @{ML HOL_ss} is implemented in @{ML_file "HOL/HOL.thy"}. + \end{readmore} + The simplifier is often used to unfold definitions in a proof. For this the - simplifier implements the tactic @{ML_ind rewrite_goal_tac}.\footnote{\bf FIXME: + simplifier implements the tactic @{ML_ind rewrite_goal_tac in MetaSimplifier}.\footnote{\bf FIXME: see LocalDefs infrastructure.} Suppose for example the definition *} @@ -1583,15 +1572,14 @@ shows "MyTrue \ True" apply(tactic {* rewrite_goal_tac @{thms MyTrue_def} 1 *}) txt{* producing the goal state - - \begin{minipage}{\textwidth} + \begin{isabelle} @{subgoals [display]} - \end{minipage} *} + \end{isabelle} *} (*<*)oops(*>*) text {* - If you want to unfold definitions in all subgoals, then use the - the tactic @{ML_ind rewrite_goals_tac}. + If you want to unfold definitions in \emph{all} subgoals not just one, + then use the the tactic @{ML_ind rewrite_goals_tac in MetaSimplifier}. *} @@ -1947,10 +1935,9 @@ apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one]) 1*}) txt{* where the simproc produces the goal state - - \begin{minipage}{\textwidth} + \begin{isabelle} @{subgoals[display]} - \end{minipage} + \end{isabelle} *} (*<*)oops(*>*) @@ -2053,11 +2040,9 @@ apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number]) 1*}) txt {* you obtain the more legible goal state - - \begin{minipage}{\textwidth} + \begin{isabelle} @{subgoals [display]} - \end{minipage} - *} + \end{isabelle}*} (*<*)oops(*>*) text {* @@ -2389,10 +2374,9 @@ (if True \ Q then True \ Q else P) \ True \ (True \ Q)" apply(tactic {* true1_tac @{context} 1 *}) txt {* where the tactic yields the goal state - - \begin{minipage}{\textwidth} + \begin{isabelle} @{subgoals [display]} - \end{minipage}*} + \end{isabelle}*} (*<*)oops(*>*) text {* @@ -2543,8 +2527,20 @@ val this = Variable.export ctxt ctxt0 [this] *} +section {* Summary *} + text {* - If a tactic should fail, return the empty sequence. + + \begin{conventions} + \begin{itemize} + \item Reference theorems with the antiquotation @{text "@{thm \}"}. + \item If a tactic is supposed to fail, return the empty sequence. + \item If you apply a tactic to a sequence of subgoals, apply it + in reverse order (i.e.~start with the last subgoal). + \item Use simpsets that are as small as possible. + \end{itemize} + \end{conventions} + *} end