# HG changeset patch
# User Christian Urban
# Date 1236376337 0
# Node ID 3fb9f820a294630fb5158fcdd0c3858f74056b63
# Parent 83f36a1c62f29d561d18b3f813eb09a45f090c99
some additions to the simplifier section and general tuning

diff -r 83f36a1c62f2 -r 3fb9f820a294 CookBook/Appendix.thy
--- a/CookBook/Appendix.thy Fri Mar 06 16:12:16 2009 +0000
+++ b/CookBook/Appendix.thy Fri Mar 06 21:52:17 2009 +0000
@@ -5,17 +5,19 @@
 text {* \appendix *}
 
-text {*
-  Possible topics: translations/print translations
-*}
-
 chapter {* Recipes *}
 
 text {*
-  Possible topics: translations/print translations
+  Possible further topics:
+
+  translations/print translations
+
+  @{ML "ProofContext.print_syntax"}
 
-  User Space Type Systems (in the already existing form)
+  user space type systems (in the form that already exists)
+
+  unification and typing algorithms
 *}
 
 end

diff -r 83f36a1c62f2 -r 3fb9f820a294 CookBook/FirstSteps.thy
--- a/CookBook/FirstSteps.thy Fri Mar 06 16:12:16 2009 +0000
+++ b/CookBook/FirstSteps.thy Fri Mar 06 21:52:17 2009 +0000
@@ -555,19 +555,18 @@
   the given arguments
 
   @{ML_response [display,gray]
   "make_imp @{term S} @{term T} @{typ nat}"
-  "Const \ $
-    Abs (\"x\", Type (\"nat\",[]),
-      Const \ $ (Free (\"S\",\) $ \) $
-        (Free (\"T\",\) $ \))"}
+"Const \ $
+  Abs (\"x\", Type (\"nat\",[]),
+    Const \ $ (Free (\"S\",\) $ \) $ (Free (\"T\",\) $ \))"}
 
   whereas with @{ML make_wrong_imp} we obtain a term involving the @{term "P"}
   and @{text "Q"} from the antiquotation.
 
   @{ML_response [display,gray]
   "make_wrong_imp @{term S} @{term T} @{typ nat}"
-  "Const \ $
-    Abs (\"x\", \,
-      Const \ $ (Const \ $ (Free (\"P\",\) $ \)) $
-        (Const \ $ (Free (\"Q\",\) $ \)))"}
+"Const \ $
+  Abs (\"x\", \,
+    Const \ $ (Const \ $ (Free (\"P\",\) $ \)) $
+      (Const \ $ (Free (\"Q\",\) $ \)))"}
 
   Although types of terms can often be inferred, there are many
   situations where you need to construct types manually, especially
@@ -576,7 +575,7 @@
 *}
 
-ML{*fun make_fun_type tau1 tau2 = Type ("fun",[tau1,tau2]) *}
+ML{*fun make_fun_type tau1 tau2 = Type ("fun", [tau1, tau2]) *}
 
 text {* This can be equally written with the combinator @{ML "-->"} as: *}
 
@@ -708,25 +707,24 @@
   | is_nil_or_all (Const (@{const_name "All"}, _) $ _) = true
   | is_nil_or_all _ = false *}
 
-(*
 text {*
   Occasional you have to calculate what the ``base'' name of a given
   constant is. For this you can use the function @{ML Sign.extern_const} or
-  @{ML Sign.base_name}. For example:
+  @{ML NameSpace.base_name}. For example:
 
  @{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""}
 
  The difference between both functions is that @{ML extern_const in Sign} returns
-  the smallest name that is still unique, whereas @{ML base_name in Sign} always
+  the smallest name that is still unique, whereas @{ML base_name in NameSpace} always
  strips off all qualifiers.
 
-  (FIXME: These are probably now NameSpace functions)
-
  \begin{readmore}
-  FIXME: Find the right files to do with naming.
+  Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
+  functions about signatures in @{ML_file "Pure/sign.ML"}.
+
  \end{readmore}
 *}
-*)
+
 
 section {* Type-Checking *}
 
@@ -844,6 +842,8 @@
  type inference are implemented in @{ML_file "Pure/type.ML"} and
  @{ML_file "Pure/type_infer.ML"}.
  \end{readmore}
+
+  (FIXME: say something about sorts)
 *}
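A quick editorial sketch relating the two ways of writing function types from the @{ML make_fun_type} hunk above; the value names ty1 and ty2 are made up, and the snippet assumes only the @{ML Type} constructor and the @{ML "-->"} combinator mentioned in the changed text.

ML{*(* both expressions should denote the type nat => bool *)
val ty1 = Type ("fun", [@{typ nat}, @{typ bool}])
val ty2 = @{typ nat} --> @{typ bool}
val agree = (ty1 = ty2)  (* expected to evaluate to true *)*}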
diff -r 83f36a1c62f2 -r 3fb9f820a294 CookBook/Intro.thy
--- a/CookBook/Intro.thy Fri Mar 06 16:12:16 2009 +0000
+++ b/CookBook/Intro.thy Fri Mar 06 21:52:17 2009 +0000
@@ -14,7 +14,7 @@
  examples included in the tutorial. The code is as far as possible checked
  against recent versions of Isabelle. If something does not work, then
  please let us know. If you have comments, criticism or like to add to the
-  tutorial, feel free---you are most welcome! The tutorial is meant to be
+  tutorial, please feel free---you are most welcome! The tutorial is meant to be
  gentle and comprehensive. To achieve this we need your feedback.
 *}
 
@@ -38,7 +38,7 @@
  part of the distribution of Isabelle):
 
  \begin{description}
-  \item[The Implementation Manual] describes Isabelle
+  \item[The Isabelle/Isar Implementation Manual] describes Isabelle
  from a high-level perspective, documenting both the underlying
  concepts and some of the interfaces.
 
@@ -110,7 +110,7 @@
  A few exercises a scattered around the text. Their solutions are given
  in Appendix~\ref{ch:solutions}. Of course, you learn most, if you first try
-  to solve the exercises on your own.
+  to solve the exercises on your own, and then look at the solutions.
 *}
 
diff -r 83f36a1c62f2 -r 3fb9f820a294 CookBook/Tactical.thy
--- a/CookBook/Tactical.thy Fri Mar 06 16:12:16 2009 +0000
+++ b/CookBook/Tactical.thy Fri Mar 06 21:52:17 2009 +0000
@@ -2,7 +2,6 @@
 imports Base FirstSteps
 begin
 
-
 chapter {* Tactical Reasoning\label{chp:tactical} *}
 
 text {*
@@ -66,7 +65,7 @@
  and the file @{ML_file "Pure/goal.ML"}. See @{ML_file "Pure/tactic.ML"}
  and @{ML_file "Pure/tctical.ML"} for the code of basic tactics and tactic
  combinators; see also Chapters 3 and 4 in the old
-  Isabelle Reference Manual.
+  Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation Manual.
  \end{readmore}
 
  Note that in the code above we use antiquotations for referencing the theorems. Many theorems
@@ -1011,14 +1010,14 @@
 text {*
  A lot of convenience in the reasoning with Isabelle derives from its
-  powerful simplifier. The power of simplifier is at the same time a
-  strength and a weakness, because you can easily make the simplifier to
-  loop and its behaviour can sometimes be difficult to predict. There is also
-  a multitude of options that configurate the behaviour of the simplifier. We
-  describe some of them in this and the next section.
+  powerful simplifier. The power of the simplifier is a strength and a weakness at
+  the same time, because you can easily make the simplifier run into a loop and its
+  behaviour can be difficult to predict. There is also a multitude
+  of options that you can configure to control the behaviour of the simplifier.
+  We describe some of them in this and the next section.
 
  There are the following five main tactics behind
-  the simplifier (in parentheses is their user-level counterparts):
+  the simplifier (in parentheses are their user-level counterparts):
 
  \begin{isabelle}
  \begin{center}
@@ -1032,8 +1031,8 @@
  \end{center}
  \end{isabelle}
 
-  All of them take a simpset and an interger as argument (the latter to specify the goal
-  to be analysed). So the proof
+  All of the tactics take a simpset and an integer as argument (the latter as usual
+  to specify the goal to be analysed). So the proof
 *}
 
lemma "Suc (1 + 2) < 3 + 2"
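A sketch of the calling convention just described, namely a simpset plus a goal number; it assumes the @{text "@{simpset}"} antiquotation provides the current simpset.

lemma "Suc (1 + 2) < 3 + 2"
apply(tactic {* simp_tac @{simpset} 1 *})
done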
@@ -1049,18 +1048,19 @@
 done
 
 text {*
-  If the simplifier cannot make any progress, then it leaves the goal unchanged
-  and does not raise any error message. That means if you use it to unfold a definition
-  for a constant and this constant is not present in a goal state, you can still
-  safely apply the simplifier.
+  If the simplifier cannot make any progress, then it leaves the goal unchanged,
+  i.e.~it does not raise any error message. That means if you use it to unfold a
+  definition for a constant and this constant is not present in the goal state,
+  you can still safely apply the simplifier.
 
-  When using the simplifier, the crucial information you have to control is
-  the simpset. If not handled with care, then the simplifier can easily run
-  into a loop. It might be surprising that a simpset is more complex than just a
-  simple collection of theorems used as simplification rules. One reason for
-  the complexity is that the simplifier must be able to rewrite inside terms
-  and should also be able to rewrite according to rules that have
-  precoditions.
+  When using the simplifier, the crucial information you have to provide is
+  the simpset. If this information is not handled with care, then the
+  simplifier can easily run into a loop. Therefore a good rule of thumb is to
+  use simpsets that are as minimal as possible. It might be surprising that a
+  simpset is more complex than just a simple collection of theorems used as
+  simplification rules. One reason for the complexity is that the simplifier
+  must be able to rewrite inside terms and should also be able to rewrite
+  according to rules that have preconditions.
 
  The rewriting inside terms requires congruence rules, which
@@ -1068,12 +1068,13 @@
 
  \begin{isabelle}
  \begin{center}
-  \mbox{\inferrule{@{text "t\<^isub>i \ s\<^isub>i"}}
+  \mbox{\inferrule{@{text "t\<^isub>1 \ s\<^isub>1 \ t\<^isub>n \ s\<^isub>n"}}
                  {@{text "constr t\<^isub>1\t\<^isub>n \ constr s\<^isub>1\s\<^isub>n"}}}
  \end{center}
  \end{isabelle}
 
-  with @{text "constr"} being a term-constructor. Every simpset contains only
+  with @{text "constr"} being a term-constructor, like @{const "If"} or @{const "Let"}.
+  Every simpset contains only
  one concruence rule for each term-constructor, which however can be overwritten.
  The user can declare lemmas to be congruence rules using the attribute
  @{text "[cong]"}. In HOL, the user usually states these lemmas as
@@ -1082,12 +1083,14 @@
 
  The rewriting with rules involving preconditions requires what is in
  Isabelle called a subgoaler, a solver and a looper. These can be arbitrary
-  tactics that can be installed in a simpset. However, simpsets also include
+  tactics that can be installed in a simpset and which are called at
+  various stages during simplification. However, simpsets also include
  simprocs, which can produce rewrite rules on demand (see next
  section). Another component are split-rules, which can simplify for example
  the ``then'' and ``else'' branches of if-statements under the corresponding
  precoditions.
 
+
  \begin{readmore}
  For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
  and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in
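A small sketch of the @{text "[cong]"} attribute mentioned in the hunks above, using the standard HOL lemma @{thm [source] if_cong}; any lemma of the appropriate shape would do.

declare if_cong [cong]

(* the declaration can be retracted again with *)
declare if_cong [cong del]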
@@ -1162,7 +1165,7 @@
 text {*
  To see how they work, consider the two functions in Figure~\ref{fig:prettyss},
  which print out some parts of a simpset.
  If you use them to print out the components of the
-  empty simpset, the most primitive simpset
+  empty simpset, in ML @{ML empty_ss}, which is the most primitive simpset
 
  @{ML_response_fake [display,gray]
  "print_parts @{context} empty_ss"
@@ -1177,7 +1180,7 @@
 
 ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
 
 text {*
-  Printing out the components of this simpset gives:
+  Printing out the components of this simpset then gives:
 
  @{ML_response_fake [display,gray]
  "print_parts @{context} ss1"
@@ -1187,7 +1190,7 @@
 
  (FIXME: Why does it print out ??.unknown)
 
-  Adding the congruence rule @{thm [source] UN_cong}
+  Adding also the congruence rule @{thm [source] UN_cong}
 *}
 
 ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}] *}
 
@@ -1204,9 +1207,9 @@
 
  Notice that we had to add these lemmas as meta-equations. The @{ML empty_ss}
  expects this form of the simplification and congruence rules. However, even
-  adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
+  when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
 
-  In the context of HOL the first useful simpset is @{ML HOL_basic_ss}. While
+  In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While
  printing out the components of this simpset
 
  @{ML_response_fake [display,gray]
  "print_parts @{context} HOL_basic_ss"
  "Simplification rules:
  Congruences rules:"}
 
@@ -1215,9 +1218,9 @@
  also produces ``nothing'', the printout is misleading. In fact
-  the simpset @{ML HOL_basic_ss} is setup so that it can be used to solve goals of the
-  form @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \ t"} and @{thm FalseE[no_vars]},
-  and resolve with assumptions.
+  @{ML HOL_basic_ss} is set up so that it can solve goals of the
+  form @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \ t"} and @{thm FalseE[no_vars]};
+  and also resolve with assumptions. For example:
 *}
 
 lemma
@@ -1226,11 +1229,14 @@
 done
 
 text {*
-  This is because how the tactics for the subgoaler, solver and looper are set
-  up.
+  This behaviour is not because of any simplification rules, but because of how
+  the subgoaler, solver and looper are set up. @{ML HOL_basic_ss} is usually a
+  good start to build your own simpsets, because of the low danger of causing
+  loops via interacting simplification rules.
 
-  The simpset @{ML HOL_ss}, which is an extention of @{ML HOL_basic_ss}, contains
-  already many useful simplification rules for the logical connectives in HOL.
+  The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing
+  already many useful simplification and congruence rules for the logical
+  connectives in HOL.
 
  @{ML_response_fake [display,gray]
  "print_parts @{context} HOL_ss"
@@ -1245,9 +1251,25 @@
  op -->:
    \P \ P'; P' \ Q \ Q'\ \ P \ Q \ P' \ Q'"}
 
-  The main point of these simpsets is that they are small enough to
-  not cause any loops with most of the simplification rules that
-  you might like to add.
+  The simplifier is often used to unfold definitions in a proof. For this the
+  simplifier provides the tactic @{ML rewrite_goals_tac}. Suppose for example the
+  definition
+*}
+
+definition "MyTrue \ True"
+
+lemma shows "MyTrue \ True \ True"
+apply(rule conjI)
+apply(tactic {* rewrite_goals_tac [@{thm MyTrue_def}] *})
+txt{* then the tactic produces the goal state
+
+  \begin{minipage}{\textwidth}
+  @{subgoals [display]}
+  \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+  As you can see, the tactic unfolds the definitions in all subgoals.
 *}
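For comparison, a sketch of a user-level counterpart of @{ML rewrite_goals_tac}: the @{text unfold} method applied to the constant @{text MyTrue} introduced above; the little proof is made up.

lemma "MyTrue"
apply(unfold MyTrue_def)
apply(rule TrueI)
done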
@@ -1259,9 +1281,8 @@
 
 primrec (perm_nat)
  "[]\c = c"
- "(ab#pi)\c = (if (pi\c)=fst ab
-               then snd ab
-               else (if (pi\c)=snd ab then fst ab else (pi\c)))"
+ "(ab#pi)\c = (if (pi\c)=fst ab then snd ab
+               else (if (pi\c)=snd ab then fst ab else (pi\c)))"
 
 primrec (perm_prod)
  "pi\(x, y) = (pi\x, pi\y)"
 
@@ -1300,17 +1321,18 @@
 
 text {*
-  Often the situation arises that simplification rules will cause the
-  simplifier to run into an infinite loop. Consider for example the simple
-  theory about permutations shown in Figure~\ref{fig:perms}. The purpose of
-  the lemmas is to pus permutations as far inside a term where they might
-  disappear using the lemma @{thm [source] perm_rev}. However, to fully
-  simplifiy all instances, it would be desirable to add also the lemma @{thm
-  [source] perm_compose} to the simplifier in order to push permutations over
-  other permutations. Unfortunately, the right-hand side of this lemma is
-  again an instance of the left-hand side and so causes an infinite
-  loop. On the user-level it is difficult to work around such situations
-  and we end up with clunky proofs such as:
+  The simplifier is often used in order to bring terms into a normal form.
+  Unfortunately, often the situation arises that the corresponding
+  simplification rules will cause the simplifier to run into an infinite
+  loop. Consider for example the simple theory about permutations over natural
+  numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to
+  push permutations as far inside as possible, where they might disappear by
+  lemma @{thm [source] perm_rev}. However, to fully normalise all instances,
+  it would be desirable to add also the lemma @{thm [source] perm_compose} to
+  the simplifier for pushing permutations over other permutations. Unfortunately,
+  the right-hand side of this lemma is again an instance of the left-hand side
+  and so causes an infinite loop. There seems to be no easy way to reformulate
+  this rule and so one ends up with clunky proofs like:
 *}
 
 lemma
@@ -1323,9 +1345,10 @@
 done
 
 text {*
-  On the ML-level, we can however generate a single simplifier tactic that solves
+  It is however possible to create a single simplifier tactic that solves
  such proofs. The trick is to introduce an auxiliary constant for permutations
-  and split the simplification into two phases. Let us say the auxiliary constant is
+  and split the simplification into two phases (below actually three). Let
+  us assume the auxiliary constant is
 *}
 
 definition
  pi \aux c \ pi \ c
 where
  "pi \aux c \ pi \ c"
 
-text {* Now the lemmas *}
+text {* Now the two lemmas *}
 
 lemma perm_aux_expand:
  fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
 
@@ -1349,7 +1372,15 @@
  are simple consequence of the definition and @{thm [source] perm_compose}.
  More importantly, the lemma @{thm [source] perm_compose_aux} can be safely
  added to the simplifier, because now the right-hand side is not anymore an instance
-  of the left-hand side. So you can write
+  of the left-hand side. In a sense it freezes all redexes of permutation compositions
+  after one step. In this way, we can split simplification of permutations
+  into three phases without the user noticing anything about the auxiliary
+  constant. We first freeze any instance of permutation compositions in the term using
+  lemma @{thm [source] "perm_aux_expand"} (Line 9);
+  then simplify all other permutations including pushing permutations over
+  other permutations by rule @{thm [source] perm_compose_aux} (Line 10); and
+  finally ``unfreeze'' all instances of permutation compositions by unfolding
+  the definition of the auxiliary constant.
 *}
 
 ML %linenosgray{*val perm_simp_tac =
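The body of @{ML perm_simp_tac} is unchanged and therefore elided here; as a rough sketch, the three-phase pattern described in the text amounts to something like the following, where the function name and the simpsets ss1, ss2 and ss3 are hypothetical.

ML{*(* chain three simp phases over the same subgoal; ss1, ss2 and ss3
   stand for the three simpsets built as described in the text *)
fun phased_simp_tac ss1 ss2 ss3 =
  simp_tac ss1 THEN' simp_tac ss2 THEN' simp_tac ss3*}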
@@ -1366,7 +1397,9 @@
 end*}
 
 text {*
-  This trick is not noticable for the user.
+  For all three phases we have to build simpsets adding specific lemmas. Since this
+  is sufficient for our purposes here, we can add these lemmas to @{ML HOL_basic_ss}
+  in order to obtain the desired results. Now we can solve the following lemma
 *}
 
 lemma
@@ -1377,6 +1410,12 @@
 done
 
 text {*
+  in one step. This tactic can deal with most instances of normalising permutations;
+  in order to solve all cases we have to deal with corner cases such as the
+  lemma being an exact instance of the permutation composition lemma. This can
+  often be done more easily by implementing a simproc or a conversion. Both will be
+  explained in the next two chapters.
+
  (FIXME: Is it interesting to say something about @{term "op =simp=>"}?)
 
  (FIXME: Unfortunately one cannot access any simproc, as there is
@@ -1387,7 +1426,7 @@
 
  (FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)
 
-  (FIXME: @{ML rewrite_goals_tac}, @{ML ObjectLogic.full_atomize_tac},
+  (FIXME: @{ML ObjectLogic.full_atomize_tac},
  @{ML ObjectLogic.rulify_tac})
 *}

diff -r 83f36a1c62f2 -r 3fb9f820a294 cookbook.pdf
Binary file cookbook.pdf has changed