diff -r 6103b0eadbf2 -r f7c97e64cc2a ProgTutorial/Tactical.thy
--- a/ProgTutorial/Tactical.thy	Tue May 14 17:45:13 2019 +0200
+++ b/ProgTutorial/Tactical.thy	Thu May 16 19:56:12 2019 +0200
@@ -47,7 +47,7 @@
 text ‹
   This proof translates to the following ML-code.
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val ctxt = @{context}
   val goal = @{prop \"P ∨ Q ⟹ Q ∨ P\"}
@@ -396,7 +396,7 @@
 text ‹
   A simple tactic for easy discharge of any proof obligations, even
   difficult ones, is
-  @{ML_ind cheat_tac in Skip_Proof} in the structure @{ML_struct Skip_Proof}.
+  @{ML_ind cheat_tac in Skip_Proof} in the structure @{ML_structure Skip_Proof}.
   This tactic corresponds to the Isabelle command \isacommand{sorry} and is
   sometimes useful during the development of tactics.
 ›
@@ -763,14 +763,14 @@
   constraints is by pre-instantiating theorems with other theorems. The
   function @{ML_ind RS in Drule}, for example, does this.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "@{thm disjI1} RS @{thm conjI}"
   "⟦?P1; ?Q⟧ ⟹ (?P1 ∨ ?Q1) ∧ ?Q"}
 
   In this example it instantiates the first premise of the ‹conjI›-theorem
   with the theorem ‹disjI1›. If the instantiation is impossible, as in the
   case of
 
-  @{ML_response_fake_both [display,gray]
+  @{ML_matchresult_fake_both [display,gray]
   "@{thm conjI} RS @{thm mp}"
   "*** Exception- THM (\"RSN: no unifiers\", 1,
   [\"⟦?P; ?Q⟧ ⟹ ?P ∧ ?Q\", \"⟦?P ⟶ ?Q; ?P⟧ ⟹ ?Q\"]) raised"}
@@ -782,7 +782,7 @@
   If you want to instantiate more than one premise of a theorem, you can use
   the function @{ML_ind MRS in Drule}:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
   "[@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI}"
   "⟦?P2; ?Q1⟧ ⟹ (?P2 ∨ ?Q2) ∧ (?P1 ∨ ?Q1)"}
@@ -791,7 +791,7 @@
   example in the code below, every theorem in the second list is
   instantiated with every theorem in the first.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val list1 = [@{thm impI}, @{thm disjI2}]
   val list2 = [@{thm conjI}, @{thm disjI1}]
@@ -1293,7 +1293,7 @@
   tactic should explore all possibilities of applying these rules to a
   propositional formula until a goal state is reached in which all subgoals
   are discharged. For this you can use the tactic combinator @{ML_ind
-  DEPTH_SOLVE in Search} in the structure @{ML_struct Search}.
+  DEPTH_SOLVE in Search} in the structure @{ML_structure Search}.
   \end{exercise}
 
   \begin{exercise}
@@ -1359,7 +1359,7 @@
   it can only deal with rewriting rules whose left-hand sides are higher-order
   patterns (see Section \ref{sec:univ} on unification). Whether a term is a
   pattern or not can be tested with the function @{ML_ind pattern in Pattern} from
-  the structure @{ML_struct Pattern}. If a rule is not a pattern and you want
+  the structure @{ML_structure Pattern}. If a rule is not a pattern and you want
   to use it for rewriting, then you have to use simprocs or conversions, both
   of which we shall describe in the next section.
 
@@ -1458,7 +1458,7 @@
   prints out some parts of a simpset.
  If you use it to print out the components of the empty simpset, i.e., @{ML_ind
  empty_ss in Raw_Simplifier}
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "print_ss @{context} empty_ss"
  "Simplification rules:
  Congruences rules:
@@ -1474,7 +1474,7 @@
 text ‹
  Printing out the components of the simpset then gives:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "print_ss @{context} (Raw_Simplifier.simpset_of ss1)"
  "Simplification rules:
    ??.unknown: A - B ∩ C ≡ A - B ∪ (A - C)
@@ -1491,7 +1491,7 @@
 text ‹
  gives
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "print_ss @{context} (Raw_Simplifier.simpset_of ss2)"
  "Simplification rules:
    ??.unknown: A - B ∩ C ≡ A - B ∪ (A - C)
@@ -1508,7 +1508,7 @@
  In the context of HOL, the first really useful simpset is @{ML_ind
  HOL_basic_ss in Simpdata}. While printing out the components of this simpset
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "print_ss @{context} HOL_basic_ss"
  "Simplification rules:
  Congruences rules:
@@ -1542,7 +1542,7 @@
  already many useful simplification and congruence rules for the logical
  connectives in HOL.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "print_ss @{context} HOL_ss"
  "Simplification rules:
    Pure.triv_forall_equality: (⋀x. PROP V) ≡ PROP V
@@ -2064,21 +2064,21 @@
  is the function @{ML_ind all_conv in Conv}, which maps a @{ML_type cterm} to an
  instance of the (meta)reflexivity theorem. For example:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "Conv.all_conv @{cterm \"Foo ∨ Bar\"}"
  "Foo ∨ Bar ≡ Foo ∨ Bar"}
 
  Another simple conversion is @{ML_ind no_conv in Conv} which always raises
  the exception @{ML CTERM}.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "Conv.no_conv @{cterm True}"
  "*** Exception- CTERM (\"no conversion\", []) raised"}
 
  A more interesting conversion is the function @{ML_ind beta_conversion in
  Thm}: it produces a meta-equation between a term and its beta-normal form.
  For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val add = @{cterm \"λx y. x + (y::nat)\"}
   val two = @{cterm \"2::nat\"}
@@ -2096,7 +2096,7 @@
  If we get hold of the ``raw'' representation of the produced theorem, we
  obtain the expected result.
 
-  @{ML_response [display,gray]
+  @{ML_matchresult [display,gray]
 "let
   val add = @{cterm \"λx y. x + (y::nat)\"}
   val two = @{cterm \"2::nat\"}
@@ -2111,7 +2111,7 @@
  or in the pretty-printed form
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val add = @{cterm \"λx y. x + (y::nat)\"}
   val two = @{cterm \"2::nat\"}
@@ -2146,7 +2146,7 @@
  It can be used for example to rewrite @{term "True ∧ (Foo ⟶ Bar)"}
  to @{term "Foo ⟶ Bar"}. The code is as follows.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val ctrm = @{cterm \"True ∧ (Foo ⟶ Bar)\"}
 in
@@ -2165,7 +2165,7 @@
  combinators. The simplest conversion combinator is @{ML_ind then_conv
  in Conv}, which applies one conversion after another. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv1 = Thm.beta_conversion false
   val conv2 = Conv.rewr_conv @{thm true_conj1}
@@ -2182,7 +2182,7 @@
  The conversion combinator @{ML_ind else_conv in Conv} tries out the first
  one, and if it does not apply, tries the second.
  For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.rewr_conv @{thm true_conj1} else_conv Conv.all_conv
   val ctrm1 = @{cterm \"True ∧ Q\"}
@@ -2199,7 +2199,7 @@
  behaviour can also be achieved with the conversion combinator @{ML_ind
  try_conv in Conv}. For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})
   val ctrm = @{cterm \"True ∨ P\"}
@@ -2209,7 +2209,7 @@
  "True ∨ P ≡ True ∨ P"}
 
  Rewriting with more than one theorem can be done using the function
-  @{ML_ind rewrs_conv in Conv} from the structure @{ML_struct Conv}.
+  @{ML_ind rewrs_conv in Conv} from the structure @{ML_structure Conv}.
 
 
  Apart from the function @{ML beta_conversion in Thm}, which is able to fully
@@ -2220,7 +2220,7 @@
  a conversion to the first, respectively second, argument of an application.
  For example
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})
   val ctrm = @{cterm \"P ∨ (True ∧ Q)\"}
@@ -2238,7 +2238,7 @@
  The function @{ML_ind abs_conv in Conv} applies a conversion under an
  abstraction. For example:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.rewr_conv @{thm true_conj1}
   val ctrm = @{cterm \"λP. True ∧ (P ∨ Foo)\"}
@@ -2279,7 +2279,7 @@
  to be able to pattern-match the term. To see this conversion in action,
  consider the following example:
 
-@{ML_response_fake [display,gray]
+@{ML_matchresult_fake [display,gray]
 "let
   val conv = true_conj1_conv @{context}
   val ctrm = @{cterm \"distinct [1::nat, x] ⟶ True ∧ 1 ≠ x\"}
@@ -2322,7 +2322,7 @@
  according to the two meta-equations produces two results. Below these two
  results are calculated.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
 "let
   val ctxt = @{context}
   val conv = Conv.try_conv (Conv.rewrs_conv @{thms conj_assoc})
@@ -2358,7 +2358,7 @@
  soon as it has found one redex. Here is an example of this conversion:
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val ctrm = @{cterm \"if P (True ∧ 1 ≠ (2::nat)) then True ∧ True else True ∧ False\"}
@@ -2382,7 +2382,7 @@
  Using the conversion you can transform this theorem into a new theorem
  as follows
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
 "let
   val conv = Conv.fconv_rule (true_conj1_conv @{context})
   val thm = @{thm foo_test}
@@ -2460,12 +2460,12 @@
  Although both definitions define the same function, the theorems @{thm
  [source] id1_def} and @{thm [source] id2_def} are different meta-equations.
  However, it is easy to transform one into the other. The function @{ML_ind abs_def
-  in Drule} from the structure @{ML_struct Drule} can automatically transform
+  in Drule} from the structure @{ML_structure Drule} can automatically transform
  theorem @{thm [source] id1_def} into @{thm [source] id2_def} by abstracting
  all variables on the left-hand side in the right-hand side.
 
-  @{ML_response_fake [display,gray]
+  @{ML_matchresult_fake [display,gray]
  "Drule.abs_def @{thm id1_def}"
  "id1 ≡ λx. x"}
@@ -2513,7 +2513,7 @@
  export the context ‹ctxt'› back to ‹ctxt› in order to produce
  meta-variables for the theorem. An example is as follows.
 
-  @{ML_response_fake [display, gray]
+  @{ML_matchresult_fake [display, gray]
  "unabs_def @{context} @{thm id2_def}"
  "id2 ?x1 ≡ ?x1"}
 ›
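
The hunks above only rename antiquotations, but the ML idioms they quote can
also be exercised directly. The following sketches are illustrations added
after the diff, not part of the patch. First, the pre-instantiation from the
hunk at line 763; this uses only the standard HOL theorems mentioned there.

ML ‹
  (* RS instantiates the first premise of conjI with the conclusion
     of disjI1, yielding "⟦?P1; ?Q⟧ ⟹ (?P1 ∨ ?Q1) ∧ ?Q". *)
  val thm_rs = @{thm disjI1} RS @{thm conjI}

  (* MRS instantiates all premises at once, yielding
     "⟦?P2; ?Q1⟧ ⟹ (?P2 ∨ ?Q2) ∧ (?P1 ∨ ?Q1)". *)
  val thm_mrs = [@{thm disjI1}, @{thm disjI2}] MRS @{thm conjI}
›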
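
Second, a composition of the conversion combinators from the later hunks.
This sketch assumes the tutorial's lemma true_conj1 ("True ∧ P ≡ P") is in
scope.

ML ‹
  (* Rewrite the second argument of the disjunction with true_conj1,
     then try the same rewrite once more at the top level, where
     try_conv lets it fail harmlessly. *)
  val conv =
    Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})
    then_conv Conv.try_conv (Conv.rewr_conv @{thm true_conj1})

  (* Produces the meta-equation "P ∨ (True ∧ Q) ≡ P ∨ Q". *)
  val eq = conv @{cterm "P ∨ (True ∧ Q)"}
›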
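
Finally, the two definition transformations from the last hunks run back to
back; this assumes id1_def, id2_def and the tutorial's function unabs_def
from earlier in Tactical.thy are in scope.

ML ‹
  (* Drule.abs_def abstracts the argument of id1:
     "id1 x ≡ x" becomes "id1 ≡ λx. x". *)
  val thm1 = Drule.abs_def @{thm id1_def}

  (* The tutorial's unabs_def inverts this, replacing the abstraction
     by a schematic variable: "id2 ?x1 ≡ ?x1". *)
  val thm2 = unabs_def @{context} @{thm id2_def}
›

Running unabs_def on the result of Drule.abs_def should give back an
alpha-equivalent variant of the input theorem, which makes a convenient
round-trip test for unabs_def.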