the first goal. To do this can result in quite messy code. In contrast,
the ``reverse application'' is easy to implement.

Of course, this example is
contrived: there are much simpler methods available in Isabelle for
implementing a tactic analysing a goal according to its topmost
connective. These simpler methods use tactic combinators, which we will
explain in the next section.

*}

*}

ML{*val orelse_xmp_tac = rtac @{thm disjI1} ORELSE' rtac @{thm conjI}*}

text {*
will first try out whether rule @{text disjI1} applies and in case of failure
will try @{text conjI}. To see this, consider the proof
*}

lemma shows "True \<and> False" and "Foo \<or> Bar"
apply(tactic {* orelse_xmp_tac 2 *})
apply(tactic {* orelse_xmp_tac 1 *})
\end{minipage} *}
(*<*)oops(*>*)
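
text {*
As a side remark (the following tactic and its name are made up for illustration
only and are not used below), chaining @{ML ORELSE'} several times gives a compact
way to analyse a goal according to its topmost connective, similar in spirit to
@{ML select_tac'}:
*}

ML{*val select_orelse_tac = rtac @{thm conjI} ORELSE' rtac @{thm impI}
                            ORELSE' rtac @{thm notI} ORELSE' rtac @{thm allI}*}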

text {*
Remember that we chose to implement @{ML select_tac'} so that it
always succeeds by adding @{ML all_tac} at the end of the tactic
list. The same can be achieved with the tactic combinator @{ML TRY}.
For example:
*}

ML{*val select_tac'' = TRY o FIRST' [rtac @{thm conjI}, rtac @{thm impI},
                                     rtac @{thm notI}, rtac @{thm allI}]*}
text_raw{*\label{tac:selectprimeprime}*}

text {*
This tactic behaves in the same way as @{ML select_tac'}: it tries out
one of the given tactics and if none applies leaves the goal state
unchanged. This, however, can be potentially very confusing when visible to
the user, for example, in cases where the goal is of the form

*}

lemma shows "E \<Longrightarrow> F"
apply(tactic {* select_tac' 1 *})
txt{* \begin{minipage}{\textwidth}
@{subgoals [display]}
\end{minipage} *}
(*<*)oops(*>*)

text {*
In this case no rule applies, but because of @{ML TRY} or the inclusion of @{ML all_tac}
the tactics do not fail. The problem with this is that for the user there is little
chance to see whether or not progress in the proof has been made. By convention
therefore, tactics visible to the user should either change something or fail.
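
(As an aside, one way to obtain this behaviour is to wrap an always-succeeding
tactic into the combinator @{ML CHANGED}, which is described further below; for
instance @{ML "CHANGED o select_tac''"} fails whenever no rule applies. This is
only a sketch of an alternative, not the route taken in this section.)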

To comply with this convention, we could simply delete the @{ML "K all_tac"}
from the end of the theorem list. As a result @{ML select_tac'} would only
tactic as long as it succeeds). The function
@{ML REPEAT1} is similar, but runs the tactic at least once (failing if
this is not possible).
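
As an illustration (not used below), a variant that insists on at least one
successful application could be written as
@{ML "REPEAT1 (CHANGED (select_tac' 1))"}; it fails on goals where none of the
rules applies.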

If you are after the ``primed'' version of @{ML repeat_xmp_tac}, then you
can implement it as
*}

ML{*val repeat_xmp_tac' = REPEAT o CHANGED o select_tac'*}

text {*
since there are no ``primed'' versions of @{ML REPEAT} and @{ML CHANGED}.

If you look closely at the goal state above, then you see that the tactics @{ML repeat_xmp_tac}
and @{ML repeat_xmp_tac'} are not yet quite what we are after: the problem is
that goals 2 and 3 are not analysed. This is because the tactic
is applied repeatedly only to the first subgoal. To analyse also all
resulting subgoals, you can use the tactic combinator @{ML REPEAT_ALL_NEW}.
Suppose the tactic
\mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
{@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
\end{center}
\end{isabelle}

with @{text "constr"} being a constant, like @{const "If"} or @{const "Let"}.
Every simpset contains only
one congruence rule for each term-constructor, which however can be
overwritten. The user can declare lemmas to be congruence rules using the
attribute @{text "[cong]"}. In HOL, the user usually states these lemmas as
equations, which are then internally transformed into meta-equations.
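
For example, a congruence rule for @{const If}, such as the lemma @{text if_cong}
from the HOL library, can be installed with the declaration
@{text [display] "declare if_cong [cong]"}
(shown here only as an illustration of the workflow; it is not needed in what follows).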

\begin{readmore}
For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in
@{ML_file "HOL/Tools/simpdata.ML"}. The generic splitter is implemented in
@{ML_file "Provers/splitter.ML"}.
\end{readmore}

\begin{readmore}
FIXME: Find the right place: Discrimination nets are implemented
\begin{figure}[t]
\begin{minipage}{\textwidth}
\begin{isabelle}*}
ML{*fun print_ss ctxt ss =
let
  val {simps, congs, procs, ...} = Simplifier.dest_ss ss

  fun name_thm (nm, thm) =
    " " ^ nm ^ ": " ^ (str_of_thm_no_vars ctxt thm)
  fun name_ctrm (nm, ctrm) =
    " " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm)

  val s = ["Simplification rules:"] @ map name_thm simps @
          ["Congruence rules:"] @ map name_thm congs @
          ["Simproc patterns:"] @ map name_ctrm procs
in
  s |> cat_lines
    |> writeln
end*}
text_raw {*
\end{isabelle}
\end{minipage}
\caption{The function @{ML Simplifier.dest_ss} returns a record containing
all printable information stored in a simpset. We are here only interested in the
simplification rules, congruence rules and simprocs.\label{fig:printss}}
\end{figure} *}
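
text {*
As a quick, purely illustrative check, @{ML print_ss} can be applied to the
simpset @{ML HOL_basic_ss}, which is also used in several examples below:
*}

ML{*print_ss @{context} HOL_basic_ss*}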

text {*
To see how simprocs work, let us first write a simproc that just prints out
the pattern which triggers it and otherwise does nothing. For this
you can use the function:
*}

ML %linenosgray{*fun fail_simproc simpset redex =
let
  val ctxt = Simplifier.the_context simpset
  val _ = writeln ("The redex: " ^ (str_of_cterm ctxt redex))
in
  NONE
moment we are \emph{not} interested in actually rewriting anything. We want
that the simproc is triggered by the pattern @{term "Suc n"}. This can be
done by adding the simproc to the current simpset as follows
*}

simproc_setup %gray fail ("Suc n") = {* K fail_simproc *}

text {*
where the second argument specifies the pattern and the right-hand side
contains the code of the simproc (we have to use @{ML K} since we are ignoring
an argument about morphisms).
We can add or delete the simproc from the current simpset by the usual
\isacommand{declare}-statement. For example, the simproc will be deleted
with the declaration
*}

declare [[simproc del: fail]]

text {*
If you want to see what happens with just \emph{this} simproc, without any
interference from other rewrite rules, you can call @{text fail}
as follows:
*}

lemma shows "Suc 0 = 1"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc fail}]) 1*})
(*<*)oops(*>*)

text {*
Now the message shows up only once since the term @{term "1::nat"} is
left unchanged.

Setting up a simproc using the command \isacommand{simproc\_setup} will
always add the simproc automatically to the current simpset. If you do not
want this, then you have to use a slightly different method for setting
up the simproc. First the function @{ML fail_simproc} needs to be modified
to
*}

ML{*fun fail_simproc' simpset redex =
let
  val ctxt = Simplifier.the_context simpset
  val _ = writeln ("The redex: " ^ (Syntax.string_of_term ctxt redex))
in
  NONE
We can turn this function into a proper simproc using the function
@{ML Simplifier.simproc_i}:
*}

ML{*val fail' =
let
  val thy = @{theory}
  val pat = [@{term "Suc n"}]
in
  Simplifier.simproc_i thy "fail_simproc'" pat (K fail_simproc')
end*}

text {*
Here the pattern is given as @{ML_type term} (instead of @{ML_type cterm}).
The function also takes a list of patterns that can trigger the simproc.
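
For example, a (hypothetical) variant that is triggered by the two patterns
@{term "Suc n"} and @{term "n + (0::nat)"} could be set up along the lines of

@{text [display]
"Simplifier.simproc_i thy \"fail_two\"
      [@{term \"Suc n\"}, @{term \"n + (0::nat)\"}] (K fail_simproc')"}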
Simprocs are applied from inside to outside and from left to right. You can
see this in the proof
*}

lemma shows "Suc (Suc 0) = (Suc 1)"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [fail']) 1*})
(*<*)oops(*>*)

text {*
The simproc @{ML fail'} prints out the sequence

@{text [display]
"> Suc 0
> Suc (Suc 0)
> Suc 1"}
the rewriting with simprocs, let us further assume we want that the simproc
only rewrites terms ``greater'' than @{term "Suc 0"}. For this we can write
*}

ML{*fun plus_one_simproc ss redex =
  case redex of
    @{term "Suc 0"} => NONE
  | _ => SOME @{thm plus_one}*}

text {*
and set up the simproc as follows.
*}

ML{*val plus_one =
let
  val thy = @{theory}
  val pat = [@{term "Suc n"}]
in
  Simplifier.simproc_i thy "sproc +1" pat (K plus_one_simproc)
end*}

text {*
Now the simproc is set up so that it is triggered by terms
of the form @{term "Suc n"}, but inside the simproc we only produce
a theorem if the term is not @{term "Suc 0"}. The result you can see
in the following proof
*}

lemma shows "P (Suc (Suc (Suc 0))) (Suc 0)"
apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [plus_one]) 1*})
txt{*
where the simproc produces the goal state

\begin{minipage}{\textwidth}
@{subgoals[display]}
\end{minipage}
*}
(*<*)oops(*>*)

text {*
As usual with rewriting you have to worry about looping: you already have
a loop with @{ML plus_one} if you apply it with the default simpset (because
the default simpset contains a rule which just does the opposite of @{ML plus_one},
namely rewriting @{text [quotes] "+ 1"} to a successor). So you have to be careful
in choosing the right simpset to which you add a simproc.
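
To illustrate the danger schematically: if the simpset also contains a rule rewriting
@{text "n + 1"} back to @{text "Suc n"}, then a term such as @{term "Suc (Suc 0)"} is
rewritten back and forth forever

@{text [display] "Suc (Suc 0) \<leadsto> Suc 0 + 1 \<leadsto> Suc (Suc 0) \<leadsto> \<dots>"}

This is why the examples above add @{ML plus_one} to the (almost empty) simpset
@{ML HOL_basic_ss} rather than to the default simpset.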

Next let us implement a simproc that replaces terms of the form @{term "Suc n"}
with the number @{text n} increased by one. First we implement a function that

Anyway, either version can be used in the function that produces the actual
theorem for the simproc.
*}

ML{*fun nat_number_simproc ss t =
let
  val ctxt = Simplifier.the_context ss
in
  SOME (get_thm ctxt (t, dest_suc_trm t))
  handle TERM _ => NONE
end*}

text {*
This function uses the fact that @{ML dest_suc_trm} might raise an exception
@{ML TERM}. In this case there is nothing that can be rewritten and therefore no
theorem is produced (i.e.~the function returns @{ML NONE}). To try out the simproc
on an example, you can set it up as follows:
*}

ML{*val nat_number =
let
  val thy = @{theory}
  val pat = [@{term "Suc n"}]
in
  Simplifier.simproc_i thy "nat_number" pat (K nat_number_simproc)
end*}

text {*
Now in the lemma
*}

lemma "P (Suc (Suc 2)) (Suc 99) (0::nat) (Suc 4 + Suc 0) (Suc (0 + 0))"
apply(tactic {* simp_tac (HOL_ss addsimprocs [nat_number]) 1*})
txt {*
you obtain the more legible goal state

\begin{minipage}{\textwidth}
@{subgoals [display]}
  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
  val two = @{cterm \"2::nat\"}
  val ten = @{cterm \"10::nat\"}
  val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
in
  Thm.prop_of thm
end"
"Const (\"==\",\<dots>) $
  (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
    (Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"}

The argument @{ML true} in @{ML Thm.beta_conversion} indicates that
the right-hand side should be fully beta-normalised. If instead
@{ML false} is given, then only a single beta-reduction is performed
on the outer-most level. For example

@{ML_response_fake [display,gray]
"let
"True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}

Note, however, that the function @{ML Conv.rewr_conv} only rewrites the
outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match
exactly the left-hand side of the theorem, then @{ML Conv.rewr_conv} fails by raising
the exception @{ML CTERM}.
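
For instance (a small sketch, reusing the theorem @{text true_conj1} from above):

@{ML_response_fake [display,gray]
"Conv.rewr_conv @{thm true_conj1} @{cterm \"True \<and> False\"}"
"True \<and> False \<equiv> False"}

Applying the same conversion to, say, @{text "Foo \<and> False"} raises @{ML CTERM},
since the left-hand side of @{text true_conj1} requires the first conjunct to be
@{term True}.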

This very primitive way of rewriting can be made more powerful by
combining several conversions into one. For this you can use conversion
combinators. The simplest conversion combinator is @{ML then_conv},
arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
conversion is then applied to @{text "t2"} which in the example above
stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies
the conversion to the first argument of an application.
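
A small (contrived) sketch of @{ML Conv.fun_conv} at work: to reach the left-hand
argument of a disjunction, one descends into the function part of the application
and then into its argument

@{ML_response_fake [display,gray]
"Conv.fun_conv (Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}))
                                        @{cterm \"(True \<and> P) \<or> Q\"}"
"(True \<and> P) \<or> Q \<equiv> P \<or> Q"}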

The function @{ML Conv.abs_conv} applies a conversion under an abstraction.
For example:

@{ML_response_fake [display,gray]
"let
  val conv = Conv.rewr_conv @{thm true_conj1}
  val ctrm = @{cterm \"\<lambda>P. True \<and> P \<and> Foo\"}
in
  Conv.abs_conv (K conv) @{context} ctrm
end"
"\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}

Note that this conversion needs a context as an argument. The conversion that
goes under an application is @{ML Conv.combination_conv}. It expects two conversions
Assume we want to apply @{ML all_true1_conv} only in the conclusion
of the goal, and @{ML if_true1_conv} should only apply to the premises.
Here is a tactic doing exactly that:
*}

ML{*fun true1_tac ctxt =
  CONVERSION
    (Conv.params_conv ~1 (fn ctxt =>
      (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
        Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt)*}

text {*
We call the conversions with the argument @{ML "~1"}. This is to
analyse all parameters, premises and conclusions. If we call them with
a non-negative number, say @{text n}, then these conversions will

text {*
As you can see, the premises are rewritten according to @{ML if_true1_conv}, while
the conclusion according to @{ML all_true1_conv}.

To sum up this section, conversions are more general than the simplifier
or simprocs, but you have to do more work yourself. Also conversions are
often much less efficient than the simplifier. The advantage of conversions,
however, is that they provide much less room for non-termination.

\begin{exercise}\label{ex:addconversion}
Write a tactic that does the same as the simproc in exercise
\ref{ex:addsimproc}, but is based on conversions. That means replace terms
of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make