--- a/ProgTutorial/Tactical.thy Sun Jul 26 18:13:50 2009 +0200
+++ b/ProgTutorial/Tactical.thy Mon Jul 27 10:37:28 2009 +0200
@@ -1844,59 +1844,50 @@
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
+ val ctrm = Thm.capply (Thm.capply add two) ten
in
- Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
+ Thm.beta_conversion true ctrm
end"
"((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
- Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"},
- since the pretty-printer for @{ML_type cterm}s eta-normalises terms.
- But how we constructed the term (using the function
- @{ML [index] capply in Thm}, which is the application @{ML $} for @{ML_type cterm}s)
- ensures that the left-hand side must contain beta-redexes. Indeed
- if we obtain the ``raw'' representation of the produced theorem, we
- can see the difference:
+ If you run this example, you will notice that the actual response is the
+ seemingly nonsensical @{term
+ "2 + 10 \<equiv> 2 + (10::nat)"}. The reason is that the pretty-printer for
+ @{ML_type cterm}s eta-normalises terms and therefore produces this output.
+ If we get hold of the ``raw'' representation of the produced theorem,
+ we obtain the expected result.
+
@{ML_response [display,gray]
"let
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
- val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
+ val ctrm = Thm.capply (Thm.capply add two) ten
in
- Thm.prop_of thm
+ Thm.prop_of (Thm.beta_conversion true ctrm)
end"
"Const (\"==\",\<dots>) $
(Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
(Const (\"HOL.plus_class.plus\",\<dots>) $ \<dots> $ \<dots>)"}
- The argument @{ML true} in @{ML Thm.beta_conversion} indicates that
+ The argument @{ML true} in @{ML beta_conversion in Thm} indicates that
the right-hand side should be fully beta-normalised. If instead
@{ML false} is given, then only a single beta-reduction is performed
- on the outer-most level. For example
-
- @{ML_response_fake [display,gray]
- "let
- val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
- val two = @{cterm \"2::nat\"}
-in
- Thm.beta_conversion false (Thm.capply add two)
-end"
- "((\<lambda>x y. x + y) 2) \<equiv> \<lambda>y. 2 + y"}
-
- Again, we actually see as output only the fully eta-normalised term.
+ on the outer-most level.
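+
+  To see the effect of a single reduction, we can again look at the ``raw''
+  representation of the result, so that the pretty-printer does not get in
+  the way (the output below is only sketched):
+
+  @{ML_response_fake [display,gray]
+"let
+  val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
+  val two = @{cterm \"2::nat\"}
+  val ctrm = Thm.capply add two
+in
+  Thm.prop_of (Thm.beta_conversion false ctrm)
+end"
+  "Const (\"==\",\<dots>) $
+  (Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $ \<dots>) $ Abs (\"y\",\<dots>,\<dots>)"}
+
+  The remaining abstraction on the right-hand side indicates that only the
+  outer-most redex has been contracted.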
The main point of conversions is that they can be used for rewriting
- @{ML_type cterm}s. To do this you can use the function @{ML
- "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we
- want to rewrite a @{ML_type cterm} according to the meta-equation:
+ @{ML_type cterm}s. One example is the function
+ @{ML [index] rewr_conv in Conv}, which expects a meta-equation as an
+ argument. Consider the following meta-equation.
+
*}
lemma true_conj1: "True \<and> P \<equiv> P" by simp
text {*
- You can see how this function works in the example rewriting
- @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
+ It can be used, for example, to rewrite @{term "True \<and> (Foo \<longrightarrow> Bar)"}
+ to @{term "Foo \<longrightarrow> Bar"}. The code is as follows.
@{ML_response_fake [display,gray]
"let
@@ -1909,7 +1900,7 @@
Note, however, that the function @{ML [index] rewr_conv in Conv} only rewrites the
outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match
exactly the
- left-hand side of the theorem, then @{ML [index] rewr_conv in Conv} fails by raising
+ left-hand side of the theorem, then @{ML [index] rewr_conv in Conv} fails, raising
the exception @{ML CTERM}.
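+
+  To illustrate the first point, here is a sketch (the output is only
+  indicated) in which only the outer-most conjunction with @{term True} is
+  rewritten, while the inner one is left untouched:
+
+  @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.rewr_conv @{thm true_conj1}
+  val ctrm = @{cterm \"True \<and> (True \<and> Q)\"}
+in
+  conv ctrm
+end"
+  "True \<and> (True \<and> Q) \<equiv> True \<and> Q"}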
This very primitive way of rewriting can be made more powerful by
@@ -1928,8 +1919,8 @@
"(\<lambda>x. x \<and> False) True \<equiv> False"}
where we first beta-reduce the term and then rewrite according to
- @{thm [source] true_conj1}. (Recall the problem with the pretty-printer
- normalising all terms.)
+ @{thm [source] true_conj1}. (When running this example, recall the
+ problem with the pretty-printer normalising all terms.)
The conversion combinator @{ML [index] else_conv} tries out the
first one, and if it does not apply, tries the second. For example
@@ -1954,48 +1945,56 @@
For example
@{ML_response_fake [display,gray]
- "Conv.try_conv (Conv.rewr_conv @{thm true_conj1}) @{cterm \"True \<or> P\"}"
+"let
+ val conv = Conv.try_conv (Conv.rewr_conv @{thm true_conj1})
+ val ctrm = @{cterm \"True \<or> P\"}
+in
+ conv ctrm
+end"
"True \<or> P \<equiv> True \<or> P"}
Apart from the function @{ML beta_conversion in Thm}, which is able to fully
beta-normalise a term, the conversions so far are restricted in that they
only apply to the outer-most level of a @{ML_type cterm}. In what follows we
- will lift this restriction. The combinator @{ML [index] arg_conv in Conv} will apply
- the conversion to the first argument of an application, that is the term
- must be of the form @{ML "t1 $ t2" for t1 t2} and the conversion is applied
- to @{text t2}. For example
+ will lift this restriction. The combinators @{ML [index] fun_conv in Conv}
+ and @{ML [index] arg_conv in Conv} will apply
+ a conversion to the function, respectively the argument, of an application.
+ For example
@{ML_response_fake [display,gray]
"let
- val conv = Conv.rewr_conv @{thm true_conj1}
+ val conv = Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})
val ctrm = @{cterm \"P \<or> (True \<and> Q)\"}
in
- Conv.arg_conv conv ctrm
+ conv ctrm
end"
"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
The reason for this behaviour is that @{text "(op \<or>)"} expects two
arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
- conversion is then applied to @{text "t2"} which in the example above
- stands for @{term "True \<and> Q"}. The function @{ML [index] fun_conv in Conv} applies
- the conversion to the first argument of an application.
+ conversion is then applied to @{text "t2"}, which in the example above
+ stands for @{term "True \<and> Q"}. The function @{ML fun_conv in Conv} would apply
+ the conversion to the term @{text "(Const \<dots> $ t1)"}.
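+
+  So to rewrite inside the first argument @{text "t1"}, @{ML fun_conv in Conv}
+  can be combined with @{ML arg_conv in Conv}. A sketch (the output is only
+  indicated):
+
+  @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.fun_conv (Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}))
+  val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
+in
+  conv ctrm
+end"
+  "(True \<and> P) \<or> Q \<equiv> P \<or> Q"}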
- The function @{ML [index] abs_conv in Conv} applies a conversion under an abstraction.
- For example:
+ The function @{ML [index] abs_conv in Conv} applies a conversion under an
+ abstraction. For example:
@{ML_response_fake [display,gray]
"let
val conv = Conv.rewr_conv @{thm true_conj1}
- val ctrm = @{cterm \"\<lambda>P. True \<and> P \<and> Foo\"}
+ val ctrm = @{cterm \"\<lambda>P. True \<and> (P \<and> Foo)\"}
in
Conv.abs_conv (K conv) @{context} ctrm
end"
- "\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
+ "\<lambda>P. True \<and> (P \<and> Foo) \<equiv> \<lambda>P. P \<and> Foo"}
- Note that this conversion needs a context as an argument. The conversion that
- goes under an application is @{ML [index] combination_conv in Conv}. It expects two conversions
- as arguments, each of which is applied to the corresponding ``branch'' of the
- application.
+ Note that this conversion needs a context as an argument. We also give the
+ conversion as @{text "(K conv)"}, which is a function that ignores its
+ argument (this argument is a pair consisting of a sufficiently freshened
+ version of the abstracted variable and a context). The conversion that goes under
+ an application is @{ML [index] combination_conv in Conv}. It expects two
+ conversions as arguments, each of which is applied to the corresponding
+ ``branch'' of the application.
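+
+  For example, by giving a different conversion for each branch, both
+  conjunctions in the term below can be rewritten at once (again only a
+  sketch; the output is only indicated):
+
+  @{ML_response_fake [display,gray]
+"let
+  val conv = Conv.rewr_conv @{thm true_conj1}
+  val ctrm = @{cterm \"(True \<and> P) \<or> (True \<and> Q)\"}
+in
+  Conv.combination_conv (Conv.arg_conv conv) conv ctrm
+end"
+  "(True \<and> P) \<or> (True \<and> Q) \<equiv> P \<or> Q"}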
We can now apply all these functions in a conversion that recursively
descends a term and applies a ``@{thm [source] true_conj1}''-conversion
@@ -2013,8 +2012,8 @@
| _ => Conv.all_conv ctrm*}
text {*
- This function ``fires'' if the terms is of the form @{text "True \<and> \<dots>"};
- it descends under applications (Line 6 and 7) and abstractions
+ This function ``fires'' if the term is of the form @{text "(True \<and> \<dots>)"}.
+ It descends under applications (Lines 6 and 7) and abstractions
(Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2
we need to transform the @{ML_type cterm} into a @{ML_type term} in order
to be able to pattern-match the term. To see this
@@ -2022,10 +2021,10 @@
@{ML_response_fake [display,gray]
"let
- val ctxt = @{context}
+ val conv = all_true1_conv @{context}
val ctrm = @{cterm \"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x\"}
in
- all_true1_conv ctxt ctrm
+ conv ctrm
end"
"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
@@ -2049,11 +2048,11 @@
@{ML_response_fake [display,gray]
"let
- val ctxt = @{context}
+ val conv = if_true1_conv @{context}
val ctrm =
@{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
in
- if_true1_conv ctxt ctrm
+ conv ctrm
end"
"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False
\<equiv> if P (1 \<noteq> 2) then True \<and> True else True \<and> False"}
@@ -2068,21 +2067,33 @@
lemma foo_test: "P \<or> (True \<and> \<not>P)" by simp
text {*
- Using the conversion you can transform this theorem into a new theorem
- as follows
+ Using the conversion @{ML all_true1_conv}, you can transform this theorem into a
+ new theorem as follows
@{ML_response_fake [display,gray]
- "Conv.fconv_rule (all_true1_conv @{context}) @{thm foo_test}"
+"let
+ val conv = Conv.fconv_rule (all_true1_conv @{context})
+ val thm = @{thm foo_test}
+in
+ conv thm
+end"
"?P \<or> \<not> ?P"}
Finally, conversions can also be turned into tactics and then applied to
goal states. This can be done with the help of the function @{ML [index] CONVERSION},
and also some predefined conversion combinators that traverse a goal
- state. The combinators for the goal state are: @{ML [index] params_conv in Conv} for
- converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
- Q"}); the function @{ML [index] prems_conv in Conv} for applying a conversion to all
- premises of a goal, and @{ML [index] concl_conv in Conv} for applying a conversion to
- the conclusion of a goal.
+ state. The combinators for the goal state are:
+
+ \begin{itemize}
+ \item @{ML [index] params_conv in Conv} for converting under parameters
+ (i.e.~where goals are of the form @{text "\<And>x. P x \<Longrightarrow> Q x"})
+
+ \item @{ML [index] prems_conv in Conv} for applying a conversion to all
+ premises of a goal, and
+
+ \item @{ML [index] concl_conv in Conv} for applying a conversion to the
+ conclusion of a goal.
+ \end{itemize}
Assume we want to apply @{ML all_true1_conv} only in the conclusion
of the goal, and @{ML if_true1_conv} should only apply to the premises.
@@ -2125,8 +2136,7 @@
\begin{exercise}\label{ex:addconversion}
Write a tactic that does the same as the simproc in exercise
- \ref{ex:addsimproc}, but is based in conversions. That means replace terms
- of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make
+ \ref{ex:addsimproc}, but is based on conversions. You can make
the same assumptions as in \ref{ex:addsimproc}.
\end{exercise}