redefined the functions warning and tracing in order to properly match more antiquotations
--- a/CookBook/Base.thy Wed Mar 04 14:26:21 2009 +0000
+++ b/CookBook/Base.thy Thu Mar 05 16:46:43 2009 +0000
@@ -15,5 +15,9 @@
Toplevel.generic_theory (ML_Context.exec (fn () => ML_Context.eval true pos txt))));
*}
+ML {*
+ fun warning str = str
+ fun tracing str = str
+*}
end
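With these redefinitions, @{ML warning} and @{ML tracing} simply return their string
argument instead of printing it, which lets the @{text "ML_response"} antiquotation
match the output directly (this is why several @{text "ML_response_fake"} examples in
FirstSteps.thy below become plain @{text "ML_response"}). A minimal sketch of the
effect, assuming the redefinitions above are in scope:

ML {*
  (* with warning redefined as the identity on strings, this simply
     evaluates to "any string" instead of producing a warning message *)
  val s = warning "any string"
*}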
--- a/CookBook/FirstSteps.thy Wed Mar 04 14:26:21 2009 +0000
+++ b/CookBook/FirstSteps.thy Thu Mar 05 16:46:43 2009 +0000
@@ -44,7 +44,7 @@
evaluated by using the advance and undo buttons of your Isabelle
environment. The code inside the \isacommand{ML}-command can also contain
value and function bindings, and even those can be undone when the proof
- script is retracted. As mentioned earlier, we will drop the
+ script is retracted. As mentioned in the Introduction, we will drop the
\isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} scaffolding whenever we
show code. The lines prefixed with @{text [quotes] ">"} are not part of the
code, rather they indicate what the response is when the code is evaluated.
@@ -73,14 +73,14 @@
in your code. This can be done in a ``quick-and-dirty'' fashion using
the function @{ML "warning"}. For example
- @{ML_response_fake [display,gray] "warning \"any string\"" "\"any string\""}
+ @{ML_response [display,gray] "warning \"any string\"" "\"any string\""}
will print out @{text [quotes] "any string"} inside the response buffer
of Isabelle. This function expects a string as argument. If you develop under PolyML,
then there is a convenient, though again ``quick-and-dirty'', method for
converting values into strings, namely the function @{ML makestring}:
- @{ML_response_fake [display,gray] "warning (makestring 1)" "\"1\""}
+ @{ML_response [display,gray] "warning (makestring 1)" "\"1\""}
However @{ML makestring} only works if the type of what is converted is monomorphic
and not a function.
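A small sketch of this ``quick-and-dirty'' printing, assuming you run under PolyML
so that @{ML makestring} is available:

ML {*
  (* fine: the argument has the monomorphic type int *)
  val _ = warning (makestring (2 + 3))

  (* as noted above, makestring gives no useful output for
     polymorphic values or for functions *)
*}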
@@ -91,7 +91,7 @@
function @{ML tracing} is more appropriate. This function writes all output into
a separate tracing buffer. For example:
- @{ML_response_fake [display,gray] "tracing \"foo\"" "\"foo\""}
+ @{ML_response [display,gray] "tracing \"foo\"" "\"foo\""}
It is also possible to redirect the ``channel'' where the string @{text "foo"} is
printed to a separate file, e.g.~to prevent ProofGeneral from choking on massive
@@ -429,12 +429,6 @@
Again, this way of referencing simpsets makes you independent of additions
of lemmas to the simpset by the user that potentially cause loops.
- \begin{readmore}
- The infrastructure for simpsets is implemented in @{ML_file "Pure/meta_simplifier.ML"}
- and @{ML_file "Pure/simplifier.ML"}. Discrimination nets are implemented
- in @{ML_file "Pure/net.ML"}.
- \end{readmore}
-
While antiquotations have many applications, they were originally introduced in order
to avoid explicit bindings for theorems such as:
*}
@@ -725,8 +719,10 @@
the smallest name that is still unique, whereas @{ML base_name in Sign} always
strips off all qualifiers.
+ (FIXME: These are probably now NameSpace functions)
+
\begin{readmore}
- FIXME
+ FIXME: Find the right files to do with naming.
\end{readmore}
*}
@@ -771,6 +767,19 @@
(Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)
end" "0 + 0"}
+ In Isabelle, types can also be certified. For example, you obtain
+ the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on the ML-level
+ as follows:
+
+ @{ML_response_fake [display,gray]
+ "ctyp_of @{theory} (@{typ nat} --> @{typ bool})"
+ "nat \<Rightarrow> bool"}
+
+ \begin{readmore}
+ For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see
+ the file @{ML_file "Pure/thm.ML"}.
+ \end{readmore}
+
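  Certification can also be undone again. A minimal sketch, assuming the standard
  functions @{ML "Thm.term_of"} and @{ML "Thm.typ_of"} from @{ML_file "Pure/thm.ML"},
  which return the underlying @{ML_type term} and @{ML_type typ}:

ML {*
  (* extract the underlying term and type from the certified values *)
  val t  = Thm.term_of @{cterm "(0::nat) + 0"}
  val ty = Thm.typ_of (ctyp_of @{theory} (@{typ nat} --> @{typ bool}))
*}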
\begin{exercise}
Check that the function defined in Exercise~\ref{fun:revsum} returns a
result that type-checks.
@@ -830,7 +839,9 @@
\begin{readmore}
See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
- checking and pretty-printing of terms are defined.
+ checking and pretty-printing of terms are defined. Functions related to
+ type inference are implemented in @{ML_file "Pure/type.ML"} and
+ @{ML_file "Pure/type_infer.ML"}.
\end{readmore}
*}
@@ -1025,7 +1036,7 @@
@{ML_response_fake [display,gray]
"!my_thms" "[\"True\"]"}
- You can also add theorems using the command \isacommand{declare}
+ You can also add theorems using the command \isacommand{declare}.
*}
declare test[my_thms] trueI_2[my_thms add]
--- a/CookBook/Parsing.thy Wed Mar 04 14:26:21 2009 +0000
+++ b/CookBook/Parsing.thy Thu Mar 05 16:46:43 2009 +0000
@@ -406,7 +406,7 @@
*}
ML{*fun filtered_input str =
- filter OuterLex.is_proper (OuterSyntax.scan Position.none str) *}
+ filter OuterLex.is_proper (OuterSyntax.scan Position.none str) *}
text {*
@@ -594,9 +594,9 @@
The functions to do with input and output of XML and YXML are defined
in @{ML_file "Pure/General/xml.ML"} and @{ML_file "Pure/General/yxml.ML"}.
\end{readmore}
+
*}
-
section {* Parsing Specifications\label{sec:parsingspecs} *}
text {*
--- a/CookBook/Tactical.thy Wed Mar 04 14:26:21 2009 +0000
+++ b/CookBook/Tactical.thy Thu Mar 05 16:46:43 2009 +0000
@@ -1094,6 +1094,11 @@
@{ML_file "Provers/splitter.ML"}.
\end{readmore}
+ \begin{readmore}
+ FIXME: Find the right place for this. Discrimination nets are implemented
+ in @{ML_file "Pure/net.ML"}.
+ \end{readmore}
+
The most common combinators to modify simpsets are
\begin{isabelle}
@@ -1711,7 +1716,7 @@
"*** Exception- CTERM (\"no conversion\", []) raised"}
A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it
- produces an equation between a term and its beta-normal form. For example
+ produces a meta-equation between a term and its beta-normal form. For example
@{ML_response_fake [display,gray]
"let
@@ -1724,7 +1729,7 @@
"((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"},
- since the pretty printer for @{ML_type cterm}s already beta-normalises terms.
+ since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
But given the way we constructed the term (using the function
@{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s),
we can be sure the left-hand side must contain beta-redexes. Indeed
@@ -1736,9 +1741,9 @@
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
+ val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
in
- #prop (rep_thm
- (Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)))
+ #prop (rep_thm thm)
end"
"Const (\"==\",\<dots>) $
(Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
@@ -1763,15 +1768,15 @@
The main point of conversions is that they can be used for rewriting
@{ML_type cterm}s. To do this you can use the function @{ML
- "Conv.rewr_conv"} which expects a meta-equation as an argument. Suppose we
- want to rewrite a @{ML_type cterm} according to the (meta)equation:
+ "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we
+ want to rewrite a @{ML_type cterm} according to the meta-equation:
*}
lemma true_conj1: "True \<and> P \<equiv> P" by simp
text {*
- You can see how this function works with the example
- @{term "True \<and> (Foo \<longrightarrow> Bar)"}:
+ You can see how this function works in the following example, which rewrites
+ the @{ML_type cterm} @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
@{ML_response_fake [display,gray]
"let
@@ -1782,7 +1787,8 @@
"True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
Note, however, that the function @{ML Conv.rewr_conv} only rewrites the
- outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match the
+ outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match
+ exactly the
left-hand side of the theorem, then @{ML Conv.rewr_conv} raises
the exception @{ML CTERM}.
@@ -1841,7 +1847,7 @@
"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
The reason for this behaviour is that @{text "(op \<or>)"} expects two
- arguments. So the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
+ arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
conversion is then applied to @{text "t2"} which in the example above
stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies
the conversion to the first argument of an application.
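  To illustrate, here is a small sketch (assuming the setup above with
  @{thm [source] true_conj1}) that rewrites inside the first argument of
  @{text "(op \<or>)"} by composing @{ML Conv.fun_conv} with @{ML Conv.arg_conv}:

  @{ML_response_fake [display,gray]
  "let
  val ctrm = @{cterm \"(True \<and> P) \<or> Q\"}
in
  Conv.fun_conv (Conv.arg_conv (Conv.rewr_conv @{thm true_conj1})) ctrm
end"
  "(True \<and> P) \<or> Q \<equiv> P \<or> Q"}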
@@ -1858,13 +1864,14 @@
end"
"\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
- The conversion that goes under an application is
- @{ML Conv.combination_conv}. It expects two conversions as arguments,
- each of which is applied to the corresponding ``branch'' of the application.
+ Note that this conversion needs a context as an argument. The conversion that
+ goes under an application is @{ML Conv.combination_conv}. It expects two conversions
+ as arguments, each of which is applied to the corresponding ``branch'' of the
+ application.
- We can now apply all these functions in a
- conversion that recursively descends a term and applies a conversion in all
- possible positions.
+ We can now apply all these functions in a conversion that recursively
+ descends a term and applies a ``@{thm [source] true_conj1}''-conversion
+ in all possible positions.
*}
ML %linenosgray{*fun all_true1_conv ctxt ctrm =
@@ -1873,17 +1880,17 @@
(Conv.arg_conv (all_true1_conv ctxt) then_conv
Conv.rewr_conv @{thm true_conj1}) ctrm
| _ $ _ => Conv.combination_conv
- (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
+ (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
| Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm
| _ => Conv.all_conv ctrm*}
text {*
- This functions descends under applications (Line 6 and 7) and abstractions
- (Line 8); and ``fires'' if the outer-most constant is an @{text "True \<and> \<dots>"}
- (Lines 3-5); otherwise it leaves the term unchanged (Line 9). In Line 2
+ This function ``fires'' if the term is of the form @{text "True \<and> \<dots>"};
+ it descends under applications (Lines 6 and 7) and abstractions
+ (Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2
we need to transform the @{ML_type cterm} into a @{ML_type term} in order
to be able to pattern-match the term. To see this
- conversion in action, consider the following example
+ conversion in action, consider the following example:
@{ML_response_fake [display,gray]
"let
@@ -1894,8 +1901,6 @@
end"
"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
- where the conversion is applied ``deep'' inside the term.
-
To see how much control you have over rewriting with conversions, let us
make the task a bit more complicated by rewriting according to the rule
@{text true_conj1}, but only in the first arguments of @{term If}s. Then
@@ -1907,7 +1912,7 @@
Const (@{const_name If}, _) $ _ =>
Conv.arg_conv (all_true1_conv ctxt) ctrm
| _ $ _ => Conv.combination_conv
- (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
+ (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
| Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm
| _ => Conv.all_conv ctrm *}
@@ -1918,7 +1923,7 @@
"let
val ctxt = @{context}
val ctrm =
- @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
+ @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
in
if_true1_conv ctxt ctrm
end"
@@ -1944,15 +1949,15 @@
Finally, conversions can also be turned into tactics and then applied to
goal states. This can be done with the help of the function @{ML CONVERSION},
- and also some predefined conversion combinator which traverse a goal
+ and also some predefined conversion combinators that traverse a goal
state. The combinators for the goal state are: @{ML Conv.params_conv} for
- going under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
- Q"}); the function @{ML Conv.prems_conv} for applying the conversion to all
- premises of a goal, and @{ML Conv.concl_conv} for applying the conversion to
+ converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
+ Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all
+ premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to
the conclusion of a goal.
Assume we want to apply @{ML all_true1_conv} only in the conclusion
- of the goal, and @{ML if_true1_conv} should only be applied to the premises.
+ of the goal, and @{ML if_true1_conv} should only apply to the premises.
Here is a tactic doing exactly that:
*}
@@ -2000,8 +2005,9 @@
\end{exercise}
\begin{exercise}
- Compare which way of rewriting such terms is more efficient. For this
- you might have to construct quite large terms.
+ Compare which way of rewriting such terms (either Exercise~\ref{addsimproc}
+ or Exercise~\ref{ex:addconversion}) is faster. For this you might have to
+ construct quite large terms. Also see Recipe~\ref{rec:timing} for information about timing.
\end{exercise}
\begin{readmore}
Binary file cookbook.pdf has changed