--- a/CookBook/Appendix.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Appendix.thy Thu Mar 12 08:11:02 2009 +0100
@@ -5,21 +5,22 @@
text {* \appendix *}
-text {*
- Possible topics: translations/print translations
-*}
-
chapter {* Recipes *}
text {*
- Possible topics: translations/print translations
+ Possible further topics:
+
+ \begin{itemize}
+ \item translations/print translations;
+ @{ML "ProofContext.print_syntax"}
- User Space Type Systems (in the already existing form)
+ \item user space type systems (in the form that already exists)
+
+ \item unification and typing algorithms
+
+ \item useful datastructures: discrimination nets, association lists
+ \end{itemize}
*}
-end
-
-
-
-
+end
\ No newline at end of file
--- a/CookBook/Base.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Base.thy Thu Mar 12 08:11:02 2009 +0100
@@ -14,6 +14,10 @@
(OuterParse.ML_source >> (fn (txt, pos) =>
Toplevel.generic_theory (ML_Context.exec (fn () => ML_Context.eval true pos txt))));
*}
-
-
+(*
+ML {*
+ fun warning str = str
+ fun tracing str = str
+*}
+*)
end
--- a/CookBook/FirstSteps.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/FirstSteps.thy Thu Mar 12 08:11:02 2009 +0100
@@ -18,6 +18,9 @@
\ldots
\end{tabular}
\end{center}
+
+ We also generally assume you are working with HOL. The given examples might
+ need to be adapted slightly if you work in a different logic.
*}
section {* Including ML-Code *}
@@ -41,7 +44,7 @@
evaluated by using the advance and undo buttons of your Isabelle
environment. The code inside the \isacommand{ML}-command can also contain
value and function bindings, and even those can be undone when the proof
- script is retracted. As mentioned earlier, we will drop the
+ script is retracted. As mentioned in the Introduction, we will drop the
\isacommand{ML}~@{text "\<verbopen> \<dots> \<verbclose>"} scaffolding whenever we
show code. The lines prefixed with @{text [quotes] ">"} are not part of the
code, rather they indicate what the response is when the code is evaluated.
@@ -151,15 +154,26 @@
text {*
The easiest way to get the string of a theorem is to transform it
- into a @{ML_type cterm} using the function @{ML crep_thm}.
+ into a @{ML_type cterm} using the function @{ML crep_thm}. Theorems
+ also include schematic variables, such as @{text "?P"}, @{text "?Q"}
+ and so on. In order to improve the readability of theorems we convert
+ these schematic variables into free variables using the
+ function @{ML Variable.import_thms}.
*}
-ML{*fun str_of_thm ctxt thm =
- let
- val {prop, ...} = crep_thm thm
- in
- str_of_cterm ctxt prop
- end*}
+ML{*fun no_vars ctxt thm =
+let
+ val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt
+in
+ thm'
+end
+
+fun str_of_thm ctxt thm =
+let
+ val {prop, ...} = crep_thm (no_vars ctxt thm)
+in
+ str_of_cterm ctxt prop
+end*}
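+
+text {*
+  For example, a quick check (the response is only an approximation; the
+  exact output depends on the print mode):
+
+  @{ML_response_fake [display,gray]
+  "str_of_thm @{context} @{thm conjI}"
+  "\"\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> P \<and> Q\""}
+*}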
text {*
Again the function @{ML commas} helps with printing more than one theorem.
@@ -168,7 +182,6 @@
ML{*fun str_of_thms ctxt thms =
commas (map (str_of_thm ctxt) thms)*}
-
section {* Combinators\label{sec:combinators} *}
text {*
@@ -395,29 +408,23 @@
ML{*fun get_thm_names_from_ss simpset =
let
- val ({rules,...}, _) = MetaSimplifier.rep_ss simpset
+ val {simps,...} = MetaSimplifier.dest_ss simpset
in
- map #name (Net.entries rules)
+ map #1 simps
end*}
text {*
- The function @{ML rep_ss in MetaSimplifier} returns a record containing all
- information about the simpset. The rules of a simpset are stored in a
- \emph{discrimination net} (a data structure for fast indexing). From this
- net you can extract the entries using the function @{ML Net.entries}.
- You can now use @{ML get_thm_names_from_ss} to obtain all names of theorems stored
- in the current simpset---this simpset can be referred to using the antiquotation
+ The function @{ML dest_ss in MetaSimplifier} returns a record containing all
+ information stored in the simpset. We are only interested in the names of
+ the simp rules. You can now use @{ML get_thm_names_from_ss} to obtain all
+ names of theorems stored in the current simpset. This simpset can be
+ referred to using the antiquotation
@{text "@{simpset}"}.
@{ML_response_fake [display,gray]
"get_thm_names_from_ss @{simpset}"
"[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
- \begin{readmore}
- The infrastructure for simpsets is implemented in @{ML_file "Pure/meta_simplifier.ML"}
- and @{ML_file "Pure/simplifier.ML"}. Discrimination nets are implemented
- in @{ML_file "Pure/net.ML"}.
- \end{readmore}
+ Again, this way of referencing simpsets makes you independent of additions
+ of lemmas to the simpset by the user that could potentially cause loops.
While antiquotations have many applications, they were originally introduced in order
to avoid explicit bindings for theorems such as:
@@ -427,14 +434,13 @@
text {*
These bindings are difficult to maintain and also can be accidentally
- overwritten by the user. This often breakes Isabelle
+ overwritten by the user. This often broke Isabelle
packages. Antiquotations solve this problem, since they are ``linked''
statically at compile-time. However, this static linkage also limits their
 usefulness in cases where data needs to be built up dynamically. In the
course of this chapter you will learn more about these antiquotations:
they can simplify Isabelle programming since one can directly access all
 kinds of logical elements from the ML-level.
-
*}
section {* Terms and Types *}
@@ -465,7 +471,7 @@
\end{readmore}
Sometimes the internal representation of terms can be surprisingly different
- from what you see at the user level, because the layers of
+ from what you see at the user-level, because the layers of
parsing/type-checking/pretty printing can be quite elaborate.
\begin{exercise}
@@ -513,7 +519,7 @@
*}
-section {* Constructing Terms and Types Manually *}
+section {* Constructing Terms and Types Manually\label{sec:terms_types_manually} *}
text {*
While antiquotations are very convenient for constructing terms, they can
@@ -545,19 +551,18 @@
the given arguments
@{ML_response [display,gray] "make_imp @{term S} @{term T} @{typ nat}"
- "Const \<dots> $
- Abs (\"x\", Type (\"nat\",[]),
- Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $
- (Free (\"T\",\<dots>) $ \<dots>))"}
+"Const \<dots> $
+ Abs (\"x\", Type (\"nat\",[]),
+ Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $ (Free (\"T\",\<dots>) $ \<dots>))"}
whereas with @{ML make_wrong_imp} we obtain a term involving the @{term "P"}
and @{text "Q"} from the antiquotation.
@{ML_response [display,gray] "make_wrong_imp @{term S} @{term T} @{typ nat}"
- "Const \<dots> $
- Abs (\"x\", \<dots>,
- Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $
- (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"}
+"Const \<dots> $
+ Abs (\"x\", \<dots>,
+ Const \<dots> $ (Const \<dots> $ (Free (\"P\",\<dots>) $ \<dots>)) $
+ (Const \<dots> $ (Free (\"Q\",\<dots>) $ \<dots>)))"}
Although types of terms can often be inferred, there are many
situations where you need to construct types manually, especially
@@ -566,15 +571,15 @@
*}
-ML{*fun make_fun_type tau1 tau2 = Type ("fun",[tau1,tau2]) *}
+ML{*fun make_fun_type tau1 tau2 = Type ("fun", [tau1, tau2]) *}
-text {* This can be equally written as: *}
+text {* This can be equally written with the combinator @{ML "-->"} as: *}
ML{*fun make_fun_type tau1 tau2 = tau1 --> tau2 *}
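+
+text {*
+  A quick check of the result (a sketch; the response shows the constructed
+  @{ML_type typ} value):
+
+  @{ML_response_fake [display,gray]
+  "make_fun_type @{typ nat} @{typ bool}"
+  "Type (\"fun\", [Type (\"nat\", []), Type (\"bool\", [])])"}
+*}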
text {*
A handy function for manipulating terms is @{ML map_types}: it takes a
- function and applies it to every type in the term. You can, for example,
+ function and applies it to every type in a term. You can, for example,
change every @{typ nat} in a term into an @{typ int} using the function:
*}
@@ -701,16 +706,18 @@
text {*
 Occasionally you have to calculate what the ``base'' name of a given
constant is. For this you can use the function @{ML Sign.extern_const} or
- @{ML Sign.base_name}. For example:
+ @{ML Long_Name.base_name}. For example:
@{ML_response [display,gray] "Sign.extern_const @{theory} \"List.list.Nil\"" "\"Nil\""}
The difference between both functions is that @{ML extern_const in Sign} returns
- the smallest name that is still unique, whereas @{ML base_name in Sign} always
+ the smallest name that is still unique, whereas @{ML base_name in Long_Name} always
strips off all qualifiers.
\begin{readmore}
- FIXME
+ Functions about naming are implemented in @{ML_file "Pure/General/name_space.ML"};
+ functions about signatures in @{ML_file "Pure/sign.ML"}.
+
\end{readmore}
*}
@@ -755,6 +762,19 @@
(Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)
end" "0 + 0"}
+ In Isabelle, types also need to be certified. For example, you obtain
+ the certified type for the Isabelle type @{typ "nat \<Rightarrow> bool"} on the ML-level
+ as follows:
+
+ @{ML_response_fake [display,gray]
+ "ctyp_of @{theory} (@{typ nat} --> @{typ bool})"
+ "nat \<Rightarrow> bool"}
+
+ \begin{readmore}
+ For functions related to @{ML_type cterm}s and @{ML_type ctyp}s see
+ the file @{ML_file "Pure/thm.ML"}.
+ \end{readmore}
+
\begin{exercise}
Check that the function defined in Exercise~\ref{fun:revsum} returns a
result that type-checks.
@@ -814,8 +834,12 @@
\begin{readmore}
See @{ML_file "Pure/Syntax/syntax.ML"} where more functions about reading,
- checking and pretty-printing of terms are defined.
+ checking and pretty-printing of terms are defined. Functions related to the
+ type inference are implemented in @{ML_file "Pure/type.ML"} and
+ @{ML_file "Pure/type_infer.ML"}.
\end{readmore}
+
+ (FIXME: say something about sorts)
*}
@@ -894,8 +918,7 @@
theorem is proven. In particular, it is not possible to find out
 which theorems have a given attribute, unless of course
 the function behind the attribute stores the theorems in a retrievable
- datastructure. This can be easily done by the recipe described in
- \ref{rec:named}.
+ datastructure.
If you want to print out all currently known attributes a theorem
can have, you can use the function:
@@ -919,7 +942,7 @@
context (which we ignore in the code above) and a theorem (@{text thm}), and
returns another theorem (namely @{text thm} resolved with the lemma
@{thm [source] sym}: @{thm sym[no_vars]}). The function @{ML "Thm.rule_attribute"} then returns
- an attribute (of type @{ML_type "attribute"}).
+ an attribute.
Before we can use the attribute, we need to set it up. This can be done
using the function @{ML Attrib.add_attributes} as follows.
@@ -927,8 +950,7 @@
setup {*
Attrib.add_attributes
- [("my_sym", Attrib.no_args my_symmetric, "applying the sym rule")]
-*}
+ [("my_sym", Attrib.no_args my_symmetric, "applying the sym rule")] *}
text {*
The attribute does not expect any further arguments (unlike @{text "[OF
@@ -949,28 +971,183 @@
@{text "> "}~@{thm test[my_sym]}
\end{isabelle}
+ The purpose of @{ML Thm.rule_attribute} is to directly manipulate theorems.
+ Another usage of attributes is to add and delete theorems from stored data.
+ For example the attribute @{text "[simp]"} adds or deletes a theorem from the
+ current simpset. For these applications, you can use @{ML Thm.declaration_attribute}.
+ To illustrate this function, let us introduce a reference containing a list
+ of theorems.
*}
+ML{*val my_thms = ref ([]:thm list)*}
+
+text {*
+ A word of warning: such references must not be used in any code that
+ is meant to be more than just for testing purposes! Here it is only used
+ to illustrate matters. We will show later how to store data properly without
+ using references. The function @{ML Thm.declaration_attribute} expects us to
+ provide two functions that add and delete theorems from this list.
+ We define them as follows:
+*}
+
+ML{*fun my_thms_add thm ctxt =
+ (my_thms := Thm.add_thm thm (!my_thms); ctxt)
+
+fun my_thms_del thm ctxt =
+ (my_thms := Thm.del_thm thm (!my_thms); ctxt)*}
+
text {*
-What are:
+ These functions take a theorem and a context; for what we are explaining
+ here it is sufficient that they just return the context unchanged. They do,
+ however, change the reference @{ML my_thms}: the function @{ML Thm.add_thm}
+ adds a theorem if it is not already included in the list, and @{ML
+ Thm.del_thm} deletes one. Both functions use the predicate @{ML
+ Thm.eq_thm_prop} which compares theorems according to their proved
+ propositions (modulo alpha-equivalence).
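+
+  For example, a quick sanity check of this predicate:
+
+  @{ML_response [display,gray]
+  "Thm.eq_thm_prop (@{thm TrueI}, @{thm TrueI})" "true"}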
-@{text "declaration_attribute"}
+ You can turn both functions into attributes using
+*}
+
+ML{*val my_add = Thm.declaration_attribute my_thms_add
+val my_del = Thm.declaration_attribute my_thms_del *}
+
+text {*
+ and set up the attributes as follows
+*}
+
+setup {*
+ Attrib.add_attributes
+ [("my_thms", Attrib.add_del_args my_add my_del,
+ "maintaining a list of my_thms")] *}
+
+text {*
+ Now if you prove the lemma attaching the attribute @{text "[my_thms]"}
+*}
+
+lemma trueI_2[my_thms]: "True" by simp
+
+text {*
+ then you can see the lemma is added to the initially empty list.
+
+ @{ML_response_fake [display,gray]
+ "!my_thms" "[\"True\"]"}
+
+ You can also add theorems using the command \isacommand{declare}.
+*}
+
+declare test[my_thms] trueI_2[my_thms add]
+
+text {*
+ The @{text "add"} is the default operation and does not need
+ to be given. This declaration will cause the theorem list to be
+ updated as follows.
+
+ @{ML_response_fake [display,gray]
+ "!my_thms"
+ "[\"True\", \"Suc (Suc 0) = 2\"]"}
+
+ The theorem @{thm [source] trueI_2} only appears once, since the
+ function @{ML Thm.add_thm} tests for duplicates before extending
+ the list. Deletion from the list works as follows:
+*}
+
+declare test[my_thms del]
+
+text {* After this, the theorem list is again:
+
+ @{ML_response_fake [display,gray]
+ "!my_thms"
+ "[\"True\"]"}
+
+ In this example we used two functions declared as @{ML Thm.declaration_attribute},
+ but there can be any number of them. We just have to change the parser for reading
+ the arguments accordingly.
+
+ However, as said at the beginning of this example, using references for storing theorems is
+ \emph{not} the received way of doing such things. The received way is to
+ start a ``data slot'' in a context by using the functor @{text GenericDataFun}:
+*}
-@{text "theory_attributes"}
+ML {*structure Data = GenericDataFun
+ (type T = thm list
+ val empty = []
+ val extend = I
+ fun merge _ = Thm.merge_thms) *}
+
+text {*
+ To use this data slot, you only have to change @{ML my_thms_add} and
+ @{ML my_thms_del} to:
+*}
+
+ML{*val thm_add = Data.map o Thm.add_thm
+val thm_del = Data.map o Thm.del_thm*}
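+
+text {*
+  where @{ML Data.map} updates the data appropriately in the context. The
+  stored list can be read back with @{ML Data.get}; for example, a quick
+  (hypothetical) check on the so far empty slot:
+
+  @{ML_response_fake [display,gray]
+  "Data.get (Context.Proof @{context})" "[]"}
+*}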
+
+text {*
+ Since storing theorems in a special list is such a common task, there is
+ the functor @{text NamedThmsFun}, which does most of the work for you. To
+ obtain such a named theorem list, you just declare
+*}
+
+ML{*structure FooRules = NamedThmsFun
+ (val name = "foo"
+ val description = "Rules for foo"); *}
+
+text {*
+ and set up the @{ML_struct FooRules} with the command
+*}
+
+setup {* FooRules.setup *}
-@{text "proof_attributes"}
+text {*
+ This code declares a data slot where the theorems are stored,
+ an attribute @{text foo} (with the @{text add} and @{text del} options
+ for adding and deleting theorems) and an internal ML interface to retrieve and
+ modify the theorems.
+
+ Furthermore, the facts are made available on the user-level under the dynamic
+ fact name @{text foo}. For example you can declare three lemmas to be of the kind
+ @{text foo} by:
+*}
+
+lemma rule1[foo]: "A" sorry
+lemma rule2[foo]: "B" sorry
+lemma rule3[foo]: "C" sorry
+
+text {* and undeclare the first one by: *}
+
+declare rule1[foo del]
+
+text {* and query the remaining ones with:
+
+ \begin{isabelle}
+ \isacommand{thm}~@{text "foo"}\\
+ @{text "> ?C"}\\
+ @{text "> ?B"}
+ \end{isabelle}
+
+ On the ML-level the rules marked with @{text "foo"} can be retrieved
+ using the function @{ML FooRules.get}:
+
+ @{ML_response_fake [display,gray] "FooRules.get @{context}" "[\"?C\",\"?B\"]"}
+
+ \begin{readmore}
+ For more information see @{ML_file "Pure/Tools/named_thms.ML"} and also
+ the recipe in Section~\ref{recipe:storingdata} about storing arbitrary
+ data.
+ \end{readmore}
+
+ (FIXME What are: @{text "theory_attributes"}, @{text "proof_attributes"}?)
\begin{readmore}
FIXME: @{ML_file "Pure/more_thm.ML"} @{ML_file "Pure/Isar/attrib.ML"}
\end{readmore}
-
-
*}
-section {* Theories and Local Theories *}
+section {* Theories, Contexts and Local Theories (TBD) *}
text {*
(FIXME: expand)
@@ -992,7 +1169,7 @@
-section {* Storing Theorems\label{sec:storing} *}
+section {* Storing Theorems\label{sec:storing} (TBD) *}
text {* @{ML PureThy.add_thms_dynamic} *}
@@ -1001,28 +1178,41 @@
(* FIXME: some code below *)
(*<*)
+(*
setup {*
- Sign.add_consts_i [("bar", @{typ "nat"},NoSyn)]
+ Sign.add_consts_i [(Binding"bar", @{typ "nat"},NoSyn)]
*}
-
+*)
lemma "bar = (1::nat)"
oops
+(*
setup {*
Sign.add_consts_i [("foo", @{typ "nat"},NoSyn)]
#> PureThy.add_defs false [((Binding.name "foo_def",
Logic.mk_equals (Const ("FirstSteps.foo", @{typ "nat"}), @{term "1::nat"})), [])]
#> snd
*}
-
+*)
+(*
lemma "foo = (1::nat)"
apply(simp add: foo_def)
done
thm foo_def
+*)
(*>*)
-section {* Misc *}
+section {* Pretty-Printing (TBD) *}
+
+text {*
+ @{ML Pretty.big_list},
+ @{ML Pretty.brk},
+ @{ML Pretty.block},
+ @{ML Pretty.chunks}
+*}
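+
+text {*
+  A minimal sketch of how these combinators fit together (a hypothetical
+  example; @{ML Pretty.brk} inserts a breakable space):
+
+  @{ML_response_fake [display,gray]
+  "Pretty.string_of (Pretty.block [Pretty.str \"foo\", Pretty.brk 1, Pretty.str \"bar\"])"
+  "\"foo bar\""}
+*}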
+
+section {* Misc (TBD) *}
ML {*DatatypePackage.get_datatype @{theory} "List.list"*}
--- a/CookBook/Intro.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Intro.thy Thu Mar 12 08:11:02 2009 +0100
@@ -14,7 +14,7 @@
examples included in the tutorial. The code is as far as possible checked
against recent versions of Isabelle. If something does not work, then
please let us know. If you have comments, criticism or like to add to the
- tutorial, feel free---you are most welcome! The tutorial is meant to be
+ tutorial, please feel free---you are most welcome! The tutorial is meant to be
gentle and comprehensive. To achieve this we need your feedback.
*}
@@ -38,7 +38,7 @@
part of the distribution of Isabelle):
\begin{description}
- \item[The Implementation Manual] describes Isabelle
+ \item[The Isabelle/Isar Implementation Manual] describes Isabelle
from a high-level perspective, documenting both the underlying
concepts and some of the interfaces.
@@ -48,9 +48,9 @@
now, but some parts, particularly the chapters on tactics, are still
useful.
- \item[The Isar Reference Manual] is also an older document that provides
- material about Isar and its implementation. Some material in it
- is still useful.
+ \item[The Isar Reference Manual] provides specification material (like grammars,
+ examples and so on) about Isar and its implementation. It is currently in
+ the process of being updated.
\end{description}
Then of course there is:
@@ -60,7 +60,7 @@
things really work. Therefore you should not hesitate to look at the
way things are actually implemented. More importantly, it is often
good to look at code that does similar things as you want to do and
- to learn from other people's code.
+ to learn from that code.
\end{description}
*}
@@ -108,6 +108,10 @@
Further information or pointers to files.
\end{readmore}
+ A few exercises are scattered around the text. Their solutions are given
+ in Appendix~\ref{ch:solutions}. Of course, you learn most if you first try
+ to solve the exercises on your own and then look at the solutions.
+
*}
section {* Acknowledgements *}
@@ -115,7 +119,7 @@
text {*
Financial support for this tutorial was provided by the German
Research Council (DFG) under grant number URB 165/5-1. The following
- people contributed:
+ people contributed to the text:
\begin{itemize}
\item {\bf Stefan Berghofer} wrote nearly all of the ML-code of the
@@ -126,17 +130,30 @@
\item {\bf Sascha Böhme} contributed the recipes in \ref{rec:timeout},
\ref{rec:config}, \ref{rec:storingdata}, \ref{rec:external} and \ref{rec:oracle}.
- He also wrote section \ref{sec:conversion}.
+ He also wrote section \ref{sec:conversion} and helped with recipe \ref{rec:timing}.
\item {\bf Jeremy Dawson} wrote the first version of the chapter
about parsing.
\item {\bf Alexander Krauss} wrote the first version of the ``first-steps''
- chapter and also contributed recipe \ref{rec:named}.
+ chapter and also contributed the material on @{text NamedThmsFun}.
\end{itemize}
Please let me know of any omissions. Responsibility for any remaining
- errors lies with me.
+ errors lies with me.\bigskip
+
+ {\Large\bf
+ This document is still in the process of being written! All of the
+ text is still under constructions. Sections and
+ chapters that are under \underline{heavy} construction are marked
+ with TBD.}
+
+
+ \vfill
+ This document was compiled with:\\
+ \input{version}
*}
+
+
end
\ No newline at end of file
--- a/CookBook/Package/Ind_Code.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Package/Ind_Code.thy Thu Mar 12 08:11:02 2009 +0100
@@ -1,26 +1,323 @@
theory Ind_Code
-imports "../Base" Simple_Inductive_Package
+imports "../Base" Simple_Inductive_Package Ind_Prelims
begin
+section {* Code *}
+
+subsection {* Definitions *}
+
+text {*
+ The function @{text make_defs} below takes a term and a constant name, and
+ defines the constant as the term.
+*}
+
+ML{*fun make_defs ((binding, syn), trm) lthy =
+let
+ val arg = ((binding, syn), (Attrib.empty_binding, trm))
+ val ((_, (_ , thm)), lthy) = LocalTheory.define Thm.internalK arg lthy
+in
+ (thm, lthy)
+end*}
+
+text {*
+ The function returns the definition (as a theorem) and the local theory in
+ which this definition has been made. (FIXME: What is @{ML Thm.internalK}?)
+*}
+
+ML {*let
+ val lthy = TheoryTarget.init NONE @{theory}
+in
+ make_defs ((Binding.name "MyTrue", NoSyn), @{term "True"}) lthy
+end*}
+
+text {*
+ (FIXME: Why is the output of MyTrue blue?)
+*}
+
text {*
+ The next function constructs the term for the definition: the main
+ arguments are a predicate and the types of its arguments. It also expects
+ @{text orules}, which are the intro rules stated in the object logic, and
+ @{text preds}, which are all predicates. It returns the term for the
+ definition.
+*}
- @{ML_chunk [display,gray] definitions_aux}
- @{ML_chunk [display,gray,linenos] definitions}
+ML %linenosgray{*fun defs_aux lthy orules preds (pred, arg_types) =
+let
+ fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
+
+ val fresh_args =
+ arg_types
+ |> map (pair "z")
+ |> Variable.variant_frees lthy orules
+ |> map Free
+in
+ list_comb (pred, fresh_args)
+ |> fold_rev (curry HOLogic.mk_imp) orules
+ |> fold_rev mk_all preds
+ |> fold_rev lambda fresh_args
+end*}
+
+text {*
+ The lines 5-9 produce fresh arguments with which the predicate can be applied.
+ For this it pairs every type with a string @{text [quotes] "z"} (Line 7); then
+ generates variants for all these strings (names) so that they are unique w.r.t.
+ the intro rules; in Line 9 it generates the corresponding variable terms for these
+ unique names.
+
+ The unique free variables are applied to the predicate (Line 11); then
+ the intro rules are attached as preconditions (Line 12); in Line 13 we
+ quantify over all predicates; and in line 14 we just abstract over all
+ the (fresh) arguments of the predicate.
+*}
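+
+text {*
+  As a quick (hypothetical) check, we can apply @{ML defs_aux} to the
+  @{term even} and @{term odd} predicates from the previous section:
+*}
+
+ML {*let
+  val orules = [@{prop "even 0"}]
+  val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
+in
+  defs_aux @{context} orules preds (@{term "even::nat\<Rightarrow>bool"}, [@{typ nat}])
+end*}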
+text {*
+ The arguments of the main function for the definitions are
+ the intro rules, the predicates, their names, their syntax
+ annotations and the argument types of each predicate. It
+ returns a theorem list (the definitions) and a local
+ theory in which the definitions have been made.
+*}
+
+ML %linenosgray{*fun definitions rules preds prednames syns arg_typss lthy =
+let
+ val thy = ProofContext.theory_of lthy
+ val orules = map (ObjectLogic.atomize_term thy) rules
+ val defs = map (defs_aux lthy orules preds) (preds ~~ arg_typss)
+in
+ fold_map make_defs (prednames ~~ syns ~~ defs) lthy
+end*}
+
+text {*
+ In line 4 we generate the intro rules in the object logic; for this we have to
+ obtain the theory behind the local theory (Line 3); with this we can
+ call @{ML defs_aux} to generate the terms for the left-hand sides.
+ The actual definitions are made in Line 7.
+*}
+
+
+subsection {* Induction Principles *}
+
+ML{*fun inst_spec ct =
+ Drule.instantiate' [SOME (ctyp_of_term ct)] [NONE, SOME ct] @{thm spec}*}
+
+text {*
+ Instantiates the @{text "?x"} in @{thm spec} with a @{ML_type cterm}.
*}
text {*
+ Instantiates universal quantifications in the premises.
+*}
- @{ML_chunk [display,gray] induction_rules}
+lemma "\<forall>x\<^isub>1 x\<^isub>2 x\<^isub>3. P (x\<^isub>1::nat) (x\<^isub>2::nat) (x\<^isub>3::nat) \<Longrightarrow> True"
+apply (tactic {* EVERY' (map (dtac o inst_spec)
+ [@{cterm "y\<^isub>1::nat"},@{cterm "y\<^isub>2::nat"},@{cterm "y\<^isub>3::nat"}]) 1*})
+txt {* \begin{minipage}{\textwidth}
+ @{subgoals}
+ \end{minipage}*}
+(*<*)oops(*>*)
+
+
+lemma
+ assumes "even n"
+ shows "P 0 \<Longrightarrow>
+ (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
+apply(atomize (full))
+apply(cut_tac prems)
+apply(unfold even_def)
+apply(drule spec[where x=P])
+apply(drule spec[where x=Q])
+apply(assumption)
+done
+text {*
+ The tactic for proving the induction rules:
*}
+ML{*fun induction_tac defs prems insts =
+ EVERY1 [ObjectLogic.full_atomize_tac,
+ cut_facts_tac prems,
+ K (rewrite_goals_tac defs),
+ EVERY' (map (dtac o inst_spec) insts),
+ assume_tac]*}
+
+lemma
+ assumes "even n"
+ shows "P 0 \<Longrightarrow>
+ (\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
+apply(tactic {*
+ let
+ val defs = [@{thm even_def}, @{thm odd_def}]
+ val insts = [@{cterm "P::nat\<Rightarrow>bool"}, @{cterm "Q::nat\<Rightarrow>bool"}]
+ in
+ induction_tac defs @{thms prems} insts
+ end *})
+done
+
text {*
-
- @{ML_chunk [display,gray] intro_rules}
+ While the generic proof is relatively simple, it is a bit harder to
+ set up the goals for the induction principles.
+*}
+ML %linenosgray{*fun inductions rules defs preds tyss lthy1 =
+let
+ val Ps = replicate (length preds) "P"
+ val (newprednames, lthy2) = Variable.variant_fixes Ps lthy1
+
+ val thy = ProofContext.theory_of lthy2
+
+ val tyss' = map (fn tys => tys ---> HOLogic.boolT) tyss
+ val newpreds = map Free (newprednames ~~ tyss')
+ val cnewpreds = map (cterm_of thy) newpreds
+ val rules' = map (subst_free (preds ~~ newpreds)) rules
+
+ fun prove_induction ((pred, newpred), tys) =
+ let
+ val zs = replicate (length tys) "z"
+ val (newargnames, lthy3) = Variable.variant_fixes zs lthy2;
+ val newargs = map Free (newargnames ~~ tys)
+
+ val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
+ val goal = Logic.list_implies
+ (rules', HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
+ in
+ Goal.prove lthy3 [] [prem] goal
+ (fn {prems, ...} => induction_tac defs prems cnewpreds)
+ |> singleton (ProofContext.export lthy3 lthy1)
+ end
+in
+ map prove_induction (preds ~~ newpreds ~~ tyss)
+end*}
+
+(*
+ML {*
+ let
+ val rules = [@{term "even 0"},
+ @{term "\<And>n::nat. odd n \<Longrightarrow> even (Suc n)"},
+ @{term "\<And>n::nat. even n \<Longrightarrow> odd (Suc n)"}]
+ val defs = [@{thm even_def}, @{thm odd_def}]
+ val preds = [@{term "even::nat\<Rightarrow>bool"}, @{term "odd::nat\<Rightarrow>bool"}]
+ val tyss = [[@{typ "nat"}],[@{typ "nat"}]]
+ in
+ inductions rules defs preds tyss @{context}
+ end
*}
+*)
+
+subsection {* Introduction Rules *}
+
+ML{*(* instantiates a sequence of universal quantifiers with the given cterms *)
+val all_elims = fold (fn ct => fn th => th RS inst_spec ct)
+
+(* discharges the premises of an iterated implication with the given theorems *)
+val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp})*}
+
+ML{*fun subproof2 prem params2 prems2 =
+ SUBPROOF (fn {prems, ...} =>
+ let
+ val prem' = prems MRS prem;
+ val prem'' =
+ case prop_of prem' of
+ _ $ (Const (@{const_name All}, _) $ _) =>
+ prem' |> all_elims params2
+ |> imp_elims prems2
+ | _ => prem';
+ in
+ rtac prem'' 1
+ end)*}
+
+ML{*fun subproof1 rules preds i =
+ SUBPROOF (fn {params, prems, context = ctxt', ...} =>
+ let
+ val (prems1, prems2) = chop (length prems - length rules) prems;
+ val (params1, params2) = chop (length params - length preds) params;
+ in
+ rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
+ THEN
+ EVERY1 (map (fn prem => subproof2 prem params2 prems2 ctxt') prems1)
+ end)*}
+
+ML{*
+fun introductions_tac defs rules preds i ctxt =
+ EVERY1 [ObjectLogic.rulify_tac,
+ K (rewrite_goals_tac defs),
+ REPEAT o (resolve_tac [@{thm allI},@{thm impI}]),
+ subproof1 rules preds i ctxt]*}
+
+ML{*fun introductions rules preds defs lthy =
+let
+ fun prove_intro (i, goal) =
+ Goal.prove lthy [] [] goal
+ (fn {context, ...} => introductions_tac defs rules preds i context)
+in
+ map_index prove_intro rules
+end*}
+
+ML %linenosgray{*fun add_inductive_i pred_specs rule_specs lthy =
+let
+ val syns = map snd pred_specs
+ val pred_specs' = map fst pred_specs
+ val prednames = map fst pred_specs'
+ val preds = map (fn (p, ty) => Free (Binding.name_of p, ty)) pred_specs'
+
+ val tyss = map (binder_types o fastype_of) preds
+ val (attrs, rules) = split_list rule_specs
+
+ val (defs, lthy') = definitions rules preds prednames syns tyss lthy
+ val ind_rules = inductions rules defs preds tyss lthy'
+ val intro_rules = introductions rules preds defs lthy'
+
+ val mut_name = space_implode "_" (map Binding.name_of prednames)
+ val case_names = map (Binding.name_of o fst) attrs
+in
+ lthy'
+ |> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
+ ((Binding.qualify false mut_name a, atts), [([th], [])])) (rule_specs ~~ intro_rules))
+ |-> (fn intross => LocalTheory.note Thm.theoremK
+ ((Binding.qualify false mut_name (Binding.name "intros"), []), maps snd intross))
+ |>> snd
+ ||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
+ ((Binding.qualify false (Binding.name_of R) (Binding.name "induct"),
+ [Attrib.internal (K (RuleCases.case_names case_names)),
+ Attrib.internal (K (RuleCases.consumes 1)),
+ Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
+ (pred_specs ~~ ind_rules)) #>> maps snd)
+ |> snd
+end*}
+
+
+ML{*fun read_specification' vars specs lthy =
+let
+ val specs' = map (fn (a, s) => [(a, [s])]) specs
+ val ((varst, specst), _) =
+ Specification.read_specification vars specs' lthy
+ val specst' = map (apsnd the_single) specst
+in
+ (varst, specst')
+end*}
+
+ML{*fun add_inductive pred_specs rule_specs lthy =
+let
+ val (pred_specs', rule_specs') =
+ read_specification' pred_specs rule_specs lthy
+in
+ add_inductive_i pred_specs' rule_specs' lthy
+end*}
+
+ML{*val spec_parser =
+ OuterParse.opt_target --
+ OuterParse.fixes --
+ Scan.optional
+ (OuterParse.$$$ "where" |--
+ OuterParse.!!!
+ (OuterParse.enum1 "|"
+ (SpecParse.opt_thm_name ":" -- OuterParse.prop))) []*}
+
+ML{*val specification =
+ spec_parser >>
+ (fn ((loc, pred_specs), rule_specs) =>
+ Toplevel.local_theory loc (add_inductive pred_specs rule_specs))*}
+
+ML{*val _ = OuterSyntax.command "simple_inductive" "define inductive predicates"
+ OuterKeyword.thy_decl specification*}
text {*
Things to include at the end:
@@ -34,5 +331,11 @@
*}
+simple_inductive
+ Even and Odd
+where
+ Even0: "Even 0"
+| EvenS: "Odd n \<Longrightarrow> Even (Suc n)"
+| OddS: "Even n \<Longrightarrow> Odd (Suc n)"
end
--- a/CookBook/Package/Ind_Intro.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Package/Ind_Intro.thy Thu Mar 12 08:11:02 2009 +0100
@@ -2,7 +2,7 @@
imports Main
begin
-chapter {* How to Write a Definitional Package\label{chp:package} *}
+chapter {* How to Write a Definitional Package\label{chp:package} (TBD) *}
text {*
\begin{flushright}
--- a/CookBook/Package/Ind_Prelims.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Package/Ind_Prelims.thy Thu Mar 12 08:11:02 2009 +0100
@@ -75,10 +75,10 @@
*}
lemma %linenos trcl_induct:
- assumes asm: "trcl R x y"
+ assumes "trcl R x y"
shows "(\<And>x. P x x) \<Longrightarrow> (\<And>x y z. R x y \<Longrightarrow> P y z \<Longrightarrow> P x z) \<Longrightarrow> P x y"
apply(atomize (full))
-apply(cut_tac asm)
+apply(cut_tac prems)
apply(unfold trcl_def)
apply(drule spec[where x=P])
apply(assumption)
@@ -212,11 +212,11 @@
*}
simple_inductive
- even\<iota> and odd\<iota>
+ even and odd
where
- even0: "even\<iota> 0"
-| evenS: "odd\<iota> n \<Longrightarrow> even\<iota> (Suc n)"
-| oddS: "even\<iota> n \<Longrightarrow> odd\<iota> (Suc n)"
+ even0: "even 0"
+| evenS: "odd n \<Longrightarrow> even (Suc n)"
+| oddS: "even n \<Longrightarrow> odd (Suc n)"
text {*
Since the predicates @{term even} and @{term odd} are mutually inductive, each
@@ -224,11 +224,11 @@
below @{text "P"} and @{text "Q"}).
*}
-definition "even \<equiv>
+definition "even\<iota> \<equiv>
\<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
\<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> P n"
-definition "odd \<equiv>
+definition "odd\<iota> \<equiv>
\<lambda>n. \<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
\<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q n"
@@ -238,11 +238,11 @@
*}
lemma even_induct:
- assumes asm: "even n"
+ assumes "even n"
shows "P 0 \<Longrightarrow>
(\<And>m. Q m \<Longrightarrow> P (Suc m)) \<Longrightarrow> (\<And>m. P m \<Longrightarrow> Q (Suc m)) \<Longrightarrow> P n"
apply(atomize (full))
-apply(cut_tac asm)
+apply(cut_tac prems)
apply(unfold even_def)
apply(drule spec[where x=P])
apply(drule spec[where x=Q])
@@ -263,7 +263,7 @@
apply (unfold odd_def even_def)
apply (rule allI impI)+
proof -
- case (goal1 P)
+ case (goal1 P Q)
have p1: "\<forall>P Q. P 0 \<longrightarrow> (\<forall>m. Q m \<longrightarrow> P (Suc m))
\<longrightarrow> (\<forall>m. P m \<longrightarrow> Q (Suc m)) \<longrightarrow> Q m" by fact
have r1: "P 0" by fact
@@ -303,10 +303,10 @@
*}
lemma accpart_induct:
- assumes asm: "accpart R x"
+ assumes "accpart R x"
shows "(\<And>x. (\<And>y. R y x \<Longrightarrow> P y) \<Longrightarrow> P x) \<Longrightarrow> P x"
apply(atomize (full))
-apply(cut_tac asm)
+apply(cut_tac prems)
apply(unfold accpart_def)
apply(drule spec[where x=P])
apply(assumption)
--- a/CookBook/Package/Simple_Inductive_Package.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Package/Simple_Inductive_Package.thy Thu Mar 12 08:11:02 2009 +0100
@@ -1,8 +1,64 @@
theory Simple_Inductive_Package
-imports Main
-uses ("simple_inductive_package.ML")
+imports Main "../Base"
+uses "simple_inductive_package.ML"
begin
+(*
+simple_inductive
+ trcl :: "('a \<Rightarrow> 'a \<Rightarrow> bool) \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ base: "trcl R x x"
+| step: "trcl R x y \<Longrightarrow> R y z \<Longrightarrow> trcl R x z"
+
+thm trcl_def
+thm trcl.induct
+thm trcl.intros
+
+simple_inductive
+ even and odd
+where
+ even0: "even 0"
+| evenS: "odd n \<Longrightarrow> even (Suc n)"
+| oddS: "even n \<Longrightarrow> odd (Suc n)"
+
+thm even_def odd_def
+thm even.induct odd.induct
+thm even0
+thm evenS
+thm oddS
+thm even_odd.intros
+
+simple_inductive
+ accpart for r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+where
+ accpartI: "(\<forall>y. r y x \<longrightarrow> accpart r y) \<Longrightarrow> accpart r x"
+
+thm accpart_def
+thm accpart.induct
+thm accpartI
+
+locale rel =
+ fixes r :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
+
+simple_inductive (in rel) accpart'
+where
+ accpartI': "\<And>x. (\<And>y. r y x \<Longrightarrow> accpart' y) \<Longrightarrow> accpart' x"
+
+
+context rel
+begin
+ thm accpartI'
+ thm accpart'.induct
+end
+
+thm rel.accpartI'
+thm rel.accpart'.induct
+
+
+lemma "True" by simp
+*)
+
use_chunks "simple_inductive_package.ML"
+
end
--- a/CookBook/Package/simple_inductive_package.ML Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Package/simple_inductive_package.ML Thu Mar 12 08:11:02 2009 +0100
@@ -18,104 +18,141 @@
structure SimpleInductivePackage: SIMPLE_INDUCTIVE_PACKAGE =
struct
-fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
-
-(* @chunk definitions_aux *)
-fun definitions_aux s ((binding, syn), (attr, trm)) lthy =
+(* @chunk make_definitions *)
+fun make_defs ((binding, syn), trm) lthy =
let
- val ((_, (_, thm)), lthy) =
- LocalTheory.define s ((binding, syn), (attr, trm)) lthy
+ val arg = ((binding, syn), (Attrib.empty_binding, trm))
+ val ((_, (_ , thm)), lthy) = LocalTheory.define Thm.internalK arg lthy
in
(thm, lthy)
end
(* @end *)
+(* @chunk definitions_aux *)
+fun defs_aux lthy orules preds params (pred, arg_types) =
+let
+ fun mk_all x P = HOLogic.all_const (fastype_of x) $ lambda x P
+
+ val fresh_args =
+ arg_types
+ |> map (pair "z")
+ |> Variable.variant_frees lthy orules
+ |> map Free
+in
+ list_comb (pred, fresh_args)
+ |> fold_rev (curry HOLogic.mk_imp) orules
+ |> fold_rev mk_all preds
+ |> fold_rev lambda (params @ fresh_args)
+end
+(* @end *)
+
(* @chunk definitions *)
-fun definitions params rules preds preds' Tss lthy =
+fun definitions rules params preds prednames syns arg_typess lthy =
let
val thy = ProofContext.theory_of lthy
- val rules' = map (ObjectLogic.atomize_term thy) rules
+ val orules = map (ObjectLogic.atomize_term thy) rules
+ val defs =
+ map (defs_aux lthy orules preds params) (preds ~~ arg_typess)
in
- fold_map (fn ((((R, _), syn), pred), Ts) =>
- let
- val zs = map Free (Variable.variant_frees lthy rules' (map (pair "z") Ts))
-
- val t0 = list_comb (pred, zs);
- val t1 = fold_rev (curry HOLogic.mk_imp) rules' t0;
- val t2 = fold_rev mk_all preds' t1;
- val t3 = fold_rev lambda (params @ zs) t2;
- in
- definitions_aux Thm.internalK ((R, syn), (Attrib.empty_binding, t3))
- end) (preds ~~ preds' ~~ Tss) lthy
+ fold_map make_defs (prednames ~~ syns ~~ defs) lthy
end
(* @end *)
fun inst_spec ct =
Drule.instantiate' [SOME (ctyp_of_term ct)] [NONE, SOME ct] @{thm spec};
+(* @chunk induction_tac *)
+fun induction_tac defs prems insts =
+ EVERY1 [ObjectLogic.full_atomize_tac,
+ cut_facts_tac prems,
+ K (rewrite_goals_tac defs),
+ EVERY' (map (dtac o inst_spec) insts),
+ assume_tac]
+(* @end *)
+
+(* @chunk induction_rules *)
+fun inductions rules defs parnames preds Tss lthy1 =
+let
+ val (_, lthy2) = Variable.add_fixes parnames lthy1
+
+ val Ps = replicate (length preds) "P"
+ val (newprednames, lthy3) = Variable.variant_fixes Ps lthy2
+
+ val thy = ProofContext.theory_of lthy3
+
+ val Tss' = map (fn Ts => Ts ---> HOLogic.boolT) Tss
+ val newpreds = map Free (newprednames ~~ Tss')
+ val cnewpreds = map (cterm_of thy) newpreds
+ val rules' = map (subst_free (preds ~~ newpreds)) rules
+
+ fun prove_induction ((pred, newpred), Ts) =
+ let
+ val (newargnames, lthy4) =
+ Variable.variant_fixes (replicate (length Ts) "z") lthy3;
+ val newargs = map Free (newargnames ~~ Ts)
+
+ val prem = HOLogic.mk_Trueprop (list_comb (pred, newargs))
+ val goal = Logic.list_implies
+ (rules', HOLogic.mk_Trueprop (list_comb (newpred, newargs)))
+ in
+ Goal.prove lthy4 [] [prem] goal
+ (fn {prems, ...} => induction_tac defs prems cnewpreds)
+ |> singleton (ProofContext.export lthy4 lthy1)
+ end
+in
+ map prove_induction (preds ~~ newpreds ~~ Tss)
+end
+(* @end *)
+
val all_elims = fold (fn ct => fn th => th RS inst_spec ct);
val imp_elims = fold (fn th => fn th' => [th', th] MRS @{thm mp});
-(* @chunk induction_rules *)
-fun INDUCTION rules preds' Tss defs lthy1 lthy2 =
-let
- val (Pnames, lthy3) = Variable.variant_fixes (replicate (length preds') "P") lthy2;
- val Ps = map (fn (s, Ts) => Free (s, Ts ---> HOLogic.boolT)) (Pnames ~~ Tss);
- val cPs = map (cterm_of (ProofContext.theory_of lthy3)) Ps;
- val rules'' = map (subst_free (preds' ~~ Ps)) rules;
-
- fun prove_indrule ((R, P), Ts) =
+(* @chunk subproof1 *)
+fun subproof2 prem params2 prems2 =
+ SUBPROOF (fn {prems, ...} =>
let
- val (znames, lthy4) = Variable.variant_fixes (replicate (length Ts) "z") lthy3;
- val zs = map Free (znames ~~ Ts)
-
- val prem = HOLogic.mk_Trueprop (list_comb (R, zs))
- val goal = Logic.list_implies (rules'', HOLogic.mk_Trueprop (list_comb (P, zs)))
- in
- Goal.prove lthy4 [] [prem] goal
- (fn {prems, ...} => EVERY1
- ([ObjectLogic.full_atomize_tac,
- cut_facts_tac prems,
- K (rewrite_goals_tac defs)] @
- map (fn ct => dtac (inst_spec ct)) cPs @
- [assume_tac])) |>
- singleton (ProofContext.export lthy4 lthy1)
- end;
-in
- map prove_indrule (preds' ~~ Ps ~~ Tss)
- end
+ val prem' = prems MRS prem;
+ val prem'' =
+ case prop_of prem' of
+ _ $ (Const (@{const_name All}, _) $ _) =>
+ prem' |> all_elims params2
+ |> imp_elims prems2
+ | _ => prem';
+ in
+ rtac prem'' 1
+ end)
(* @end *)
+(* @chunk subproof2 *)
+fun subproof1 rules preds i =
+ SUBPROOF (fn {params, prems, context = ctxt', ...} =>
+ let
+ val (prems1, prems2) = chop (length prems - length rules) prems;
+ val (params1, params2) = chop (length params - length preds) params;
+ in
+ rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
+ THEN
+ EVERY1 (map (fn prem => subproof2 prem params2 prems2 ctxt') prems1)
+ end)
+(* @end *)
+
+fun introductions_tac defs rules preds i ctxt =
+ EVERY1 [ObjectLogic.rulify_tac,
+ K (rewrite_goals_tac defs),
+ REPEAT o (resolve_tac [@{thm allI},@{thm impI}]),
+ subproof1 rules preds i ctxt]
+
+
(* @chunk intro_rules *)
-fun INTROS rules preds' defs lthy1 lthy2 =
+fun introductions rules parnames preds defs lthy1 =
let
- fun prove_intro (i, r) =
- Goal.prove lthy2 [] [] r
- (fn {prems, context = ctxt} => EVERY
- [ObjectLogic.rulify_tac 1,
- rewrite_goals_tac defs,
- REPEAT (resolve_tac [@{thm allI},@{thm impI}] 1),
- SUBPROOF (fn {params, prems, context = ctxt', ...} =>
- let
- val (prems1, prems2) = chop (length prems - length rules) prems;
- val (params1, params2) = chop (length params - length preds') params;
- in
- rtac (ObjectLogic.rulify (all_elims params1 (nth prems2 i))) 1
- THEN
- EVERY1 (map (fn prem =>
- SUBPROOF (fn {prems = prems', concl, ...} =>
- let
-
- val prem' = prems' MRS prem;
- val prem'' = case prop_of prem' of
- _ $ (Const (@{const_name All}, _) $ _) =>
- prem' |> all_elims params2
- |> imp_elims prems2
- | _ => prem';
- in rtac prem'' 1 end) ctxt') prems1)
- end) ctxt 1]) |>
- singleton (ProofContext.export lthy2 lthy1)
+ val (_, lthy2) = Variable.add_fixes parnames lthy1
+
+ fun prove_intro (i, goal) =
+ Goal.prove lthy2 [] [] goal
+ (fn {context, ...} => introductions_tac defs rules preds i context)
+ |> singleton (ProofContext.export lthy2 lthy1)
in
map_index prove_intro rules
end
@@ -124,29 +161,32 @@
(* @chunk add_inductive_i *)
fun add_inductive_i preds params specs lthy =
let
- val params' = map (fn (p, T) => Free (Binding.base_name p, T)) params;
- val preds' = map (fn ((R, T), _) => list_comb (Free (Binding.base_name R, T), params')) preds;
+ val params' = map (fn (p, T) => Free (Binding.name_of p, T)) params;
+ val preds' = map (fn ((R, T), _) => list_comb (Free (Binding.name_of R, T), params')) preds;
val Tss = map (binder_types o fastype_of) preds';
- val (ass,rules) = split_list specs;
+ val (ass, rules) = split_list specs; (* FIXME: ass not used? *)
+
+ val prednames = map (fst o fst) preds
+ val syns = map snd preds
+ val parnames = map (Binding.name_of o fst) params
- val (defs, lthy1) = definitions params' rules preds preds' Tss lthy
- val (_, lthy2) = Variable.add_fixes (map (Binding.base_name o fst) params) lthy1;
+ val (defs, lthy1) = definitions rules params' preds' prednames syns Tss lthy;
- val inducts = INDUCTION rules preds' Tss defs lthy1 lthy2
+ val inducts = inductions rules defs parnames preds' Tss lthy1
+
+ val intros = introductions rules parnames preds' defs lthy1
- val intros = INTROS rules preds' defs lthy1 lthy2
-
- val mut_name = space_implode "_" (map (Binding.base_name o fst o fst) preds);
- val case_names = map (Binding.base_name o fst o fst) specs
+ val mut_name = space_implode "_" (map (Binding.name_of o fst o fst) preds);
+ val case_names = map (Binding.name_of o fst o fst) specs
in
lthy1
|> LocalTheory.notes Thm.theoremK (map (fn (((a, atts), _), th) =>
- ((Binding.qualify mut_name a, atts), [([th], [])])) (specs ~~ intros))
+ ((Binding.qualify false mut_name a, atts), [([th], [])])) (specs ~~ intros))
|-> (fn intross => LocalTheory.note Thm.theoremK
- ((Binding.qualify mut_name (Binding.name "intros"), []), maps snd intross))
+ ((Binding.qualify false mut_name (Binding.name "intros"), []), maps snd intross))
|>> snd
||>> (LocalTheory.notes Thm.theoremK (map (fn (((R, _), _), th) =>
- ((Binding.qualify (Binding.base_name R) (Binding.name "induct"),
+ ((Binding.qualify false (Binding.name_of R) (Binding.name "induct"),
[Attrib.internal (K (RuleCases.case_names case_names)),
Attrib.internal (K (RuleCases.consumes 1)),
Attrib.internal (K (Induct.induct_pred ""))]), [([th], [])]))
--- a/CookBook/Parsing.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Parsing.thy Thu Mar 12 08:11:02 2009 +0100
@@ -11,7 +11,7 @@
Theory commands, such as \isacommand{definition}, \isacommand{inductive} and so
on, belong to the outer syntax, whereas items inside double quotation marks, such
as terms, types and so on, belong to the inner syntax. For parsing inner syntax,
- Isabelle uses a rather general and sophisticated algorithm due to Earley, which
+ Isabelle uses a rather general and sophisticated algorithm, which
is driven by priority grammars. Parsers for outer syntax are built up by functional
parsing combinators. These combinators are a well-established technique for parsing,
which has, for example, been described in Paulson's classic ML-book \cite{paulson-ml2}.
@@ -348,7 +348,7 @@
text {*
Most of the time, however, Isabelle developers have to deal with parsing
- tokens, not strings. These token parsers will have the type:
+ tokens, not strings. These token parsers have the type:
*}
ML{*type 'a parser = OuterLex.token list -> 'a * OuterLex.token list*}
@@ -406,7 +406,7 @@
*}
ML{*fun filtered_input str =
- filter OuterLex.is_proper (OuterSyntax.scan Position.none str) *}
+ filter OuterLex.is_proper (OuterSyntax.scan Position.none str) *}
text {*
@@ -594,14 +594,14 @@
The functions to do with input and output of XML and YXML are defined
in @{ML_file "Pure/General/xml.ML"} and @{ML_file "Pure/General/yxml.ML"}.
\end{readmore}
+
*}
-
section {* Parsing Specifications\label{sec:parsingspecs} *}
text {*
There are a number of special purpose parsers that help with parsing
- specifications of functions, inductive definitions and so on. In
+ specifications of function definitions, inductive predicates and so on. In
 Chapter~\ref{chp:package}, for example, we will need to parse specifications
for inductive predicates of the form:
*}
@@ -613,6 +613,8 @@
| evenS: "odd n \<Longrightarrow> even (Suc n)"
| oddS: "even n \<Longrightarrow> odd (Suc n)"
+text {* and *}
+
simple_inductive
trcl\<iota> for R :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
where
@@ -692,8 +694,8 @@
*}
text {*
- Whenever types are given, they are stored in the @{ML SOME}s. They types are
- not yet given to the variable: this must be done by type inference later
+ Whenever types are given, they are stored in the @{ML SOME}s. The types are
+ not yet used to type the variables: this must be done by type-inference later
on. Since types are part of the inner syntax they are strings with some
encoded information (see previous section). If a syntax translation is
present for a variable, then it is stored in the @{ML Mixfix} datastructure;
@@ -853,7 +855,7 @@
@{text [display] "$ ISABELLE_LOGS=\"$(isabelle getenv -b ISABELLE_OUTPUT)\"/log"}
- on the Unix prompt. If you now also type @{text "ls $ISABELLE_LOGS"}, then the
+ on the Unix prompt. If you now type @{text "ls $ISABELLE_LOGS"}, then the
directory should include the files:
@{text [display]
--- a/CookBook/ROOT.ML Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/ROOT.ML Thu Mar 12 08:11:02 2009 +0100
@@ -15,9 +15,9 @@
use_thy "Package/Ind_Code";
use_thy "Appendix";
-use_thy "Recipes/NamedThms";
use_thy "Recipes/Antiquotes";
use_thy "Recipes/TimeLimit";
+use_thy "Recipes/Timing";
use_thy "Recipes/Config";
use_thy "Recipes/StoringData";
use_thy "Recipes/ExternalSolver";
--- a/CookBook/Recipes/Antiquotes.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Recipes/Antiquotes.thy Thu Mar 12 08:11:02 2009 +0100
@@ -16,6 +16,9 @@
various entities in a document. They can also be used for sophisticated
\LaTeX-hacking. If you type @{text "Ctrl-c Ctrl-a h A"} inside ProofGeneral, you
obtain a list of all currently available document antiquotations and their options.
+ You obtain the same list on the ML-level by typing
+
+ @{ML [display,gray] "ThyOutput.print_antiquotations ()"}
Below we give the code for two additional antiquotations that can be used to typeset
ML-code and also to check whether the given code actually compiles. This
@@ -35,28 +38,27 @@
ML%linenosgray{*fun ml_val code_txt = "val _ = " ^ code_txt
-fun output_ml src ctxt code_txt =
+fun output_ml {context = ctxt, ...} code_txt =
(ML_Context.eval_in (SOME ctxt) false Position.none (ml_val code_txt);
- ThyOutput.output_list (fn _ => fn s => Pretty.str s) src ctxt
- (space_explode "\n" code_txt))
+ ThyOutput.output (map Pretty.str (space_explode "\n" code_txt)))
-val _ = ThyOutput.add_commands
- [("ML_checked", ThyOutput.args (Scan.lift Args.name) output_ml)]*}
+val _ = ThyOutput.antiquotation "ML_checked" (Scan.lift Args.name) output_ml*}
text {*
 Note that the parser @{ML "(Scan.lift Args.name)"} in line 7 parses a string,
- in this case the code given as argument. As mentioned before, this argument
+ in this case the code. As mentioned before, the code
 is sent to the ML-compiler in line 4 using the function @{ML ml_val},
which constructs the appropriate ML-expression.
If the code is ``approved'' by the compiler, then the output function @{ML
- "ThyOutput.output_list (fn _ => fn s => Pretty.str s)"} in the next line pretty prints the
+ "ThyOutput.output"} in the next line pretty prints the
 code. This function expects the code as a list of (pretty) strings, where
 each string corresponds to a line in the output. Therefore the use of
@{ML "(space_explode \"\\n\" txt)" for txt}
- which produces this list according to linebreaks. There are a number of options for antiquotations
- that are observed by @{ML ThyOutput.output_list} when printing the code (including
- @{text "[display]"}, @{text "[quotes]"} and @{text "[source]"}).
+ which produces this list according to linebreaks. There are a number of options
+ for antiquotations that are observed by @{ML ThyOutput.output} when printing the
+ code (including @{text "[display]"} and @{text "[quotes]"}). Line 7 sets
+ up the new antiquotation.
\begin{readmore}
 For more information about options of antiquotations see \rsccite{sec:antiq}.
@@ -67,52 +69,47 @@
can improve the code above slightly by writing
*}
-ML%linenosgray{*fun output_ml src ctxt (code_txt,pos) =
+ML%linenosgray{*fun output_ml {context = ctxt, ...} (code_txt,pos) =
(ML_Context.eval_in (SOME ctxt) false pos (ml_val code_txt);
- ThyOutput.output_list (fn _ => fn s => Pretty.str s) src ctxt
- (space_explode "\n" code_txt))
+ ThyOutput.output (map Pretty.str (space_explode "\n" code_txt)))
-val _ = ThyOutput.add_commands
- [("ML_checked", ThyOutput.args
- (Scan.lift (OuterParse.position Args.name)) output_ml)] *}
+val _ = ThyOutput.antiquotation "ML_checked"
+ (Scan.lift (OuterParse.position Args.name)) output_ml *}
text {*
where in Lines 1 and 2 the positional information is properly treated.
- (FIXME: say something about OuterParse.position)
-
We can now write in a document @{text "@{ML_checked \"2 + 3\"}"} in order to
obtain @{ML_checked "2 + 3"} and be sure that this code compiles until
somebody changes the definition of \mbox{@{ML "(op +)"}}.
- The second antiquotation we describe extends the first by allowing also to give
- a pattern that specifies what the result of the ML-code should be and to check
+ The second antiquotation we describe extends the first by a pattern that
+ specifies what the result of the ML-code should be and checks
the consistency of the actual result with the given pattern. For this we are going
to implement the antiquotation
- @{text [display] "@{ML_resp \"a_piece_of_code\" \"pattern\"}"}
+ @{text [display] "@{ML_resp \"a_piece_of_code\" \"a_pattern\"}"}
- To add some convenience and also to deal with large outputs,
- the user can give a partial specification by giving the abbreviation
- @{text [quotes] "\<dots>"}. For example @{text "(\<dots>,\<dots>)"} for a pair.
+ To add some convenience and also to deal with large outputs, the user can
+ give a partial specification inside the pattern by giving abbreviations of
+ the form @{text [quotes] "\<dots>"}. For example @{text "(\<dots>, \<dots>)"} to specify a
+ pair.
- Whereas in the antiquotation @{text "@{ML_checked \"piece_of_code\"}"} above,
- we have sent the expression
- @{text [quotes] "val _ = piece_of_code"} to the compiler, in the second the
- wildcard @{text "_"} we will be replaced by a proper pattern. To do this we
- need to replace the @{text [quotes] "\<dots>"} by
- @{text [quotes] "_"} before sending the code to the compiler. The following
- function will do this:
-
+ Whereas in the antiquotation @{text "@{ML_checked \"piece_of_code\"}"}
+ above, we have sent the expression @{text [quotes] "val _ = piece_of_code"}
+ to the compiler, in the second the wildcard @{text "_"} will be replaced
+ by the given pattern. To do this we need to replace the @{text [quotes] "\<dots>"}
+ by @{text [quotes] "_"} before sending the code to the compiler. The
+ following function will do this:
*}
ML{*fun ml_pat (code_txt, pat) =
- let val pat' =
+let val pat' =
implode (map (fn "\<dots>" => "_" | s => s) (Symbol.explode pat))
- in
- "val " ^ pat' ^ " = " ^ code_txt
- end*}
+in
+ "val " ^ pat' ^ " = " ^ code_txt
+end*}
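+
+text {*
+  For example, a quick check:
+
+  @{ML_response [display,gray]
+  "ml_pat (\"(1, 2)\", \"(\<dots>, \<dots>)\")" "\"val (_, _) = (1, 2)\""}
+*}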
text {*
 Next we would like to add a response indicator to the result using:
@@ -126,19 +123,17 @@
The rest of the code of the antiquotation is
*}
-ML{*fun output_ml_resp src ctxt ((code_txt,pat),pos) =
- (ML_Context.eval_in (SOME ctxt) false pos (ml_pat (code_txt,pat));
+ML{*fun output_ml_resp {context = ctxt, ...} ((code_txt, pat), pos) =
+ (ML_Context.eval_in (SOME ctxt) false pos (ml_pat (code_txt, pat));
let
val output = (space_explode "\n" code_txt) @ (add_resp_indicator pat)
in
- ThyOutput.output_list (fn _ => fn s => Pretty.str s) src ctxt output
+ ThyOutput.output (map Pretty.str output)
end)
-val _ = ThyOutput.add_commands
- [("ML_resp",
- ThyOutput.args
- (Scan.lift (OuterParse.position (Args.name -- Args.name)))
- output_ml_resp)]*}
+val _ = ThyOutput.antiquotation "ML_resp"
+ (Scan.lift (OuterParse.position (Args.name -- Args.name)))
+ output_ml_resp*}
text {*
This extended antiquotation allows us to write
@@ -151,22 +146,15 @@
or
- @{text [display] "@{ML_resp [display] \"let val i = 3 in (i * i,\"foo\") end\" \"(9,\<dots>)\"}"}
+ @{text [display] "@{ML_resp [display] \"let val i = 3 in (i * i, \"foo\") end\" \"(9, \<dots>)\"}"}
to obtain
- @{ML_resp [display] "let val i = 3 in (i * i,\"foo\") end" "(9,\<dots>)"}
+ @{ML_resp [display] "let val i = 3 in (i * i, \"foo\") end" "(9, \<dots>)"}
- In both cases, the check by the compiler ensures that code and result match. A limitation
- of this antiquotation, however, is that the hints can only be given in case
- they can be constructed as a pattern. This excludes values that are abstract datatypes, like
- theorems or cterms.
-
+ In both cases, the check by the compiler ensures that code and result
+ match. A limitation of this antiquotation, however, is that the pattern can
+ only be given for values that can be constructed. This excludes
+ values that are abstract datatypes, like theorems or cterms.
*}
-
-
-end
-
-
-
-
+end
\ No newline at end of file
--- a/CookBook/Recipes/Config.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Recipes/Config.thy Thu Mar 12 08:11:02 2009 +0100
@@ -12,9 +12,9 @@
{\bf Solution:} This can be achieved using configuration values.\smallskip
- Assume you want to control three values, namely @{text bval} containing a
+ Assume you want to control three values, say @{text bval} containing a
boolean, @{text ival} containing an integer and @{text sval}
- containing a string. These values can be declared on the ML-level with
+ containing a string. These values can be declared on the ML-level by
*}
ML{*val (bval, setup_bval) = Attrib.config_bool "bval" false
@@ -23,7 +23,7 @@
text {*
where each value needs to be given a default. To enable these values, they need to
- be set up by
+ be set up with
*}
setup {* setup_bval *}
@@ -51,18 +51,18 @@
@{ML_response [display,gray] "Config.put sval \"foo\" @{context}; Config.get @{context} sval" "foo"}
- The same can be achived using the command \isacommand{setup}.
+ The same can be achieved using the command \isacommand{setup}.
*}
setup {* Config.put_thy sval "bar" *}
text {*
- The retrival of this value yields now
+ Now the retrieval of this value yields:
@{ML_response [display,gray] "Config.get @{context} sval" "\"bar\""}
We can apply a function to a value using @{ML Config.map}. For example incrementing
- @{ML ival} can be done by
+ @{ML ival} can be done by:
@{ML_response [display,gray]
"let
@@ -80,5 +80,4 @@
multithreaded execution of Isabelle.
*}
-
end
\ No newline at end of file
--- a/CookBook/Recipes/NamedThms.thy Thu Feb 26 13:46:05 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,80 +0,0 @@
-theory NamedThms
-imports "../Base"
-begin
-
-section {* Accumulate a List of Theorems under a Name\label{rec:named} *}
-
-
-text {*
- {\bf Problem:}
- Your tool @{text foo} works with special rules, called @{text foo}-rules.
- Users should be able to declare @{text foo}-rules in the theory,
- which are then used in a method.\smallskip
-
- {\bf Solution:} This can be achieved using named theorem lists.\smallskip
-
- Named theorem lists can be set up using the code
-
- *}
-
-ML{*structure FooRules = NamedThmsFun (
- val name = "foo"
- val description = "Rules for foo"); *}
-
-text {*
- and the command
-*}
-
-setup {* FooRules.setup *}
-
-text {*
- This code declares a context data slot where the theorems are stored,
- an attribute @{text foo} (with the usual @{text add} and @{text del} options
- for adding and deleting theorems) and an internal ML interface to retrieve and
- modify the theorems.
-
- Furthermore, the facts are made available on the user level under the dynamic
- fact name @{text foo}. For example we can declare three lemmas to be of the kind
- @{text foo} by:
-*}
-
-lemma rule1[foo]: "A" sorry
-lemma rule2[foo]: "B" sorry
-lemma rule3[foo]: "C" sorry
-
-text {* and undeclare the first one by: *}
-
-declare rule1[foo del]
-
-text {* and query the remaining ones with:
-
- \begin{isabelle}
- \isacommand{thm}~@{text "foo"}\\
- @{text "> ?C"}\\
- @{text "> ?B"}\\
- \end{isabelle}
-
- On the ML-level the rules marked with @{text "foo"} an be retrieved
- using the function @{ML FooRules.get}:
-
- @{ML_response_fake [display,gray] "FooRules.get @{context}" "[\"?C\",\"?B\"]"}
-
- \begin{readmore}
- For more information see @{ML_file "Pure/Tools/named_thms.ML"} and also
- the recipe in Section~\ref{recipe:storingdata} about storing arbitrary
- data.
- \end{readmore}
- *}
-
-text {*
- (FIXME: maybe add a comment about the case when the theorems
- to be added need to satisfy certain properties)
-
-*}
-
-
-end
-
-
-
-
--- a/CookBook/Recipes/Oracle.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Recipes/Oracle.thy Thu Mar 12 08:11:02 2009 +0100
@@ -19,7 +19,7 @@
\begin{readmore}
A short introduction to oracles can be found in [isar-ref: no suitable label
for section 3.11]. A simple example, which we will slightly extend here,
- is given in @{ML_file "FOL/ex/IffOracle.thy"}. The raw interface for adding
+ is given in @{ML_file "FOL/ex/Iff_Oracle.thy"}. The raw interface for adding
oracles is @{ML add_oracle in Thm} in @{ML_file "Pure/thm.ML"}.
\end{readmore}
--- a/CookBook/Recipes/TimeLimit.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Recipes/TimeLimit.thy Thu Mar 12 08:11:02 2009 +0100
@@ -11,9 +11,8 @@
{\bf Solution:} This can be achieved using the function
@{ML timeLimit in TimeLimit}.\smallskip
- Assume you defined the Ackermann function:
-
- *}
+ Assume you defined the Ackermann function on the ML-level.
+*}
ML{*fun ackermann (0, n) = n + 1
| ackermann (m, 0) = ackermann (m - 1, 1)
@@ -26,7 +25,7 @@
@{ML_response_fake [display,gray] "ackermann (4, 12)" "\<dots>"}
takes a bit of time before it finishes. To avoid this, the call can be encapsulated
- in a time limit of five seconds. For this you have to write:
+ in a time limit of five seconds. For this you have to write
@{ML_response [display,gray]
"TimeLimit.timeLimit (Time.fromSeconds 5) ackermann (4, 12)
@@ -37,7 +36,7 @@
is reached.
Note that @{ML "timeLimit" in TimeLimit} is only meaningful when you use PolyML,
- because PolyML has a rich infrastructure for multithreading programming on
+  because PolyML has the infrastructure for multithreaded programming on
which @{ML "timeLimit" in TimeLimit} relies.
\begin{readmore}
@@ -48,5 +47,4 @@
*}
-
end
\ No newline at end of file
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/CookBook/Recipes/Timing.thy Thu Mar 12 08:11:02 2009 +0100
@@ -0,0 +1,67 @@
+theory Timing
+imports "../Base"
+begin
+
+section {* Measuring Time\label{rec:timing} *}
+
+text {*
+ {\bf Problem:}
+ You want to measure the running time of a tactic or function.\smallskip
+
+  {\bf Solution:} Time can be measured using the functions
+ @{ML start_timing} and @{ML end_timing}.\smallskip
+
+ Suppose you defined the Ackermann function inside Isabelle.
+*}
+
+fun
+ ackermann:: "(nat \<times> nat) \<Rightarrow> nat"
+where
+ "ackermann (0, n) = n + 1"
+ | "ackermann (m, 0) = ackermann (m - 1, 1)"
+ | "ackermann (m, n) = ackermann (m - 1, ackermann (m, n - 1))"
+
+text {*
+ You can measure how long the simplifier takes to verify a datapoint
+ of this function. The timing can be done using the following wrapper function:
+*}
+
+ML{*fun timing_wrapper tac st =
+let
+ val t_start = start_timing ();
+ val res = tac st;
+ val t_end = end_timing t_start;
+in
+ (warning (#message t_end); res)
+end*}
+
+text {*
+  Note that this function, in addition to the tactic whose time it measures,
+  also takes a state @{text "st"} as argument and applies the tactic to this
+  state. The reason is that tactics are lazy functions and you need to force
+  them to run, otherwise the timing will be meaningless. The reported time is
+  the difference between the start and the end of running the tactic.
+  An example of using the wrapper is the proof
+
+*}
+
+lemma "ackermann (3, 4) = 125"
+apply(tactic {*
+ timing_wrapper (simp_tac (@{simpset} addsimps @{thms "nat_number"}) 1) *})
+done
+
+text {*
+ where it returns something on the scale of 3 seconds. We choose to return
+ this information as a string, but the timing information is also accessible
+ in number format.
+
+ \begin{readmore}
+ Basic functions regarding timing are defined in @{ML_file
+ "Pure/ML-Systems/polyml_common.ML"} (for the PolyML compiler). Some more
+ advanced functions are defined in @{ML_file "Pure/General/output.ML"}.
+ \end{readmore}
+*}
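+
+text {*
+  The same two functions can also be used to time an arbitrary ML
+  computation directly. Below is a minimal sketch; the summed-up list is
+  just a stand-in for whatever computation you want to measure.
+*}
+
+ML{*val _ =
+let
+  val t_start = start_timing ();
+  (* an arbitrary computation standing in for real work *)
+  val _ = fold (curry (op +)) (1 upto 100000) 0;
+  val t_end = end_timing t_start;
+in
+  warning (#message t_end)
+end*}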
+
+
+
+end
\ No newline at end of file
--- a/CookBook/Solutions.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Solutions.thy Thu Mar 12 08:11:02 2009 +0100
@@ -1,8 +1,8 @@
theory Solutions
-imports Base
+imports Base
begin
-chapter {* Solutions to Most Exercises *}
+chapter {* Solutions to Most Exercises\label{ch:solutions} *}
text {* \solution{fun:revsum} *}
@@ -24,18 +24,18 @@
ML{*val any = Scan.one (Symbol.not_eof)
val scan_cmt =
- let
- val begin_cmt = Scan.this_string "(*"
- val end_cmt = Scan.this_string "*)"
- in
- begin_cmt |-- Scan.repeat (Scan.unless end_cmt any) --| end_cmt
- >> (enclose "(**" "**)" o implode)
- end
+let
+ val begin_cmt = Scan.this_string "(*"
+ val end_cmt = Scan.this_string "*)"
+in
+ begin_cmt |-- Scan.repeat (Scan.unless end_cmt any) --| end_cmt
+ >> (enclose "(**" "**)" o implode)
+end
val parser = Scan.repeat (scan_cmt || any)
val scan_all =
- Scan.finite Symbol.stopper parser >> implode #> fst *}
+ Scan.finite Symbol.stopper parser >> implode #> fst *}
text {*
By using @{text "#> fst"} in the last line, the function
@@ -84,15 +84,121 @@
text {* and a test case is the lemma *}
lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) (4 + 1)"
- apply(tactic {* simp_tac (HOL_ss addsimprocs [@{simproc add_sp}]) 1 *})
+ apply(tactic {* simp_tac (HOL_basic_ss addsimprocs [@{simproc add_sp}]) 1 *})
txt {*
where the simproc produces the goal state
\begin{minipage}{\textwidth}
@{subgoals [display]}
- \end{minipage}
+ \end{minipage}\bigskip
+*}(*<*)oops(*>*)
+
+text {* \solution{ex:addconversion} *}
+
+text {*
+ The following code assumes the function @{ML dest_sum} from the previous
+ exercise.
+*}
+
+
+ML{*fun add_simple_conv ctxt ctrm =
+let
+ val trm = Thm.term_of ctrm
+in
+ get_sum_thm ctxt trm (dest_sum trm)
+end
+
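+(* descends through the term; at an addition it first converts the two
+   summands and then, if possible, replaces the addition by its sum *)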
+fun add_conv ctxt ctrm =
+ (case Thm.term_of ctrm of
+ @{term "(op +)::nat \<Rightarrow> nat \<Rightarrow> nat"} $ _ $ _ =>
+ (Conv.binop_conv (add_conv ctxt)
+ then_conv (Conv.try_conv (add_simple_conv ctxt))) ctrm
+ | _ $ _ => Conv.combination_conv
+ (add_conv ctxt) (add_conv ctxt) ctrm
+ | Abs _ => Conv.abs_conv (fn (_, ctxt) => add_conv ctxt) ctxt ctrm
+ | _ => Conv.all_conv ctrm)
+
+val add_tac = CSUBGOAL (fn (goal, i) =>
+ let
+ val ctxt = ProofContext.init (Thm.theory_of_cterm goal)
+ in
+ CONVERSION
+ (Conv.params_conv ~1 (fn ctxt =>
+ (Conv.prems_conv ~1 (add_conv ctxt) then_conv
+ Conv.concl_conv ~1 (add_conv ctxt))) ctxt) i
+ end)*}
+
+text {*
+ A test case is as follows
+*}
+
+lemma "P (Suc (99 + 1)) ((0 + 0)::nat) (Suc (3 + 3 + 3)) (4 + 1)"
+ apply(tactic {* add_tac 1 *})?
+txt {*
+ where the conversion produces the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage}\bigskip
*}(*<*)oops(*>*)
+subsection {* Tests start here *}
+lemma cheat: "P" sorry
+
+ML{*fun timing_wrapper tac st =
+let
+ val t_start = start_timing ();
+ val res = tac st;
+ val t_end = end_timing t_start;
+in
+ (warning (#message t_end); res)
+end*}
+
+ML{*
+fun create_term1 0 = @{term "0::nat"}
+ | create_term1 n =
+ Const (@{const_name "plus"}, @{typ "nat\<Rightarrow>nat\<Rightarrow>nat"})
+ $ (HOLogic.mk_number @{typ "nat"} n) $ (create_term1 (n-1))
+*}
+
+ML{*
+fun create_term2 0 = @{term "0::nat"}
+ | create_term2 n =
+ Const (@{const_name "plus"}, @{typ "nat\<Rightarrow>nat\<Rightarrow>nat"})
+ $ (create_term2 (n-1)) $ (HOLogic.mk_number @{typ nat} n)
+*}
+
+ML{*
+fun create_term n =
+ HOLogic.mk_Trueprop
+ (@{term "P::nat\<Rightarrow> bool"} $
+ (Const (@{const_name "plus"}, @{typ "nat\<Rightarrow>nat\<Rightarrow>nat"})
+ $ (create_term1 n) $ (create_term2 n)))
+*}
+
+ML {*
+warning (Syntax.string_of_term @{context} (create_term 4))
+*}
+
+ML {*
+val _ = Goal.prove @{context} [] [] (create_term 100)
+ (fn _ => timing_wrapper (EVERY1 [add_tac, rtac @{thm cheat}]))
+*}
+
+ML {*
+val _ = Goal.prove @{context} [] [] (create_term 100)
+ (fn _ => timing_wrapper (EVERY1 [simp_tac (HOL_basic_ss addsimprocs [@{simproc add_sp}]), rtac @{thm cheat}]))
+*}
+
+ML {*
+val _ = Goal.prove @{context} [] [] (create_term 400)
+ (fn _ => timing_wrapper (EVERY1 [add_tac, rtac @{thm cheat}]))
+*}
+
+ML {*
+val _ = Goal.prove @{context} [] [] (create_term 400)
+ (fn _ => timing_wrapper (EVERY1 [simp_tac (HOL_basic_ss addsimprocs [@{simproc add_sp}]), rtac @{thm cheat}]))
+*}
end
\ No newline at end of file
--- a/CookBook/Tactical.thy Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/Tactical.thy Thu Mar 12 08:11:02 2009 +0100
@@ -1,6 +1,5 @@
theory Tactical
imports Base FirstSteps
-uses "infix_conv.ML"
begin
chapter {* Tactical Reasoning\label{chp:tactical} *}
@@ -11,7 +10,7 @@
considerably the burden of manual reasoning, for example, when introducing
new definitions. These proof procedures are centred around refining a goal
state using tactics. This is similar to the \isacommand{apply}-style
- reasoning at the user level, where goals are modified in a sequence of proof
+ reasoning at the user-level, where goals are modified in a sequence of proof
steps until all of them are solved. However, there are also more structured
operations available on the ML-level that help with the handling of
variables and assumptions.
@@ -66,7 +65,7 @@
and the file @{ML_file "Pure/goal.ML"}. See @{ML_file
"Pure/tactic.ML"} and @{ML_file "Pure/tctical.ML"} for the code of basic
tactics and tactic combinators; see also Chapters 3 and 4 in the old
- Isabelle Reference Manual.
+ Isabelle Reference Manual, and Chapter 3 in the Isabelle/Isar Implementation Manual.
\end{readmore}
Note that in the code above we use antiquotations for referencing the theorems. Many theorems
@@ -101,7 +100,7 @@
text {*
By using @{text "tactic \<verbopen> \<dots> \<verbclose>"} you can call from the
- user level of Isabelle the tactic @{ML foo_tac} or
+ user-level of Isabelle the tactic @{ML foo_tac} or
any other function that returns a tactic.
The tactic @{ML foo_tac} is just a sequence of simple tactics stringed
@@ -289,8 +288,8 @@
printed by the Isabelle system and by @{ML my_print_tac}. The latter shows
the goal state as represented internally (highlighted boxes). This
illustrates that every goal state in Isabelle is represented by a theorem:
- when we start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
- @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when we finish the proof the
+ when you start the proof of \mbox{@{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}} the theorem is
+ @{text "(\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B) \<Longrightarrow> (\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B)"}; when you finish the proof the
theorem is @{text "\<lbrakk>A; B\<rbrakk> \<Longrightarrow> A \<and> B"}.\label{fig:goalstates}}
\end{figure}
*}
@@ -328,8 +327,9 @@
text {*
Let us start with the tactic @{ML print_tac}, which is quite useful for low-level
- debugging of tactics. It just prints out a message and the current goal state.
- Processing the proof
+ debugging of tactics. It just prints out a message and the current goal state
+ (unlike @{ML my_print_tac}, it prints the goal state as the user would see it).
+ For example, processing the proof
*}
lemma shows "False \<Longrightarrow> True"
@@ -386,7 +386,7 @@
text {*
Note the number in each tactic call. Also as mentioned in the previous section, most
- basic tactics take such an argument; it addresses the subgoal they are analysing.
+ basic tactics take such a number as argument; it addresses the subgoal they are analysing.
In the proof below, we first split up the conjunction in the second subgoal by
focusing on this subgoal first.
*}
@@ -478,21 +478,10 @@
takes an additional number as argument that makes explicit which premise
should be instantiated.
- To improve readability of the theorems we produce below, we shall use
- the following function
-*}
-
-ML{*fun no_vars ctxt thm =
-let
- val ((_, [thm']), _) = Variable.import_thms true [thm] ctxt
-in
- thm'
-end*}
-
-text {*
- that transform the schematic variables of a theorem into free variables.
- Using this function for the first @{ML RS}-expression above would produce
- the more readable result:
+ To improve readability of the theorems we produce below, we shall use the
+ function @{ML no_vars} from Section~\ref{sec:printing}, which transforms
+ schematic variables into free ones. Using this function for the first @{ML
+ RS}-expression above produces the more readable result:
@{ML_response_fake [display,gray]
"no_vars @{context} (@{thm disjI1} RS @{thm conjI})" "\<lbrakk>P; Q\<rbrakk> \<Longrightarrow> (P \<or> Qa) \<and> Q"}
@@ -662,17 +651,20 @@
also described in \isccite{sec:results}.
\end{readmore}
- A similar but less powerful function than @{ML SUBPROOF} is @{ML SUBGOAL}.
- It allows you to inspect a given subgoal. With this you can implement
- a tactic that applies a rule according to the topmost logic connective in the
- subgoal (to illustrate this we only analyse a few connectives). The code
- of the tactic is as follows.\label{tac:selecttac}
+ Similar but less powerful functions than @{ML SUBPROOF} are @{ML SUBGOAL}
+ and @{ML CSUBGOAL}. They allow you to inspect a given subgoal (the former
+  presents the subgoal as a @{ML_type term}, the latter as a @{ML_type
+ cterm}). With this you can implement a tactic that applies a rule according
+ to the topmost logic connective in the subgoal (to illustrate this we only
+ analyse a few connectives). The code of the tactic is as
+ follows.\label{tac:selecttac}
+
*}
-ML %linenosgray{*fun select_tac (t,i) =
+ML %linenosgray{*fun select_tac (t, i) =
case t of
- @{term "Trueprop"} $ t' => select_tac (t',i)
- | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t',i)
+ @{term "Trueprop"} $ t' => select_tac (t', i)
+ | @{term "op \<Longrightarrow>"} $ _ $ t' => select_tac (t', i)
| @{term "op \<and>"} $ _ $ _ => rtac @{thm conjI} i
| @{term "op \<longrightarrow>"} $ _ $ _ => rtac @{thm impI} i
| @{term "Not"} $ _ => rtac @{thm notI} i
@@ -689,10 +681,12 @@
construct the patterns, the pattern in Line 8 cannot be constructed in this
way. The reason is that an antiquotation would fix the type of the
quantified variable. So you really have to construct the pattern using the
- basic term-constructors. This is not necessary in other cases, because their type
- is always fixed to function types involving only the type @{typ bool}. For the
- final pattern, we chose to just return @{ML all_tac}. Consequently,
- @{ML select_tac} never fails.
+ basic term-constructors. This is not necessary in other cases, because their
+ type is always fixed to function types involving only the type @{typ
+ bool}. (See Section \ref{sec:terms_types_manually} about constructing terms
+ manually.) For the catch-all pattern, we chose to just return @{ML all_tac}.
+ Consequently, @{ML select_tac} never fails.
+
Let us now see how to apply this tactic. Consider the four goals:
*}
@@ -727,8 +721,7 @@
the first goal. To do this can result in quite messy code. In contrast,
the ``reverse application'' is easy to implement.
- The tactic @{ML "CSUBGOAL"} is similar to @{ML SUBGOAL} except that it takes
- a @{ML_type cterm} instead of a @{ML_type term}. Of course, this example is
+ Of course, this example is
contrived: there are much simpler methods available in Isabelle for
implementing a proof procedure analysing a goal according to its topmost
connective. These simpler methods use tactic combinators, which we will
@@ -922,7 +915,7 @@
@{ML REPEAT1} is similar, but runs the tactic at least once (failing if
this is not possible).
- If you are after the ``primed'' version of @{ML repeat_xmp} then you
+ If you are after the ``primed'' version of @{ML repeat_xmp}, then you
need to implement it as
*}
@@ -1013,14 +1006,416 @@
*}
-section {* Rewriting and Simplifier Tactics *}
+section {* Simplifier Tactics *}
+
+text {*
+  A lot of the convenience of reasoning with Isabelle derives from its
+  powerful simplifier. This power is a strength and a weakness at the same
+  time, because you can easily make the simplifier run into a loop and its
+  behaviour can be difficult to predict. There is also a multitude of options
+  that you can configure to control the behaviour of the simplifier.
+  We describe some of them in this and the next section.
+
+ There are the following five main tactics behind
+ the simplifier (in parentheses is their user-level counterpart):
+
+ \begin{isabelle}
+ \begin{center}
+ \begin{tabular}{l@ {\hspace{2cm}}l}
+ @{ML simp_tac} & @{text "(simp (no_asm))"} \\
+ @{ML asm_simp_tac} & @{text "(simp (no_asm_simp))"} \\
+ @{ML full_simp_tac} & @{text "(simp (no_asm_use))"} \\
+ @{ML asm_lr_simp_tac} & @{text "(simp (asm_lr))"} \\
+ @{ML asm_full_simp_tac} & @{text "(simp)"}
+ \end{tabular}
+ \end{center}
+ \end{isabelle}
+
+  All of the tactics take a simpset and an integer as argument (the latter as usual
+ to specify the goal to be analysed). So the proof
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(simp)
+done
+
+text {*
+ corresponds on the ML-level to the tactic
+*}
+
+lemma "Suc (1 + 2) < 3 + 2"
+apply(tactic {* asm_full_simp_tac @{simpset} 1 *})
+done
+
+text {*
+  If the simplifier cannot make any progress, then it leaves the goal unchanged,
+  i.e.~it does not raise an error. That means if you use it to unfold a
+ definition for a constant and this constant is not present in the goal state,
+ you can still safely apply the simplifier.
+
+ When using the simplifier, the crucial information you have to provide is
+ the simpset. If this information is not handled with care, then the
+ simplifier can easily run into a loop. Therefore a good rule of thumb is to
+ use simpsets that are as minimal as possible. It might be surprising that a
+ simpset is more complex than just a simple collection of theorems used as
+ simplification rules. One reason for the complexity is that the simplifier
+ must be able to rewrite inside terms and should also be able to rewrite
+  according to rules that have preconditions.
+
+
+ The rewriting inside terms requires congruence rules, which
+  are meta-equalities, typically of the form
+
+ \begin{isabelle}
+ \begin{center}
+ \mbox{\inferrule{@{text "t\<^isub>1 \<equiv> s\<^isub>1 \<dots> t\<^isub>n \<equiv> s\<^isub>n"}}
+ {@{text "constr t\<^isub>1\<dots>t\<^isub>n \<equiv> constr s\<^isub>1\<dots>s\<^isub>n"}}}
+ \end{center}
+ \end{isabelle}
+
+ with @{text "constr"} being a term-constructor, like @{const "If"} or @{const "Let"}.
+  Every simpset contains only one congruence rule for each term-constructor,
+  which, however, can be overwritten. The user can declare lemmas to be
+  congruence rules using the attribute @{text "[cong]"} (a small example of
+  this is given at the end of this overview). In HOL, the user usually states
+  these lemmas as equations, which are then internally transformed into
+  meta-equations.
+
+
+ The rewriting with rules involving preconditions requires what is in
+ Isabelle called a subgoaler, a solver and a looper. These can be arbitrary
+  tactics that can be installed in a simpset and which are called at
+  various stages of simplification. However, simpsets also include
+  simprocs, which can produce rewrite rules on demand (see next
+  section). Other components are split-rules, which can simplify, for example,
+  the ``then'' and ``else'' branches of if-statements under the corresponding
+  preconditions.
+
+
+ \begin{readmore}
+ For more information about the simplifier see @{ML_file "Pure/meta_simplifier.ML"}
+ and @{ML_file "Pure/simplifier.ML"}. The simplifier for HOL is set up in
+ @{ML_file "HOL/Tools/simpdata.ML"}. Generic splitters are implemented in
+ @{ML_file "Provers/splitter.ML"}.
+ \end{readmore}
+
+ \begin{readmore}
+  FIXME: Find the right place. Discrimination nets are implemented
+ in @{ML_file "Pure/net.ML"}.
+ \end{readmore}
+
+ The most common combinators to modify simpsets are
+
+ \begin{isabelle}
+ \begin{tabular}{ll}
+ @{ML addsimps} & @{ML delsimps}\\
+ @{ML addcongs} & @{ML delcongs}\\
+ @{ML addsimprocs} & @{ML delsimprocs}\\
+ \end{tabular}
+ \end{isabelle}
+
+ (FIXME: What about splitters? @{ML addsplits}, @{ML delsplits})
+*}
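+
+text {*
+  As a small example of declaring congruence rules (mentioned in the
+  overview above), the HOL lemma @{thm [source] if_cong} can be added to and
+  removed from the current simpset as follows. This is only a sketch;
+  which congruence rules are appropriate depends on the rewriting task at hand.
+*}
+
+declare if_cong[cong]
+declare if_cong[cong del]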
+
+text_raw {*
+\begin{figure}[tp]
+\begin{isabelle}*}
+ML{*fun print_ss ctxt ss =
+let
+ val {simps, congs, procs, ...} = MetaSimplifier.dest_ss ss
+
+ fun name_thm (nm, thm) =
+ " " ^ nm ^ ": " ^ (str_of_thm ctxt thm)
+ fun name_ctrm (nm, ctrm) =
+ " " ^ nm ^ ": " ^ (str_of_cterms ctxt ctrm)
+
+ val s1 = ["Simplification rules:"]
+ val s2 = map name_thm simps
+  val s3 = ["Congruence rules:"]
+ val s4 = map name_thm congs
+ val s5 = ["Simproc patterns:"]
+ val s6 = map name_ctrm procs
+in
+ (s1 @ s2 @ s3 @ s4 @ s5 @ s6)
+ |> separate "\n"
+ |> implode
+ |> warning
+end*}
+text_raw {*
+\end{isabelle}
+\caption{The function @{ML MetaSimplifier.dest_ss} returns a record containing
+ all printable information stored in a simpset. We are here only interested in the
+  simplification rules, congruence rules and simprocs.\label{fig:printss}}
+\end{figure} *}
+
+text {*
+  To see how these combinators work, consider the function @{ML print_ss} in
+  Figure~\ref{fig:printss}, which prints out some parts of a simpset. If you
+  use it to print out the components of @{ML empty_ss}, the most primitive
+  simpset,
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} empty_ss"
+"Simplification rules:
+Congruence rules:
+Simproc patterns:"}
+
+ you can see it contains nothing. This simpset is usually not useful, except as a
+ building block to build bigger simpsets. For example you can add to @{ML empty_ss}
+ the simplification rule @{thm [source] Diff_Int} as follows:
+*}
+
+ML{*val ss1 = empty_ss addsimps [@{thm Diff_Int} RS @{thm eq_reflection}] *}
+
+text {*
+  Printing out the components of this simpset now gives:
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} ss1"
+"Simplification rules:
+ ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruence rules:
+Simproc patterns:"}
+
+ (FIXME: Why does it print out ??.unknown)
+
+ Adding also the congruence rule @{thm [source] UN_cong}
+*}
+
+ML{*val ss2 = ss1 addcongs [@{thm UN_cong} RS @{thm eq_reflection}] *}
text {*
- @{ML rewrite_goals_tac}
+ gives
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} ss2"
+"Simplification rules:
+ ??.unknown: A - B \<inter> C \<equiv> A - B \<union> (A - C)
+Congruence rules:
+ UNION: \<lbrakk>A = B; \<And>x. x \<in> B \<Longrightarrow> C x = D x\<rbrakk> \<Longrightarrow> \<Union>x\<in>A. C x \<equiv> \<Union>x\<in>B. D x
+Simproc patterns:"}
+
+  Notice that we had to add these lemmas as meta-equations. @{ML empty_ss}
+ expects this form of the simplification and congruence rules. However, even
+ when adding these lemmas to @{ML empty_ss} we do not end up with anything useful yet.
+
+ In the context of HOL, the first really useful simpset is @{ML HOL_basic_ss}. While
+ printing out the components of this simpset
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} HOL_basic_ss"
+"Simplification rules:
+Congruence rules:
+Simproc patterns:"}
+
+  also produces ``nothing'', the printout is misleading. In fact
+  @{ML HOL_basic_ss} is set up so that it can solve goals of the
+ form @{thm TrueI}, @{thm refl[no_vars]}, @{term "t \<equiv> t"} and @{thm FalseE[no_vars]};
+ and also resolve with assumptions. For example:
+*}
+
+lemma
+ "True" and "t = t" and "t \<equiv> t" and "False \<Longrightarrow> Foo" and "\<lbrakk>A; B; C\<rbrakk> \<Longrightarrow> A"
+apply(tactic {* ALLGOALS (simp_tac HOL_basic_ss) *})
+done
+
+text {*
+  This behaviour is not due to simplification rules, but due to how the
+  subgoaler, solver and looper are set up. @{ML HOL_basic_ss} is usually a
+  good start for building your own simpsets, because of the low danger of
+  causing loops via interacting simplification rules.
+
+  The simpset @{ML HOL_ss} is an extension of @{ML HOL_basic_ss} containing
+ already many useful simplification and congruence rules for the logical
+ connectives in HOL.
+
+ @{ML_response_fake [display,gray]
+ "print_ss @{context} HOL_ss"
+"Simplification rules:
+ Pure.triv_forall_equality: (\<And>x. PROP V) \<equiv> PROP V
+ HOL.the_eq_trivial: THE x. x = y \<equiv> y
+ HOL.the_sym_eq_trivial: THE ya. y = ya \<equiv> y
+ \<dots>
+Congruence rules:
+ HOL.simp_implies: \<dots>
+ \<Longrightarrow> (PROP P =simp=> PROP Q) \<equiv> (PROP P' =simp=> PROP Q')
+ op -->: \<lbrakk>P \<equiv> P'; P' \<Longrightarrow> Q \<equiv> Q'\<rbrakk> \<Longrightarrow> P \<longrightarrow> Q \<equiv> P' \<longrightarrow> Q'
+Simproc patterns:
+ \<dots>"}
+
- @{ML ObjectLogic.full_atomize_tac}
-
- @{ML ObjectLogic.rulify_tac}
+  The simplifier is often used to unfold definitions in a proof. For this
+  there is the tactic @{ML rewrite_goals_tac}. Suppose, for example, the
+  definition
+*}
+
+definition "MyTrue \<equiv> True"
+
+lemma shows "MyTrue \<Longrightarrow> True \<and> True"
+apply(rule conjI)
+apply(tactic {* rewrite_goals_tac [@{thm MyTrue_def}] *})
+txt{* then the tactic produces the goal state
+
+ \begin{minipage}{\textwidth}
+ @{subgoals [display]}
+ \end{minipage} *}
+(*<*)oops(*>*)
+
+text {*
+ As you can see, the tactic unfolds the definitions in all subgoals.
+*}
+
+
+text_raw {*
+\begin{figure}[tp]
+\begin{isabelle} *}
+types prm = "(nat \<times> nat) list"
+consts perm :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet> _" [80,80] 80)
+
+primrec (perm_nat)
+ "[]\<bullet>c = c"
+ "(ab#pi)\<bullet>c = (if (pi\<bullet>c)=fst ab then snd ab
+ else (if (pi\<bullet>c)=snd ab then fst ab else (pi\<bullet>c)))"
+
+primrec (perm_prod)
+ "pi\<bullet>(x, y) = (pi\<bullet>x, pi\<bullet>y)"
+
+primrec (perm_list)
+ "pi\<bullet>[] = []"
+ "pi\<bullet>(x#xs) = (pi\<bullet>x)#(pi\<bullet>xs)"
+
+lemma perm_append[simp]:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "((pi\<^isub>1@pi\<^isub>2)\<bullet>c) = (pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c))"
+by (induct pi\<^isub>1) (auto)
+
+lemma perm_eq[simp]:
+ fixes c::"nat" and pi::"prm"
+ shows "(pi\<bullet>c = pi\<bullet>d) = (c = d)"
+by (induct pi) (auto)
+
+lemma perm_rev[simp]:
+ fixes c::"nat" and pi::"prm"
+ shows "pi\<bullet>((rev pi)\<bullet>c) = c"
+by (induct pi arbitrary: c) (auto)
+
+lemma perm_compose:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>(pi\<^isub>1\<bullet>c)"
+by (induct pi\<^isub>2) (auto)
+text_raw {*
+\end{isabelle}\medskip
+\caption{A simple theory about permutations over @{typ nat}. The point is that the
+ lemma @{thm [source] perm_compose} cannot be directly added to the simplifier, as
+ it would cause the simplifier to loop. It can still be used as a simplification
+ rule if the permutation is sufficiently protected.\label{fig:perms}
+ (FIXME: Uses old primrec.)}
+\end{figure} *}
+
+
+text {*
+ The simplifier is often used in order to bring terms into a normal form.
+ Unfortunately, often the situation arises that the corresponding
+ simplification rules will cause the simplifier to run into an infinite
+ loop. Consider for example the simple theory about permutations over natural
+ numbers shown in Figure~\ref{fig:perms}. The purpose of the lemmas is to
+ push permutations as far inside as possible, where they might disappear by
+  push permutations as far inside as possible, where they might disappear by
+  means of lemma @{thm [source] perm_rev}. However, to fully normalise all instances,
+  it would be desirable also to add the lemma @{thm [source] perm_compose} to
+  the simplifier for pushing permutations over other permutations. Unfortunately,
+  the right-hand side of this lemma is again an instance of the left-hand side
+  and so causes an infinite loop. There seems to be no easy way to reformulate
+*}
+
+lemma
+ fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(simp)
+apply(rule trans)
+apply(rule perm_compose)
+apply(simp)
+done
+
+text {*
+  It is, however, possible to create a single simplifier tactic that solves
+  such proofs. The trick is to introduce an auxiliary constant for permutations
+  and split the simplification into two phases (below actually three). Let
+  us assume the auxiliary constant is
+*}
+
+definition
+ perm_aux :: "prm \<Rightarrow> 'a \<Rightarrow> 'a" ("_ \<bullet>aux _" [80,80] 80)
+where
+ "pi \<bullet>aux c \<equiv> pi \<bullet> c"
+
+text {* Now the two lemmas *}
+
+lemma perm_aux_expand:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>c) = pi\<^isub>1 \<bullet>aux (pi\<^isub>2\<bullet>c)"
+unfolding perm_aux_def by (rule refl)
+
+lemma perm_compose_aux:
+ fixes c::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(pi\<^isub>2\<bullet>aux c) = (pi\<^isub>1\<bullet>pi\<^isub>2) \<bullet>aux (pi\<^isub>1\<bullet>c)"
+unfolding perm_aux_def by (rule perm_compose)
+
+text {*
+  are simple consequences of the definition and @{thm [source] perm_compose}.
+  More importantly, the lemma @{thm [source] perm_compose_aux} can be safely
+  added to the simplifier, because now the right-hand side is no longer an
+  instance of the left-hand side. In a sense it freezes all redexes of
+  permutation compositions after one step. In this way, we can split the
+  simplification of permutations into three phases without the user noticing
+  anything about the auxiliary constant. We first freeze any instance of
+  permutation compositions in the term using lemma
+  @{thm [source] "perm_aux_expand"} (Line 9);
+  then simplify all other permutations, including pushing permutations over
+  other permutations, by rule @{thm [source] perm_compose_aux} (Line 10); and
+ finally ``unfreeze'' all instances of permutation compositions by unfolding
+ the definition of the auxiliary constant.
+*}
+
+ML %linenosgray{*val perm_simp_tac =
+let
+ val thms1 = [@{thm perm_aux_expand}]
+ val thms2 = [@{thm perm_append}, @{thm perm_eq}, @{thm perm_rev},
+ @{thm perm_compose_aux}] @ @{thms perm_prod.simps} @
+ @{thms perm_list.simps} @ @{thms perm_nat.simps}
+ val thms3 = [@{thm perm_aux_def}]
+in
+ simp_tac (HOL_basic_ss addsimps thms1)
+ THEN' simp_tac (HOL_basic_ss addsimps thms2)
+ THEN' simp_tac (HOL_basic_ss addsimps thms3)
+end*}
+
+text {*
+  For all three phases we have to build simpsets by adding the specific lemmas.
+  Since @{ML HOL_basic_ss} is sufficient for our purposes here, we add the
+  lemmas to it in order to obtain the desired results. Now we can solve the
+  following lemma
+*}
+
+lemma
+ fixes c d::"nat" and pi\<^isub>1 pi\<^isub>2::"prm"
+ shows "pi\<^isub>1\<bullet>(c, pi\<^isub>2\<bullet>((rev pi\<^isub>1)\<bullet>d)) = (pi\<^isub>1\<bullet>c, (pi\<^isub>1\<bullet>pi\<^isub>2)\<bullet>d)"
+apply(tactic {* perm_simp_tac 1 *})
+done
+
+
+text {*
+ in one step. This tactic can deal with most instances of normalising permutations;
+ in order to solve all cases we have to deal with corner-cases such as the
+ lemma being an exact instance of the permutation composition lemma. This can
+  often be done more easily by implementing a simproc or a conversion. Both will be
+  explained in the next two sections.
+
+ (FIXME: Is it interesting to say something about @{term "op =simp=>"}?)
+
+ (FIXME: What are the second components of the congruence rules---something to
+ do with weak congruence constants?)
+
+ (FIXME: Anything interesting to say about @{ML Simplifier.clear_ss}?)
+
+ (FIXME: @{ML ObjectLogic.full_atomize_tac},
+ @{ML ObjectLogic.rulify_tac})
*}
@@ -1349,7 +1744,7 @@
"*** Exception- CTERM (\"no conversion\", []) raised"}
A more interesting conversion is the function @{ML "Thm.beta_conversion"}: it
- produces an equation between a term and its beta-normal form. For example
+ produces a meta-equation between a term and its beta-normal form. For example
@{ML_response_fake [display,gray]
"let
@@ -1362,7 +1757,7 @@
"((\<lambda>x y. x + y) 2) 10 \<equiv> 2 + 10"}
Note that the actual response in this example is @{term "2 + 10 \<equiv> 2 + (10::nat)"},
- since the pretty printer for @{ML_type cterm}s already beta-normalises terms.
+ since the pretty-printer for @{ML_type cterm}s already beta-normalises terms.
But by the way how we constructed the term (using the function
@{ML Thm.capply}, which is the application @{ML $} for @{ML_type cterm}s),
we can be sure the left-hand side must contain beta-redexes. Indeed
@@ -1374,9 +1769,9 @@
val add = @{cterm \"\<lambda>x y. x + (y::nat)\"}
val two = @{cterm \"2::nat\"}
val ten = @{cterm \"10::nat\"}
+ val thm = Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)
in
- #prop (rep_thm
- (Thm.beta_conversion true (Thm.capply (Thm.capply add two) ten)))
+ #prop (rep_thm thm)
end"
"Const (\"==\",\<dots>) $
(Abs (\"x\",\<dots>,Abs (\"y\",\<dots>,\<dots>)) $\<dots>$\<dots>) $
@@ -1401,15 +1796,15 @@
The main point of conversions is that they can be used for rewriting
@{ML_type cterm}s. To do this you can use the function @{ML
- "Conv.rewr_conv"} which expects a meta-equation as an argument. Suppose we
- want to rewrite a @{ML_type cterm} according to the (meta)equation:
+ "Conv.rewr_conv"}, which expects a meta-equation as an argument. Suppose we
+ want to rewrite a @{ML_type cterm} according to the meta-equation:
*}
lemma true_conj1: "True \<and> P \<equiv> P" by simp
text {*
- You can see how this function works with the example
- @{term "True \<and> (Foo \<longrightarrow> Bar)"}:
+ You can see how this function works in the example rewriting
+ the @{ML_type cterm} @{term "True \<and> (Foo \<longrightarrow> Bar)"} to @{term "Foo \<longrightarrow> Bar"}.
@{ML_response_fake [display,gray]
"let
@@ -1420,7 +1815,8 @@
"True \<and> (Foo \<longrightarrow> Bar) \<equiv> Foo \<longrightarrow> Bar"}
Note, however, that the function @{ML Conv.rewr_conv} only rewrites the
- outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match the
+ outer-most level of the @{ML_type cterm}. If the given @{ML_type cterm} does not match
+ exactly the
left-hand side of the theorem, then @{ML Conv.rewr_conv} raises
the exception @{ML CTERM}.
@@ -1458,7 +1854,7 @@
Here the conversion of @{thm [source] true_conj1} only applies
in the first case, but fails in the second. The whole conversion
- does not fail, however, because the combinator @{ML else_conv} will then
+ does not fail, however, because the combinator @{ML Conv.else_conv} will then
try out @{ML Conv.all_conv}, which always succeeds.
Apart from the function @{ML beta_conversion in Thm}, which is able to fully
@@ -1479,9 +1875,10 @@
"P \<or> (True \<and> Q) \<equiv> P \<or> Q"}
The reason for this behaviour is that @{text "(op \<or>)"} expects two
- arguments. So the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
+ arguments. Therefore the term must be of the form @{text "(Const \<dots> $ t1) $ t2"}. The
conversion is then applied to @{text "t2"} which in the example above
- stands for @{term "True \<and> Q"}.
+  stands for @{term "True \<and> Q"}. The function @{ML Conv.fun_conv} applies
+  the conversion to the function part of an application.
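+
+  For example, here is a small sketch that reuses the meta-equation
+  @{thm [source] true_conj1} from above and combines @{ML Conv.fun_conv}
+  with @{ML Conv.arg_conv} in order to rewrite inside the first argument
+  of a binary application:
+
+  @{ML_response_fake [display,gray]
+  "Conv.fun_conv (Conv.arg_conv (Conv.rewr_conv @{thm true_conj1}))
+    @{cterm \"(True \<and> P) \<or> Q\"}"
+  "(True \<and> P) \<or> Q \<equiv> P \<or> Q"}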
The function @{ML Conv.abs_conv} applies a conversion under an abstraction.
For example:
@@ -1495,13 +1892,14 @@
end"
"\<lambda>P. True \<and> P \<and> Foo \<equiv> \<lambda>P. P \<and> Foo"}
- The conversion that goes under an application is
- @{ML Conv.combination_conv}. It expects two conversions as arguments,
- each of which is applied to the corresponding ``branch'' of the application.
+ Note that this conversion needs a context as an argument. The conversion that
+ goes under an application is @{ML Conv.combination_conv}. It expects two conversions
+ as arguments, each of which is applied to the corresponding ``branch'' of the
+ application.
- We can now apply all these functions in a
- conversion that recursively descends a term and applies a conversion in all
- possible positions.
+ We can now apply all these functions in a conversion that recursively
+ descends a term and applies a ``@{thm [source] true_conj1}''-conversion
+ in all possible positions.
*}
ML %linenosgray{*fun all_true1_conv ctxt ctrm =
@@ -1510,17 +1908,17 @@
(Conv.arg_conv (all_true1_conv ctxt) then_conv
Conv.rewr_conv @{thm true_conj1}) ctrm
| _ $ _ => Conv.combination_conv
- (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
+ (all_true1_conv ctxt) (all_true1_conv ctxt) ctrm
| Abs _ => Conv.abs_conv (fn (_, ctxt) => all_true1_conv ctxt) ctxt ctrm
| _ => Conv.all_conv ctrm*}
text {*
- This functions descends under applications (Line 6 and 7) and abstractions
- (Line 8); and ``fires'' if the outer-most constant is an @{text "True \<and> \<dots>"}
- (Lines 3-5); otherwise it leaves the term unchanged (Line 9). In Line 2
+  This function ``fires'' if the term is of the form @{text "True \<and> \<dots>"};
+  it descends under applications (Lines 6 and 7) and abstractions
+ (Line 8); otherwise it leaves the term unchanged (Line 9). In Line 2
we need to transform the @{ML_type cterm} into a @{ML_type term} in order
to be able to pattern-match the term. To see this
- conversion in action, consider the following example
+ conversion in action, consider the following example:
@{ML_response_fake [display,gray]
"let
@@ -1531,8 +1929,6 @@
end"
"distinct [1, x] \<longrightarrow> True \<and> 1 \<noteq> x \<equiv> distinct [1, x] \<longrightarrow> 1 \<noteq> x"}
- where the conversion is applied ``deep'' inside the term.
-
To see how much control you have about rewriting by using conversions, let us
make the task a bit more complicated by rewriting according to the rule
@{text true_conj1}, but only in the first arguments of @{term If}s. Then
@@ -1544,7 +1940,7 @@
Const (@{const_name If}, _) $ _ =>
Conv.arg_conv (all_true1_conv ctxt) ctrm
| _ $ _ => Conv.combination_conv
- (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
+ (if_true1_conv ctxt) (if_true1_conv ctxt) ctrm
| Abs _ => Conv.abs_conv (fn (_, ctxt) => if_true1_conv ctxt) ctxt ctrm
| _ => Conv.all_conv ctrm *}
@@ -1555,7 +1951,7 @@
"let
val ctxt = @{context}
val ctrm =
- @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
+ @{cterm \"if P (True \<and> 1 \<noteq> 2) then True \<and> True else True \<and> False\"}
in
if_true1_conv ctxt ctrm
end"
@@ -1581,15 +1977,15 @@
Finally, conversions can also be turned into tactics and then applied to
goal states. This can be done with the help of the function @{ML CONVERSION},
- and also some predefined conversion combinator which traverse a goal
+ and also some predefined conversion combinators that traverse a goal
state. The combinators for the goal state are: @{ML Conv.params_conv} for
- going under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
- Q"}); the function @{ML Conv.prems_conv} for applying the conversion to all
- premises of a goal, and @{ML Conv.concl_conv} for applying the conversion to
+ converting under parameters (i.e.~where goals are of the form @{text "\<And>x. P \<Longrightarrow>
+ Q"}); the function @{ML Conv.prems_conv} for applying a conversion to all
+ premises of a goal, and @{ML Conv.concl_conv} for applying a conversion to
the conclusion of a goal.
Assume we want to apply @{ML all_true1_conv} only in the conclusion
- of the goal, and @{ML if_true1_conv} should only be applied to the premises.
+ of the goal, and @{ML if_true1_conv} should only apply to the premises.
Here is a tactic doing exactly that:
*}
@@ -1599,8 +1995,8 @@
in
CONVERSION
(Conv.params_conv ~1 (fn ctxt =>
- (Conv.prems_conv ~1 (if_true1_conv ctxt)) then_conv
- Conv.concl_conv ~1 (all_true1_conv ctxt)) ctxt) i
+ (Conv.prems_conv ~1 (if_true1_conv ctxt) then_conv
+ Conv.concl_conv ~1 (all_true1_conv ctxt))) ctxt) i
end)*}
text {*
@@ -1628,16 +2024,33 @@
and simprocs; the advantage of conversions, however, is that you never have
to worry about non-termination.
+ (FIXME: explain @{ML Conv.try_conv})
+
+ \begin{exercise}\label{ex:addconversion}
+ Write a tactic that does the same as the simproc in exercise
+  \ref{ex:addsimproc}, but is based on conversions. That means replace terms
+ of the form @{term "t\<^isub>1 + t\<^isub>2"} by their result. You can make
+ the same assumptions as in \ref{ex:addsimproc}.
+ \end{exercise}
+
+ \begin{exercise}
+  Compare which way (either Exercise~\ref{ex:addsimproc} or \ref{ex:addconversion}) of
+ rewriting such terms is faster. For this you might have to construct quite
+ large terms. Also see Recipe \ref{rec:timing} for information about timing.
+ \end{exercise}
+
\begin{readmore}
See @{ML_file "Pure/conv.ML"} for more information about conversion combinators.
Further conversions are defined in @{ML_file "Pure/thm.ML"},
@{ML_file "Pure/drule.ML"} and @{ML_file "Pure/meta_simplifier.ML"}.
\end{readmore}
+
*}
-section {* Structured Proofs *}
+
+section {* Structured Proofs (TBD) *}
text {* TBD *}
--- a/CookBook/antiquote_setup.ML Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/antiquote_setup.ML Thu Mar 12 08:11:02 2009 +0100
@@ -48,9 +48,9 @@
val output_fn = Chunks.output_list (fn _ => fn s => Pretty.str s)
(* checks and prints open expressions *)
-fun output_ml src node =
+fun output_ml () =
let
- fun output src ctxt ((txt,ovars),pos) =
+ fun output {state: Toplevel.state, source = src, context = ctxt} ((txt,ovars),pos) =
(eval_fn ctxt pos (ml_val_open ovars txt);
output_fn src ctxt (transform_cmts_str txt))
@@ -58,62 +58,63 @@
(Scan.optional (Args.$$$ "for" |-- OuterParse.!!! (Scan.repeat1 Args.name)) [] --
Scan.optional (Args.$$$ "in" |-- OuterParse.!!! (Scan.repeat1 Args.name)) [])))
in
- ThyOutput.args parser output src node
+ ThyOutput.antiquotation "ML" parser output
end
(* checks and prints types and structures *)
-fun output_exp ml src node =
+fun output_exp ml =
let
- fun output src ctxt (txt,pos) =
+ fun output {state: Toplevel.state, source = src, context = ctxt} (txt,pos) =
(eval_fn ctxt pos (ml txt);
output_fn src ctxt (string_explode "" txt))
in
- ThyOutput.args single_arg output src node
+ ThyOutput.antiquotation "ML_type" single_arg output
end
(* checks an expression against a result pattern *)
-fun output_ml_response src node =
+fun output_ml_response () =
let
- fun output src ctxt ((lhs,pat),pos) =
+ fun output {state: Toplevel.state, source = src, context = ctxt} ((lhs,pat),pos) =
(eval_fn ctxt pos (ml_pat (lhs,pat));
output_fn src ctxt ((string_explode "" lhs) @ (string_explode "> " pat)))
in
- ThyOutput.args two_args output src node
+ ThyOutput.antiquotation "ML_response" two_args output
end
(* checks the expressions, but does not check it against a result pattern *)
-fun output_ml_response_fake src node =
+fun output_ml_response_fake () =
let
- fun output src ctxt ((lhs,pat),pos) =
+ fun output {state: Toplevel.state, source = src, context = ctxt} ((lhs, pat), pos) =
(eval_fn ctxt pos (ml_val lhs);
output_fn src ctxt ((string_explode "" lhs) @ (string_explode "> " pat)))
in
- ThyOutput.args two_args output src node
+ ThyOutput.antiquotation "ML_response_fake" two_args output
end
(* just prints an expression and a result pattern *)
-fun output_ml_response_fake_both src node =
+fun output_ml_response_fake_both () =
let
- fun ouput src ctxt ((lhs,pat), _) =
+  fun output {state: Toplevel.state, source = src, context = ctxt} ((lhs,pat), _) =
output_fn src ctxt ((string_explode "" lhs) @ (string_explode "> " pat))
in
- ThyOutput.args two_args ouput src node
+  ThyOutput.antiquotation "ML_response_fake_both" two_args output
end
(* checks whether a file exists in the Isabelle distribution *)
-fun check_file_exists src node =
+fun check_file_exists () =
let
fun check txt =
if File.exists (Path.append (Path.explode ("~~/src")) (Path.explode txt)) then ()
else error ("Source file " ^ (quote txt) ^ " does not exist.")
in
- ThyOutput.args (Scan.lift Args.name)
- (ThyOutput.output (fn _ => fn s => (check s; Pretty.str s))) src node
+ ThyOutput.antiquotation "ML_file" (Scan.lift Args.name)
+ (fn _ => fn s => (check s; ThyOutput.output [Pretty.str s]))
end
(* replaces the official subgoal antiquotation with one *)
(* that is closer to the actual output *)
-fun output_goals src node =
+(*
+fun output_goals {state = node, source: Args.src, context: Proof.context} _ =
let
fun subgoals 0 = ""
| subgoals 1 = "goal (1 subgoal):"
@@ -129,22 +130,22 @@
val {prop, ...} = rep_thm (Proof.get_goal state |> snd |> snd);
val (As, B) = Logic.strip_horn prop;
- val output = (case (length As) of
+ val output' = (case (length As) of
0 => goals
| n => (Pretty.str (subgoals n))::goals)
in
- ThyOutput.args (Scan.succeed ())
- (Chunks.output (fn _ => fn _ => Pretty.chunks output)) src node
+ output
end
+*)
+
-val _ = ThyOutput.add_commands
- [("ML", output_ml),
- ("ML_file", check_file_exists),
- ("ML_response", output_ml_response),
- ("ML_response_fake", output_ml_response_fake),
- ("ML_response_fake_both", output_ml_response_fake_both),
- ("ML_struct", output_exp ml_struct),
- ("ML_type", output_exp ml_type),
- ("subgoals", output_goals)];
+val _ = output_ml ();
+val _ = check_file_exists ();
+val _ = output_ml_response ();
+val _ = output_ml_response_fake ();
+val _ = output_ml_response_fake_both ();
+val _ = output_exp ml_struct;
+val _ = output_exp ml_type;
+(*val _ = output_goals*)
end;
--- a/CookBook/chunks.ML Thu Feb 26 13:46:05 2009 +0100
+++ b/CookBook/chunks.ML Thu Mar 12 08:11:02 2009 +0100
@@ -139,7 +139,7 @@
|| Scan.one not_eof >> ML_Lex.content_of)) #>
fst;
-fun output_chunk src ctxt name =
+fun output_chunk {state: Toplevel.state, source = src, context = ctxt} name =
let
val toks = the_chunk (ProofContext.theory_of ctxt) name;
val (spc, toks') = (case toks of
@@ -164,7 +164,6 @@
output_list (fn _ => fn s => Pretty.str s) src ctxt
end;
-val _ = ThyOutput.add_commands
- [("ML_chunk", ThyOutput.args (Scan.lift Args.name) output_chunk)];
+val _ = ThyOutput.antiquotation "ML_chunk" (Scan.lift Args.name) output_chunk;
end;
--- a/CookBook/infix_conv.ML Thu Feb 26 13:46:05 2009 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,9 +0,0 @@
-signature INFIX_CONV =
-sig
- val then_conv : Thm.conv * Thm.conv -> Thm.conv
- val else_conv : Thm.conv * Thm.conv -> Thm.conv
-end
-
-structure InfixConv : INFIX_CONV = Conv
-
-open InfixConv
--- a/IsaMakefile Thu Feb 26 13:46:05 2009 +0100
+++ b/IsaMakefile Thu Mar 12 08:11:02 2009 +0100
@@ -19,8 +19,6 @@
rail CookBook/generated/root
cp CookBook/generated/root.rao CookBook/document
-## CookBook
-
tutorial: CookBook/ROOT.ML \
CookBook/document/root.tex \
CookBook/document/root.bib \
@@ -28,8 +26,9 @@
CookBook/*.ML \
CookBook/Recipes/*.thy \
CookBook/Package/*.thy \
- CookBook/Package/*.ML
+ CookBook/Package/*.ML
$(USEDIR) HOL CookBook
+ $(ISATOOL) version > CookBook/generated/version
$(ISATOOL) document -o pdf CookBook/generated
@cp CookBook/document.pdf cookbook.pdf
Binary file cookbook.pdf has changed