--- a/CookBook/FirstSteps.thy Fri Jan 16 14:57:36 2009 +0000
+++ b/CookBook/FirstSteps.thy Fri Jan 23 17:50:35 2009 +0000
@@ -2,7 +2,6 @@
imports Base
begin
-
chapter {* First Steps *}
text {*
@@ -29,19 +28,22 @@
The easiest and quickest way to include code in a theory is
by using the \isacommand{ML}-command. For example\smallskip
+\begin{isabelle}
+\begin{graybox}
\isa{\isacommand{ML}
\isacharverbatimopen\isanewline
\hspace{5mm}@{ML "3 + 4"}\isanewline
\isacharverbatimclose\isanewline
@{text "> 7"}\smallskip}
+\end{graybox}
+\end{isabelle}
- The @{text [quotes] "> 7"} indicates the response Isabelle prints out when the
- \isacommand{ML}-command is evaluated. Like ``normal'' Isabelle proof scripts,
+ Like ``normal'' Isabelle proof scripts,
\isacommand{ML}-commands can be evaluated by using the advance and undo buttons of
your Isabelle environment. The code inside the \isacommand{ML}-command
can also contain value and function bindings, and even those can be
- undone when the proof script is retracted. For better readability, we will in what
- follows drop the \isacommand{ML} \isa{\isacharverbatimopen \ldots \isacharverbatimclose}
+ undone when the proof script is retracted. As mentioned earlier, we will
+ drop the \isacommand{ML} \isa{\isacharverbatimopen \ldots \isacharverbatimclose}
whenever we show code and its response.
Once a portion of code is relatively stable, one usually wants to
@@ -73,7 +75,7 @@
will print out @{text [quotes] "any string"} inside the response buffer
of Isabelle. This function expects a string as argument. If you develop under PolyML,
then there is a convenient, though again ``quick-and-dirty'', method for
- converting values into strings, namely @{ML makestring}:
+ converting values into strings, namely using the function @{ML makestring}:
@{ML_response_fake [display,gray] "warning (makestring 1)" "\"1\""}
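+
+  For instance (an additional illustration; the exact layout of the output
+  depends on PolyML's pretty-printing), compound values can be displayed in
+  the same way:
+
+  @{ML_response_fake [display,gray] "warning (makestring [1, 2, 3])" "\"[1, 2, 3]\""}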
@@ -109,9 +111,16 @@
text {*
Calling @{ML "redirect_tracing"} with @{ML "(TextIO.openOut \"foo.bar\")"}
will cause all tracing information to be printed into the file @{text "foo.bar"}.
+
+ Error messages can be printed using the function @{ML error} as in
+
+ @{ML_response_fake [display,gray] "if 0=1 then 1 else (error \"foo\")" "\"foo\""}
+
*}
+
+
section {* Antiquotations *}
text {*
@@ -145,9 +154,12 @@
some data structure and return it. Instead, it is literally
replaced with the value representing the theory name.
- In a similar way you can use antiquotations to refer to theorems or simpsets:
+ In a similar way you can use antiquotations to refer to theorems:
@{ML_response_fake [display,gray] "@{thm allI}" "(\<And>x. ?P x) \<Longrightarrow> \<forall>x. ?P x"}
+
+ or simpsets:
+
@{ML_response_fake [display,gray]
"let
val ({rules,...},_) = MetaSimplifier.rep_ss @{simpset}
@@ -155,24 +167,31 @@
map #name (Net.entries rules)
end" "[\"Nat.of_nat_eq_id\", \"Int.of_int_eq_id\", \"Nat.One_nat_def\", \<dots>]"}
- The second example extracts the theorem names that are stored in a simpset.
- For this the function @{ML rep_ss in MetaSimplifier} returns a record
- containing information about the simpset. The rules of a simpset are stored
+ The code above extracts the theorem names that are stored in a simpset.
+ We refer to the current simpset with the antiquotation @{text "@{simpset}"}.
+ The function @{ML rep_ss in MetaSimplifier} returns a record
+ containing all information about the simpset. The rules of a simpset are stored
in a discrimination net (a data structure for fast indexing). From this net
we can extract the entries using the function @{ML Net.entries}.
- While antiquotations have many applications, they were originally introduced to
- avoid explicit bindings for theorems such as
+ \begin{readmore}
+ The infrastructure for simpsets is implemented in @{ML_file "Pure/meta_simplifier.ML"}
+ and @{ML_file "Pure/simplifier.ML"}.
+ \end{readmore}
+
+ While antiquotations have many applications, they were originally introduced in order
+ to avoid explicit bindings for theorems such as
*}
ML{*val allI = thm "allI" *}
text {*
- These bindings were difficult to maintain and also could accidentally
- be overwritten by the user. This usually broke definitional
+ These bindings were difficult to maintain and also could be accidentally
+ overwritten by the user. This usually broke definitional
packages. Antiquotations solve this problem, since they are ``linked''
- statically at compile-time. However, that also sometimes limits their
- usefulness. In the course of this introduction, we will learn more about
+ statically at compile-time. However, this static linkage also limits their
+ usefulness in cases where data needs to be built up dynamically. In the course of
+ this introduction, we will learn more about
these antiquotations: they greatly simplify Isabelle programming since one
can directly access all kinds of logical elements from ML.
@@ -185,8 +204,9 @@
\mbox{@{text "@{term \<dots>}"}}. For example
@{ML_response [display,gray]
- "@{term \"(a::nat) + b = c\"}"
- "Const (\"op =\", \<dots>) $ (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
+"@{term \"(a::nat) + b = c\"}"
+"Const (\"op =\", \<dots>) $
+ (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
This will show the term @{term "(a::nat) + b = c"}, but printed using the internal
representation of this term. This internal representation corresponds to the
@@ -222,7 +242,7 @@
can use the following ML function to set the limit to a value high
enough:
- @{ML [display] "print_depth 50"}
+ @{ML [display,gray] "print_depth 50"}
\end{exercise}
The antiquotation @{text "@{prop \<dots>}"} constructs terms of propositional type,
@@ -265,7 +285,8 @@
ML{*fun make_imp P Q tau =
let
val x = Free ("x",tau)
- in Logic.all x (Logic.mk_implies (P $ x, Q $ x))
+ in
+ Logic.all x (Logic.mk_implies (P $ x, Q $ x))
end *}
text {*
@@ -278,7 +299,7 @@
text {*
To see this, apply @{text "@{term S}"}, @{text "@{term T}"} and @{text "@{typ nat}"}
- to both functions. With @{ML make_imp} we obtain the correct term involving
+ to both functions. With @{ML make_imp} we obtain the intended term involving
@{term "S"}, @{text "T"} and @{text "@{typ nat}"}
@{ML_response [display,gray] "make_imp @{term S} @{term T} @{typ nat}"
@@ -287,7 +308,7 @@
Const \<dots> $ (Free (\"S\",\<dots>) $ \<dots>) $
(Free (\"T\",\<dots>) $ \<dots>))"}
- With @{ML make_wrong_imp} we obtain an incorrect term involving the @{term "P"}
+ With @{ML make_wrong_imp} we obtain a term involving the @{term "P"}
and @{text "Q"} from the antiquotation.
@{ML_response [display,gray] "make_wrong_imp @{term S} @{term T} @{typ nat}"
@@ -316,15 +337,14 @@
(FIXME: Is it useful to explain @{text "@{const_syntax}"}?)
- Similarly, types can be constructed manually, for example as follows:
+ Similarly, types can be constructed manually. For example, a function type
+ can be constructed as follows:
*}
ML{*fun make_fun_type tau1 tau2 = Type ("fun",[tau1,tau2]) *}
-text {*
- which can be equally written as
-*}
+text {* This can equally well be written as *}
ML{*fun make_fun_type tau1 tau2 = tau1 --> tau2 *}
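+
+text {*
+  For example (a small test; the response is only sketched, with @{text "\<dots>"}
+  standing for the omitted details of the argument types):
+
+  @{ML_response_fake [display,gray]
+  "make_fun_type @{typ nat} @{typ bool}" "Type (\"fun\", [\<dots>, \<dots>])"}
+*}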
@@ -351,7 +371,7 @@
\begin{exercise}\label{fun:makesum}
Write a function which takes two terms representing natural numbers
- in unary (like @{term "Suc (Suc (Suc 0))"}), and produce the unary
+ in unary notation (like @{term "Suc (Suc (Suc 0))"}), and produces the
number representing their sum.
\end{exercise}
@@ -393,8 +413,7 @@
val zero = @{term \"0::nat\"}
in
cterm_of @{theory}
- (Const (@{const_name plus}, natT --> natT --> natT)
- $ zero $ zero)
+ (Const (@{const_name plus}, natT --> natT --> natT) $ zero $ zero)
end" "0 + 0"}
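+
+  A simpler variant of the same idea (an extra illustration; here the term is
+  built with an antiquotation rather than by hand):
+
+  @{ML_response_fake [display,gray]
+  "cterm_of @{theory} @{term \"(a::nat) + b\"}" "a + b"}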
\begin{exercise}
@@ -408,8 +427,8 @@
text {*
Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm}
- that can only be built by going through interfaces. In effect all proofs
- are checked.
+ that can only be built by going through interfaces. As a consequence, every
+ proof is correct by construction (FIXME reference LCF-philosophy).
To see theorems in ``action'', let us give a proof for the following statement
*}
@@ -421,8 +440,7 @@
text {*
on the ML-level:\footnote{Note that @{text "|>"} is reverse
- application. This combinator, and several variants are defined in
- @{ML_file "Pure/General/basics.ML"}.}
+ application. See Section~\ref{sec:combinators}.}
@{ML_response_fake [display,gray]
"let
@@ -467,121 +485,26 @@
*}
-
-section {* Tactical Reasoning *}
-
-text {*
- The goal-oriented tactical style reasoning of the ML level is similar
- to the @{text apply}-style at the user level, i.e.~the reasoning is centred
- around a \emph{goal}, which is modified in a sequence of proof steps
- until it is solved.
-
- A goal (or goal state) is a special @{ML_type thm}, which by
- convention is an implication of the form:
-
- @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> #(C)"}
-
- where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are the open
- subgoals.
- Since the goal @{term C} can potentially be an implication, there is a
- @{text "#"} wrapped around it, which prevents that premises are
- misinterpreted as open subgoals. The wrapper @{text "# :: prop \<Rightarrow>
- prop"} is just the identity function and used as a syntactic marker.
-
- \begin{readmore}
- For more on goals see \isccite{sec:tactical-goals}. (FIXME: in which
- file is most code for dealing with goals?)
- \end{readmore}
-
- Tactics are functions that map a goal state to a (lazy)
- sequence of successor states, hence the type of a tactic is
-
- @{ML_type[display] "thm -> thm Seq.seq"}
-
- \begin{readmore}
- See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
- sequences. However in day-to-day Isabelle programming, one rarely
- constructs sequences explicitly, but uses the predefined tactic
- combinators (tacticals) instead. See @{ML_file "Pure/tctical.ML"}
- for the code; see Chapters 3 and 4 in the old Isabelle Reference Manual.
- \end{readmore}
-
- While tactics can operate on the subgoals (the @{text "A\<^isub>i"} above), they
- are expected to leave the conclusion @{term C} intact, with the
- exception of possibly instantiating schematic variables.
-
- To see how tactics work, let us transcribe a simple @{text apply}-style
- proof into ML:
-*}
-
-lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
-apply (erule disjE)
- apply (rule disjI2)
- apply assumption
-apply (rule disjI1)
-apply assumption
-done
-
-text {*
- To start the proof, the function @{ML "Goal.prove"}~@{text "ctxt xs As C tac"} sets
- up a goal state for proving the goal @{text C} under the assumptions @{text As}
- (empty in the proof at hand)
- with the variables @{text xs} that will be generalised once the
- goal is proved. The @{text "tac"} is the tactic which proves the goal and which
- can make use of the local assumptions (there are none in this example).
-
-@{ML_response_fake [display,gray]
-"let
- val ctxt = @{context}
- val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
-in
- Goal.prove ctxt [\"P\", \"Q\"] [] goal (fn _ =>
- eresolve_tac [disjE] 1
- THEN resolve_tac [disjI2] 1
- THEN assume_tac 1
- THEN resolve_tac [disjI1] 1
- THEN assume_tac 1)
-end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
-
- \begin{readmore}
- To learn more about the function @{ML Goal.prove} see \isccite{sec:results} and
- the file @{ML_file "Pure/goal.ML"}.
- \end{readmore}
-
-
- An alternative way to transcribe this proof is as follows
-
-@{ML_response_fake [display,gray]
-"let
- val ctxt = @{context}
- val goal = @{prop \"P \<or> Q \<Longrightarrow> Q \<or> P\"}
-in
- Goal.prove ctxt [\"P\", \"Q\"] [] goal (fn _ =>
- (eresolve_tac [disjE]
- THEN' resolve_tac [disjI2]
- THEN' assume_tac
- THEN' resolve_tac [disjI1]
- THEN' assume_tac) 1)
-end" "?P \<or> ?Q \<Longrightarrow> ?Q \<or> ?P"}
-
- (FIXME: are there any advantages/disadvantages about this way?)
-*}
-
section {* Storing Theorems *}
section {* Theorem Attributes *}
-
-section {* Combinators *}
+section {* Operations on Constants (Names) *}
text {*
- Perhaps one of the most puzzling aspects for a beginner when looking at
+ (FIXME how can I extract the constant name without the theory name etc)
+*}
+
+section {* Combinators\label{sec:combinators} *}
+
+text {*
+ Perhaps one of the most puzzling aspects for a beginner when reading
existing Isabelle code is the liberal use of combinators. At first they
- seem to obstruct reading the code, but after getting familiar with them
- they actually ease the reading and also the programming.
+ seem to obstruct the comprehension of the code, but after getting familiar
+ with them they actually ease both reading and writing code.
\begin{readmore}
- The most frequently used combinator are defined in @{ML_file "Pure/library.ML"}
+ The most frequently used combinators are defined in the files @{ML_file "Pure/library.ML"}
and @{ML_file "Pure/General/basics.ML"}.
\end{readmore}
@@ -590,10 +513,17 @@
ML{*fun I x = x*}
+text {* Another frequently used combinator is @{ML K} *}
+
+ML{*fun K x = fn _ => x*}
+
text {*
- Another frequently used combinator is @{ML K}
+ which ``wraps'' a function around the value @{text "x"}. The resulting
+ function ignores its argument and always returns @{text "x"}.
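+
+  For example (a small check added here for illustration):
+
+  @{ML_response [display,gray] "K 1 2" "1"}
+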
-
+ The next combinator is reverse application, @{ML "(op |>)"}, which is defined as
*}
+ML{*fun x |> f = f x*}
+
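+text {*
+  For example (a small illustration added here): the value @{text "3"} is
+  piped through two functions in turn.
+
+  @{ML_response [display,gray] "3 |> (fn x => x + 1) |> (fn x => x * 2)" "8"}
+*}
+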
end
\ No newline at end of file