diff -r a0edabf14457 -r 3d4b49921cdb CookBook/FirstSteps.thy
--- a/CookBook/FirstSteps.thy	Mon Nov 24 02:51:08 2008 +0100
+++ b/CookBook/FirstSteps.thy	Tue Nov 25 05:19:27 2008 +0000
@@ -7,10 +7,9 @@

 text {*
-  Isabelle programming is done in Standard ML.
-  Just like lemmas and proofs, SML-code in Isabelle is part of a
-  theory. If you want to follow the code written in this chapter, we
-  assume you are working inside the theory defined by
+  Isabelle programming is done in SML. Just like lemmas and proofs, SML-code
+  in Isabelle is part of a theory. If you want to follow the code written in
+  this chapter, we assume you are working inside the theory defined by

   \begin{center}
   \begin{tabular}{@ {}l}
@@ -64,16 +63,16 @@

 text {*
-  During developments you might find it necessary to quickly inspect some data
+  During development you might find it necessary to inspect some data
   in your code. This can be done in a ``quick-and-dirty'' fashion using
   the function @{ML "warning"}. For example

   @{ML [display] "warning \"any string\""}

-  will print out @{ML_text [quotes] "any string"} inside the response buffer of Isabelle.
-  If you develop under PolyML, then there is a convenient, though again
-  ``quick-and-dirty'', method for converting values into strings,
-  for example:
+  will print out @{ML_text [quotes] "any string"} inside the response buffer
+  of Isabelle. This function expects a string. If you develop under PolyML,
+  then there is a convenient, though again ``quick-and-dirty'', method for
+  converting values into strings, for example:

   @{ML [display] "warning (makestring 1)"}

@@ -82,13 +81,13 @@
   The function @{ML "warning"} should only be used for testing purposes, because any
   output this function generates will be overwritten as soon as an error is
-  raised. Therefore for printing anything more serious and elaborate, the
+  raised. For printing anything more serious and elaborate, the
   function @{ML tracing} should be used. This function writes all output into
   a separate tracing buffer. For example

   @{ML [display] "tracing \"foo\""}

-  It is also possible to redirect the channel where the @{ML_text "foo"} is
+  It is also possible to redirect the ``channel'' where the @{ML_text "foo"} is
   printed to a separate file, e.g. to prevent Proof General from choking on massive
   amounts of trace output. This redirection can be achieved using the code
 *}
@@ -156,7 +155,7 @@
   @{ML [display] "@{thm allI}"}
   @{ML [display] "@{simpset}"}

-  While antiquotations nowadays have many applications, they were originally introduced to
+  While antiquotations have many applications, they were originally introduced to
   avoid explicit bindings for theorems such as
 *}
@@ -168,7 +167,7 @@
   These bindings were difficult to maintain and also could be accidentally
   overwritten by the user. This usually broke definitional
   packages. Antiquotations solve this problem, since they are ``linked''
-  statically at compile-time. However, that also sometimes limits there
+  statically at compile-time. However, that also sometimes limits their
   applicability. In the course of this introduction, we will learn more about
   these antiquotations: they greatly simplify Isabelle programming since one
   can directly access all kinds of logical elements from ML.
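
  (The following snippet is not part of the patch; it merely collects the output
  functions discussed in the hunks above into one ML block that can be pasted into
  a theory for experimentation. The makestring line assumes a PolyML-based
  installation, as the text above notes.)

   ML {*
     (* response buffer output; overwritten as soon as an error is raised *)
     val _ = warning "any string";

     (* quick-and-dirty conversion of a value into a string (PolyML only) *)
     val _ = warning (makestring 1);

     (* output written to the separate tracing buffer *)
     val _ = tracing "foo"
   *}
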
@@ -184,7 +183,7 @@
   @{ML_response [display] "@{term \"(a::nat) + b = c\"}"
   "Const (\"op =\", \<dots>) $ (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}

-  This will show the term @{term "(a::nat) + b = c"}, but printed out using the internal
+  This will show the term @{term "(a::nat) + b = c"}, but printed using the internal
   representation of this term. This internal representation corresponds to the
   datatype @{ML_type "term"}.

@@ -222,7 +221,7 @@
   @{ML [display] "print_depth 50"}

   The antiquotation @{text "@{prop \<dots>}"} constructs terms of propositional type,
-  inserting the invisible @{text "Trueprop"} coercions whenever necessary.
+  inserting the invisible @{text "Trueprop"}-coercions whenever necessary.
   Consider for example

   @{ML_response [display] "@{term \"P x\"}" "Free (\"P\", \<dots>) $ Free (\"x\", \<dots>)"}
@@ -234,7 +233,7 @@
   @{ML_response [display] "@{term \"P x \<Longrightarrow> Q x\"}" "Const (\"==>\", \<dots>) $ \<dots> $ \<dots>"}

   @{ML_response [display] "@{prop \"P x \<Longrightarrow> Q x\"}" "Const (\"==>\", \<dots>) $ \<dots> $ \<dots>"}

-  which does not.
+  which does not (since it is already constructed using the meta-implication).

   Types can be constructed using the antiquotation @{text "@{typ \<dots>}"}. For example

@@ -283,7 +282,7 @@
   to both functions.

   One tricky point in constructing terms by hand is to obtain the
-  fully qualified name for constants. For example the names for @{text "zero"} or
+  fully qualified name for constants. For example the names for @{text "zero"} and
   @{text "+"} are more complex than one first expects, namely

   \begin{center}
@@ -300,7 +299,7 @@

   (FIXME: Is it useful to explain @{text "@{const_syntax}"}?)

-  Similarly, types can be constructed for example as follows:
+  Similarly, types can be constructed manually, for example as follows:

 *}
@@ -352,14 +351,14 @@
   We can freely construct and manipulate terms, since they are just
   arbitrary unchecked trees. However, we eventually want to see if a
   term is well-formed, or type checks, relative to a theory.
-  Type-checking is done via the function @{ML cterm_of}, which turns
+  Type-checking is done via the function @{ML cterm_of}, which converts
   a @{ML_type term} into a @{ML_type cterm}, a \emph{certified} term.
   Unlike @{ML_type term}s, which are just trees, @{ML_type
   "cterm"}s are abstract objects that are guaranteed to be
-  type-correct, and that can only be constructed via the official
-  interfaces.
+  type-correct, and that can only be constructed via the ``official
+  interfaces''.

-  Type checking is always relative to a theory context. For now we can use
+  Type checking is always relative to a theory context. For now we use
   the @{ML "@{theory}"} antiquotation to get hold of the current theory.
   For example we can write

@@ -383,7 +382,7 @@

   \begin{exercise}
   Check that the function defined in Exercise~\ref{fun:revsum} returns a
-  result that type checks.
+  result that type-checks.
   \end{exercise}

 *}
@@ -391,9 +390,9 @@
 section {* Theorems *}

 text {*
-  Just like @{ML_type cterm}s, theorems (of type @{ML_type thm}) are
-  abstract objects that can only be built by going through the kernel
-  interfaces, which means that all your proofs will be checked.
+  Just like @{ML_type cterm}s, theorems are abstract objects of type @{ML_type thm}
+  that can only be built by going through interfaces, which means that all your proofs
+  will be checked.

   To see theorems in ``action'', let us give a proof for the following statement
 *}
@@ -444,7 +443,7 @@

   \begin{readmore}
-  For how the functions @{text "assume"}, @{text "forall_elim"} etc work
+  For the functions @{text "assume"}, @{text "forall_elim"} etc
   see \isccite{sec:thms}. The basic functions for theorems are defined in
   @{ML_file "Pure/thm.ML"}.
   \end{readmore}
@@ -469,7 +468,7 @@
   subgoals.
   Since the goal @{term C} can potentially be an implication, there is a
   @{text "#"} wrapped around it, which prevents that premises are
-  misinterpreted as open subgoals. The protection @{text "# :: prop \<Rightarrow>
+  misinterpreted as open subgoals. The wrapper @{text "# :: prop \<Rightarrow>
   prop"} is just the identity function and used as a syntactic marker.

   \begin{readmore}
@@ -484,8 +483,8 @@
   See @{ML_file "Pure/General/seq.ML"} for the implementation of lazy
   sequences. However in day-to-day Isabelle programming, one rarely
   constructs sequences explicitly, but uses the predefined tactic
-  combinators (tacticals) instead (see @{ML_file "Pure/tctical.ML"}).
-  (FIXME: Pointer to the old reference manual)
+  combinators (tacticals) instead. See @{ML_file "Pure/tctical.ML"}
+  for the code; see Chapters 3 and 4 in the old Isabelle Reference Manual.
   \end{readmore}

   While tactics can operate on the subgoals (the @{text "A\<^isub>i"} above), they
@@ -493,7 +492,7 @@
   exception of possibly instantiating schematic variables.

   To see how tactics work, let us transcribe a simple @{text apply}-style
-  proof from the tutorial~\cite{isa-tutorial} into ML:
+  proof into ML:
 *}

 lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
@@ -510,7 +509,7 @@
   (empty in the proof at hand) with the variables @{text xs} that will be
   generalised once the goal is proved. The @{text "tac"} is the tactic
   which proves the goal and which
-  can make use of the local assumptions.
+  can make use of the local assumptions (there are none in this example).

   @{ML_response_fake [display] "let
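
  (Again outside the patch itself: as a rough illustration of the xs and tac
  arguments discussed in the last hunk, a hand-written Goal.prove call for the
  disj_swap statement might look roughly as follows. This is only a sketch; it
  assumes the basic tactics etac, rtac and atac together with the @{context}
  antiquotation, and simply transcribes the apply-script shown above.)

   ML {*
     let
       val ctxt = @{context}
       val goal = @{prop "P \<or> Q \<Longrightarrow> Q \<or> P"}
     in
       (* "P" and "Q" are generalised once the goal is proved; there are no
          local assumptions, so the premises passed to the tactic are ignored *)
       Goal.prove ctxt ["P", "Q"] [] goal
         (fn _ =>      etac @{thm disjE} 1
                  THEN rtac @{thm disjI2} 1
                  THEN atac 1
                  THEN rtac @{thm disjI1} 1
                  THEN atac 1)
     end
   *}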