added more material
authorChristian Urban <urbanc@in.tum.de>
Mon, 24 Nov 2008 02:51:08 +0100
changeset 49 a0edabf14457
parent 48 609f9ef73494
child 50 3d4b49921cdb
added more material
CookBook/FirstSteps.thy
CookBook/Parsing.thy
CookBook/Readme.thy
CookBook/antiquote_setup.ML
cookbook.pdf
--- a/CookBook/FirstSteps.thy	Sat Nov 01 15:20:36 2008 +0100
+++ b/CookBook/FirstSteps.thy	Mon Nov 24 02:51:08 2008 +0100
@@ -28,7 +28,7 @@
 
 text {*
   The easiest and quickest way to include code in a theory is
-  by using the \isacommand{ML} command. For example\smallskip
+  by using the \isacommand{ML}-command. For example\smallskip
 
 \isa{\isacommand{ML}
 \isacharverbatimopen\isanewline
@@ -36,11 +36,11 @@
 \isacharverbatimclose\isanewline
 @{ML_text "> 7"}\smallskip}
 
-  Expressions inside \isacommand{ML} commands are immediately evaluated,
+  Expressions inside \isacommand{ML}-commands are immediately evaluated,
   like ``normal'' Isabelle proof scripts, by using the advance and undo buttons of 
-  your Isabelle environment. The code inside the \isacommand{ML} command 
+  your Isabelle environment. The code inside the \isacommand{ML}-command 
   can also contain value and function bindings, and even those can be
-  undone when the proof script is retracted. From now on we will drop the 
+  undone when the proof script is retracted. In what follows we will drop the 
   \isacommand{ML} \isa{\isacharverbatimopen \ldots \isacharverbatimclose} whenever
   we show code and its response.
 
@@ -81,10 +81,10 @@
   a function.
 
   The funtion @{ML "warning"} should only be used for testing purposes, because any
-  output this funtion generates will be overwritten, as soon as an error is
+  output this function generates will be overwritten as soon as an error is
   raised. Therefore for printing anything more serious and elaborate, the
   function @{ML tracing} should be used. This function writes all output into
-  a separate tracing buffer.
+  a separate tracing buffer. For example
 
   @{ML [display] "tracing \"foo\""}
 
@@ -119,18 +119,19 @@
 section {* Antiquotations *}
 
 text {*
-  The main advantage of embedding all code 
-  in a theory is that the code can contain references to entities defined 
-  on the logical level of Isabelle. This is done using antiquotations.
-  For example, one can print out the name of 
-  the current theory by typing
+  The main advantage of embedding all code in a theory is that the code can
+  contain references to entities defined on the logical level of Isabelle (by
+  this we mean definitions, theorems, terms and so on). This is done using
+  antiquotations.  For example, one can print out the name of the current
+  theory by typing
+
   
   @{ML_response [display] "Context.theory_name @{theory}" "FirstSteps"}
  
   where @{text "@{theory}"} is an antiquotation that is substituted with the
-  current theory (remember that we assumed we are inside the theory CookBook). 
-  The name of this theory can be extracted using the function 
-  @{ML "Context.theory_name"}. 
+  current theory (remember that we assumed we are inside the theory 
+  @{ML_text FirstSteps}). The name of this theory can be extracted using 
+  the function @{ML "Context.theory_name"}. 
 
   Note, however, that antiquotations are statically scoped, that is the value is
   determined at ``compile-time'', not ``run-time''. For example the function
@@ -155,7 +156,7 @@
   @{ML [display] "@{thm allI}"}
   @{ML [display] "@{simpset}"}
 
-  While antiquotations have many applications, they were originally introduced to 
+  While antiquotations nowadays have many applications, they were originally introduced to 
   avoid explicit bindings for theorems such as
 *}
 
@@ -164,19 +165,21 @@
 *}
 
 text {*
-  These bindings were difficult to maintain and also could be accidentally overwritten
-  by the user. This usually broke definitional packages. Antiquotations solve this
-  problem, since they are ``linked'' statically at compile time.  In the course of this 
-  introduction, we will learn more about these antiquotations: they greatly simplify 
-  Isabelle programming since one can directly access all kinds of logical elements 
-  from ML.
+  These bindings were difficult to maintain and also could be accidentally
+  overwritten by the user. This usually broke definitional
+  packages. Antiquotations solve this problem, since they are ``linked''
+  statically at compile-time. However, that also sometimes limits their
+  applicability. In the course of this introduction, we will learn more about
+  these antiquotations: they greatly simplify Isabelle programming since one
+  can directly access all kinds of logical elements from ML.
+
 *}
 
 section {* Terms and Types *}
 
 text {*
-  One way to construct terms of Isabelle on the ML level is by using the antiquotation 
-  \mbox{@{text "@{term \<dots>}"}}:
+  One way to construct terms of Isabelle on the ML-level is by using the antiquotation 
+  \mbox{@{text "@{term \<dots>}"}}. For example
 
   @{ML_response [display] "@{term \"(a::nat) + b = c\"}" 
                           "Const (\"op =\", \<dots>)  $ (Const (\"HOL.plus_class.plus\", \<dots>) $ \<dots> $ \<dots>) $ \<dots>"}
@@ -292,18 +295,33 @@
   which theory they are defined. Guessing such internal names can sometimes be quite hard. 
   Therefore Isabellle provides the antiquotation @{text "@{const_name \<dots>}"} which does the 
   expansion automatically, for example:
-*}
- 
-text {*
+
+  @{ML_response_fake [display] "@{const_name \"Nil\"}" "List.list.Nil"}
 
   (FIXME: Is it useful to explain @{text "@{const_syntax}"}?)
 
-  (FIXME: how to construct types manually)
+  Similarly, types can be constructed manually, for example as follows:
+
+*} 
+
+ML {*
+fun make_fun_type tau1 tau2 = Type ("fun", [tau1, tau2])
+*}
+
+text {*
+  which can equally well be written as
+*}
+
+ML {*
+fun make_fun_type tau1 tau2 = tau1 --> tau2
+*}
+
+text {*
 
   \begin{readmore}
   There are many functions in @{ML_file "Pure/logic.ML"} and
   @{ML_file "HOL/hologic.ML"} that make such manual constructions of terms 
-  easier.\end{readmore}
+  and types easier.\end{readmore}
 
   Have a look at these files and try to solve the following two exercises:
 
@@ -327,14 +345,14 @@
 
 *}
 
-section {* Type Checking *}
+section {* Type-Checking *}
 
 text {* 
   
   We can freely construct and manipulate terms, since they are just
   arbitrary unchecked trees. However, we eventually want to see if a
   term is well-formed, or type checks, relative to a theory.
-  Type checking is done via the function @{ML cterm_of}, which turns 
+  Type-checking is done via the function @{ML cterm_of}, which turns 
   a @{ML_type term} into a  @{ML_type cterm}, a \emph{certified} term. 
   Unlike @{ML_type term}s, which are just trees, @{ML_type
   "cterm"}s are abstract objects that are guaranteed to be
@@ -386,7 +404,7 @@
    shows "Q t" (*<*)oops(*>*) 
 
 text {*
-  on the ML level:\footnote{Note that @{text "|>"} is reverse
+  on the ML-level:\footnote{Note that @{text "|>"} is reverse
   application. This combinator, and several variants are defined in
   @{ML_file "Pure/General/basics.ML"}.}
 
@@ -395,7 +413,7 @@
   val thy = @{theory}
 
   val assm1 = cterm_of thy @{prop \"\<And>(x::nat). P x \<Longrightarrow> Q x\"}
-  val assm2 = cterm_of thy @{prop \"((P::nat\<Rightarrow>bool) t)\"}
+  val assm2 = cterm_of thy @{prop \"(P::nat\<Rightarrow>bool) t\"}
 
   val Pt_implies_Qt = 
         assume assm1
@@ -447,14 +465,13 @@
 
   @{text[display] "A\<^isub>1 \<Longrightarrow> \<dots> \<Longrightarrow> A\<^isub>n \<Longrightarrow> #(C)"}
 
-  where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are the open subgoals. 
+  where @{term C} is the goal to be proved and the @{term "A\<^isub>i"} are the open 
+  subgoals. 
   Since the goal @{term C} can potentially be an implication, there is a
   @{text "#"} wrapped around it, which prevents that premises are 
   misinterpreted as open subgoals. The protection @{text "# :: prop \<Rightarrow>
   prop"} is just the identity function and used as a syntactic marker. 
   
-  (FIXME: maybe show how this is printed on the screen) 
-
   \begin{readmore}
   For more on goals see \isccite{sec:tactical-goals}. 
   \end{readmore}
@@ -476,7 +493,7 @@
   exception of possibly instantiating schematic variables. 
  
   To see how tactics work, let us transcribe a simple @{text apply}-style 
-  proof from the tutorial \cite{isa-tutorial} into ML:
+  proof from the tutorial~\cite{isa-tutorial} into ML:
 *}
 
 lemma disj_swap: "P \<or> Q \<Longrightarrow> Q \<or> P"
--- a/CookBook/Parsing.thy	Sat Nov 01 15:20:36 2008 +0100
+++ b/CookBook/Parsing.thy	Mon Nov 24 02:51:08 2008 +0100
@@ -32,7 +32,7 @@
 
 *}
 
-section {* Building Up Generic Parsers *}
+section {* Building Generic Parsers *}
 
 text {*
 
@@ -63,12 +63,15 @@
   It is used for example in the function @{ML "(op !!)"} (see below).
   \end{itemize}
 
-  (FIXME: do the exception need to be explained, because the user cannot use them from ``outside''?)
+  However, note that these exceptions are private to the parser and cannot be accessed
+  by the programmer (for example to handle them).
+  
+  Slightly more general than the parser @{ML "(op $$)"} is the function @{ML
+  Scan.one}, in that it takes a predicate as argument and then parses exactly
+  one item from the input list satisfying this predicate. For example the
+  following parser either consumes an @{ML_text [quotes] "h"} or a @{ML_text
+  [quotes] "w"}:
 
-  Slightly more general than the parser @{ML "(op $$)"} is the function @{ML Scan.one}, in that it 
-  takes a predicate as argument and then parses exactly one item from the input list 
-  satisfying this prediate. For example the following parser either consumes an 
-  @{ML_text [quotes] "h"} or a @{ML_text [quotes] "w"}:
 
 @{ML_response [display] 
 "let 
@@ -132,6 +135,9 @@
 end" 
  "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
 
+  The function @{ML Scan.option} works similarly, except that no default value
+  can be given and in the failure case this parser does nothing.
+
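+  As a rough illustration (the exact response might be rendered slightly
+  differently), applying @{ML Scan.option} to the two inputs below should give
+
+@{ML_response_fake [display] 
+"let 
+  val p = Scan.option ($$ \"h\")
+in 
+  (p (explode \"hello\"), p (explode \"world\"))
+end" 
+"((SOME \"h\", [\"e\", \"l\", \"l\", \"o\"]), (NONE, [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
+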
   The function @{ML "(op !!)"} helps to produce appropriate error messages
   during parsing. For example if one wants to parse that @{ML_text p} is immediately 
   followed by @{ML_text q}, or start a completely different parser @{ML_text r},
@@ -214,6 +220,16 @@
   @{ML "Scan.repeat1"} is similar, but requires that the parser @{ML_text "p"} 
   succeeds at least once.
 
+  Also note that the parser would have aborted with the exception @{ML_text MORE} if
+  we had run it on just @{ML_text [quotes] "hhhh"}. This can be avoided using
+  the wrapper @{ML Scan.finite} and the ``stopper-token'' @{ML Symbol.stopper}. With
+  them we can write
+  
+  @{ML_response [display] "Scan.finite Symbol.stopper (Scan.repeat ($$ \"h\")) (explode \"hhhh\")" 
+                "([\"h\", \"h\", \"h\", \"h\"], [])"}
+
+  However, the Isabelle developer rarely needs to do this kind of wrapping manually. 
+
   After parsing succeeded, one nearly always wants to apply a function on the parsed 
   items. This is done using the function @{ML_open "(p >> f)" for p f} which runs 
   first the parser @{ML_text p} and upon successful completion applies the 
@@ -264,10 +280,10 @@
   generating precise error messages. The following\footnote{There is something funny 
   going on with the pretty printing of the result token list.}
 
-@{ML_response [display] "OuterSyntax.scan Position.none \"hello world\"" 
-"[OuterLex.Token (\<dots>,(OuterLex.Ident, \"hello\"),\<dots>), 
- OuterLex.Token (\<dots>,(OuterLex.Space, \" \"),\<dots>), 
- OuterLex.Token (\<dots>,(OuterLex.Ident, \"world\"),\<dots>)]"}
+@{ML_response_fake [display] "OuterSyntax.scan Position.none \"hello world\"" 
+"[Token (\<dots>,(OuterLex.Ident, \"hello\"),\<dots>), 
+ Token (\<dots>,(OuterLex.Space, \" \"),\<dots>), 
+ Token (\<dots>,(OuterLex.Ident, \"world\"),\<dots>)]"}
 
   produces three tokens where the first and the last are identifiers, since 
   @{ML_text [quotes] "hello"} and @{ML_text [quotes] "world"} do not match 
@@ -275,10 +291,10 @@
   later on will require spaces, comments and the like to have been filtered out.
   If we parse
 
-@{ML_response [display] "OuterSyntax.scan Position.none \"inductive|for\"" 
-"[OuterLex.Token (\<dots>,(OuterLex.Command, \"inductive\"),\<dots>), 
- OuterLex.Token (\<dots>,(OuterLex.Keyword, \"|\"),\<dots>), 
- OuterLex.Token (\<dots>,(OuterLex.Keyword, \"for\"),\<dots>)]"}
+@{ML_response_fake [display] "OuterSyntax.scan Position.none \"inductive|for\"" 
+"[Token (\<dots>,(OuterLex.Command, \"inductive\"),\<dots>), 
+ Token (\<dots>,(OuterLex.Keyword, \"|\"),\<dots>), 
+ Token (\<dots>,(OuterLex.Keyword, \"for\"),\<dots>)]"}
 
   we obtain a list consiting of only command and keyword tokens.
   If you want to see which keywords and commands are currently known, use
@@ -316,7 +332,7 @@
 
   The parser @{ML_open "OuterParse.enum s p" for s p} parses a possibly empty 
   list of items recognised by the parser @{ML_text p}, where the items are 
-  separated by @{ML_text s}. For example
+  separated by the string @{ML_text s}. For example
 
 @{ML_response [display]
 "let 
@@ -326,14 +342,62 @@
 end" 
 "([\"in\",\"in\",\"in\"],[\<dots>])"}
 
+  @{ML "OuterParse.enum1"} works similarly,  except that the parsed list must 
+  be non-empty.
+
   Note that we had to add a @{ML_text [quotes] "\\n"} at the end of the parsed
   string, otherwise the parser would have consumed all tokens and then failed with
-  the exception @{ML_text "MORE"}. @{ML "OuterParse.enum1"} works similarly, 
-  except that the parsed list must be non-empty.
+  the exception @{ML_text "MORE"}. Like in the previous section, we can avoid
+  this exception using the wrapper @{ML Scan.finite}. This time, however, we
+  have to use the ``stopper-token'' @{ML OuterLex.stopper}. We can write
+
+@{ML_response [display]
+"let 
+  val input = OuterSyntax.scan Position.none \"in|in|in\"
+in 
+  Scan.finite OuterLex.stopper (OuterParse.enum \"|\" (OuterParse.$$$ \"in\")) input
+end" 
+"([\"in\",\"in\",\"in\"],[])"}
+
+  The function @{ML "OuterParse.!!!"} can be used to force termination of the
+  parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section), 
+  except that the error message is fixed to be @{text [quotes] "Outer syntax error"}
+  with a relatively precise description of the failure. For example:
+
+@{ML_response_fake [display]
+"let 
+  val input = OuterSyntax.scan Position.none \"in|\"
+  val parse_bar_then_in = OuterParse.$$$ \"|\" -- OuterParse.$$$ \"in\"
+in 
+  Scan.error (OuterParse.!!! parse_bar_then_in) input 
+end"
+"Exception ERROR \"Outer syntax error: keyword \"|\" expected, 
+but keyword in was found\" raised"
+}
 
 *}
 
-text {* FIXME explain @{ML "OuterParse.!!!"} *}
+
+ML {*
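+(* scan the string "(in foo)", keep only the proper tokens (that is, filter
+   out white space) and apply OuterParse.target to the result *)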
+let 
+  val input = filter OuterLex.is_proper (OuterSyntax.scan Position.none "(in foo)")
+in 
+  OuterParse.target input
+end
+*}
+
+section {* Positional Information *}
+
+text {*
+
+  @{ML OuterParse.position}
+
+*}
+
+ML {*
+  OuterParse.position
+*}
+
 
 section {* Parsing Inner Syntax *}
 
--- a/CookBook/Readme.thy	Sat Nov 01 15:20:36 2008 +0100
+++ b/CookBook/Readme.thy	Mon Nov 24 02:51:08 2008 +0100
@@ -7,7 +7,7 @@
 text {*
 
   \begin{itemize}
-  \item You can make references to other Isabelle manuals using the 
+  \item You can include references to other Isabelle manuals using the 
   reference names from those manuals. To do this the following
   four latex commands are defined:
   
@@ -25,21 +25,40 @@
   \item There are various document antiquotations defined for the 
   cookbook so that written text can be kept in sync with the 
   Isabelle code and also that responses of the ML-compiler can be shown.
-  For example
+  They are:
 
   \begin{itemize}
-  \item[$\bullet$] @{text "@{ML \"\<dots>\"}"}
-  \item[$\bullet$] @{text "@{ML_open \"\<dots>\"}"}
-  \item[$\bullet$] @{text "@{ML_response \"\<dots>\"}"}
-  \item[$\bullet$] @{text "@{ML_response_fake \"\<dots>\"}"}
-  \item[$\bullet$] @{text "@{ML_file \"\<dots>\"}"}
+  \item[$\bullet$] {\bf @{text "@{ML \"\<dots>\"}"}} Should be used for value computations. It checks whether
+  the ML-expression is valid ML-code, but only works for closed expressions.
+  \item[$\bullet$] {\bf @{text "@{ML_open \"\<dots>\" for \<dots>}"}} Works like @{ML_text ML}-antiquotation except, 
+  that  it can also deal with open expressions and expressions that need to be evaluated inside structures. 
+  The free variables or structures need to be listed after the @{ML_text "for"}. For example 
+  @{text "@{ML_open \"a + b\" for a b}"}.
+  \item[$\bullet$] {\bf @{text "@{ML_response \"\<dots>\" \"\<dots>\"}"}} The first
+  expression is checked like in the antiquotation @{text "@{ML \"\<dots>\"}"}; the
+  second is a pattern that specifies the result the first expression
+  produces. This specification can contain @{text [quotes] "\<dots>"} for parts that
+  can be omitted. The actual response will be checked against the
+  specification. For example @{text "@{ML_response \"(1+2,3)\"
+  \"(3,\<dots>)\"}"}. This antiquotation can only be used when the result can be
+  constructed. It does not work when the code produces an exception or when the
+  result is an abstract datatype (like @{ML_type thm} or @{ML_type cterm}).
+
+  \item[$\bullet$] {\bf @{text "@{ML_response_fake \"\<dots>\" \"\<dots>\"}"}} Works like the 
+  @{ML_text ML_response}-anti\-quotation, except that the result-specification is not
+  checked.
+  \item[$\bullet$] {\bf @{text "@{ML_file \"\<dots>\"}"}} Should be used when referring to a file. 
+  It checks whether the file exists.
   \end{itemize}
 
-  (FIXME: explain their usage)
+  \item Functions and value bindings cannot be defined inside antiquotations; they need
+  to be included inside \isacommand{ML} \isa{\isacharverbatimopen \ldots \isacharverbatimclose}
+  environments. A \LaTeX-hack, however, ensures that the environment markers are not printed.
 
   \end{itemize}
 
 *}
 
 
+
 end
\ No newline at end of file
--- a/CookBook/antiquote_setup.ML	Sat Nov 01 15:20:36 2008 +0100
+++ b/CookBook/antiquote_setup.ML	Mon Nov 24 02:51:08 2008 +0100
@@ -24,7 +24,6 @@
 
 fun ml_struct txt = "functor DUMMY_FUNCTOR() = struct structure DUMMY = " ^ txt ^ " end";
 fun ml_type txt = "val _ = NONE : (" ^ txt ^ ") option";
-fun ml_decl txt = txt
 
 fun output_ml_open ml src ctxt ((txt,ovars),pos) =
   (ML_Context.eval_in (SOME ctxt) false pos (ml ovars txt);
Binary file cookbook.pdf has changed