ProgTutorial/Parsing.thy
changeset 256 1fb8d62c88a0
parent 250 ab9e09076462
child 258 03145998190b
--- a/ProgTutorial/Parsing.thy	Sat May 30 11:12:46 2009 +0200
+++ b/ProgTutorial/Parsing.thy	Sat May 30 17:40:20 2009 +0200
@@ -36,7 +36,7 @@
 text {*
 
   Let us first have a look at parsing strings using generic parsing
-  combinators. The function @{ML "$$"} takes a string as argument and will
+  combinators. The function @{ML [index] "$$"} takes a string as argument and will
   ``consume'' this string from a given input list of strings. ``Consume'' in
   this context means that it will return a pair consisting of this string and
   the rest of the input list. For example:
@@ -70,6 +70,8 @@
   However, note that these exceptions are private to the parser and cannot be accessed
   by the programmer (for example to handle them).
 
+  \indexdef{explode@ {\tt\slshape{}explode}}{in {\tt\slshape Symbol}}
+  \index{explode@ {\tt\slshape{}explode}}
   In the examples above we use the function @{ML Symbol.explode}, instead of the 
   more standard library function @{ML explode}, for obtaining an input list for 
   the parser. The reason is that @{ML Symbol.explode} is aware of character sequences,
@@ -85,8 +87,9 @@
 "([\"\\\", \"<\", \"f\", \"o\", \"o\", \">\", \" \", \"b\", \"a\", \"r\"],
  [\"\\<foo>\", \" \", \"b\", \"a\", \"r\"])"}
 
-  Slightly more general than the parser @{ML "$$"} is the function @{ML
-  Scan.one}, in that it takes a predicate as argument and then parses exactly
+  Slightly more general than the parser @{ML "$$"} is the function 
+  @{ML [index] one in Scan}, in that it takes a predicate as argument and 
+  then parses exactly
   one item from the input list satisfying this predicate. For example the
   following parser either consumes an @{text [quotes] "h"} or a @{text
   [quotes] "w"}:
@@ -101,7 +104,7 @@
 end"
     "((\"h\", [\"e\", \"l\", \"l\", \"o\"]),(\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
 
-  Two parsers can be connected in sequence by using the function @{ML "--"}. 
+  Two parsers can be connected in sequence by using the function @{ML [index] "--"}. 
   For example, parsing @{text "h"}, @{text "e"} and @{text "l"} (in this 
   order) can be achieved by:
 
@@ -112,14 +115,14 @@
   Note how the result of consumed strings builds up on the left as nested pairs.  
 
   If, as in the previous example, you want to parse a particular string, 
-  then you should use the function @{ML Scan.this_string}:
+  then you should use the function @{ML [index] this_string in Scan}:
 
   @{ML_response [display,gray] 
   "Scan.this_string \"hell\" (Symbol.explode \"hello\")"
   "(\"hell\", [\"o\"])"}
 
-  Parsers that explore alternatives can be constructed using the function @{ML
-  "||"}. The parser @{ML "(p || q)" for p q} returns the
+  Parsers that explore alternatives can be constructed using the function 
+  @{ML [index] "||"}. The parser @{ML "(p || q)" for p q} returns the
   result of @{text "p"}, in case it succeeds; otherwise it returns the
   result of @{text "q"}. For example:
 
@@ -134,7 +137,7 @@
 end"
   "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"w\", [\"o\", \"r\", \"l\", \"d\"]))"}
 
-  The functions @{ML "|--"} and @{ML "--|"} work like the sequencing function 
+  The functions @{ML [index] "|--"} and @{ML [index] "--|"} work like the sequencing function 
   for parsers, except that they discard the item being parsed by the first (respectively second)
   parser. For example:
   
@@ -148,6 +151,7 @@
 end"
   "((\"e\", [\"l\", \"l\", \"o\"]), (\"h\", [\"l\", \"l\", \"o\"]))"}
 
+  \indexdef{optional@ {\tt\slshape{optional}}}{in {\tt\slshape Scan}}
   The parser @{ML "Scan.optional p x" for p x} returns the result of the parser 
   @{text "p"}, if it succeeds; otherwise it returns 
   the default value @{text "x"}. For example:
@@ -162,7 +166,7 @@
 end" 
  "((\"h\", [\"e\", \"l\", \"l\", \"o\"]), (\"x\", [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
 
-  The function @{ML Scan.option} works similarly, except no default value can
+  The function @{ML [index] option in Scan} works similarly, except no default value can
   be given. Instead, the result is wrapped as an @{text "option"}-type. For example:
 
 @{ML_response [display,gray]
@@ -174,7 +178,7 @@
   (p input1, p input2)
 end" "((SOME \"h\", [\"e\", \"l\", \"l\", \"o\"]), (NONE, [\"w\", \"o\", \"r\", \"l\", \"d\"]))"}
 
-  The function @{ML "!!"} helps to produce appropriate error messages
+  The function @{ML [index] "!!"} helps to produce appropriate error messages
   for parsing. For example if you want to parse @{text p} immediately 
   followed by @{text q}, or start a completely different parser @{text r},
   you might write:
@@ -210,13 +214,13 @@
 
   then the parsing aborts and the error message @{text "foo"} is printed. In order to
   see the error message properly, you need to prefix the parser with the function 
-  @{ML "Scan.error"}. For example:
+  @{ML [index] error in Scan}. For example:
 
   @{ML_response_fake [display,gray] 
   "Scan.error (!! (fn _ => \"foo\") ($$ \"h\"))"
   "Exception Error \"foo\" raised"}
 
-  This ``prefixing'' is usually done by wrappers such as @{ML "OuterSyntax.local_theory"} 
+  This ``prefixing'' is usually done by wrappers such as @{ML [index] local_theory in OuterSyntax} 
   (see Section~\ref{sec:newcommand} which explains this function in more detail). 
   
   Let us now return to our example of parsing @{ML "(p -- q) || r" for p q
@@ -253,14 +257,14 @@
   @{ML_response [display,gray] "Scan.repeat ($$ \"h\") (Symbol.explode \"hhhhello\")" 
                 "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"}
   
-  Note that @{ML "Scan.repeat"} stores the parsed items in a list. The function
-  @{ML "Scan.repeat1"} is similar, but requires that the parser @{text "p"} 
+  Note that @{ML [index] repeat in Scan} stores the parsed items in a list. The function
+  @{ML [index] repeat1 in Scan} is similar, but requires that the parser @{text "p"} 
   succeeds at least once.
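+
+  For example, on the same input as above, @{ML repeat1 in Scan} should return 
+  the same result, whereas it would fail on an input that does not start with 
+  at least one @{text [quotes] "h"}:
+
+  @{ML_response [display,gray] "Scan.repeat1 ($$ \"h\") (Symbol.explode \"hhhhello\")" 
+                "([\"h\", \"h\", \"h\", \"h\"], [\"e\", \"l\", \"l\", \"o\"])"}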
 
   Also note that the parser would have aborted with the exception @{text MORE}, if
   you had run it on just @{text [quotes] "hhhh"}. This can be avoided by using
-  the wrapper @{ML Scan.finite} and the ``stopper-token'' @{ML Symbol.stopper}. With
-  them you can write:
+  the wrapper @{ML [index] finite in Scan} and the ``stopper-token'' 
+  @{ML [index] stopper in Symbol}. With them you can write:
   
   @{ML_response [display,gray] "Scan.finite Symbol.stopper (Scan.repeat ($$ \"h\")) (Symbol.explode \"hhhh\")" 
                 "([\"h\", \"h\", \"h\", \"h\"], [])"}
@@ -269,7 +273,7 @@
   other stoppers need to be used when parsing, for example, tokens. However, this kind of 
   manual wrapping is often already done by the surrounding infrastructure. 
 
-  The function @{ML Scan.repeat} can be used with @{ML Scan.one} to read any 
+  The function @{ML [index] repeat in Scan} can be used with @{ML [index] one in Scan} to read any 
   string as in
 
   @{ML_response [display,gray] 
@@ -281,9 +285,10 @@
 end" 
 "([\"f\", \"o\", \"o\", \" \", \"b\", \"a\", \"r\", \" \", \"f\", \"o\", \"o\"], [])"}
 
-  where the function @{ML Symbol.not_eof} ensures that we do not read beyond the 
+  where the function @{ML [index] not_eof in Symbol} ensures that we do not read beyond the 
   end of the input string (i.e.~stopper symbol).
 
+  \indexdef{unless@ {\tt\slshape{unless}}}{in {\tt\slshape Scan}}
   The function @{ML "Scan.unless p q" for p q} takes two parsers: if the first one can 
   parse the input, then the whole parser fails; if not, then the second is tried. Therefore
   
@@ -297,9 +302,9 @@
 
   succeeds. 
 
-  The functions @{ML Scan.repeat} and @{ML Scan.unless} can be combined to read any
-  input until a certain marker symbol is reached. In the example below the marker
-  symbol is a @{text [quotes] "*"}.
+  The functions @{ML [index] repeat in Scan} and @{ML [index] unless in Scan} can 
+  be combined to read any input until a certain marker symbol is reached. In the 
+  example below the marker symbol is a @{text [quotes] "*"}.
 
   @{ML_response [display,gray]
 "let 
@@ -313,8 +318,10 @@
 "(([\"f\", \"o\", \"o\", \"o\", \"o\", \"o\"], []),
  ([\"f\", \"o\", \"o\"], [\"*\", \"o\", \"o\", \"o\"]))"}
 
+  
   After parsing is done, you almost always want to apply a function to the parsed 
-  items. One way to do this is the function @{ML "(p >> f)" for p f}, which runs 
+  items. One way to do this is the function @{ML [index] ">>"}, where 
+  @{ML "(p >> f)" for p f} first runs 
   the parser @{text p} and, upon successful completion, applies the 
   function @{text f} to the result. For example
 
@@ -342,14 +349,14 @@
  
   (FIXME:  move to an earlier place)
 
-  The function @{ML Scan.ahead} parses some input, but leaves the original
+  The function @{ML [index] ahead in Scan} parses some input, but leaves the original
   input unchanged. For example:
 
   @{ML_response [display,gray]
   "Scan.ahead (Scan.this_string \"foo\") (Symbol.explode \"foo\")" 
   "(\"foo\", [\"f\", \"o\", \"o\"])"} 
 
-  The function @{ML Scan.lift} takes a parser and a pair as arguments. This function applies
+  The function @{ML [index] lift in Scan} takes a parser and a pair as arguments. This function applies
   the given parser to the second component of the pair and leaves the  first component 
   untouched. For example
 
@@ -389,12 +396,13 @@
   The definition for tokens is in the file @{ML_file "Pure/Isar/outer_lex.ML"}.
   \end{readmore}
 
-  The structure @{ML_struct OuterLex} defines several kinds of tokens (for
-  example @{ML "Ident" in OuterLex} for identifiers, @{ML "Keyword" in
-  OuterLex} for keywords and @{ML "Command" in OuterLex} for commands). Some
+  The structure @{ML_struct [index] OuterLex} defines several kinds of tokens (for
+  example @{ML [index] Ident in OuterLex} for identifiers, @{ML Keyword in
+  OuterLex} for keywords and @{ML [index] Command in OuterLex} for commands). Some
   token parsers take into account the kind of tokens. The first example shows
-  how to generate a token list out of a string using the function @{ML
-  "OuterSyntax.scan"}. It is given the argument @{ML "Position.none"} since,
+  how to generate a token list out of a string using the function 
+  @{ML [index] scan in OuterSyntax}. It is given the argument 
+  @{ML "Position.none"} since,
   at the moment, we are not interested in generating precise error
   messages. The following code\footnote{Note that because of a possible bug in
   the PolyML runtime system, the result is printed as @{text [quotes] "?"},
@@ -411,7 +419,7 @@
   other syntactic category. The second indicates a space.
 
   We can easily change what is recognised as a keyword with 
-  @{ML OuterKeyword.keyword}. For example calling this function 
+  @{ML [index] keyword in OuterKeyword}. For example calling this function 
 *}
 
 ML{*val _ = OuterKeyword.keyword "hello"*}
@@ -426,7 +434,8 @@
 
   Many parsing functions later on will require white space, comments and the like
   to have already been filtered out.  So from now on we are going to use the 
-  functions @{ML filter} and @{ML OuterLex.is_proper} to do this. For example:
+  functions @{ML filter} and @{ML [index] is_proper in OuterLex} to do this. 
+  For example:
 
 @{ML_response_fake [display,gray]
 "let
@@ -463,10 +472,10 @@
 end" 
 "([\"}\", \"{\", \<dots>], [\"\<rightleftharpoons>\", \"\<leftharpoondown>\", \<dots>])"}
 
-  You might have to adjust the @{ML print_depth} in order to
+  You might have to adjust the @{ML [index] print_depth} in order to
   see the complete list.
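+  (For instance, evaluating something like @{ML "print_depth 200"} beforehand 
+  should be sufficient; the exact value only needs to be large enough.)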
 
-  The parser @{ML "OuterParse.$$$"} parses a single keyword. For example:
+  The parser @{ML [index] "$$$" in OuterParse} parses a single keyword. For example:
 
 @{ML_response [display,gray]
 "let 
@@ -477,7 +486,7 @@
 end"
 "((\"where\",\<dots>), (\"|\",\<dots>))"}
 
-  Any non-keyword string can be parsed with the function @{ML OuterParse.reserved}.
+  Any non-keyword string can be parsed with the function @{ML [index] reserved in OuterParse}.
   For example:
 
   @{ML_response [display,gray]
@@ -489,7 +498,7 @@
 end"
   "(\"bar\",[])"}
 
-  Like before, you can sequentially connect parsers with @{ML "--"}. For example: 
+  Like before, you can sequentially connect parsers with @{ML [index] "--"}. For example: 
 
 @{ML_response [display,gray]
 "let 
@@ -499,6 +508,7 @@
 end"
 "((\"|\", \"in\"), [])"}
 
+  \indexdef{enum@ {\tt\slshape{enum}}}{in {\tt\slshape OuterParse}}
   The parser @{ML "OuterParse.enum s p" for s p} parses a possibly empty 
   list of items recognised by the parser @{text p}, where the items being parsed
   are separated by the string @{text s}. For example:
@@ -511,7 +521,7 @@
 end" 
 "([\"in\", \"in\", \"in\"], [\<dots>])"}
 
-  @{ML "OuterParse.enum1"} works similarly, except that the parsed list must
+  @{ML [index] enum1 in OuterParse} works similarly, except that the parsed list must
   be non-empty. Note that we had to add a string @{text [quotes] "foo"} at the
   end of the parsed string, otherwise the parser would have consumed all
   tokens and then failed with the exception @{text "MORE"}. Like in the
@@ -535,7 +545,7 @@
 
 text {*
 
-  The function @{ML "OuterParse.!!!"} can be used to force termination of the
+  The function @{ML [index] "!!!" in OuterParse} can be used to force termination of the
   parser in case of a dead end, just like @{ML "Scan.!!"} (see previous section), 
   except that the error message of @{ML "OuterParse.!!!"} is fixed to be 
   @{text [quotes] "Outer syntax error"}
@@ -554,8 +564,8 @@
 
   \begin{exercise} (FIXME)
   A type-identifier, for example @{typ "'a"}, is a token of 
-  kind @{ML "Keyword" in OuterLex}. It can be parsed using 
-  the function @{ML OuterParse.type_ident}.
+  kind @{ML [index] Keyword in OuterLex}. It can be parsed using 
+  the function @{ML type_ident in OuterParse}.
   \end{exercise}
 
   (FIXME: or give parser for numbers)
@@ -713,7 +723,7 @@
   variables with optional type-annotation and syntax-annotation, and a list of
   rules where every rule has optionally a name and an attribute.
 
-  The function @{ML OuterParse.fixes} in Line 2 of the parser reads an 
+  The function @{ML [index] "fixes" in OuterParse} in Line 2 of the parser reads an 
   \isacommand{and}-separated 
   list of variables that can include optional type annotations and syntax translations. 
   For example:\footnote{Note that in the code we need to write 
@@ -737,8 +747,8 @@
   not yet used to type the variables: this must be done by type-inference later
   on. Since types are part of the inner syntax they are strings with some
   encoded information (see previous section). If a mixfix-syntax is
-  present for a variable, then it is stored in the @{ML Mixfix} data structure;
-  no syntax translation is indicated by @{ML NoSyn}.
+  present for a variable, then it is stored in the @{ML [index] Mixfix} data structure;
+  no syntax translation is indicated by @{ML [index] NoSyn}.
 
   \begin{readmore}
   The data structure for mixfix annotations is defined in @{ML_file "Pure/Syntax/mixfix.ML"}.
@@ -747,7 +757,7 @@
   Lines 3 to 7 in the function @{ML spec_parser} implement the parser for a
   list of introduction rules, that is propositions with theorem annotations
   such as rule names and attributes. The introduction rules are propositions
-  parsed by @{ML OuterParse.prop}. However, they can include an optional
+  parsed by @{ML [index] prop in OuterParse}. However, they can include an optional
   theorem name plus some attributes. For example
 
 @{ML_response [display,gray] "let 
@@ -757,8 +767,8 @@
   (name, map Args.dest_src attrib)
 end" "(foo_lemma, [((\"intro\", []), \<dots>), ((\"dest\", [\<dots>]), \<dots>)])"}
  
-  The function @{ML opt_thm_name in SpecParse} is the ``optional'' variant of
-  @{ML thm_name in SpecParse}. Theorem names can contain attributes. The name 
+  The function @{ML [index] opt_thm_name in SpecParse} is the ``optional'' variant of
+  @{ML [index] thm_name in SpecParse}. Theorem names can contain attributes. The name 
   has to end with @{text [quotes] ":"}---see the argument of 
   the function @{ML SpecParse.opt_thm_name} in Line 7.
 
@@ -773,7 +783,7 @@
   Have a look at how the parser @{ML SpecParse.where_alt_specs} is implemented
   in file @{ML_file "Pure/Isar/spec_parse.ML"}. This parser corresponds
   to the ``where-part'' of the introduction rules given above. Below
-  we paraphrase the code of @{ML SpecParse.where_alt_specs} adapted to our
+  we paraphrase the code of @{ML [index] where_alt_specs in SpecParse} adapted to our
   purposes. 
   \begin{isabelle}
 *}
@@ -822,7 +832,7 @@
 end *}
 
 text {*
-  The crucial function @{ML OuterSyntax.local_theory} expects a name for the command, a
+  The crucial function @{ML [index] local_theory in OuterSyntax} expects a name for the command, a
   short description, a kind indicator (which we will explain more thoroughly later) and a
   parser producing a local theory transition (its purpose will also be explained
   later). 
@@ -962,7 +972,7 @@
 
   The crucial part of a command is the function that determines the behaviour
   of the command. In the code above we used a ``do-nothing''-function, which
-  because of @{ML Scan.succeed} does not parse any argument, but immediately
+  because of @{ML [index] succeed in Scan} does not parse any argument, but immediately
   returns the simple function @{ML "LocalTheory.theory I"}. We can
   replace this code by a function that first parses a proposition (using the
   parser @{ML OuterParse.prop}), then prints out the tracing
@@ -992,7 +1002,7 @@
   
   and see the proposition in the tracing buffer.  
 
-  Note that so far we used @{ML thy_decl in OuterKeyword} as the kind
+  Note that so far we used @{ML [index] thy_decl in OuterKeyword} as the kind
   indicator for the command.  This means that the command finishes as soon as
   the arguments are processed. Examples of this kind of command are
   \isacommand{definition} and \isacommand{declare}.  In other cases, commands
@@ -1000,7 +1010,7 @@
   ``open up'' a proof in order to prove the proposition (for example
   \isacommand{lemma}) or prove some other properties (for example
   \isacommand{function}). To achieve this kind of behaviour, you have to use
-  the kind indicator @{ML thy_goal in OuterKeyword} and the function @{ML
+  the kind indicator @{ML [index] thy_goal in OuterKeyword} and the function @{ML
   "local_theory_to_proof" in OuterSyntax} to set up the command.  Note,
   however, that once you change the ``kind'' of a command from @{ML thy_decl in
   OuterKeyword} to @{ML thy_goal in OuterKeyword}, the keyword file needs
@@ -1029,8 +1039,8 @@
 text {*
   The function @{text prove_prop} in Lines 2 to 7 takes a string (the proposition to be
   proved) and a context as arguments.  The context is necessary in order to be able to use
-  @{ML Syntax.read_prop}, which converts a string into a proper proposition.
-  In Line 6 the function @{ML Proof.theorem_i} starts the proof for the
+  @{ML [index] read_prop in Syntax}, which converts a string into a proper proposition.
+  In Line 6 the function @{ML [index] theorem_i in Proof} starts the proof for the
   proposition. Its argument @{ML NONE} stands for a locale (which we chose to
   omit); the argument @{ML "(K I)"} stands for a function that determines what
   should be done with the theorem once it is proved (we chose to just forget
@@ -1054,7 +1064,7 @@
   \isacommand{done}
   \end{isabelle}
 
-  (FIXME: read a name and show how to store theorems; see @{ML LocalTheory.note})
+  (FIXME: read a name and show how to store theorems; see @{ML [index] note in LocalTheory})
   
 *}
 
@@ -1086,7 +1096,8 @@
 text {*
   It defines the method @{text foobar}, which takes no arguments (therefore the
   parser @{ML Scan.succeed}) and only applies a single tactic, namely the tactic which 
-  applies @{thm [source] conjE} and then @{thm [source] conjI}. The function @{ML SIMPLE_METHOD}
+  applies @{thm [source] conjE} and then @{thm [source] conjI}. The function 
+  @{ML [index] SIMPLE_METHOD}
   turns such a tactic into a method. The method @{text "foobar"} can be used as follows
 *}